<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Slade</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Slade"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Slade"/>
	<updated>2026-04-22T14:07:04Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6571</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6571"/>
		<updated>2010-12-02T23:45:31Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* Contribution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions? - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115 since there won&#039;t be a class in there (as it&#039;s our tutorial slot and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either; that doesn&#039;t mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN, or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*contribution (conclusion mostly): just need to re-work some sections and follow the cues I left in the Conclusion section. &lt;br /&gt;
*critique (conclusion mostly): critique the conclusion of the essay&lt;br /&gt;
*style: the style section is largely untouched. Daniel and I ([[Rannath]]) have put some thoughts there, but that section needs to be made into sentences.&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
==Paper - DONE!!!==&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
The paper: [http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
==Background Concepts - DONE!!!==&lt;br /&gt;
===memcached: &#039;&#039;Section 3.2&#039;&#039;===&lt;br /&gt;
memcached is an in-memory hash table server. One instance of memcached running on many different cores is bottlenecked by an internal lock, which is avoided by the MIT team by running one instance per core. Clients each connect to a single instance of memcached, allowing the server to simulate parallelism without needing to make major changes to the application or kernel. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.[1]&lt;br /&gt;
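The one-instance-per-core deployment works because clients spread keys across the instances themselves; here is a minimal sketch of that client-side selection (the server list, port scheme, and key names are our hypothetical examples, not from the paper):

```python
# Client-side sharding across per-core memcached instances (sketch).
# Each instance listens on its own port; the client picks one per key,
# so no lock is ever shared between instances.
import hashlib

SERVERS = [("127.0.0.1", 11211 + core) for core in range(48)]  # hypothetical

def server_for(key):
    # Stable hash so the same key always maps to the same instance.
    digest = hashlib.md5(key.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

host, port = server_for("user:42")
```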
&lt;br /&gt;
===Apache: &#039;&#039;Section 3.3&#039;&#039;===&lt;br /&gt;
Apache is a web server that has been used in previous Linux scalability studies. In this study, Apache is configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a perfect example of parallel programming). Each process uses one of its threads to accept incoming connections, and the others to process those connections. On a single-core processor, Apache spends 60% of its execution time in the kernel.[1]&lt;br /&gt;
&lt;br /&gt;
===gmake: &#039;&#039;Section 3.5&#039;&#039;===&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake reads a file called a makefile and processes its recipes for the requisite files to determine how and when to remake or recompile code. With the -j (or --jobs) flag, gmake can process many of these recipes in parallel, and since it creates more processes than there are cores, it can make proper use of multiple cores.[2] Because gmake does a great deal of reading and writing, the test cases use the in-memory filesystem tmpfs to prevent bottlenecks in the filesystem or storage hardware for testing purposes. gmake&#039;s scalability is still limited, to a small degree, by the serial phases that run at the beginning and end of its execution. It spends most of its execution time in the compiler, processing recipes and recompiling code, but still spends 7.6% of its time in system time.[1]&lt;br /&gt;
&lt;br /&gt;
[2] http://www.gnu.org/software/make/manual/make.html&lt;br /&gt;
&lt;br /&gt;
==Research problem - DONE!!!==&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate, and personal computers will soon have enough cores that scalability becomes an issue. The question is whether a standard Linux kernel, running standard user-level applications, can scale on a 48-core system&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;. The problem is that a standard Linux OS was not designed for massive scalability, which will soon prove to be a problem. The symptom is that a core working alone performs much more work than a single core working alongside 47 others. Dividing the work across 48 cores only makes sense if each core still does as much work as possible, so that total throughput actually grows with the core count.&lt;br /&gt;
&lt;br /&gt;
To fix those scalability issues, it is necessary to focus on three major areas: the Linux kernel, user-level application design, and how applications use kernel services. The Linux kernel can be improved by optimizing sharing and by building on its recent scalability improvements. At the user level, applications can be improved to focus more on parallelism, since some programs have not adopted these features. The final aspect is how an application uses kernel services, sharing resources so that different parts of the program do not conflict over the same services. All of the bottlenecks turn out to be easy to find and require only simple changes to correct or avoid.&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This research builds on a foundation of previous work on scalability in UNIX systems. The major developments, from shared-memory machines&amp;lt;sup&amp;gt;[[#Foot2|2]]&amp;lt;/sup&amp;gt; and wait-free synchronization to fast message passing, produced a base set of techniques for improving scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux has gained kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that hurt scalability.&amp;lt;sup&amp;gt;[[#Foot3|3]]&amp;lt;/sup&amp;gt; There is also an excellent base of prior Linux scalability studies on which this paper can model its testing standards, including research on improving scalability on a 32-core machine.&amp;lt;sup&amp;gt;[[#Foot4|4]]&amp;lt;/sup&amp;gt; This base of studies can improve the present experiments by learning from previous results, and may also help identify bottlenecks, which speeds up creating solutions for those problems.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
All contributions in this paper are the result of the identification and removal or marginalization of bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===What hinders scalability: &#039;&#039;Section 4.1&#039;&#039;===&lt;br /&gt;
*The percentage of serialization in a program has a lot to do with how much an application can be sped up. This is Amdahl&#039;s Law&lt;br /&gt;
** Amdahl&#039;s Law states that a parallel program can be sped up by at most the inverse of the proportion of the program that cannot be made parallel (e.g. a 25% (0.25) non-parallel portion limits the speedup to 4x).&lt;br /&gt;
*Types of serializing interactions found in the MOSBENCH apps:	 &lt;br /&gt;
**Locking of a shared data structure: as the number of cores increases, lock wait time increases&lt;br /&gt;
**Writing to shared memory: as the number of cores increases, the execution time of the cache coherence protocol increases&lt;br /&gt;
**Competing for space in a shared hardware cache: as the number of cores increases, the cache miss rate increases&lt;br /&gt;
**Competing for other shared hardware resources: as the number of cores increases, time is lost waiting for resources&lt;br /&gt;
**Not enough tasks for cores leads to idle cores&lt;br /&gt;
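To make the Amdahl&#039;s Law limit concrete, here is a quick numeric sketch (the function and variable names are ours, not the paper&#039;s):

```python
# Amdahl's Law: speedup on n cores when a fraction s of the work is serial.
def speedup(s, n):
    return 1.0 / (s + (1.0 - s) / n)

# A 25% serial fraction caps the speedup at 1/0.25 = 4x,
# no matter how many cores are added.
print(speedup(0.25, 48))   # approaches, but never reaches, 4.0
```

With the serial fraction pinned at 25%, even 48 cores reach only about 3.76x.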
&lt;br /&gt;
===Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;===&lt;br /&gt;
Linux&#039;s packet-processing path requires packets to travel through several queues before they finally become available to the application. This technique works well for most general socket applications. In recent kernel releases, Linux takes advantage of multiple hardware queues (when the network interface provides them) or Receive Packet Steering[1] to direct packet flows onto different cores for processing, and can even direct a flow to the core on which the application is running, using Receive Flow Steering[2], for better performance. Linux also attempts to increase performance with a sampling technique: it checks every 20th outgoing packet and directs the flow based on its hash. This poses a problem for short-lived connections like those associated with Apache, since there is great potential for packets to be misdirected.&lt;br /&gt;
&lt;br /&gt;
In general this technique performs poorly when numerous open connections are spread across multiple cores, due to mutex (mutual exclusion) delays and cache misses. In such scenarios it&#039;s better to process each connection, with its associated packets and queues, entirely on one core to avoid these issues. The patched kernel proposed in this article uses multiple hardware queues (via Receive Packet Steering) to direct all packets from a given connection to the same core. In turn, Apache is modified to accept a connection only if the thread dedicated to processing it is on the same core. If the current core&#039;s queue is found to be empty, it will attempt to obtain work from queues on other cores. This configuration is ideal for numerous short connections, as all the work for a connection is accomplished quickly on one core, avoiding the unnecessary mutex delays associated with packet queues and inter-core cache misses.&lt;br /&gt;
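The steering idea, keeping every packet of a connection on one core, amounts to hashing the flow tuple; a toy sketch (this is not the kernel&#039;s actual hash, just the shape of the idea):

```python
# Toy flow steering: map a connection 4-tuple to a fixed core so all
# of its packets (and the accepting thread) stay on that core queue.
import zlib

N_CORES = 48

def core_for(src_ip, src_port, dst_ip, dst_port):
    flow = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(flow) % N_CORES  # stable per flow, spreads flows out

# Every packet of one connection lands on the same core:
core_for("10.0.0.1", 40000, "10.0.0.2", 80)
```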
&lt;br /&gt;
&lt;br /&gt;
[1] J. Corbet. Receive Packet Steering, November 2009. http://lwn.net/Articles/362339/.&lt;br /&gt;
&lt;br /&gt;
[2] J. Edge. Receive Flow Steering, April 2010. http://lwn.net/Articles/382428/.&lt;br /&gt;
&lt;br /&gt;
===Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;===&lt;br /&gt;
Bottlenecks were encountered when the applications under test referenced and updated counters shared between cores. The paper&#039;s solution is sloppy counters: each core tracks its own count of references, and a central shared counter keeps the counts in sync. This is ideal because each core updates its count by modifying its per-core counter, usually touching only its own local cache and cutting down on waiting for locks or serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to adopt. The main disadvantages are that de-allocating an object becomes expensive (a problem where de-allocation occurs often), and that the counters use space proportional to the number of cores.&lt;br /&gt;
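A sloppy counter can be sketched in a few lines: each core updates a local count and only touches the shared counter to transfer spare references in batches (this is our simplification of the paper&#039;s scheme, not its actual code):

```python
# Sloppy counter sketch: per-core counts absorb most updates; the
# central count is only touched when a core runs out of local refs.
import threading

class SloppyCounter:
    def __init__(self, n_cores, batch=64):
        self.central = 0
        self.lock = threading.Lock()      # protects only the central count
        self.local = [0] * n_cores        # per-core spare references
        self.batch = batch

    def get(self, core):
        if self.local[core] == 0:
            with self.lock:               # rare: refill from central
                self.central += self.batch
            self.local[core] = self.batch
        self.local[core] -= 1             # common case: local only, no lock

    def put(self, core):
        self.local[core] += 1             # returns the spare ref locally

    def value(self):
        # references actually in use = handed out minus local spares
        return self.central - sum(self.local)
```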
&lt;br /&gt;
===Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;===&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
===Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;===&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was kept, and on a per-core miss the entry is copied from the central table to the per-core table.&lt;br /&gt;
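The vfsmount treatment, per-core copies filled from a central table on a miss, looks roughly like this sketch (the names and mount data are our hypothetical examples):

```python
# Per-core lookup tables with a shared fallback: reads usually hit the
# core-local dict; a miss copies the entry over from the central table.
import threading

central = {"/": "rootfs", "/home": "homefs"}   # hypothetical mount table
central_lock = threading.Lock()
per_core = [dict() for _ in range(48)]

def lookup(core, path):
    table = per_core[core]
    if path in table:               # common case: no shared state touched
        return table[path]
    with central_lock:              # miss: consult the central table once
        value = central[path]
    table[path] = value             # cache per-core for next time
    return value
```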
&lt;br /&gt;
===Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;===&lt;br /&gt;
Variables placed on the same cache line cause different cores to request that line for reading and writing at the same time, often enough to significantly impact performance. Moving the often-written variable to another cache line removed the bottleneck.&lt;br /&gt;
&lt;br /&gt;
===Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;===&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock. Likewise, a mutex can be split so that it locks only part of a data structure rather than the whole thing. Both changes remove or reduce bottlenecks.&lt;br /&gt;
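Splitting a lock can be sketched as a hash table with one lock per bucket instead of one big lock over the whole table (our toy example, not the kernel&#039;s code):

```python
# Lock splitting: instead of one mutex over a whole hash table, give each
# bucket its own lock so operations on different buckets never contend.
import threading

N_BUCKETS = 16
buckets = [dict() for _ in range(N_BUCKETS)]
locks = [threading.Lock() for _ in range(N_BUCKETS)]

def put(key, value):
    i = hash(key) % N_BUCKETS
    with locks[i]:                  # only this one bucket is serialized
        buckets[i][key] = value

def get(key, default=None):
    i = hash(key) % N_BUCKETS
    with locks[i]:
        return buckets[i].get(key, default)
```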
&lt;br /&gt;
===Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;===&lt;br /&gt;
====One paragraph====&lt;br /&gt;
Through this research on the techniques listed above, the authors determined that the Linux kernel already incorporates many techniques for improving scalability. They go on to speculate that &amp;quot;perhaps it is the case that Linux&#039;s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; This suggests that the work of the Linux community has improved Linux a great deal and is current with modern optimization techniques. The paper can also be read as suggesting that, for scalability, it may benefit the community more to change how applications are programmed than to change the Linux kernel, since the kernel optimizations showed more improvement when combined with the application improvements.&lt;br /&gt;
&lt;br /&gt;
====The other====&lt;br /&gt;
The contribution of this paper is a body of research focused on techniques and methods for scalability, accomplished through application programming alongside kernel programming. The research contributes by evaluating the scalability discrepancies between application programming and kernel programming. Its key findings show how effectively the kernel handles scaling across CPU cores. In looking at scalability, it is important to note the causes of the factors that hinder it.&lt;br /&gt;
&lt;br /&gt;
It has been shown that simple scaling techniques can be effective in increasing scalability. The authors looked at three different approaches to removing bottlenecks: first, issues within the Linux kernel; second, issues with the application&#039;s design; and third, how the application interacts with Linux kernel services. Through this approach, the authors were able to quickly identify bottlenecks and apply simple fixes to reap real benefits. The sections listed above give some insight into the improvements these optimizations can yield.&lt;br /&gt;
&lt;br /&gt;
*Hey guys, if anyone can see this: if you have a chance, expand the sections from 4.2 to 4.7 to focus on the contribution analysis I&#039;ve written. I&#039;m a bit too sleep-deprived to proof anything I write. I will have more of a chance to add to it after 8pm today.&lt;br /&gt;
**Or you could change your paragraphs to be more inline with everyone else&#039;s work.&lt;br /&gt;
**Or, since I still don&#039;t know what you want, you can put what needs to change, where it needs to change, as a cue.&lt;br /&gt;
*** I want to write the contribution so that it answers the questions posed: how does this contribute, and does it improve on previous iterations? The idea is that the research contributes, the sections back it up, and the previous iterations actually show merit as effective techniques for scalability.&lt;br /&gt;
****Each and every one removes a bottleneck; that&#039;s how they improve it. That&#039;s right at the top, and after that I just listed the bottlenecks removed.&lt;br /&gt;
***If you want to state it matter-of-factly: the contribution, in my opinion, is the research itself, which showed that application programming can be greatly optimized instead of the kernel, rather than the act of fixing all the bottlenecks discovered. That&#039;s the contribution it added, not that they went in and fixed all the bottlenecks.&lt;br /&gt;
&lt;br /&gt;
====What needs to happen====&lt;br /&gt;
*Rovic, your bit was formed as a conclusion, built from the essay, not as grounds to build our arguments on. So I moved it here from the header. Your paragraph is right, you just had it in the wrong place. -[[Rannath]]&lt;br /&gt;
*Both the above sections need to be joined in such a way as to eliminate repeated information.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.&lt;br /&gt;
 Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of the content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
===Content(Fairness): &#039;&#039;Section 5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 5.3&#039;&#039;====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage I/O bottleneck does not affect it at all. Likewise, the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so neither is given an advantage. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ on memcached.org&#039;s wiki suggests running multiple instances per server as a workaround to another problem, which implies that running multiple instances is the same, or nearly the same, as running one larger server [3]. In the end memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 5.4&#039;&#039;====&lt;br /&gt;
Linux has a built-in kernel flaw whereby network packets are forced to travel through multiple queues before they arrive at the queue where the application can process them. This imposes significant costs on multi-core systems due to queue locking, and it inherently diminishes the performance of Apache on a multi-core system, since threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For this experiment, Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. The patched kernel&#039;s network stack is also specific to the problem at hand: processing many short-lived connections across multiple cores. Although it increases performance in that scenario, more general network workloads might suffer. The tests were also arranged to avoid bottlenecks imposed by the network and file-storage hardware, meaning the proposed kernel modifications won&#039;t necessarily produce the same productivity increase described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Which is not a problem as the paper specifically states that they are testing what they can improve in spite of hardware limitation.&#039;&#039; - [[Rannath]]&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 5.6&#039;&#039;====&lt;br /&gt;
Since the inherent nature of gmake makes it quite parallel, the testing and updating attempted on gmake produced essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, gmake&#039;s execution time relies quite heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns in the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Given that all the tests are more or less fair for the purposes of the benchmarks, they support the hypothesis that Linux can be made to scale, at least to 48 cores. Thus the conclusion is fair iff the rest of the paper is fair.&lt;br /&gt;
&lt;br /&gt;
 Now you just have to fill in how fair the rest of the paper is.&lt;br /&gt;
&lt;br /&gt;
===Style===&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, as if the authors are just adding extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
Everything seems to be in logical order. I couldn&#039;t find any needless info. Nothing inherently confusing or wrong. Nothing bad on the grammar front either. - Rannath&lt;br /&gt;
&lt;br /&gt;
Some acronyms aren&#039;t explained before they are used, so some people reading the paper may get confused as to what they mean (e.g. Linux TLB). Since this paper is meant to be formal, acronyms should be explained, with some exceptions like OS and IBM. - Daniel B.&lt;br /&gt;
&lt;br /&gt;
Your example has no impact on the paper, it was in the &amp;quot;look here for more info&amp;quot; section. Most people wouldn&#039;t know what a &amp;quot;translation look-aside buffer&amp;quot; is either.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
[3] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;the paper itself doesn&#039;t need to be referenced more than once as this is a critique of the paper...&#039;&#039;&#039;&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
==Deprecated==&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks in its own code and in the kernel, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores: on a single-core system PostgreSQL spends only 1.5% of its time in the kernel, but on a 48-core system that jumps to 82%.&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6555</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6555"/>
		<updated>2010-12-02T22:55:27Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* Contribution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115 since there won&#039;t be a class in there (as it&#039;s our tutorial slot and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either; that doesn&#039;t mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN, or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*contribution (conclusion mostly): just need to re-work some sections and follow the cues I left in the Conclusion section. &lt;br /&gt;
*critique (conclusion mostly): critique the conclusion of the essay&lt;br /&gt;
*style: the style section is largely untouched. Daniel and I ([[Rannath]]) have put some thoughts there, but that section needs to be made into sentences.&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
==Paper - DONE!!!==&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
The paper: [http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
==Background Concepts - DONE!!!==&lt;br /&gt;
===memcached: &#039;&#039;Section 3.2&#039;&#039;===&lt;br /&gt;
memcached is an in-memory hash table server. One instance of memcached running on many different cores is bottlenecked by an internal lock, which is avoided by the MIT team by running one instance per core. Clients each connect to a single instance of memcached, allowing the server to simulate parallelism without needing to make major changes to the application or kernel. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.[1]&lt;br /&gt;
&lt;br /&gt;
===Apache: &#039;&#039;Section 3.3&#039;&#039;===&lt;br /&gt;
Apache is a web server that has been used in previous Linux scalability studies. In this study, Apache is configured to run a separate process on each core. Each process, in turn, has multiple threads, making it a good example of parallel programming. Each process uses one of its threads to accept incoming connections, and the remaining threads process those connections. On a single core processor, Apache spends 60% of its execution time in the kernel.[1]&lt;br /&gt;
&lt;br /&gt;
===gmake: &#039;&#039;Section 3.5&#039;&#039;===&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community, used in this paper to build the Linux kernel. gmake reads a file called a makefile and processes its recipes for the requisite files to determine how and when to remake or recompile code. With the -j (or --jobs) option, gmake can process many of these recipes in parallel, and since it creates more processes than there are cores, it can make proper use of multiple cores to process the recipes.[2] Because gmake involves much reading and writing, the test cases use an in-memory filesystem, tmpfs, to sidestep bottlenecks in the filesystem and storage hardware for testing purposes. gmake&#039;s scalability is still limited to a small degree by the serial processes that run at the beginning and end of its execution. gmake spends most of its execution time in the compiler, processing the recipes and recompiling code, but still spends 7.6% of its time in system time.[1]&lt;br /&gt;
&lt;br /&gt;
[2] http://www.gnu.org/software/make/manual/make.html&lt;br /&gt;
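&lt;br /&gt;
The parallel-recipe behaviour described above can be sketched in a few lines of Python (a toy illustration, not gmake&#039;s implementation; the recipe names are invented):&lt;br /&gt;

```python
# Toy sketch of gmake's -j / --jobs behaviour: independent recipes run
# concurrently on a bounded pool of workers. Recipe names are invented.
from concurrent.futures import ThreadPoolExecutor

def run_recipe(name):
    # A real recipe would invoke the compiler; here we just label the work.
    return "built " + name

recipes = ["init.o", "fs.o", "mm.o", "net.o"]   # hypothetical targets

# Equivalent in spirit to "make -j4": at most four recipes in flight at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_recipe, recipes))

print(results[0])   # "built init.o"; map() preserves input order
```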
&lt;br /&gt;
==Research problem - DONE!!!==&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue, so there has to be a way for the standard Linux kernel to scale to a 48-core system&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;. The problem is that a standard Linux OS is not designed for massive scalability, which will soon prove to be a problem. The symptom is that a core running alone performs much more work than the same core does when working alongside 47 others. Although traditional logic says this makes sense, because 48 cores are dividing the work, ideally each core should do as much work as it would alone so the information is processed as fast as possible.&lt;br /&gt;
&lt;br /&gt;
To fix those scalability issues, it is necessary to focus on three major areas: the Linux kernel, user-level design, and how applications use kernel services. The Linux kernel can be improved by optimizing sharing and by taking advantage of recent improvements to its scalability features. At the user level, applications can be improved to focus more on parallelism, since some programs have not adopted those improved features. The final aspect is how an application uses kernel services: resources can be shared better so that different parts of the program are not conflicting over the same services. All of the bottlenecks are easy to find and take only simple changes to correct or avoid.&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This research builds on a foundation of previous work on scalability in UNIX systems. Major developments, from shared-memory machines&amp;lt;sup&amp;gt;[[#Foot2|2]]&amp;lt;/sup&amp;gt; and wait-free synchronization to fast message passing, created a base set of techniques that can be used to improve scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update, an algorithm used to avoid locks and atomic instructions that hurt scalability.&amp;lt;sup&amp;gt;[[#Foot3|3]]&amp;lt;/sup&amp;gt; There is an excellent base of existing Linux scalability studies on which this paper can model its testing standards, including research on improving scalability on a 32-core machine.&amp;lt;sup&amp;gt;[[#Foot4|4]]&amp;lt;/sup&amp;gt; In addition, that base of studies can improve the results of these experiments, since the authors can learn from previous results, and it may aid in identifying bottlenecks, which speeds up creating solutions for those problems.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
All contributions in this paper are the result of the identification and removal or marginalization of bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===What hinders scalability: &#039;&#039;Section 4.1&#039;&#039;===&lt;br /&gt;
*The percentage of serialization in a program has a lot to do with how much an application can be sped up. This is Amdahl&#039;s Law&lt;br /&gt;
** Amdahl&#039;s Law states that the maximum speedup of a parallel program is limited by the inverse of the proportion of the program that cannot be made parallel (e.g. a program that is 25% (0.25) non-parallel is limited to a 1/0.25 = 4x speedup)&lt;br /&gt;
*Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
**Locking of shared data structures: as the number of cores increases, lock wait time increases&lt;br /&gt;
**Writing to shared memory: as the number of cores increases, the execution time of the cache coherence protocol increases&lt;br /&gt;
**Competing for space in a shared hardware cache: as the number of cores increases, the cache miss rate increases&lt;br /&gt;
**Competing for other shared hardware resources: as the number of cores increases, more time is lost waiting for those resources&lt;br /&gt;
**Too few tasks for the available cores leads to idle cores&lt;br /&gt;
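&lt;br /&gt;
The Amdahl&#039;s Law limit above can be checked numerically; a minimal sketch (the function name is ours, not the paper&#039;s):&lt;br /&gt;

```python
# Amdahl's Law: speedup(n) = 1 / (s + (1 - s)/n), where s is the serial
# (non-parallelizable) fraction and n is the number of cores.
def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# A program that is 25% serial never exceeds 4x, even on 48 cores:
print(round(amdahl_speedup(0.25, 48), 2))   # 3.76, approaching the limit
print(1.0 / 0.25)                           # 4.0, the asymptotic ceiling
```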
&lt;br /&gt;
===Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;===&lt;br /&gt;
Linux&#039;s packet processing path requires packets to travel through several queues before they finally become available to the application. This technique works well for most general socket applications. In recent kernel releases, Linux takes advantage of multiple hardware queues (when the network interface provides them) or Receive Packet Steering[1] to direct packet flow onto different cores for processing, and can even direct packet flow to the core on which the application is running, using Receive Flow Steering[2], for better performance. Linux also attempts to increase performance with a sampling technique: it checks every 20th outgoing packet and directs flow based on its hash. This poses a problem for short-lived connections like those associated with Apache, since there is great potential for packets to be misdirected.&lt;br /&gt;
&lt;br /&gt;
In general this technique performs poorly when numerous open connections are spread across multiple cores, due to mutex (mutual exclusion) delays and cache misses. In such scenarios it is better to process each connection, with its associated packets and queues, on one core to avoid those issues. The patched kernel&#039;s implementation proposed in this article uses multiple hardware queues (which can also be accomplished in software through Receive Packet Steering) to direct all packets from a given connection to the same core. In turn, Apache is modified to accept a connection only if the thread dedicated to processing it is on the same core. If the current core&#039;s queue is found to be empty, it will attempt to obtain work from queues located on other cores. This configuration is ideal for numerous short connections, as all the work for a connection is accomplished quickly on one core, avoiding unnecessary mutex delays associated with packet queues as well as inter-core cache misses.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1] J. Corbet. Receive Packet Steering, November 2009. http://lwn.net/Articles/362339/.&lt;br /&gt;
&lt;br /&gt;
[2] J. Edge. Receive Flow Steering, April 2010. http://lwn.net/Articles/382428/.&lt;br /&gt;
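&lt;br /&gt;
The flow-to-core mapping described above can be sketched as a hash over the connection 4-tuple (a simplification of the idea behind Receive Flow Steering; the function name, core count, and addresses are invented):&lt;br /&gt;

```python
# Sketch: steer every packet of a connection to the same core by hashing
# its 4-tuple, so one core handles all work for that connection.
import zlib

NCORES = 48   # the paper's test machine

def core_for(src_ip, src_port, dst_ip, dst_port):
    key = ("%s:%d-%s:%d" % (src_ip, src_port, dst_ip, dst_port)).encode()
    return zlib.crc32(key) % NCORES   # deterministic: same flow, same core

a = core_for("10.0.0.1", 41000, "10.0.0.2", 80)
b = core_for("10.0.0.1", 41000, "10.0.0.2", 80)
print(a == b)   # True: repeated packets of one flow land on one core
```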
&lt;br /&gt;
===Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;===&lt;br /&gt;
Bottlenecks were encountered when the applications under test were referencing and updating shared counters across multiple cores. The solution in the paper is to use sloppy counters, which let each core track its own separate count of references, with a central shared counter keeping the overall count consistent. This is ideal because each core updates its count by modifying its per-core counter, usually touching only its own local cache, cutting down on waiting for locks and serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to adopt. The main disadvantages of sloppy counters are that de-allocating an object is an expensive operation, which hurts in situations where de-allocation occurs often, and that the counters use space proportional to the number of cores.&lt;br /&gt;
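&lt;br /&gt;
A minimal sketch of the sloppy-counter idea (our simplification; the kernel&#039;s version tracks reference counts and also supports decrement):&lt;br /&gt;

```python
# Sketch of a sloppy counter: each core increments a private counter and
# only touches the shared central counter when its local count crosses a
# threshold, so most updates stay in the core's own cache.
class SloppyCounter:
    def __init__(self, ncores, threshold=8):
        self.central = 0                 # shared, rarely touched
        self.threshold = threshold
        self.local = [0] * ncores        # one private slot per core

    def inc(self, core):
        self.local[core] += 1            # cheap: core-local update
        if self.local[core] >= self.threshold:
            self.central += self.local[core]   # rare: shared update
            self.local[core] = 0

    def value(self):
        # the "sloppy" total: central plus whatever is still core-local
        return self.central + sum(self.local)

c = SloppyCounter(ncores=4)
for i in range(100):
    c.inc(i % 4)
print(c.value())   # 100: increments are batched, never lost
```

Reads of value() may lag while increments are in flight, which is exactly the trade-off: cheap per-core updates in exchange for a slightly stale total.&lt;br /&gt;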
&lt;br /&gt;
===Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;===&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
===Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;===&lt;br /&gt;
Three centralized data structures were causing bottlenecks: a per-superblock list of open files, the vfsmount table, and the packet-buffer free list. Each data structure was decentralized into per-core versions of itself. In the case of the vfsmount table, the central data structure was maintained, and on a per-core miss the entry was copied from the central table into the per-core table.&lt;br /&gt;
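&lt;br /&gt;
The vfsmount-style per-core caching can be sketched as follows (our illustration; the table contents are invented):&lt;br /&gt;

```python
# Sketch: a per-core lookup table backed by a shared central table,
# mirroring how per-core vfsmount tables are filled on a miss.
central = {"/": "rootfs", "/home": "ext4"}   # hypothetical mount table
per_core = [dict() for _ in range(4)]        # one private table per core

def lookup(core, path):
    cache = per_core[core]
    if path not in cache:            # per-core miss:
        cache[path] = central[path]  # copy from the central table
    return cache[path]               # later lookups stay core-local

print(lookup(0, "/home"))   # "ext4", filled from the central table
print(lookup(0, "/home"))   # "ext4", now served from core 0's own table
```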
&lt;br /&gt;
===Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;===&lt;br /&gt;
A variable placed poorly in memory can share a cache line with an unrelated, frequently written variable, causing different cores to request the same line for reading and writing often enough to significantly impact performance. Moving the often-written variable to another cache line removed the bottleneck.&lt;br /&gt;
&lt;br /&gt;
===Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;===&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock. Likewise, a mutex can be split from locking a whole data structure into locking only a part of it. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
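&lt;br /&gt;
Splitting one coarse lock into finer-grained ones can be sketched as per-bucket locks on a hash table (an illustration of the idea, not the kernel&#039;s code):&lt;br /&gt;

```python
# Sketch: shard a dictionary into buckets, each with its own lock, so
# threads touching different buckets never contend on the same mutex.
import threading

class ShardedDict:
    def __init__(self, nshards=16):
        self.locks = [threading.Lock() for _ in range(nshards)]
        self.shards = [dict() for _ in range(nshards)]

    def _shard(self, key):
        return hash(key) % len(self.shards)

    def put(self, key, value):
        i = self._shard(key)
        with self.locks[i]:          # locks one bucket, not the whole table
            self.shards[i][key] = value

    def get(self, key):
        i = self._shard(key)
        with self.locks[i]:
            return self.shards[i].get(key)

d = ShardedDict()
d.put("a", 1)
print(d.get("a"))   # 1
```

Two threads writing keys that hash to different buckets never touch the same lock, which is how splitting a lock reduces contention.&lt;br /&gt;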
&lt;br /&gt;
===Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;===&lt;br /&gt;
====One paragraph====&lt;br /&gt;
Through this research on the various techniques listed above, the authors determined that the Linux kernel itself already incorporates many techniques for improving scalability. The authors go on to speculate that &amp;quot;perhaps it is the case that Linux&#039;s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; This tends to show that the work of the Linux community has improved Linux substantially and kept it current with modern optimization techniques. The paper can also be read as suggesting that it may benefit the community more to change how applications are programmed than to change the Linux kernel in order to improve scalability. This may indicate that what came before was done quite well, considering that the kernel optimizations showed more improvement when combined with the application improvements.&lt;br /&gt;
&lt;br /&gt;
====The other====&lt;br /&gt;
The contribution of this paper is research focused on techniques and methods for scalability, accomplished through application programming alongside kernel programming. The research contributes by evaluating the scalability discrepancies between application programming and kernel programming. Key discoveries show the effectiveness of the kernel in handling scaling amongst CPU cores. In looking at the issue of scalability, it is important to note the causes of the factors which hinder it.&lt;br /&gt;
&lt;br /&gt;
It has been shown that simple scaling techniques can be effective in increasing scalability. The authors looked at three different approaches to removing the bottlenecks within the system: the first was to see whether there were issues within the Linux kernel itself, the second was to identify issues with the application design, and the third was to address how the application interacts with Linux kernel services. Through this approach, the authors were able to quickly identify problems such as bottlenecks and apply simple techniques to fix the issues at hand and reap real benefits. The sections listed provide insight into the improvements that can be gained from these optimizations.&lt;br /&gt;
&lt;br /&gt;
*Hey guys, if anyone can see this, if you have a chance, expand on the sections from 4.2 to 4.7 to focus on the contribution analysis I&#039;ve written. I&#039;m a bit sleep deprived to proof anything I write. I will have more of a chance to add to it after 8pm today.&lt;br /&gt;
**Or you could change your paragraphs to be more inline with everyone else&#039;s work.&lt;br /&gt;
**Or, since I still don&#039;t know what you want, you can put what needs to change, where it needs to change, as a cue.&lt;br /&gt;
*** I want to write the contribution in the sense that it answers the questions posed: how does this contribute, and does it improve upon previous iterations? The idea is that the research contributes, the sections back it up, and the previous iterations actually show merit as effective techniques used for scalability.&lt;br /&gt;
&lt;br /&gt;
====What needs to happen====&lt;br /&gt;
*Rovic, your bit was written as a conclusion built from the essay, not as grounds to build our arguments on, so I moved it here from the header. Your paragraph is right, you just had it in the wrong place. -[[Rannath]]&lt;br /&gt;
*Both the above sections need to be joined in such a way as to eliminate repeated information.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.&lt;br /&gt;
 Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
===Content(Fairness): &#039;&#039;Section 5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 5.3&#039;&#039;====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage I/O bottleneck does not affect it at all. Likewise, the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so no advantage is given to either. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ at memcached.org&#039;s wiki suggests running multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [3]. In the end, memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 5.4&#039;&#039;====&lt;br /&gt;
Linux has a built-in kernel flaw whereby network packets are forced to travel through multiple queues before they arrive at the queue where the application can process them. This imposes significant costs on multi-core systems due to queue locking, and it inherently diminishes the performance of Apache on a multi-core system, since threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For the sake of this experiment, Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. The patched kernel&#039;s implementation of the network stack is also specific to the problem at hand: processing many short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. These tests were also arranged to avoid bottlenecks imposed by network and file storage hardware, meaning that making the proposed kernel modifications won&#039;t necessarily produce the same increase in performance as described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Which is not a problem as the paper specifically states that they are testing what they can improve in spite of hardware limitation.&#039;&#039; - [[Rannath]]&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 5.6&#039;&#039;====&lt;br /&gt;
Since the inherent nature of gmake makes it quite parallel, the testing and updating attempted on gmake resulted in essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, the execution time of gmake relies quite heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns with the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Given that all the tests are more or less fair for the purposes of the benchmarks, they support the hypothesis that Linux can be made to scale, at least to 48 cores. Thus the conclusion is fair iff the rest of the paper is fair.&lt;br /&gt;
&lt;br /&gt;
 Now you just have to fill in how fair the rest of the paper is.&lt;br /&gt;
&lt;br /&gt;
===Style===&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
Everything seems to be in logical order. I couldn&#039;t find any needless info. Nothing inherently confusing or wrong. Nothing bad on the grammar front either. - Rannath&lt;br /&gt;
&lt;br /&gt;
Some acronyms aren&#039;t explained before they are used, so some people reading the paper may get confused as to what they mean (e.g. Linux TLB). Since this paper is meant to be formal, acronyms should be explained, with some exceptions like OS and IBM. - Daniel B.&lt;br /&gt;
&lt;br /&gt;
Your example has no impact on the paper; it was in the &amp;quot;look here for more info&amp;quot; section. Most people wouldn&#039;t know what a &amp;quot;translation look-aside buffer&amp;quot; is either.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
[3] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;the paper itself doesn&#039;t need to be referenced more than once as this is a critique of the paper...&#039;&#039;&#039;&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
==Deprecated==&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system the execution time in the kernel jumps to 82%.&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6510</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6510"/>
		<updated>2010-12-02T20:54:57Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* Contribution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (as it&#039;s our tutorial slot and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either. That does not mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*flesh out the whole lot&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* So here is the claims and unclaimed section. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Essay Conclusion (also discussion) - Everyone&lt;br /&gt;
** Critic, Style - Everyone&lt;br /&gt;
** References - Everyone&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
==Paper - DONE!!!==&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
The paper: [http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
==Background Concepts - DONE!!!==&lt;br /&gt;
===memcached: &#039;&#039;Section 3.2&#039;&#039;===&lt;br /&gt;
memcached is an in-memory hash table server. A single instance of memcached running across many cores is bottlenecked by an internal lock, which the MIT team avoided by running one instance per core. Each client connects to a single instance of memcached, allowing the server to simulate parallelism without major changes to the application or kernel. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.[1]&lt;br /&gt;
&lt;br /&gt;
===Apache: &#039;&#039;Section 3.3&#039;&#039;===&lt;br /&gt;
Apache is a web server that has been used in previous Linux scalability studies. In this study, Apache is configured to run a separate process on each core. Each process, in turn, has multiple threads, making it a good example of parallel programming. Each process uses one of its threads to accept incoming connections, and the remaining threads process those connections. On a single core processor, Apache spends 60% of its execution time in the kernel.[1]&lt;br /&gt;
&lt;br /&gt;
===gmake: &#039;&#039;Section 3.5&#039;&#039;===&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community, used in this paper to build the Linux kernel. gmake reads a file called a makefile and processes its recipes for the requisite files to determine how and when to remake or recompile code. With the -j (or --jobs) option, gmake can process many of these recipes in parallel, and since it creates more processes than there are cores, it can make proper use of multiple cores to process the recipes.[2] Because gmake involves much reading and writing, the test cases use an in-memory filesystem, tmpfs, to sidestep bottlenecks in the filesystem and storage hardware for testing purposes. gmake&#039;s scalability is still limited to a small degree by the serial processes that run at the beginning and end of its execution. gmake spends most of its execution time in the compiler, processing the recipes and recompiling code, but still spends 7.6% of its time in system time.[1]&lt;br /&gt;
&lt;br /&gt;
[2] http://www.gnu.org/software/make/manual/make.html&lt;br /&gt;
&lt;br /&gt;
==Research problem - DONE!!!==&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue, so there has to be a way for the standard Linux kernel to scale to a 48-core system&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;. The problem is that a standard Linux OS is not designed for massive scalability, which will soon prove to be a problem. The symptom is that a core running alone performs much more work than the same core does when working alongside 47 others. Although traditional logic says this makes sense, because 48 cores are dividing the work, ideally each core should do as much work as it would alone so the information is processed as fast as possible.&lt;br /&gt;
&lt;br /&gt;
To fix those scalability issues, it is necessary to focus on three major areas: the Linux kernel, user-level design, and how applications use kernel services. The Linux kernel can be improved by optimizing sharing and by taking advantage of recent improvements to its scalability features. At the user level, applications can be improved to focus more on parallelism, since some programs have not adopted those improved features. The final aspect is how an application uses kernel services: resources can be shared better so that different parts of the program are not conflicting over the same services. All of the bottlenecks are easy to find and take only simple changes to correct or avoid.&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This research builds on a foundation of previous work on scalability in UNIX systems. Major developments, from shared-memory machines&amp;lt;sup&amp;gt;[[#Foot2|2]]&amp;lt;/sup&amp;gt; and wait-free synchronization to fast message passing, created a base set of techniques that can be used to improve scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update, an algorithm used to avoid locks and atomic instructions that hurt scalability.&amp;lt;sup&amp;gt;[[#Foot3|3]]&amp;lt;/sup&amp;gt; There is an excellent base of existing Linux scalability studies on which this paper can model its testing standards, including research on improving scalability on a 32-core machine.&amp;lt;sup&amp;gt;[[#Foot4|4]]&amp;lt;/sup&amp;gt; In addition, that base of studies can improve the results of these experiments, since the authors can learn from previous results, and it may aid in identifying bottlenecks, which speeds up creating solutions for those problems.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
The research contribution of this paper is a body of research focused on techniques and methods for scalability improvements. This is accomplished through application programming alongside kernel programming. The research contributes by evaluating the scalability discrepancies between application programming and kernel programming. Key discoveries show the effectiveness of the kernel in handling scaling amongst CPU cores. In looking at the issue of scalability, it is important to note the causes of the factors which hinder it.&lt;br /&gt;
&lt;br /&gt;
It has been shown that simple scaling techniques can be effective in increasing scalability. The authors looked at three different approaches to removing the bottlenecks within the system: first, to see whether there were issues within the Linux kernel itself; second, to identify issues with the application design; and third, to address how the application interacts with Linux kernel services. Through this approach, the authors were able to quickly identify problems such as bottlenecks and apply simple techniques to fix the issues at hand and reap some benefits. The sections listed below provide insight into the improvements that can be reaped from these optimizations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===What hinders scalability: &#039;&#039;Section 4.1&#039;&#039;===&lt;br /&gt;
*The fraction of a program that must run serially has a lot to do with how much the program can be sped up. This is Amdahl&#039;s Law.&lt;br /&gt;
** Amdahl&#039;s Law states that the maximum speedup of a parallel program is the inverse of the proportion of the program that cannot be made parallel (e.g. a 25% (0.25) non-parallel portion limits the speedup to 4x). -[[Rannath]], [[Daniel B.]]&lt;br /&gt;
*Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
**Locking of a shared data structure: as the number of cores increases, lock wait time increases&lt;br /&gt;
**Writing to shared memory: as the number of cores increases, the execution time of the cache coherence protocol increases&lt;br /&gt;
**Competing for space in a shared hardware cache: as the number of cores increases, the cache miss rate increases&lt;br /&gt;
**Competing for other shared hardware resources: as the number of cores increases, more time is lost waiting for resources&lt;br /&gt;
**Not enough tasks for the cores: cores sit idle&lt;br /&gt;
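The limit described by Amdahl&#039;s Law is easy to check numerically. The following is a minimal sketch, not from the paper; the function name is illustrative:

```python
def amdahl_speedup(serial_fraction, cores):
    """Maximum speedup of a program whose serial_fraction cannot be
    parallelized, per Amdahl's Law: 1 / (s + (1 - s) / n)."""
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# A program that is 25% serial approaches, but never exceeds, a 4x speedup:
for cores in (1, 8, 48, 10**6):
    print(cores, round(amdahl_speedup(0.25, cores), 2))
```

No matter how many cores are added, the serial 25% caps the speedup at 4x, which is why the paper concentrates on shrinking the serializing interactions listed above.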
&lt;br /&gt;
===Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;===&lt;br /&gt;
Linux&#039;s packet-processing technique requires packets to travel through several queues before they finally become available for the application to use. This works well for most general socket applications. In recent kernel releases, Linux takes advantage of multiple hardware queues (when available on the given network interface) or Receive Packet Steering[1] to direct packet flow onto different cores for processing, and can even direct packet flow to the core on which the application is running, using Receive Flow Steering[2], for better performance. Linux also attempts to increase performance using a sampling technique in which it checks every 20th outgoing packet and directs flow based on its hash. This poses a problem for short-lived connections, like those associated with Apache, since there is great potential for packets to be misdirected.&lt;br /&gt;
&lt;br /&gt;
In general this technique performs poorly when there are numerous open connections spread across multiple cores, due to mutex (mutual exclusion) delays and cache misses. In such scenarios it&#039;s better to process each connection, with its associated packets and queues, on one core to avoid these issues. The patched kernel implementation proposed in the paper uses multiple hardware queues (via Receive Packet Steering) to direct all packets from a given connection to the same core. In turn, Apache is modified to accept a connection only if the thread dedicated to processing it is on the same core. If the current core&#039;s queue is found to be empty, it will attempt to obtain work from queues located on other cores. This configuration is ideal for numerous short connections, as all the work for them is accomplished quickly on one core, avoiding unnecessary mutex delays associated with packet queues and inter-core cache misses.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1] J. Corbet. Receive Packet Steering, November 2009. http://lwn.net/Articles/362339/.&lt;br /&gt;
&lt;br /&gt;
[2] J. Edge. Receive Flow Steering, April 2010. http://lwn.net/Articles/382428/.&lt;br /&gt;
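The idea of steering every packet of a connection to one core can be sketched as a hash over the flow&#039;s 4-tuple. This is a user-space illustration only; the function name and the CRC-32 hash are assumptions for the sketch, not the kernel&#039;s actual implementation:

```python
import zlib

NUM_CORES = 48  # the paper's test machine

def core_for_flow(src_ip, src_port, dst_ip, dst_port):
    """Map a connection's 4-tuple to a core so that every packet of the
    flow lands on the same core's queue, avoiding cross-core sharing."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_CORES

# Two packets from the same connection always map to the same core:
first = core_for_flow("10.0.0.1", 40000, "10.0.0.2", 80)
second = core_for_flow("10.0.0.1", 40000, "10.0.0.2", 80)
print(first == second)  # True
```

Because the mapping is deterministic per flow, a short-lived connection is handled entirely on one core, which is the property the patched kernel and the modified Apache rely on.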
&lt;br /&gt;
===Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;===&lt;br /&gt;
Bottlenecks were encountered when the applications under test referenced and updated counters shared across multiple cores. The paper&#039;s solution is sloppy counters: each core tracks its own separate count of references, and a central shared counter keeps the overall count on track. This is ideal because each core updates its count by modifying its per-core counter, usually only needing access to its own local cache, which cuts down on waiting for locks and serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to adopt. Their main disadvantages are that they perform poorly in situations where object de-allocation occurs often, because de-allocation itself becomes an expensive operation, and that the counters use space proportional to the number of cores.&lt;br /&gt;
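The sloppy-counter idea can be illustrated with a simplified, single-threaded sketch. The kernel version uses per-CPU storage and also handles decrements and reclamation, which are omitted here; the class name and batch size are choices made for the sketch:

```python
class SloppyCounter:
    """Each core accumulates increments in its own per-core counter and
    flushes them to the central counter only once per batch, so most
    updates touch only core-local state."""
    BATCH = 64  # flush threshold (an arbitrary choice for this sketch)

    def __init__(self, num_cores):
        self.central = 0              # shared counter: the contended one
        self.local = [0] * num_cores  # per-core counters: cheap to update

    def inc(self, core):
        self.local[core] += 1
        if self.local[core] == self.BATCH:  # flush a full batch
            self.central += self.local[core]
            self.local[core] = 0

    def value(self):
        # The true count is the central value plus all unflushed residue.
        return self.central + sum(self.local)

counter = SloppyCounter(4)
for i in range(1000):
    counter.inc(i % 4)
print(counter.value())  # 1000
```

Most calls to inc touch only one core&#039;s slot; the shared central counter is written once per batch instead of once per reference.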
&lt;br /&gt;
===Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;===&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
===Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;===&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each data structure was decentralized into per-core versions of itself. In the case of vfsmount, the central data structure was maintained, and on any per-core miss the entry was copied from the central table to the per-core table.&lt;br /&gt;
&lt;br /&gt;
===Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;===&lt;br /&gt;
Poorly placed variables can cause different cores to read and write the same cache line at the same time often enough to significantly impact performance (false sharing). By moving an often-written variable onto a different cache line, the bottleneck was removed.&lt;br /&gt;
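The fix can be made concrete with a padded struct layout. This sketch uses Python&#039;s ctypes only to show the resulting field offsets; the field names and the 64-byte line size are assumptions typical of x86 hardware, not taken from the paper:

```python
import ctypes

CACHE_LINE = 64  # bytes; typical x86 cache-line size

class Stats(ctypes.Structure):
    """Padding pushes the often-written field onto its own cache line,
    so writes to it no longer invalidate the line that holds the other
    field in every other core's cache."""
    _fields_ = [
        ("rarely_written", ctypes.c_ulong),
        ("_pad", ctypes.c_char * (CACHE_LINE - ctypes.sizeof(ctypes.c_ulong))),
        ("often_written", ctypes.c_ulong),
    ]

print(Stats.often_written.offset)  # 64: starts on its own cache line
```

Without the padding, both counters would share one 64-byte line, and every write to the hot counter would bounce that line between cores.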
&lt;br /&gt;
===Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;===&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t actually need to lock. Likewise, a mutex over a whole data structure can be split into finer-grained locks over its parts. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
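Splitting one coarse lock into finer-grained locks can be sketched as a hash table with per-bucket locks; the class and its names are hypothetical, for illustration only:

```python
import threading

class ShardedTable:
    """A hash table with one lock per bucket instead of a single
    table-wide lock, so operations on different buckets never contend."""
    def __init__(self, num_buckets=16):
        self.buckets = [dict() for _ in range(num_buckets)]
        self.locks = [threading.Lock() for _ in range(num_buckets)]

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        i = self._index(key)
        with self.locks[i]:  # only this bucket is locked
            self.buckets[i][key] = value

    def get(self, key, default=None):
        i = self._index(key)
        with self.locks[i]:
            return self.buckets[i].get(key, default)

table = ShardedTable()
table.put("pid", 42)
print(table.get("pid"))  # 42
```

Two threads working on keys that hash to different buckets take different locks, so the table-wide serialization of a single mutex disappears.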
&lt;br /&gt;
Through this research on the various techniques listed above, the authors determined that the Linux kernel itself already incorporates many techniques for improving scalability. The authors go on to speculate that &amp;quot;perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; This suggests that the work of the Linux community has improved Linux a great deal and keeps it current with modern optimization techniques. The paper could also be interpreted as saying that it may benefit the community more to change how applications are programmed than to change the Linux kernel in order to improve scalability. This indicates that what came before was done quite well, considering that the kernel optimizations showed more improvement when combined with the application improvements.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==============================================================================================================&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
All contributions in this paper are the result of the identification and removal or marginalization of bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;===&lt;br /&gt;
====Work in Progress====&lt;br /&gt;
&lt;br /&gt;
=====[[Rovic P.]]=====&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
-Hey guys, if anyone can see this, if you have a chance, expand on the sections from 4.2 to 4.7 to focus on the contribution analysis I&#039;ve written. I&#039;m a bit sleep deprived to proof anything I write. I will have more of a chance to add to it after 8pm today.&lt;br /&gt;
&lt;br /&gt;
=====[[Rannath]]=====&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required a few modest changes to remove bottlenecks. The MIT team speculates that this trend will continue as the number of cores increases. They also state that workloads not bottlenecked by the CPU are harder to fix. &lt;br /&gt;
&lt;br /&gt;
Most of the kernel bottlenecks that the applications hit most often can be eliminated with minor changes. Most changes used well-known methods, with the exception of sloppy counters. The study is limited by its removal of the IO bottleneck, but it does suggest that traditional implementations can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.&lt;br /&gt;
 Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of its content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
===Content(Fairness): &#039;&#039;Section 5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 5.3&#039;&#039;====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage IO bottleneck does not affect it at all. Likewise, the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so neither is given an advantage. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ on memcached.org&#039;s wiki suggests using multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [3]. In the end, memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 5.4&#039;&#039;====&lt;br /&gt;
Linux has a built-in kernel flaw whereby network packets are forced to travel through multiple queues before they arrive at the queue where they can be processed by the application. This imposes significant costs on multi-core systems due to queue locking. The flaw inherently diminishes the performance of Apache on multi-core systems, since multiple threads spread across cores are forced to bear these mutex (mutual exclusion) costs. For the sake of this experiment, Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. The patched kernel&#039;s implementation of the network stack is also specific to the problem at hand: processing many short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. These tests were also set up to avoid bottlenecks imposed by network and file storage hardware, meaning that making the proposed modifications to the kernel won&#039;t necessarily produce the same increase in performance as described in the paper. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Which is not a problem as the paper specifically states that they are testing what they can improve in spite of hardware limitation.&#039;&#039; - [[Rannath]]&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 5.6&#039;&#039;====&lt;br /&gt;
Since the inherent nature of gmake makes it quite parallel, the testing and updates attempted on gmake resulted in essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, the execution time of gmake relies quite heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns with the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Given that all the tests are more or less fair for the purposes of the benchmarks, they support the hypothesis that Linux can be made to scale, at least to 48 cores. Thus the conclusion is fair iff the rest of the paper is fair.&lt;br /&gt;
&lt;br /&gt;
 Now you just have to fill in how fair the rest of the paper is.&lt;br /&gt;
&lt;br /&gt;
===Style===&lt;br /&gt;
 Style criteria (feel free to add; I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
Everything seems to be in logical order. I couldn&#039;t find any needless info. Nothing inherently confusing or wrong. Nothing bad on the grammar front either. - Rannath&lt;br /&gt;
&lt;br /&gt;
Some acronyms aren&#039;t explained before they are used, so some people reading the paper may get confused as to what they mean (e.g. Linux TLB). Since this paper is meant to be formal, acronyms should be explained, with some exceptions like OS and IBM. - Daniel B.&lt;br /&gt;
&lt;br /&gt;
Your example has no impact on the paper; it was in the &amp;quot;look here for more info&amp;quot; section. Most people wouldn&#039;t know what a &amp;quot;translation look-aside buffer&amp;quot; is either.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
[3] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;the paper itself doesn&#039;t need to be referenced more than once as this is a critique of the paper...&#039;&#039;&#039;&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
==Deprecated==&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system the execution time in the kernel jumps to 82%.&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6509</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6509"/>
		<updated>2010-12-02T20:54:09Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* Contribution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (as it&#039;s our tutorial slot and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either. That does not mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*flesh out the whole lot&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* So here are the claimed and unclaimed sections. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Essay Conclusion (also discussion) - Everyone&lt;br /&gt;
** Critic, Style - Everyone&lt;br /&gt;
** References - Everyone&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
==Paper - DONE!!!==&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
The paper: [http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
==Background Concepts - DONE!!!==&lt;br /&gt;
===memcached: &#039;&#039;Section 3.2&#039;&#039;===&lt;br /&gt;
memcached is an in-memory hash table server. One instance of memcached running on many different cores is bottlenecked by an internal lock, which is avoided by the MIT team by running one instance per core. Clients each connect to a single instance of memcached, allowing the server to simulate parallelism without needing to make major changes to the application or kernel. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.[1]&lt;br /&gt;
&lt;br /&gt;
===Apache: &#039;&#039;Section 3.3&#039;&#039;===&lt;br /&gt;
Apache is a web server that has been used in previous Linux scalability studies. In this study, Apache is configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a perfect example of parallel programming): each process uses one of its threads to accept incoming connections, and the others to process those connections. On a single-core processor, Apache spends 60% of its execution time in the kernel.[1]&lt;br /&gt;
&lt;br /&gt;
===gmake: &#039;&#039;Section 3.5&#039;&#039;===&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake takes a file called a makefile and processes its recipes for the requisite files to determine how and when to remake or recompile code. With the -j (or --jobs) option, gmake can process many of these recipes in parallel, and since it creates more processes than there are cores, it can make proper use of multiple cores.[2] Because gmake involves much reading and writing, the test cases use the in-memory filesystem tmpfs to prevent bottlenecks due to the filesystem or storage hardware, giving the tests a backdoor around those bottlenecks. gmake&#039;s scalability is limited to a small degree by the serial processes that run at the beginning and end of its execution. It spends most of its execution time in the compiler, processing recipes and recompiling code, but still spends 7.6% of its time in system time.[1]&lt;br /&gt;
&lt;br /&gt;
[2] http://www.gnu.org/software/make/manual/make.html&lt;br /&gt;
&lt;br /&gt;
==Research problem - DONE!!!==&lt;br /&gt;
As technology progresses, the number of cores a main processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue, so a standard user-level Linux kernel must be able to scale on a 48-core system&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;. The problem with a standard Linux OS is that it is not designed for massive scalability, which will soon prove to be a problem. The issue with scalability is that a solo core will perform much more work than a single core working alongside 47 others. Although traditional logic says this is expected, because 48 cores are dividing the work, the information should be processed as fast as possible with each core doing as much work as possible.&lt;br /&gt;
&lt;br /&gt;
To fix these scalability issues, it is necessary to focus on three major areas: the Linux kernel, user-level application design, and how applications use kernel services. The Linux kernel can be improved by optimizing sharing and by taking advantage of recent improvements to its scalability features. At the user level, applications can be redesigned with a greater focus on parallelism, since some programs have not adopted these improved features. The final aspect of improving scalability is how an application uses kernel services: resources can be shared so that different parts of the program do not conflict over the same services. All of the bottlenecks can be found easily and require only simple changes to correct or avoid.&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This research builds on a foundation of previous work on scalability carried out during the development of UNIX systems. The major developments, from shared-memory machines&amp;lt;sup&amp;gt;[[#Foot2|2]]&amp;lt;/sup&amp;gt; and wait-free synchronization to fast message passing, produced a base set of techniques that can be used to improve scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that hurt scalability.&amp;lt;sup&amp;gt;[[#Foot3|3]]&amp;lt;/sup&amp;gt; There is also an excellent base of existing Linux scalability studies on which this paper can model its testing standards, including research on improving scalability on a 32-core machine.&amp;lt;sup&amp;gt;[[#Foot4|4]]&amp;lt;/sup&amp;gt; This base of studies can be used to improve the results of these experiments by learning from previous results, and may also aid in identifying bottlenecks, which speeds up creating solutions for those problems.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
The research contribution of this paper is a set of techniques and methods for improving scalability, applied through application programming alongside kernel programming. The paper contributes by evaluating the scalability differences between application-level and kernel-level changes. Its key findings show how effectively the kernel handles scaling across CPU cores. In looking at the issue of scalability, it is important to note the causes of the factors that hinder it.&lt;br /&gt;
&lt;br /&gt;
It has been shown that simple scaling techniques can be effective in increasing scalability. The authors looked at three different approaches to removing the bottlenecks within the system: first, to see whether there were issues within the Linux kernel itself; second, to identify issues with the application design; and third, to address how the application interacts with Linux kernel services. Through this approach, the authors were able to quickly identify problems such as bottlenecks and apply simple techniques to fix the issues at hand and reap some benefits.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===What hinders scalability: &#039;&#039;Section 4.1&#039;&#039;===&lt;br /&gt;
*The fraction of a program that must run serially has a lot to do with how much the program can be sped up. This is Amdahl&#039;s Law.&lt;br /&gt;
** Amdahl&#039;s Law states that the maximum speedup of a parallel program is the inverse of the proportion of the program that cannot be made parallel (e.g. a 25% (0.25) non-parallel portion limits the speedup to 4x). -[[Rannath]], [[Daniel B.]]&lt;br /&gt;
*Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
**Locking of a shared data structure: as the number of cores increases, lock wait time increases&lt;br /&gt;
**Writing to shared memory: as the number of cores increases, the execution time of the cache coherence protocol increases&lt;br /&gt;
**Competing for space in a shared hardware cache: as the number of cores increases, the cache miss rate increases&lt;br /&gt;
**Competing for other shared hardware resources: as the number of cores increases, more time is lost waiting for resources&lt;br /&gt;
**Not enough tasks for the cores: cores sit idle&lt;br /&gt;
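The limit described by Amdahl&#039;s Law is easy to check numerically. The following is a minimal sketch, not from the paper; the function name is illustrative:

```python
def amdahl_speedup(serial_fraction, cores):
    """Maximum speedup of a program whose serial_fraction cannot be
    parallelized, per Amdahl's Law: 1 / (s + (1 - s) / n)."""
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# A program that is 25% serial approaches, but never exceeds, a 4x speedup:
for cores in (1, 8, 48, 10**6):
    print(cores, round(amdahl_speedup(0.25, cores), 2))
```

No matter how many cores are added, the serial 25% caps the speedup at 4x, which is why the paper concentrates on shrinking the serializing interactions listed above.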
&lt;br /&gt;
===Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;===&lt;br /&gt;
Linux&#039;s packet-processing technique requires packets to travel through several queues before they finally become available for the application to use. This works well for most general socket applications. In recent kernel releases, Linux takes advantage of multiple hardware queues (when available on the given network interface) or Receive Packet Steering[1] to direct packet flow onto different cores for processing, and can even direct packet flow to the core on which the application is running, using Receive Flow Steering[2], for better performance. Linux also attempts to increase performance using a sampling technique in which it checks every 20th outgoing packet and directs flow based on its hash. This poses a problem for short-lived connections, like those associated with Apache, since there is great potential for packets to be misdirected.&lt;br /&gt;
&lt;br /&gt;
In general this technique performs poorly when there are numerous open connections spread across multiple cores, due to mutex (mutual exclusion) delays and cache misses. In such scenarios it&#039;s better to process each connection, with its associated packets and queues, on one core to avoid these issues. The patched kernel implementation proposed in the paper uses multiple hardware queues (via Receive Packet Steering) to direct all packets from a given connection to the same core. In turn, Apache is modified to accept a connection only if the thread dedicated to processing it is on the same core. If the current core&#039;s queue is found to be empty, it will attempt to obtain work from queues located on other cores. This configuration is ideal for numerous short connections, as all the work for them is accomplished quickly on one core, avoiding unnecessary mutex delays associated with packet queues and inter-core cache misses.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1] J. Corbet. Receive Packet Steering, November 2009. http://lwn.net/Articles/362339/.&lt;br /&gt;
&lt;br /&gt;
[2] J. Edge. Receive Flow Steering, April 2010. http://lwn.net/Articles/382428/.&lt;br /&gt;
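The idea of steering every packet of a connection to one core can be sketched as a hash over the flow&#039;s 4-tuple. This is a user-space illustration only; the function name and the CRC-32 hash are assumptions for the sketch, not the kernel&#039;s actual implementation:

```python
import zlib

NUM_CORES = 48  # the paper's test machine

def core_for_flow(src_ip, src_port, dst_ip, dst_port):
    """Map a connection's 4-tuple to a core so that every packet of the
    flow lands on the same core's queue, avoiding cross-core sharing."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_CORES

# Two packets from the same connection always map to the same core:
first = core_for_flow("10.0.0.1", 40000, "10.0.0.2", 80)
second = core_for_flow("10.0.0.1", 40000, "10.0.0.2", 80)
print(first == second)  # True
```

Because the mapping is deterministic per flow, a short-lived connection is handled entirely on one core, which is the property the patched kernel and the modified Apache rely on.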
&lt;br /&gt;
===Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;===&lt;br /&gt;
Bottlenecks were encountered when the applications under test referenced and updated counters shared across multiple cores. The paper&#039;s solution is sloppy counters: each core tracks its own separate count of references, and a central shared counter keeps the overall count on track. This is ideal because each core updates its count by modifying its per-core counter, usually only needing access to its own local cache, which cuts down on waiting for locks and serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to adopt. Their main disadvantages are that they perform poorly in situations where object de-allocation occurs often, because de-allocation itself becomes an expensive operation, and that the counters use space proportional to the number of cores.&lt;br /&gt;
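The sloppy-counter idea can be illustrated with a simplified, single-threaded sketch. The kernel version uses per-CPU storage and also handles decrements and reclamation, which are omitted here; the class name and batch size are choices made for the sketch:

```python
class SloppyCounter:
    """Each core accumulates increments in its own per-core counter and
    flushes them to the central counter only once per batch, so most
    updates touch only core-local state."""
    BATCH = 64  # flush threshold (an arbitrary choice for this sketch)

    def __init__(self, num_cores):
        self.central = 0              # shared counter: the contended one
        self.local = [0] * num_cores  # per-core counters: cheap to update

    def inc(self, core):
        self.local[core] += 1
        if self.local[core] == self.BATCH:  # flush a full batch
            self.central += self.local[core]
            self.local[core] = 0

    def value(self):
        # The true count is the central value plus all unflushed residue.
        return self.central + sum(self.local)

counter = SloppyCounter(4)
for i in range(1000):
    counter.inc(i % 4)
print(counter.value())  # 1000
```

Most calls to inc touch only one core&#039;s slot; the shared central counter is written once per batch instead of once per reference.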
&lt;br /&gt;
===Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;===&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
===Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;===&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the free list of packet buffers. Each data structure was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was maintained, and on a per-core miss the entry was copied from the central table into the per-core table.&lt;br /&gt;
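The free-list case can be sketched like this (an illustrative single-threaded sketch; the real kernel lists need locking on the shared path):

```c
#include <stddef.h>

/* Sketch of decentralizing a free list: each core pops from and pushes
 * to its own list, and only falls back to the shared global list --
 * which in real code would be protected by a lock -- when the local
 * list is empty. */
#define NCORES 4

struct buf { struct buf *next; };

static struct buf *global_free;            /* shared central list */
static struct buf *percore_free[NCORES];   /* per-core lists */

static void buf_free(int core, struct buf *b) {
    b->next = percore_free[core];          /* core-local, no contention */
    percore_free[core] = b;
}

static struct buf *buf_alloc(int core) {
    struct buf *b = percore_free[core];
    if (b) {                               /* fast path: local list */
        percore_free[core] = b->next;
        return b;
    }
    b = global_free;                       /* slow path: central list */
    if (b)
        global_free = b->next;
    return b;
}
```

Since a core usually frees what it allocated, the fast path dominates and the shared list is touched rarely.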
&lt;br /&gt;
===Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;===&lt;br /&gt;
When variables are laid out so that they share a cache line, different cores can end up requesting the same line for reads and writes at the same time, often enough to significantly impact performance. By moving the often-written variable to another cache line, the bottleneck was removed.&lt;br /&gt;
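The fix amounts to a layout change. A minimal sketch (field names are illustrative, and 64 bytes is the typical x86 cache line size, which is an assumption here):

```c
#include <stdalign.h>
#include <stddef.h>

#define CACHE_LINE 64

/* False sharing: the rarely-read field and the hot counter sit on the
 * same cache line, so a write to one invalidates the other core&#039;s
 * cached copy of both. */
struct stats_bad {
    long rarely_read;
    long hot_counter;
};

/* The fix: force each field onto its own cache line so writes to the
 * hot counter no longer evict the line holding the other field. */
struct stats_good {
    alignas(CACHE_LINE) long rarely_read;
    alignas(CACHE_LINE) long hot_counter;
};
```

The cost is a little wasted memory per structure, traded for eliminating cross-core cache-line ping-pong.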
&lt;br /&gt;
===Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;===&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock. Likewise, a single mutex over a whole data structure can be split into several mutexes, each protecting only a part of it. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
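Lock splitting can be sketched with a hash table that uses one mutex per bucket instead of one mutex for the whole table (an illustrative sketch, not from the paper): cores operating on different buckets no longer contend at all.

```c
#include <pthread.h>
#include <stdlib.h>

#define NBUCKETS 16

struct node { int key, value; struct node *next; };

static struct node *buckets[NBUCKETS];
static pthread_mutex_t bucket_lock[NBUCKETS];  /* one lock per bucket */

static void table_init(void) {
    for (int i = 0; i < NBUCKETS; i++)
        pthread_mutex_init(&bucket_lock[i], NULL);
}

static void table_put(int key, int value) {
    int b = (unsigned)key % NBUCKETS;
    pthread_mutex_lock(&bucket_lock[b]);   /* only this bucket locked */
    struct node *n = malloc(sizeof *n);
    n->key = key;
    n->value = value;
    n->next = buckets[b];
    buckets[b] = n;
    pthread_mutex_unlock(&bucket_lock[b]);
}

static int table_get(int key, int *value) {
    int b = (unsigned)key % NBUCKETS;
    int found = 0;
    pthread_mutex_lock(&bucket_lock[b]);
    for (struct node *n = buckets[b]; n; n = n->next)
        if (n->key == key) { *value = n->value; found = 1; break; }
    pthread_mutex_unlock(&bucket_lock[b]);
    return found;
}
```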
&lt;br /&gt;
Through this research on the various techniques listed above, the authors determined that the Linux kernel already incorporates many techniques for improving scalability. They go on to speculate that &amp;quot;perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; This suggests that the work of the Linux community has improved Linux a great deal and is current with modern optimization techniques. The paper can also be read as suggesting that the community may benefit more from changing how applications are programmed than from changing the Linux kernel in order to make scalability improvements. This indicates that what came before was done quite well, considering that the kernel optimizations showed the most improvement when combined with the application improvements.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==============================================================================================================&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are OK, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
All contributions in this paper are the result of the identification and removal or marginalization of bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;===&lt;br /&gt;
====Work in Progress====&lt;br /&gt;
&lt;br /&gt;
=====[[Rovic P.]]=====&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
-Hey guys, if anyone can see this, if you have a chance, expand on the sections from 4.2 to 4.7 to focus on the contribution analysis I&#039;ve written. I&#039;m a bit sleep deprived to proof anything I write. I will have more of a chance to add to it after 8pm today.&lt;br /&gt;
&lt;br /&gt;
=====[[Rannath]]=====&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required a few modest changes to remove bottlenecks. The MIT team speculates that the trend will continue as the number of cores increases. They also state that applications not bottlenecked by the CPU are harder to fix.&lt;br /&gt;
&lt;br /&gt;
Most of the kernel bottlenecks that the applications hit most often can be eliminated with minor changes. Most changes used well-known methodology, with the exception of sloppy counters. This study is limited by its removal of the IO bottleneck, but it does suggest that traditional implementations can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.&lt;br /&gt;
 Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
===Content(Fairness): &#039;&#039;Section 5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 5.3&#039;&#039;====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage IO bottleneck does not affect it at all. Likewise, the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so no advantage is given to either. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ at memcached.org&#039;s wiki suggests running multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [3]. In the end memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 5.4&#039;&#039;====&lt;br /&gt;
Linux has a built-in kernel flaw where network packets are forced to travel through multiple queues before they arrive at the queue from which the application can process them. This imposes significant costs on multi-core systems due to queue locking. This flaw inherently diminishes the performance of Apache on multi-core systems, since multiple threads spread across cores are forced to absorb these mutex (mutual exclusion) costs. For the sake of this experiment, Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. The patched kernel&#039;s implementation of the network stack is also specific to the problem at hand: processing many short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. These tests were also arranged to avoid bottlenecks imposed by network and file storage hardware, meaning that making the proposed modifications to the kernel won&#039;t necessarily produce the same increase in throughput as described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Which is not a problem as the paper specifically states that they are testing what they can improve in spite of hardware limitation.&#039;&#039; - [[Rannath]]&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 5.6&#039;&#039;====&lt;br /&gt;
Since the inherent nature of gmake makes it quite parallel, the testing and updating attempted on gmake resulted in essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, the execution time of gmake relies quite heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns when it comes to the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Given that all tests are more or less fair for the purposes of the benchmarks, they support the hypothesis that Linux can be made to scale, at least to 48 cores. Thus the conclusion is fair iff the rest of the paper is fair.&lt;br /&gt;
&lt;br /&gt;
 Now you just have to fill in how fair the rest of the paper is.&lt;br /&gt;
&lt;br /&gt;
===Style===&lt;br /&gt;
 Style Criteria (feel free to add; I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
Everything seems to be in logical order. I couldn&#039;t find any needless info. Nothing inherently confusing or wrong. Nothing bad on the grammar front either. - Rannath&lt;br /&gt;
&lt;br /&gt;
Some acronyms aren&#039;t explained before they are used, so some people reading the paper may get confused as to what they mean (e.g. Linux TLB). Since this paper is meant to be formal, acronyms should be explained, with some exceptions like OS and IBM. - Daniel B.&lt;br /&gt;
&lt;br /&gt;
Your example has no impact on the paper; it was in the &amp;quot;look here for more info&amp;quot; section. Most people wouldn&#039;t know what a &amp;quot;translation look-aside buffer&amp;quot; is either.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
[3] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;the paper itself doesn&#039;t need to be referenced more than once as this is a critique of the paper...&#039;&#039;&#039;&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
==Deprecated==&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel. On a 48-core system the execution time in the kernel jumps to 82%.&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6508</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6508"/>
		<updated>2010-12-02T20:53:57Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* What hinders scalability: Section 4.1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (as it&#039;s our tutorial and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either. That does not mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*flesh out the whole lot&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* So here are the claimed and unclaimed sections. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Essay Conclusion (also discussion) - Everyone&lt;br /&gt;
** Critic, Style - Everyone&lt;br /&gt;
** References - Everyone&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
==Paper - DONE!!!==&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
The paper: [http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
==Background Concepts - DONE!!!==&lt;br /&gt;
===memcached: &#039;&#039;Section 3.2&#039;&#039;===&lt;br /&gt;
memcached is an in-memory hash table server. One instance of memcached running on many different cores is bottlenecked by an internal lock, which is avoided by the MIT team by running one instance per core. Clients each connect to a single instance of memcached, allowing the server to simulate parallelism without needing to make major changes to the application or kernel. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.[1]&lt;br /&gt;
&lt;br /&gt;
===Apache: &#039;&#039;Section 3.3&#039;&#039;===&lt;br /&gt;
Apache is a web server that has been used in previous Linux scalability studies. In this study, Apache is configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a perfect example of parallel programming). Each process uses one of its threads to accept incoming connections, and the others to process those connections. On a single-core processor, Apache spends 60% of its execution time in the kernel.[1]&lt;br /&gt;
&lt;br /&gt;
===gmake: &#039;&#039;Section 3.5&#039;&#039;===&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake takes a file called a makefile and processes its recipes for the requisite files to determine how and when to remake or recompile code. With the -j (--jobs) flag, gmake can process many of these recipes in parallel. Since gmake creates more processes than there are cores, it can make proper use of multiple cores to process the recipes.[2] Since gmake involves a great deal of reading and writing, the test cases use the in-memory filesystem tmpfs to prevent bottlenecks due to the filesystem or storage hardware, giving them a backdoor around those bottlenecks for testing purposes. gmake&#039;s scalability is limited to a small degree by the serial processes that run at the beginning and end of its execution. gmake spends much of its execution time in the compiler, processing recipes and recompiling code, but still spends 7.6% of its time in system time.[1]&lt;br /&gt;
&lt;br /&gt;
[2] http://www.gnu.org/software/make/manual/make.html&lt;br /&gt;
&lt;br /&gt;
==Research problem - DONE!!!==&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue. There has to be a way for a standard Linux kernel to scale to a 48-core system&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;. The problem with a standard Linux OS is that it is not designed for massive scalability, which will soon prove to be a problem. The issue with scalability is that a core running alone performs much more work than the same core does when working alongside 47 others. Although there are 48 cores dividing the work, each core should still be processing as much work, as fast, as possible.&lt;br /&gt;
&lt;br /&gt;
To fix those scalability issues, it is necessary to focus on three major areas: the Linux kernel, user-level design, and how applications use kernel services. The Linux kernel can be improved by optimizing sharing and by taking advantage of recent improvements to its scalability features. At the user level, applications can be improved to focus more on parallelism, since some programs have not adopted those improvements. The final aspect of improving scalability is how an application uses kernel services to better share resources, so that different parts of the program are not conflicting over the same services. The bottlenecks identified were found easily and took only simple changes to correct or avoid.&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This research builds on a foundation of previous research into scalability in UNIX systems. The major developments, from shared-memory machines&amp;lt;sup&amp;gt;[[#Foot2|2]]&amp;lt;/sup&amp;gt; and wait-free synchronization to fast message passing, created a base set of techniques that can be used to improve scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid locks and atomic instructions that hurt scalability.&amp;lt;sup&amp;gt;[[#Foot3|3]]&amp;lt;/sup&amp;gt; There is an excellent base of existing Linux scalability studies on which this paper can model its testing standards, including research on improving scalability on a 32-core machine.&amp;lt;sup&amp;gt;[[#Foot4|4]]&amp;lt;/sup&amp;gt; In addition, this base of studies can improve the results of these experiments through lessons learned from previous work, and may aid in identifying bottlenecks, which speeds up creating solutions for those problems.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
The research contribution of this paper is a body of analysis focused on techniques and methods for improving scalability, accomplished through application programming alongside kernel programming. The research contributes by evaluating the scalability discrepancies between application programming and kernel programming. Key discoveries show how effectively the kernel handles scaling across CPU cores. In looking at the issue of scalability, it is important to note the factors that hinder it.&lt;br /&gt;
&lt;br /&gt;
===What hinders scalability: &#039;&#039;Section 4.1&#039;&#039;===&lt;br /&gt;
*The percentage of serialization in a program determines how much an application can be sped up. This is Amdahl&#039;s Law.&lt;br /&gt;
** Amdahl&#039;s Law states that a parallel program&#039;s speedup is limited by the inverse of the proportion of the program that cannot be made parallel (e.g. if 25% (0.25) of the program is non-parallel, the limit is a 4x speedup).&lt;br /&gt;
*Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
**Locking of shared data structures: as the number of cores increases, lock wait time increases&lt;br /&gt;
**Writing to shared memory: as the number of cores increases, the execution time of the cache coherence protocol increases&lt;br /&gt;
**Competing for space in a shared hardware cache: as the number of cores increases, the cache miss rate increases&lt;br /&gt;
**Competing for shared hardware resources: as the number of cores increases, time lost waiting for resources increases&lt;br /&gt;
**Not enough tasks for the cores, leading to idle cores&lt;br /&gt;
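The Amdahl&#039;s Law bound mentioned above can be written out explicitly. With s the serial (non-parallelizable) fraction of the program and N the number of cores, the speedup is:

```latex
S(N) = \frac{1}{\,s + \frac{1-s}{N}\,}, \qquad
\lim_{N \to \infty} S(N) = \frac{1}{s}
```

For example, with s = 0.25 (25% non-parallel), no number of cores can push the speedup past 1/0.25 = 4x.&lt;br /&gt;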
&lt;br /&gt;
===Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;===&lt;br /&gt;
Linux&#039;s packet processing technique requires a packet to travel through several queues before it finally becomes available to the application. This technique works well for most general socket applications. In recent kernel releases, Linux takes advantage of multiple hardware queues (when available on the given network interface) or Receive Packet Steering[1] to direct packet flow onto different cores for processing, and can even direct packet flow to the core on which the application is running, using Receive Flow Steering[2], for better performance. Linux also attempts to increase performance using a sampling technique in which it checks every 20th outgoing packet and directs flow based on its hash. This poses a problem for short-lived connections, like those associated with Apache, since there is great potential for packets to be misdirected.&lt;br /&gt;
&lt;br /&gt;
In general this technique performs poorly when there are numerous open connections spread across multiple cores, due to mutex (mutual exclusion) delays and cache misses. In such scenarios it&#039;s better to process each connection, with its associated packets and queues, on one core to avoid those issues. The patched kernel&#039;s implementation proposed in this article uses multiple hardware queues (configured through Receive Packet Steering) to direct all packets from a given connection to the same core. In turn, Apache is modified to accept a connection only if the thread dedicated to processing it is on the same core. If the current core&#039;s queue is found to be empty, it will attempt to obtain work from queues located on other cores. This configuration is ideal for numerous short connections, as all the work for a connection is accomplished quickly on one core, avoiding unnecessary mutex delays associated with packet queues and inter-core cache misses.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1] J. Corbet. Receive Packet Steering, November 2009. http://lwn.net/Articles/362339/.&lt;br /&gt;
&lt;br /&gt;
[2] J. Edge. Receive Flow Steering, April 2010. http://lwn.net/Articles/382428/.&lt;br /&gt;
&lt;br /&gt;
===Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;===&lt;br /&gt;
Bottlenecks were encountered when the applications undergoing testing referenced and updated counters shared across multiple cores. The solution in the paper is to use sloppy counters: each core tracks its own separate count of references, while a central shared counter keeps the overall count consistent. This is ideal because each core updates its count by modifying its per-core counter, usually needing access only to its own local cache, cutting down on waiting for locks or serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to adopt. The main disadvantages of sloppy counters are that de-allocation becomes expensive in situations where objects are freed often, because each per-core count must be reconciled with the central counter, and that the counters use space proportional to the number of cores.&lt;br /&gt;
&lt;br /&gt;
===Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;===&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
===Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;===&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the free list of packet buffers. Each data structure was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was maintained, and on a per-core miss the entry was copied from the central table into the per-core table.&lt;br /&gt;
&lt;br /&gt;
===Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;===&lt;br /&gt;
When variables are laid out so that they share a cache line, different cores can end up requesting the same line for reads and writes at the same time, often enough to significantly impact performance. By moving the often-written variable to another cache line, the bottleneck was removed.&lt;br /&gt;
&lt;br /&gt;
===Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;===&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock. Likewise, a single mutex over a whole data structure can be split into several mutexes, each protecting only a part of it. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
&lt;br /&gt;
Through this research on the various techniques listed above, the authors determined that the Linux kernel already incorporates many techniques for improving scalability. They go on to speculate that &amp;quot;perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; This suggests that the work of the Linux community has improved Linux a great deal and is current with modern optimization techniques. The paper can also be read as suggesting that the community may benefit more from changing how applications are programmed than from changing the Linux kernel in order to make scalability improvements. This indicates that what came before was done quite well, considering that the kernel optimizations showed the most improvement when combined with the application improvements.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==============================================================================================================&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are OK, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
All contributions in this paper are the result of the identification and removal or marginalization of bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;===&lt;br /&gt;
====Work in Progress====&lt;br /&gt;
&lt;br /&gt;
=====[[Rovic P.]]=====&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
-Hey guys, if anyone can see this, if you have a chance, expand on the sections from 4.2 to 4.7 to focus on the contribution analysis I&#039;ve written. I&#039;m a bit sleep deprived to proof anything I write. I will have more of a chance to add to it after 8pm today.&lt;br /&gt;
&lt;br /&gt;
=====[[Rannath]]=====&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required a few modest changes to remove bottlenecks. The MIT team speculates that the trend will continue as the number of cores increases. They also state that applications not bottlenecked by the CPU are harder to fix.&lt;br /&gt;
&lt;br /&gt;
Most of the kernel bottlenecks that the applications hit most often can be eliminated with minor changes. Most changes used well-known methodology, with the exception of sloppy counters. This study is limited by its removal of the IO bottleneck, but it does suggest that traditional implementations can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.&lt;br /&gt;
 Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
===Content(Fairness): &#039;&#039;Section 5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 5.3&#039;&#039;====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage IO bottleneck does not affect it at all. Likewise, the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so no advantage is given to either. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ at memcached.org&#039;s wiki suggests running multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [3]. In the end memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 5.4&#039;&#039;====&lt;br /&gt;
Linux has a built-in kernel flaw where network packets are forced to travel through multiple queues before they arrive at the queue where they can be processed by the application. This imposes significant costs on multi-core systems due to queue locking. This flaw inherently diminishes the performance of Apache on multi-core systems, because multiple threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For this experiment, Apache ran a separate instance on every core, each listening on a different port; this is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. The patched kernel&#039;s implementation of the network stack is also specific to the problem at hand: processing many short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. These tests were also arranged to avoid the bottlenecks imposed by network and file storage hardware, meaning that making the proposed kernel modifications won&#039;t necessarily produce the same increase in throughput described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Which is not a problem as the paper specifically states that they are testing what they can improve in spite of hardware limitation.&#039;&#039; - [[Rannath]]&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 5.6&#039;&#039;====&lt;br /&gt;
Since gmake is inherently quite parallel, the testing and updating attempted on it produced essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, gmake&#039;s execution time depends heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns with the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Given that all tests are more or less fair for the purposes of the benchmarks, they support the hypothesis that Linux can be made to scale, at least to 48 cores. Thus the conclusion is fair iff the rest of the paper is fair.&lt;br /&gt;
&lt;br /&gt;
 Now you just have to fill in how fair the rest of the paper is.&lt;br /&gt;
&lt;br /&gt;
===Style===&lt;br /&gt;
 Style criteria (feel free to add; I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
Everything seems to be in logical order. I couldn&#039;t find any needless info. Nothing inherently confusing or wrong. Nothing bad on the grammar front either. - Rannath&lt;br /&gt;
&lt;br /&gt;
Some acronyms aren&#039;t explained before they are used, so some people reading the paper may get confused as to what they mean (e.g. Linux TLB). Since this paper is meant to be formal, acronyms should be explained, with some exceptions like OS and IBM. - Daniel B.&lt;br /&gt;
&lt;br /&gt;
Your example has no impact on the paper; it was in the &amp;quot;look here for more info&amp;quot; section. Most people wouldn&#039;t know what a &amp;quot;translation look-aside buffer&amp;quot; is either.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
[3] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;the paper itself doesn&#039;t need to be referenced more than once as this is a critique of the paper...&#039;&#039;&#039;&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
==Deprecated==&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its own code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel. On a 48-core system the execution time in the kernel jumps to 82%.&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6507</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6507"/>
		<updated>2010-12-02T20:53:12Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* Rovic P. */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (as it&#039;s our tutorial and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either; that does not mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*flesh out the whole lot&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* So here are the claimed and unclaimed sections. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Essay Conclusion (also discussion) - Everyone&lt;br /&gt;
** Critique, Style - Everyone&lt;br /&gt;
** References - Everyone&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
==Paper - DONE!!!==&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
The paper: [http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
==Background Concepts - DONE!!!==&lt;br /&gt;
===memcached: &#039;&#039;Section 3.2&#039;&#039;===&lt;br /&gt;
memcached is an in-memory hash table server. One instance of memcached running across many cores is bottlenecked by an internal lock, which the MIT team avoided by running one instance per core. Clients each connect to a single instance of memcached, allowing the server to simulate parallelism without major changes to the application or kernel. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.[1]&lt;br /&gt;
&lt;br /&gt;
===Apache: &#039;&#039;Section 3.3&#039;&#039;===&lt;br /&gt;
Apache is a web server that has been used in previous Linux scalability studies. For this study, Apache has been configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a perfect example of parallel programming). Each process uses one of its threads to accept incoming connections, while the others process those connections. On a single-core processor, Apache spends 60% of its execution time in the kernel.[1]&lt;br /&gt;
&lt;br /&gt;
===gmake: &#039;&#039;Section 3.5&#039;&#039;===&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake takes a file called a makefile and processes its recipes for the requisite files to determine how and when to remake or recompile code. With the -j (or --jobs) option, gmake can process many of these recipes in parallel. Since gmake creates more processes than there are cores, it can make proper use of multiple cores to process the recipes.[2] Since gmake involves much reading and writing, the test cases use the in-memory filesystem tmpfs to prevent bottlenecks in the filesystem or storage hardware, giving them a backdoor around those bottlenecks for testing purposes. gmake&#039;s scalability is limited to a small degree by the serial processes that run at the beginning and end of its execution. gmake spends most of its execution time in the compiler, processing the recipes and recompiling code, but still spends 7.6% of its time in system time.[1]&lt;br /&gt;
&lt;br /&gt;
[2] http://www.gnu.org/software/make/manual/make.html&lt;br /&gt;
&lt;br /&gt;
==Research problem - DONE!!!==&lt;br /&gt;
As technology progresses, the number of cores a main processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue. There has to be a way for a standard Linux kernel to scale to a 48-core system&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;. The problem with a standard Linux OS is that it is not designed for massive scalability, which will soon prove to be a problem. The issue with scalability is that a core working alone performs much more work than a single core working alongside 47 others. Although there are 48 cores dividing the work, ideally the information should be processed as fast as possible with each core doing as much work as possible.&lt;br /&gt;
&lt;br /&gt;
To fix these scalability issues, it is necessary to focus on three major areas: the Linux kernel, user-level design, and how applications use kernel services. The Linux kernel can be improved by optimizing sharing and taking advantage of recent improvements to its scalability features. At the user level, applications can be improved to focus more on parallelism, since some programs have not adopted those improved features. The final aspect of improving scalability is how an application uses kernel services to better share resources, so that different parts of the program are not conflicting over the same services. All of the bottlenecks are found easily and take only simple changes to correct or avoid.&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This research builds on a foundation of previous research into scalability in UNIX systems. The major developments, from shared-memory machines&amp;lt;sup&amp;gt;[[#Foot2|2]]&amp;lt;/sup&amp;gt; and wait-free synchronization to fast message passing, created a base set of techniques for improving scalability. These techniques have been incorporated in all major operating systems, including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that hurt scalability.&amp;lt;sup&amp;gt;[[#Foot3|3]]&amp;lt;/sup&amp;gt; There is an excellent base of existing Linux scalability studies on which this research paper can model its testing standards, including research on improving scalability on a 32-core machine.&amp;lt;sup&amp;gt;[[#Foot4|4]]&amp;lt;/sup&amp;gt; In addition, this base of studies can improve the results of these experiments by learning from previous ones, and may aid in identifying bottlenecks, which speeds up creating solutions for those problems.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
This paper&#039;s research contribution is a set of techniques and methods for improving scalability, applied through application programming alongside kernel programming. The research contributes by evaluating the scalability discrepancies between application programming and kernel programming. Key discoveries show how effectively the kernel handles scaling across CPU cores. In looking at the issue of scalability, it is important to note the causes of the factors which hinder it.&lt;br /&gt;
&lt;br /&gt;
===What hinders scalability: &#039;&#039;Section 4.1&#039;&#039;===&lt;br /&gt;
*The percentage of serialization in a program has a lot to do with how much an application can be sped up. This is Amdahl&#039;s Law&lt;br /&gt;
** Amdahl&#039;s Law states that a parallel program can only be sped up by the inverse of the proportion of the program that cannot be made parallel (e.g. a 25% (0.25) non-parallel portion limits the maximum speedup to 1/0.25 = 4x).&lt;br /&gt;
*Types of serializing interactions found in the MOSBENCH apps:	 &lt;br /&gt;
**Locking of shared data structures: as the number of cores increases, lock wait time increases&lt;br /&gt;
**Writing to shared memory: as the number of cores increases, the execution time of the cache coherence protocol increases&lt;br /&gt;
**Competing for space in a shared hardware cache: as the number of cores increases, the cache miss rate increases&lt;br /&gt;
**Competing for shared hardware resources: as the number of cores increases, time lost waiting for resources increases&lt;br /&gt;
**Not enough tasks for the cores leads to idle cores&lt;br /&gt;
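The speedup ceiling described by Amdahl&#039;s Law above can be sketched numerically. This is a minimal Python illustration, not code from the paper; the function name is my own invention:&lt;br /&gt;

```python
# Minimal sketch of Amdahl's Law: with serial fraction s, speedup on
# n cores is 1 / (s + (1 - s) / n), which approaches 1 / s as n grows.

def amdahl_speedup(serial_fraction, cores):
    """Speedup of a program whose serial fraction cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

s = 0.25  # 25% of the program is serial
for n in (1, 2, 4, 8, 48):
    print(n, "cores:", round(amdahl_speedup(s, n), 2))

# The speedup can never exceed the inverse of the serial fraction:
print("limit:", 1.0 / s)  # prints 4.0, i.e. a 4x ceiling
```

Note how even 48 cores get nowhere near the 4x ceiling&#039;s worth of extra cores: the serial 25% dominates long before then.&lt;br /&gt;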
&lt;br /&gt;
It has been shown that simple scaling techniques can be effective in increasing scalability. The authors looked at three different approaches to removing the bottlenecks within the system: first, whether there were issues within the Linux kernel; second, identifying issues with the application design; and third, addressing how the application interacts with Linux kernel services. Through this approach, the authors were able to quickly identify problems such as bottlenecks and apply simple techniques to fix the issues at hand and reap some beneficial results. &lt;br /&gt;
&lt;br /&gt;
===Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;===&lt;br /&gt;
Linux&#039;s packet processing technique requires a packet to travel along several queues before it finally becomes available for the application to use. This technique works well for most general socket applications. In recent kernel releases, Linux takes advantage of multiple hardware queues (when available on the given network interface) or Receive Packet Steering[1] to direct packet flow onto different cores for processing, and can even direct packet flow to the core on which the application is running, using Receive Flow Steering[2], for better performance. Linux also attempts to increase performance using a sampling technique in which it checks every 20th outgoing packet and directs flow based on its hash. This poses a problem for short-lived connections like those associated with Apache, since there is great potential for packets to be misdirected.&lt;br /&gt;
&lt;br /&gt;
In general this technique performs poorly when numerous open connections are spread across multiple cores, due to mutex (mutual exclusion) delays and cache misses. In such scenarios it&#039;s better to process each connection, with its associated packets and queues, on one core to avoid those issues. The patched kernel&#039;s implementation proposed in this article uses multiple hardware queues (via Receive Packet Steering) to direct all packets from a given connection to the same core. In turn, Apache is modified to accept a connection only if the thread dedicated to processing it is on the same core. If the current core&#039;s queue is found to be empty, it will attempt to obtain work from queues located on other cores. This configuration is ideal for numerous short connections, as all the work for them is accomplished quickly on one core, avoiding unnecessary mutex delays associated with packet queues and inter-core cache misses.&lt;br /&gt;
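The steering idea above, hashing a connection so all of its packets land on one core, can be sketched as follows. This is a hypothetical Python illustration, not the kernel&#039;s code; the names and the use of Python&#039;s hash() are my own stand-ins:&lt;br /&gt;

```python
# Sketch: steer every packet of a connection to the same per-core queue
# by hashing the connection 4-tuple, so one core does all the work for
# that connection and never contends on another core's queue.

NUM_CORES = 48

def core_for_connection(src_ip, src_port, dst_ip, dst_port):
    # A real NIC computes a hardware hash over the 4-tuple (e.g. Toeplitz);
    # Python's hash() stands in for it here.
    return hash((src_ip, src_port, dst_ip, dst_port)) % NUM_CORES

# One packet queue per core.
queues = [[] for _ in range(NUM_CORES)]

def enqueue_packet(pkt):
    core = core_for_connection(pkt["src_ip"], pkt["src_port"],
                               pkt["dst_ip"], pkt["dst_port"])
    queues[core].append(pkt)

pkt = {"src_ip": "10.0.0.1", "src_port": 4000,
       "dst_ip": "10.0.0.2", "dst_port": 80}
enqueue_packet(pkt)
enqueue_packet(dict(pkt))  # a second packet of the same connection
# Both packets land on the same per-core queue.
```

The modified Apache then only accepts connections whose queue lives on its own core, which is the locality property the sketch demonstrates.&lt;br /&gt;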
&lt;br /&gt;
&lt;br /&gt;
[1] J. Corbet. Receive Packet Steering, November 2009. http://lwn.net/Articles/362339/.&lt;br /&gt;
&lt;br /&gt;
[2] J. Edge. Receive Flow Steering, April 2010. http://lwn.net/Articles/382428/.&lt;br /&gt;
&lt;br /&gt;
===Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;===&lt;br /&gt;
Bottlenecks were encountered when the applications under test referenced and updated shared counters from multiple cores. The paper&#039;s solution is sloppy counters: each core tracks its own separate count of references, and a central shared counter keeps all counts on track. This is ideal because each core updates its count by modifying its per-core counter, usually only needing access to its own local cache, cutting down on waiting for locks and serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to adopt. The main disadvantages are that de-allocation becomes expensive in situations where objects are freed often (the per-core counts must be reconciled), and that the counters use space proportional to the number of cores.&lt;br /&gt;
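A minimal sketch of the sloppy-counter scheme described above, in Python rather than the paper&#039;s kernel C; the class and method names are my own invention:&lt;br /&gt;

```python
# Sketch of a sloppy counter: each core keeps "spare" references locally
# and only touches the shared central counter when its spares run out,
# so the common path stays in the core's own cache.

class SloppyCounter:
    def __init__(self, num_cores, batch=8):
        self.central = 0              # shared central count
        self.spare = [0] * num_cores  # per-core spare references
        self.batch = batch

    def get_ref(self, core):
        if self.spare[core] == 0:
            # Rare slow path: take a batch of references centrally.
            self.central += self.batch
            self.spare[core] = self.batch
        self.spare[core] -= 1         # common fast path: local only

    def put_ref(self, core):
        self.spare[core] += 1         # release back to local spares

    def true_count(self):
        # Reconciliation sums every per-core value - the expensive step
        # that makes frequent de-allocation costly.
        return self.central - sum(self.spare)

c = SloppyCounter(num_cores=4)
c.get_ref(0)
c.get_ref(1)
c.put_ref(0)
# true_count() is now 1: one reference is still held (by core 1).
```

The central counter deliberately over-counts by the spares, which is why it only gives an exact answer after the reconciliation step.&lt;br /&gt;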
&lt;br /&gt;
===Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;===&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
===Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;===&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each data structure was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was maintained, and any per-core misses were served by copying from the central table to the per-core table.&lt;br /&gt;
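The fallback pattern described above can be sketched with a free list. This is a hypothetical Python illustration of the idea in Section 4.5, not kernel code; the identifiers are illustrative, not real kernel names:&lt;br /&gt;

```python
# Sketch: decentralize one shared free list into per-core lists. A core
# allocates from its own list; on a miss it refills a few entries from
# the central list, so contention on the shared structure is rare.

central_free = list(range(100))          # the original shared structure
per_core_free = [[] for _ in range(48)]  # per-core versions

def alloc(core, refill=4):
    if not per_core_free[core]:
        # Miss: fall back to the central list and cache a few entries.
        for _ in range(refill):
            if central_free:
                per_core_free[core].append(central_free.pop())
    return per_core_free[core].pop()

buf = alloc(7)   # first call on core 7 refills from the central list
buf2 = alloc(7)  # subsequent calls touch only core 7's own list
```

Most allocations then proceed without ever touching the shared list, which is where the scalability win comes from.&lt;br /&gt;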
&lt;br /&gt;
===Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;===&lt;br /&gt;
Variables placed poorly in memory can cause different cores to request the same cache line for reading and writing at the same time, often enough to significantly impact performance. By moving the often-written variable to another cache line, the bottleneck was removed.&lt;br /&gt;
&lt;br /&gt;
===Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;===&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock. Likewise, a mutex can be split from locking a whole data structure to locking only part of it. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
&lt;br /&gt;
Through this research on the various techniques listed above, the authors determined that the Linux kernel itself incorporates many techniques for improving scalability. The authors go on to speculate that &amp;quot;perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; This shows that the work in the Linux community has improved Linux considerably and kept it current with modern optimization techniques. The paper could also be interpreted as suggesting that it may benefit the community more to change how applications are programmed than to change the Linux kernel in order to improve scalability. This indicates that what came before was done quite well, considering that the kernel optimizations showed more improvement when combined with the application improvements.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
All contributions in this paper are the result of the identification and removal or marginalization of bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;===&lt;br /&gt;
====Work in Progress====&lt;br /&gt;
&lt;br /&gt;
=====[[Rovic P.]]=====&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
-Hey guys, if anyone can see this, if you have a chance, expand on the sections from 4.2 to 4.7 to focus on the contribution analysis I&#039;ve written. I&#039;m a bit sleep deprived to proof anything I write. I will have more of a chance to add to it after 8pm today.&lt;br /&gt;
&lt;br /&gt;
=====[[Rannath]]=====&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required a few modest changes to remove bottlenecks. The MIT team speculates that this trend will continue as the number of cores increases. They also state that bottlenecks not caused by the CPU are harder to fix. &lt;br /&gt;
&lt;br /&gt;
We can eliminate most of the kernel bottlenecks that the applications hit most often with minor changes. Most changes used well-known techniques, with the exception of sloppy counters. This study is limited by its removal of the IO bottleneck, but it does suggest that traditional implementations can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.&lt;br /&gt;
 Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
===Content(Fairness): &#039;&#039;Section 5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 5.3&#039;&#039;====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage IO bottleneck does not affect it at all. Likewise, the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so no advantage is given to either. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ at memcached.org&#039;s wiki suggests running multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [3]. In the end memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 5.4&#039;&#039;====&lt;br /&gt;
Linux has a built-in kernel flaw where network packets are forced to travel through multiple queues before they arrive at the queue where they can be processed by the application. This imposes significant costs on multi-core systems due to queue locking. This flaw inherently diminishes the performance of Apache on multi-core systems, because multiple threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For this experiment, Apache ran a separate instance on every core, each listening on a different port; this is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. The patched kernel&#039;s implementation of the network stack is also specific to the problem at hand: processing many short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. These tests were also arranged to avoid the bottlenecks imposed by network and file storage hardware, meaning that making the proposed kernel modifications won&#039;t necessarily produce the same increase in throughput described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Which is not a problem as the paper specifically states that they are testing what they can improve in spite of hardware limitation.&#039;&#039; - [[Rannath]]&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 5.6&#039;&#039;====&lt;br /&gt;
Since gmake is inherently quite parallel, the testing and updating attempted on it produced essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, gmake&#039;s execution time depends heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns with the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Given that all tests are more or less fair for the purposes of the benchmarks, they support the hypothesis that Linux can be made to scale, at least to 48 cores. Thus the conclusion is fair iff the rest of the paper is fair.&lt;br /&gt;
&lt;br /&gt;
 Now you just have to fill in how fair the rest of the paper is.&lt;br /&gt;
&lt;br /&gt;
===Style===&lt;br /&gt;
 Style criteria (feel free to add; I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
Everything seems to be in logical order. I couldn&#039;t find any needless info. Nothing inherently confusing or wrong. Nothing bad on the grammar front either. - Rannath&lt;br /&gt;
&lt;br /&gt;
Some acronyms aren&#039;t explained before they are used, so some people reading the paper may get confused as to what they mean (e.g. Linux TLB). Since this paper is meant to be formal, acronyms should be explained, with some exceptions like OS and IBM. - Daniel B.&lt;br /&gt;
&lt;br /&gt;
Your example has no impact on the paper; it was in the &amp;quot;look here for more info&amp;quot; section. Most people wouldn&#039;t know what a &amp;quot;translation look-aside buffer&amp;quot; is either.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
[3] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;the paper itself doesn&#039;t need to be referenced more than once as this is a critique of the paper...&#039;&#039;&#039;&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
==Deprecated==&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel: the server forks a new process for each connection, and forks twice more to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its own code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system the execution time in the kernel jumps to 82%.&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6506</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6506"/>
		<updated>2010-12-02T20:49:33Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* Avoiding unnecessary locking: Section 4.7 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (as it&#039;s our tutorial and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either; that does not mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*flesh out the whole lot&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* So here are the claimed and unclaimed sections. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Essay Conclusion (also discussion) - Everyone&lt;br /&gt;
** Critic, Style - Everyone&lt;br /&gt;
** References - Everyone&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
==Paper - DONE!!!==&lt;br /&gt;
This paper was authored by Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
The paper: [http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
==Background Concepts - DONE!!!==&lt;br /&gt;
===memcached: &#039;&#039;Section 3.2&#039;&#039;===&lt;br /&gt;
memcached is an in-memory hash table server. One instance of memcached running on many different cores is bottlenecked by an internal lock, which is avoided by the MIT team by running one instance per core. Clients each connect to a single instance of memcached, allowing the server to simulate parallelism without needing to make major changes to the application or kernel. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.[1]&lt;br /&gt;
&lt;br /&gt;
===Apache: &#039;&#039;Section 3.3&#039;&#039;===&lt;br /&gt;
Apache is a web server that has been used in previous Linux scalability studies. In this study, Apache is configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread accepts incoming connections and the others process them. On a single-core processor, Apache spends 60% of its execution time in the kernel.[1]&lt;br /&gt;
&lt;br /&gt;
===gmake: &#039;&#039;Section 3.5&#039;&#039;===&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake reads a file called a makefile and processes its recipes for the requisite files to determine how and when to remake or recompile code. With the -j (or --jobs) option, gmake can process many of these recipes in parallel, and since it creates more processes than there are cores, it can make proper use of multiple cores.[2] Because gmake involves a great deal of reading and writing, the test cases use the in-memory filesystem tmpfs to prevent bottlenecks due to the filesystem or storage hardware. gmake&#039;s scalability is limited to a small degree by the serial processes that run at the beginning and end of its execution. It spends much of its execution time in the compiler, processing recipes and recompiling code, but still spends 7.6% of its time in system time.[1]&lt;br /&gt;
&lt;br /&gt;
[2] http://www.gnu.org/software/make/manual/make.html&lt;br /&gt;
&lt;br /&gt;
==Research problem - DONE!!!==&lt;br /&gt;
As technology progresses, the number of cores a main processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue, so the question is whether a standard Linux kernel can be made to scale on a 48-core system&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;. The problem is that a standard Linux OS is not designed for massive scalability, which will soon prove to be a problem. The issue with scalability is that a core working alone performs much more work than a single core working alongside 47 others. Although it may seem acceptable for each core to do less because there are 48 cores dividing the work, ideally the information should be processed as fast as possible, with each core doing as much work as possible.&lt;br /&gt;
&lt;br /&gt;
To fix these scalability issues, it is necessary to focus on three major areas: the Linux kernel, user-level design, and how applications use kernel services. The Linux kernel can be improved by optimizing sharing and by taking advantage of recent improvements to its scalability features. At the user level, applications can be improved to focus more on parallelism, since some programs have not implemented those improved features. The final aspect of improving scalability is how an application uses kernel services to better share resources, so that different parts of the program do not conflict over the same services. All of the bottlenecks found were easy to identify and required only simple changes to correct or avoid.&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This research builds on a foundation of previous work on scalability in UNIX systems. Major developments, from shared-memory machines&amp;lt;sup&amp;gt;[[#Foot2|2]]&amp;lt;/sup&amp;gt; and wait-free synchronization to fast message passing, produced a base set of techniques for improving scalability, and these techniques have been incorporated into all major operating systems, including Linux, Mac OS X, and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that hurt scalability.&amp;lt;sup&amp;gt;[[#Foot3|3]]&amp;lt;/sup&amp;gt; There is also an excellent base of existing Linux scalability studies on which this paper can model its testing standards, including research on improving scalability on a 32-core machine.&amp;lt;sup&amp;gt;[[#Foot4|4]]&amp;lt;/sup&amp;gt; This base of studies can also improve the results of these experiments by learning from previous results, and may aid in identifying bottlenecks, which speeds up creating solutions for those problems.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
The research contribution of this paper is a set of techniques and methods for scalability improvements, accomplished through application programming alongside kernel programming. The research contributes by evaluating the scalability discrepancies between application programming and kernel programming. Key discoveries show the effectiveness of the kernel in handling scaling amongst CPU cores. In looking at the issue of scalability, it is important to note the causes of the factors that hinder it.&lt;br /&gt;
&lt;br /&gt;
===What hinders scalability: &#039;&#039;Section 4.1&#039;&#039;===&lt;br /&gt;
*The percentage of serialization in a program has a lot to do with how much an application can be sped up. This is Amdahl&#039;s Law.&lt;br /&gt;
** Amdahl&#039;s Law states that the maximum speedup of a parallel program is limited by the inverse of the proportion of the program that cannot be made parallel (e.g. 25% (0.25) non-parallel --&amp;gt; limit of 4x speedup).&lt;br /&gt;
*Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
**Locking of a shared data structure: as the number of cores increases, lock wait time increases&lt;br /&gt;
**Writing to shared memory: as the number of cores increases, the execution time of the cache coherence protocol increases&lt;br /&gt;
**Competing for space in a shared hardware cache: as the number of cores increases, the cache miss rate increases&lt;br /&gt;
**Competing for shared hardware resources: as the number of cores increases, time is lost waiting for resources&lt;br /&gt;
**Not enough tasks for the cores: cores sit idle&lt;br /&gt;
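The speedup limit from Amdahl&#039;s Law above can be sketched in a few lines of Python (an illustrative helper of our own, not code from the paper):&lt;br /&gt;

```python
def amdahl_speedup(serial_fraction, cores):
    # Upper bound on speedup for a program in which serial_fraction
    # of the work cannot be parallelized, run on the given core count.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# A program that is 25% serial can never exceed a 4x speedup,
# no matter how many cores are added.
print(round(amdahl_speedup(0.25, 48), 2))   # prints 3.76, approaching the 4.0 limit
```

Note that even on the paper&#039;s 48-core machine the bound is already close to its asymptote, which is why reducing the serial fraction matters more than adding cores.&lt;br /&gt;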
&lt;br /&gt;
It has been shown that simple techniques can be effective in increasing scalability. The authors looked at three different approaches to removing the bottlenecks within the system: first, seeing whether there were issues within the Linux kernel; second, identifying issues with the application design; and third, addressing how the application interacts with the Linux kernel services. Through this approach, the authors were able to quickly identify problems such as bottlenecks and apply simple techniques to fix them, reaping some beneficial results.&lt;br /&gt;
&lt;br /&gt;
===Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;===&lt;br /&gt;
Linux&#039;s packet processing technique requires packets to travel through several queues before they finally become available for the application to use. This technique works well for most general socket applications. In recent kernel releases, Linux takes advantage of multiple hardware queues (when available on the given network interface) or Receive Packet Steering[1] to direct packet flow onto different cores for processing, or even goes as far as directing packet flow to the core on which the application is running, using Receive Flow Steering[2], for even better performance. Linux also attempts to increase performance using a sampling technique where it checks every 20th outgoing packet and directs flow based on its hash. This poses a problem for short-lived connections like those associated with Apache, since there is great potential for packets to be misdirected.&lt;br /&gt;
&lt;br /&gt;
In general this technique performs poorly when there are numerous open connections spread across multiple cores, due to mutex (mutual exclusion) delays and cache misses. In such scenarios it&#039;s better to process all connections, with their associated packets and queues, on one core to avoid said issues. The patched kernel proposed in this article uses multiple hardware queues (via Receive Packet Steering) to direct all packets from a given connection to the same core. In turn, Apache is modified to only accept a connection if the thread dedicated to processing it is on the same core. If the current core&#039;s queue is found to be empty, it will attempt to obtain work from queues located on different cores. This configuration is ideal for numerous short connections, as all the work for them is accomplished quickly on one core, avoiding unnecessary mutex delays associated with packet queues and inter-core cache misses.&lt;br /&gt;
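A minimal Python sketch of the connection-to-core steering idea described above (hypothetical helper name; the real mechanism lives in the kernel&#039;s network stack and hardware queues):&lt;br /&gt;

```python
def core_for_connection(src_ip, src_port, dst_port, ncores):
    # Hash the connection tuple so that every packet belonging to
    # one connection is steered to the same core, avoiding cross-core
    # queue locking and inter-core cache misses.
    return hash((src_ip, src_port, dst_port)) % ncores

# All packets of a single connection map to the same core:
a = core_for_connection("10.0.0.7", 40123, 80, 48)
b = core_for_connection("10.0.0.7", 40123, 80, 48)
print(a == b)   # prints True
```

In the paper&#039;s setup the steering is done by the hardware queues and the modified kernel, not by the application; the sketch only shows why hashing keeps one flow&#039;s work on one core.&lt;br /&gt;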
&lt;br /&gt;
&lt;br /&gt;
[1] J. Corbet. Receive Packet Steering, November 2009. http://lwn.net/Articles/362339/.&lt;br /&gt;
&lt;br /&gt;
[2] J. Edge. Receive Flow Steering, April 2010. http://lwn.net/Articles/382428/.&lt;br /&gt;
&lt;br /&gt;
===Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;===&lt;br /&gt;
Bottlenecks were encountered when the applications under test were referencing and updating counters shared between multiple cores. The solution in the paper is sloppy counters: each core tracks its own separate count of references, and a central shared counter keeps all the counts on track. This is ideal because each core updates its count by modifying its per-core counter, usually only needing access to its own local cache, cutting down on waiting for locks and serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to implement. The main disadvantages of sloppy counters are that de-allocation becomes expensive in situations where objects are de-allocated often, and that the counters use space proportional to the number of cores.&lt;br /&gt;
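A toy Python sketch of the sloppy-counter idea (our own simplification for illustration; the kernel version tracks references with per-CPU variables and handles de-allocation, which this sketch omits):&lt;br /&gt;

```python
class SloppyCounter:
    def __init__(self, ncores, slop=8):
        self.central = 0            # shared counter, touched rarely
        self.local = [0] * ncores   # one cheap per-core counter each
        self.slop = slop            # how much a core may hold locally

    def incr(self, core):
        # Fast path: update only this core-local counter.
        self.local[core] += 1
        if self.local[core] >= self.slop:
            # Rare slow path: fold the local count into the
            # central counter, then reset the local one.
            self.central += self.local[core]
            self.local[core] = 0

    def value(self):
        # True total = central count plus all per-core remainders.
        return self.central + sum(self.local)

c = SloppyCounter(ncores=4)
for i in range(100):
    c.incr(i % 4)
print(c.value())   # prints 100
```

The point is that the common case touches only core-local state (and hence the local cache), matching the paper&#039;s description, at the cost of space proportional to the number of cores.&lt;br /&gt;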
&lt;br /&gt;
===Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;===&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
===Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;===&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each data structure was decentralized into per-core versions of itself. In the case of vfsmount, the central data structure was maintained, and any per-core misses were filled from the central table into the per-core table.&lt;br /&gt;
&lt;br /&gt;
===Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;===&lt;br /&gt;
Variables misplaced on the same cache line cause different cores to request that line for reading and writing at the same time, often enough to significantly impact performance. By moving the often-written variable to another cache line, the bottleneck was removed.&lt;br /&gt;
&lt;br /&gt;
===Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;===&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock. Likewise, mutexes can be split from locking the whole data structure to locking only a part of it. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
&lt;br /&gt;
Through this research on the various techniques listed above, the authors determined that the Linux kernel itself already incorporates many techniques for improving scalability. They go on to speculate that &amp;quot;perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; This shows that the work in the Linux community has improved Linux a great deal and is current with modern optimization techniques. It could also be inferred from the paper that it may benefit the community more to change how applications are programmed, rather than to change the Linux kernel, in order to make scalability improvements. What has come before is done quite well, considering that the kernel optimizations showed more improvement when combined with the application improvements.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==============================================================================================================&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
All contributions in this paper are the result of the identification and removal or marginalization of bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;===&lt;br /&gt;
====Work in Progress====&lt;br /&gt;
&lt;br /&gt;
=====[[Rovic P.]]=====&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
This research contributes by evaluating the scalability discrepancies of applications programming and kernel programming. Key discoveries in this research show the effectiveness of the kernel in handling scaling amongst CPU cores. This has also shown that scaling in application programming should be more the focus. It has been shown that simple scaling techniques (list techniques) such as programming parallelism (look up more stuff to back this up and quotes). (Sloppy counter effectiveness, possible positive contributions, what has been used (internet search), what hasn’t been used.) Read conclusion, 2nd paragraph.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;One reason the required changes are modest is that stock Linux already incorporates many modifications to improve scalability. More speculatively, perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====[[Rannath]]=====&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required a few modest changes to remove bottlenecks. The MIT team speculates that this trend will continue as the number of cores increases. They also state that things not bottlenecked by the CPU are harder to fix.&lt;br /&gt;
&lt;br /&gt;
We can eliminate most of the kernel bottlenecks that the applications hit most often with minor changes. Most changes were well-known methodology, with the exception of sloppy counters. This study is limited by the removal of the IO bottleneck, but it does suggest that traditional implementations can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.&lt;br /&gt;
 Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
===Content(Fairness): &#039;&#039;Section 5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 5.3&#039;&#039;====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage IO bottleneck does not affect it at all. Likewise, the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so no advantage is given to either. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ at memcached.org&#039;s wiki suggests running multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [3]. In the end, memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 5.4&#039;&#039;====&lt;br /&gt;
Linux has a built-in kernel flaw whereby network packets are forced to travel through multiple queues before they arrive at the queue where they can be processed by the application. This imposes significant costs on multi-core systems due to queue locking. This flaw inherently diminishes the performance of Apache on multi-core systems, because multiple threads spread across cores are forced to absorb these mutex (mutual exclusion) costs. For this experiment, Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to implement better parallel execution on a traditional kernel. The patched kernel&#039;s implementation of the network stack is also specific to the problem at hand, namely processing multiple short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. These tests were also rigged to avoid bottlenecks imposed by network and file storage hardware, meaning that making the proposed kernel modifications won&#039;t necessarily produce the same increase in performance as described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Which is not a problem, as the paper specifically states that they are testing what they can improve in spite of hardware limitations.&#039;&#039; - [[Rannath]]&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 5.6&#039;&#039;====&lt;br /&gt;
Since the inherent nature of gmake makes it quite parallel, the testing and updating attempted on gmake produced essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, the execution time of gmake relies quite heavily on the compiler that is used with it, so depending on which compiler is chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns with the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Given that all tests are more or less fair for the purposes of the benchmarks, they support the hypothesis that Linux can be made to scale, at least to 48 cores. Thus the conclusion is fair iff the rest of the paper is fair.&lt;br /&gt;
&lt;br /&gt;
 Now you just have to fill in how fair the rest of the paper is.&lt;br /&gt;
&lt;br /&gt;
===Style===&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
Everything seems to be in logical order. I couldn&#039;t find any needless info. Nothing inherently confusing or wrong. Nothing bad on the grammar front either. - Rannath&lt;br /&gt;
&lt;br /&gt;
Some acronyms aren&#039;t explained before they are used, so some people reading the paper may get confused as to what they mean (e.g. Linux TLB). Since this paper is meant to be formal, acronyms should be explained, with some exceptions like OS and IBM. - Daniel B.&lt;br /&gt;
&lt;br /&gt;
Your example has no impact on the paper; it was in the &amp;quot;look here for more info&amp;quot; section. Most people wouldn&#039;t know what a &amp;quot;translation look-aside buffer&amp;quot; is either.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
[3] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;the paper itself doesn&#039;t need to be referenced more than once as this is a critique of the paper...&#039;&#039;&#039;&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
==Deprecated==&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel: the server forks a new process for each connection, and forks twice more to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its own code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system the execution time in the kernel jumps to 82%.&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6505</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6505"/>
		<updated>2010-12-02T20:47:29Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* Contribution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (as it&#039;s our tutorial and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either; that does not mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*flesh out the whole lot&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* So here are the claimed and unclaimed sections. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Essay Conclusion (also discussion) - Everyone&lt;br /&gt;
** Critique, Style - Everyone&lt;br /&gt;
** References - Everyone&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
==Paper - DONE!!!==&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
The paper: [http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
==Background Concepts - DONE!!!==&lt;br /&gt;
===memcached: &#039;&#039;Section 3.2&#039;&#039;===&lt;br /&gt;
memcached is an in-memory hash table server. One instance of memcached running across many cores is bottlenecked by an internal lock, which the MIT team avoids by running one instance per core. Clients each connect to a single instance of memcached, allowing the server to simulate parallelism without needing major changes to the application or kernel. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.[1]&lt;br /&gt;
&lt;br /&gt;
===Apache: &#039;&#039;Section 3.3&#039;&#039;===&lt;br /&gt;
Apache is a web server that has been used in previous Linux scalability studies. In this study, Apache is configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): each process uses one of its threads to accept incoming connections, while the others process those connections. On a single-core processor, Apache spends 60% of its execution time in the kernel.[1]&lt;br /&gt;
&lt;br /&gt;
===gmake: &#039;&#039;Section 3.5&#039;&#039;===&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community, used in this paper to build the Linux kernel. gmake reads a file called a makefile and processes its recipes for the requisite files to determine how and when to remake or recompile code. With the -j (or --jobs) flag, gmake can process many of these recipes in parallel. Since gmake creates more processes than there are cores, it can make proper use of multiple cores to process the recipes.[2] Because gmake involves a great deal of reading and writing, the test cases use an in-memory filesystem, tmpfs, to sidestep bottlenecks in the filesystem or storage hardware for testing purposes. gmake&#039;s scalability is limited to a small degree by the serial processes that run at the beginning and end of its execution. gmake spends much of its execution time in the compiler, processing the recipes and recompiling code, but still spends 7.6% of its time in system time.[1]&lt;br /&gt;
&lt;br /&gt;
[2] http://www.gnu.org/software/make/manual/make.html&lt;br /&gt;
&lt;br /&gt;
==Research problem - DONE!!!==&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue. There has to be a way for a stock Linux kernel and its user-level applications to scale on a 48-core system&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;. The problem is that a standard Linux OS is not designed for massive scalability, which will soon prove to be a problem. The symptom of poor scalability is that a core running alone performs much more work than the same core does when working alongside 47 others. Although traditional logic says this is acceptable because there are 48 cores dividing the work, ideally each core should keep doing as much work as possible so that the information is processed as fast as possible.&lt;br /&gt;
&lt;br /&gt;
To fix these scalability issues, it is necessary to focus on three major areas: the Linux kernel, user-level design, and how applications use kernel services. The Linux kernel can be improved by optimizing sharing and by taking advantage of recent improvements to its scalability features. At the user level, applications can be improved to focus more on parallelism, since some programs have not adopted those improvements. The final aspect of improving scalability is how an application uses kernel services: resources should be shared so that different parts of the program are not contending for the same services. All of the bottlenecks are easily found and actually take only simple changes to correct or avoid.&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This research builds on a foundation of previous work on scalability in UNIX systems. Major developments, from shared-memory machines&amp;lt;sup&amp;gt;[[#Foot2|2]]&amp;lt;/sup&amp;gt; and wait-free synchronization to fast message passing, ended up creating a base set of techniques that can be used to improve scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that affect scalability.&amp;lt;sup&amp;gt;[[#Foot3|3]]&amp;lt;/sup&amp;gt; There is an excellent base of already-written Linux scalability studies on which this research paper can model its testing standards, including research on improving scalability on a 32-core machine.&amp;lt;sup&amp;gt;[[#Foot4|4]]&amp;lt;/sup&amp;gt; In addition, this base of studies can be used to improve the results of these experiments by learning from previous results, and may also aid in identifying bottlenecks, which speeds up the creation of solutions for those problems.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
The research contribution of this paper is a set of techniques and methods for scalability improvements, applied through application programming alongside kernel programming. The research contributes by evaluating the scalability discrepancies between application programming and kernel programming. Key discoveries in this research show the effectiveness of the kernel in handling scaling across CPU cores. In looking at the issue of scalability, it is important to note the factors that hinder it.&lt;br /&gt;
&lt;br /&gt;
===What hinders scalability: &#039;&#039;Section 4.1&#039;&#039;===&lt;br /&gt;
*The fraction of a program that must execute serially has a lot to do with how much the program can be sped up. This is Amdahl&#039;s Law.&lt;br /&gt;
** Amdahl&#039;s Law states that a parallel program&#039;s speedup is limited by the inverse of the proportion of the program that cannot be made parallel (e.g. 25% (0.25) non-parallel --&amp;gt; limit of 4x speedup, no matter how many cores are added).&lt;br /&gt;
*Types of serializing interactions found in the MOSBENCH apps:	 &lt;br /&gt;
**Locking of shared data structure as the number of cores increase leads to an increase in lock wait time	 &lt;br /&gt;
**Writing to shared memory as the number of cores increase leads to an increase in the execution time of the cache coherence protocol	 &lt;br /&gt;
**Competing for space in shared hardware cache as the number of cores increase leads to an increase in cache miss rate	 &lt;br /&gt;
**Competing for shared hardware resources as the number of cores increase leads to time lost waiting for resources	 &lt;br /&gt;
**Not enough tasks for cores leads to idle cores&lt;br /&gt;
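The Amdahl&#039;s Law bound above can be checked with a small calculation (a toy Python sketch; the function name and the 25% serial fraction are illustrative, not from the paper):&lt;br /&gt;

```python
def amdahl_speedup(serial_fraction, cores):
    """Maximum speedup under Amdahl's Law: 1 / (s + (1 - s) / N),
    where s is the serial fraction and N the number of cores."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With a 25% serial fraction, even 48 cores fall far short of 48x:
print(round(amdahl_speedup(0.25, 48), 2))  # 3.76
# The limit as cores grow without bound is 1 / 0.25 = 4x.
```

Even with every bottleneck in the list above removed, this serial-fraction ceiling still applies.&lt;br /&gt;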
&lt;br /&gt;
It has been shown that simple scaling techniques can be effective in increasing scalability. The authors looked at three different approaches to removing the bottlenecks within the system: the first was to look for issues within the Linux kernel, the second was to identify issues with the application design, and the third was to address how the application interacts with Linux kernel services. Through this approach, the authors were able to quickly identify problems such as bottlenecks and apply simple techniques to fix the issues at hand and reap some beneficial results. &lt;br /&gt;
&lt;br /&gt;
===Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;===&lt;br /&gt;
Linux&#039;s packet processing technique requires packets to travel through several queues before they finally become available to the application. This technique works well for most general socket applications. In recent kernel releases, Linux takes advantage of multiple hardware queues (when available on the given network interface) or Receive Packet Steering[1] to direct packet flow onto different cores for processing, and can even go as far as directing packet flow to the core on which the application is running, using Receive Flow Steering[2], for even better performance. Linux also attempts to increase performance using a sampling technique where it checks every 20th outgoing packet and directs flow based on its hash. This poses a problem for short-lived connections, like those associated with Apache, since there is great potential for packets to be misdirected.&lt;br /&gt;
&lt;br /&gt;
In general, this technique performs poorly when there are numerous open connections spread across multiple cores, due to mutex (mutual exclusion) delays and cache misses. In such scenarios it&#039;s better to process all connections, with their associated packets and queues, on one core to avoid these issues. The patched kernel&#039;s implementation proposed in this article uses multiple hardware queues (which can be accomplished through Receive Packet Steering) to direct all packets from a given connection to the same core. In turn, Apache is modified to only accept a connection if the thread dedicated to processing it is on the same core. If the current core&#039;s queue is found to be empty, it will attempt to obtain work from queues located on other cores. This configuration is ideal for numerous short connections, as all the work for them is accomplished quickly on one core, avoiding unnecessary mutex delays associated with packet queues and inter-core cache misses.&lt;br /&gt;
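The connection-affinity idea can be sketched as follows (a toy Python model with hypothetical names; real steering happens in the NIC and kernel, not in application code):&lt;br /&gt;

```python
from collections import defaultdict

NUM_CORES = 4

def steer(packet, queues):
    """Hash the 4-tuple identifying the connection, so every packet
    of a given connection lands on the same per-core queue."""
    conn = (packet["src"], packet["sport"], packet["dst"], packet["dport"])
    core = hash(conn) % NUM_CORES  # same connection -> same core
    queues[core].append(packet)

queues = defaultdict(list)
for seq in range(3):  # three packets of one connection
    steer({"src": "10.0.0.1", "sport": 5000,
           "dst": "10.0.0.2", "dport": 80, "seq": seq}, queues)
# All three packets end up on a single core's queue.
```

Because a connection&#039;s packets never migrate between cores, the queue locks and cache lines for that connection stay local to one core.&lt;br /&gt;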
&lt;br /&gt;
&lt;br /&gt;
[1] J. Corbet. Receive Packet Steering, November 2009. http://lwn.net/Articles/362339/.&lt;br /&gt;
&lt;br /&gt;
[2] J. Edge. Receive Flow Steering, April 2010. http://lwn.net/Articles/382428/.&lt;br /&gt;
&lt;br /&gt;
===Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;===&lt;br /&gt;
Bottlenecks were encountered when the applications undergoing testing were referencing and updating shared counters across multiple cores. The solution in the paper is to use sloppy counters, in which each core tracks its own separate count of references while a central shared counter keeps all the counts on track. This is ideal because each core updates its count by modifying its per-core counter, usually only needing access to its own local cache, cutting down on waiting for locks or serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to adopt. The main disadvantages of sloppy counters are that they perform poorly in situations where object de-allocation occurs often, because the de-allocation itself is an expensive operation, and that the counters use up space proportional to the number of cores.&lt;br /&gt;
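A minimal sketch of the sloppy-counter idea (hypothetical class and threshold; the paper&#039;s version operates on per-core counters inside the kernel, not on Python objects):&lt;br /&gt;

```python
import threading

class SloppyCounter:
    """Each thread keeps a local count; the shared total is only
    touched when a local count reaches a spill threshold."""
    def __init__(self, threshold=64):
        self.threshold = threshold
        self.shared = 0                 # central counter (needs the lock)
        self.lock = threading.Lock()
        self.local = threading.local()  # per-thread "per-core" count

    def incr(self):
        n = getattr(self.local, "n", 0) + 1
        if n >= self.threshold:         # spill to the shared counter
            with self.lock:
                self.shared += n
            n = 0
        self.local.n = n

    def flush_and_read(self):
        # Simplified: flushes only the calling thread's local count.
        with self.lock:
            self.shared += getattr(self.local, "n", 0)
            self.local.n = 0
            return self.shared
```

Most increments touch only thread-local state; the lock is taken once per threshold-many increments instead of on every update, which is the contention reduction the paper is after.&lt;br /&gt;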
&lt;br /&gt;
===Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;===&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
===Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;===&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each data structure was decentralized into per-core versions of itself. In the case of the vfsmount table, the central data structure was maintained, and on a per-core miss the entry was copied from the central table into the per-core table.&lt;br /&gt;
&lt;br /&gt;
===Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;===&lt;br /&gt;
Variables that happen to share a cache line can cause different cores to request the same line for reading and writing at the same time, often enough to significantly impact performance (false sharing). By moving the often-written variable to another cache line, the bottleneck was removed.&lt;br /&gt;
&lt;br /&gt;
===Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;===&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock. Likewise, a mutex over a whole data structure can be split into finer-grained locks over parts of it. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
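Lock splitting can be illustrated with a toy sharded table (hypothetical names; kernels split locks over hash-table buckets in a similar spirit):&lt;br /&gt;

```python
import threading

class ShardedDict:
    """Instead of one lock over the whole table, each shard gets its
    own lock, so threads touching different shards never contend."""
    def __init__(self, nshards=16):
        self.shards = [{} for _ in range(nshards)]
        self.locks = [threading.Lock() for _ in range(nshards)]

    def _shard(self, key):
        return hash(key) % len(self.shards)

    def put(self, key, value):
        i = self._shard(key)
        with self.locks[i]:  # lock only the one shard this key maps to
            self.shards[i][key] = value

    def get(self, key, default=None):
        i = self._shard(key)
        with self.locks[i]:
            return self.shards[i].get(key, default)
```

Two threads working on keys in different shards proceed in parallel, where a single table-wide lock would have serialized them.&lt;br /&gt;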
&lt;br /&gt;
Through this research on the various techniques listed above, the authors determined that the Linux kernel itself already incorporates many techniques to improve scalability. The authors go on to speculate that &amp;quot;perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; This tends to show that the work in the Linux community has improved Linux a great deal and is current with modern optimization techniques. It could also be interpreted from the paper that it may be more to the community&#039;s benefit to change how applications are programmed, rather than to make changes to the Linux kernel, in order to improve scalability. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==============================================================================================================&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critic of the paper.&lt;br /&gt;
&lt;br /&gt;
All contributions in this paper are the result of the identification and removal or marginalization of bottlenecks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;===&lt;br /&gt;
====Work in Progress====&lt;br /&gt;
&lt;br /&gt;
=====[[Rovic P.]]=====&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
This research contributes by evaluating the scalability discrepancies of applications programming and kernel programming. Key discoveries in this research show the effectiveness of the kernel in handling scaling amongst CPU cores. This has also shown that scaling in application programming should be more the focus. It has been shown that simple scaling techniques (list techniques) such as programming parallelism (look up more stuff to back this up and quotes). (Sloppy counter effectiveness, possible positive contributions, what has been used (internet search), what hasn’t been used.) Read conclusion, 2nd paragraph.&lt;br /&gt;
&lt;br /&gt;
One reason the required changes are modest is that stock Linux already incorporates many modifications to improve scalability. More speculatively, perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====[[Rannath]]=====&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required a few modest changes to remove bottlenecks. The MIT team speculates that this trend will continue as the number of cores increases. They also state that bottlenecks not caused by the CPU are harder to fix. &lt;br /&gt;
&lt;br /&gt;
Most of the kernel bottlenecks that the applications hit most often can be eliminated with minor changes. Most changes used well-known methodology, with the exception of sloppy counters. This study is limited by its removal of the I/O bottleneck, but it does suggest that traditional implementations can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.&lt;br /&gt;
 Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of the content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
===Content(Fairness): &#039;&#039;Section 5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 5.3&#039;&#039;====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage I/O bottleneck does not affect it at all. Likewise, the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so no advantage is given to either. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ at memcached.org&#039;s wiki suggests using multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [3]. In the end, memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 5.4&#039;&#039;====&lt;br /&gt;
Linux has a built-in kernel flaw where network packets are forced to travel through multiple queues before they arrive at the queue where they can be processed by the application. This imposes significant costs on multi-core systems due to queue locking. This flaw inherently diminishes the performance of Apache on multi-core systems, since multiple threads spread across cores are forced to absorb these mutex (mutual exclusion) costs. For the sake of this experiment, Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to implement better parallel execution on a traditional kernel. The patched kernel&#039;s implementation of the network stack is also specific to the problem at hand, which is processing multiple short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. These tests were also arranged to avoid bottlenecks imposed by the network and file storage hardware, meaning that making the proposed modifications to the kernel won&#039;t necessarily produce the same increase in throughput as described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Which is not a problem as the paper specifically states that they are testing what they can improve in spite of hardware limitation.&#039;&#039; - [[Rannath]]&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 5.6&#039;&#039;====&lt;br /&gt;
Since the inherent nature of gmake makes it quite parallel, the testing and updating attempted on gmake resulted in essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level, because of the changes made to the system&#039;s caching. As stated in the paper, the execution time of gmake relies quite heavily on the compiler it is used with, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns when it comes to the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Given that all the tests are more or less fair for the purposes of the benchmarks, they support the hypothesis that Linux can be made to scale, at least to 48 cores. Thus the conclusion is fair iff the rest of the paper is fair.&lt;br /&gt;
&lt;br /&gt;
 Now you just have to fill in how fair the rest of the paper is.&lt;br /&gt;
&lt;br /&gt;
===Style===&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
Everything seems to be in logical order. I couldn&#039;t find any needless info. Nothing inherently confusing or wrong. Nothing bad on the grammar front either. - Rannath&lt;br /&gt;
&lt;br /&gt;
Some acronyms aren&#039;t explained before they are used, so some people reading the paper may get confused as to what they mean (e.g. Linux TLB). Since this paper is meant to be formal, acronyms should be explained, with some exceptions like OS and IBM. - Daniel B.&lt;br /&gt;
&lt;br /&gt;
Your example has no impact on the paper; it was in the &amp;quot;look here for more info&amp;quot; section. Most people wouldn&#039;t know what a &amp;quot;translation look-aside buffer&amp;quot; is either.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
[3] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;the paper itself doesn&#039;t need to be referenced more than once as this is a critique of the paper...&#039;&#039;&#039;&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
==Deprecated==&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel: the server forks a new process for each connection, and forks twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly as cores are added. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system, execution time in the kernel jumps to 82%.&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6503</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6503"/>
		<updated>2010-12-02T20:46:51Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* Contribution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (it&#039;s our tutorial slot, so we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to the Wireless Lab next to the CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either. That does not mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*flesh out the whole lot&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* So here are the claimed and unclaimed sections. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Essay Conclusion (also discussion) - Everyone&lt;br /&gt;
** Critique, Style - Everyone&lt;br /&gt;
** References - Everyone&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
==Paper - DONE!!!==&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
The paper: [http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
==Background Concepts - DONE!!!==&lt;br /&gt;
===memcached: &#039;&#039;Section 3.2&#039;&#039;===&lt;br /&gt;
memcached is an in-memory hash table server. One instance of memcached running across many cores is bottlenecked by an internal lock, which the MIT team avoids by running one instance per core. Clients each connect to a single instance of memcached, allowing the server to simulate parallelism without needing major changes to the application or kernel. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.[1]&lt;br /&gt;
&lt;br /&gt;
===Apache: &#039;&#039;Section 3.3&#039;&#039;===&lt;br /&gt;
Apache is a web server that has been used in previous Linux scalability studies. In this study, Apache is configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): each process uses one of its threads to accept incoming connections, while the others process those connections. On a single-core processor, Apache spends 60% of its execution time in the kernel.[1]&lt;br /&gt;
&lt;br /&gt;
===gmake: &#039;&#039;Section 3.5&#039;&#039;===&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community, used in this paper to build the Linux kernel. gmake reads a file called a makefile and processes its recipes for the requisite files to determine how and when to remake or recompile code. With the -j (or --jobs) flag, gmake can process many of these recipes in parallel. Since gmake creates more processes than there are cores, it can make proper use of multiple cores to process the recipes.[2] Because gmake involves a great deal of reading and writing, the test cases use an in-memory filesystem, tmpfs, to sidestep bottlenecks in the filesystem or storage hardware for testing purposes. gmake&#039;s scalability is limited to a small degree by the serial processes that run at the beginning and end of its execution. gmake spends much of its execution time in the compiler, processing the recipes and recompiling code, but still spends 7.6% of its time in system time.[1]&lt;br /&gt;
&lt;br /&gt;
[2] http://www.gnu.org/software/make/manual/make.html&lt;br /&gt;
&lt;br /&gt;
==Research problem - DONE!!!==&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue, so there must be a way for a standard Linux kernel to scale to a 48-core system&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;. The problem is that a standard Linux OS is not designed for massive scalability. The symptom is that a core running alone performs much more work than a single core working alongside 47 others. Although dividing the work across 48 cores explains some of that difference, ideally the information should be processed as fast as possible, with each core doing as much work as possible.&lt;br /&gt;
&lt;br /&gt;
To fix these scalability issues, it is necessary to focus on three major areas: the Linux kernel, user-level application design, and how applications use kernel services. The Linux kernel can be improved by optimizing sharing and by building on its recent scalability improvements. At the user level, applications can be improved to focus more on parallelism, since some programs have not adopted these features. The final aspect is how an application uses kernel services: resources can be shared so that different parts of the program do not conflict over the same services. The bottlenecks found were easy to identify and required only simple changes to correct or avoid.&amp;lt;sup&amp;gt;[[#Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This research builds on a foundation of previous work on scalability in UNIX systems. The major developments, from shared-memory machines&amp;lt;sup&amp;gt;[[#Foot2|2]]&amp;lt;/sup&amp;gt; and wait-free synchronization to fast message passing, created a base set of techniques for improving scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid locks and atomic instructions that hurt scalability.&amp;lt;sup&amp;gt;[[#Foot3|3]]&amp;lt;/sup&amp;gt; There is also an excellent base of prior Linux scalability studies on which this paper can model its testing standards, including research on improving scalability on a 32-core machine.&amp;lt;sup&amp;gt;[[#Foot4|4]]&amp;lt;/sup&amp;gt; This prior work both informs the experiments and helps identify bottlenecks, which speeds up finding solutions for them.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
This paper&#039;s research contribution is a set of techniques and methods for improving scalability, applied both to application programming and to kernel programming, along with an evaluation of the scalability discrepancies between the two. Key results show how effectively the kernel handles scaling across CPU cores. In looking at scalability, it is important to first note the factors that hinder it.&lt;br /&gt;
&lt;br /&gt;
===What hinders scalability: &#039;&#039;Section 4.1&#039;&#039;===&lt;br /&gt;
*The fraction of a program that must run serially limits how much the application can be sped up. This is Amdahl&#039;s Law.&lt;br /&gt;
** Amdahl&#039;s Law states that a parallel program&#039;s speedup is limited by the inverse of the proportion of the program that cannot be made parallel (e.g. if 25% (0.25) is non-parallel, the speedup limit is 4x).&lt;br /&gt;
*Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
**Locking of shared data structures: as the number of cores increases, lock wait time increases&lt;br /&gt;
**Writing to shared memory: as the number of cores increases, the execution time of the cache-coherence protocol increases&lt;br /&gt;
**Competing for space in a shared hardware cache: as the number of cores increases, the cache miss rate increases&lt;br /&gt;
**Competing for other shared hardware resources: as the number of cores increases, time is lost waiting for those resources&lt;br /&gt;
**Too few tasks for the available cores leads to idle cores&lt;br /&gt;
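The Amdahl&#039;s Law limit above is easy to check numerically (our own illustration, not code from the paper):&lt;br /&gt;

```python
def amdahl_speedup(serial_fraction, cores):
    """Maximum speedup on `cores` cores when `serial_fraction` of the
    work cannot be parallelized (Amdahl's Law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# 25% (0.25) serial work caps the speedup at 4x, no matter the core count:
assert round(amdahl_speedup(0.25, 10**9), 2) == 4.0
# On the paper's 48-core machine, even 5% serial work limits speedup to ~14x:
assert round(amdahl_speedup(0.05, 48), 1) == 14.3
```

This is why the serializing interactions listed above matter so much: each one effectively grows the serial fraction.&lt;br /&gt;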
&lt;br /&gt;
It has been shown that simple scaling techniques can be effective in increasing scalability. The authors looked at three approaches to removing bottlenecks: first, issues within the Linux kernel itself; second, issues in application design; and third, how the application interacts with Linux kernel services. Through this approach, the authors were able to quickly identify bottlenecks and apply simple fixes that yielded real benefits.&lt;br /&gt;
&lt;br /&gt;
===Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;===&lt;br /&gt;
Linux&#039;s packet-processing path requires packets to travel through several queues before they finally become available to the application. This works well for most general socket applications. Recent kernel releases can take advantage of multiple hardware queues (when the network interface provides them) or Receive Packet Steering[1] to direct packet flow onto different cores for processing, or even direct the flow to the core on which the application is running, using Receive Flow Steering[2], for better performance. Linux also attempts to increase performance with a sampling technique: it checks every 20th outgoing packet and directs flow based on its hash. This poses a problem for short-lived connections, such as those associated with Apache, because packets can easily be misdirected.&lt;br /&gt;
&lt;br /&gt;
In general this technique performs poorly when numerous open connections are spread across multiple cores, due to mutex (mutual exclusion) delays and cache misses. In such scenarios it is better to process each connection, with its associated packets and queues, on one core to avoid those issues. The patched kernel proposed in this article uses multiple hardware queues (via Receive Packet Steering) to direct all packets from a given connection to the same core. In turn, Apache is modified to accept a connection only if the thread dedicated to processing it is on the same core. If the current core&#039;s queue is empty, it will attempt to obtain work from queues located on other cores. This configuration is ideal for numerous short connections, since all the work for a connection is accomplished quickly on one core, avoiding unnecessary mutex delays on packet queues and inter-core cache misses.&lt;br /&gt;
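The flow-to-core steering idea can be sketched as a hash of the connection 4-tuple. This is a simplification of what Receive Packet/Flow Steering does inside the kernel, and the hash choice here is our own:&lt;br /&gt;

```python
import zlib

def steer(src_ip, src_port, dst_ip, dst_port, n_cores=48):
    """Hash a connection's 4-tuple to a core index, so every packet of a
    flow is processed on the same core and no cross-core queue lock or
    cache transfer is needed."""
    flow = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(flow) % n_cores

# Every packet of one connection lands on the same core's queue:
a = steer("10.0.0.1", 40000, "10.0.0.2", 80)
assert a == steer("10.0.0.1", 40000, "10.0.0.2", 80)
```

Because the mapping is a pure function of the connection, it stays stable for the life of the flow, which is what makes it safe for the short-lived connections that defeat the sampling approach.&lt;br /&gt;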
&lt;br /&gt;
&lt;br /&gt;
[1] J. Corbet. Receive Packet Steering, November 2009. http://lwn.net/Articles/362339/.&lt;br /&gt;
&lt;br /&gt;
[2] J. Edge. Receive Flow Steering, April 2010. http://lwn.net/Articles/382428/.&lt;br /&gt;
&lt;br /&gt;
===Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;===&lt;br /&gt;
Bottlenecks were encountered when the applications under test referenced and updated counters shared across multiple cores. The paper&#039;s solution is sloppy counters: each core tracks its own separate count of references, and a central shared counter keeps the overall count consistent. This is ideal because each core updates its count by modifying its per-core counter, usually touching only its own local cache, which cuts down on waiting for locks and on serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them easy to adopt. Their main disadvantages are that de-allocation becomes expensive in situations where objects are freed often, and that the counters use space proportional to the number of cores.&lt;br /&gt;
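A minimal sketch of the scheme in Python (the class name and slack size are our own; the kernel version operates on reference counts in C, but the shape is the same):&lt;br /&gt;

```python
import threading

class SloppyCounter:
    """Each core keeps a private count and only touches the shared,
    locked counter when its local slack fills up."""
    def __init__(self, n_cores, slack=8):
        self.central = 0
        self.lock = threading.Lock()
        self.local = [0] * n_cores
        self.slack = slack

    def incr(self, core):
        self.local[core] += 1               # cheap: per-core state only
        if self.local[core] >= self.slack:  # rare: spill to shared counter
            with self.lock:
                self.central += self.local[core]
            self.local[core] = 0

    def value(self):
        with self.lock:
            return self.central + sum(self.local)

c = SloppyCounter(n_cores=4)
for i in range(100):
    c.incr(i % 4)
assert c.value() == 100
```

Most increments never take the lock, which is exactly the property that removes the shared-counter bottleneck; reading the precise total (and de-allocating) is the expensive path.&lt;br /&gt;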
&lt;br /&gt;
===Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;===&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
===Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;===&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the packet-buffer free list. Each was decentralized into per-core versions of itself. In the case of the vfsmount table, the central structure was kept, and on a per-core miss the entry is copied from the central table into the per-core table.&lt;br /&gt;
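The central-plus-per-core pattern can be sketched with a free list (the class and batch size are our own illustration, not the kernel&#039;s code):&lt;br /&gt;

```python
class PerCoreFreeList:
    """Each core allocates from its own free list and refills from the
    central list only on a miss, much as vfsmount-table misses are
    served from the central table in the paper."""
    def __init__(self, n_cores, items):
        self.central = list(items)
        self.per_core = [[] for _ in range(n_cores)]

    def get(self, core):
        if not self.per_core[core]:
            # per-core miss: pull a batch from the central structure
            batch, self.central = self.central[:4], self.central[4:]
            self.per_core[core].extend(batch)
        return self.per_core[core].pop() if self.per_core[core] else None

fl = PerCoreFreeList(2, range(8))
assert fl.get(0) is not None
```

In the common case a core only touches its own list, so there is no cross-core contention at all; the central structure is hit only on refills.&lt;br /&gt;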
&lt;br /&gt;
===Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;===&lt;br /&gt;
False sharing occurs when variables used by different cores land on the same cache line, causing the cores to request the same line for reading and writing often enough to significantly impact performance. Moving the often-written variable to another cache line removed the bottleneck.&lt;br /&gt;
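The standard fix can be illustrated by padding the layout so that each often-written counter gets a cache line to itself (the 64-byte line size is the usual x86 value; the layout below is illustrative, not the kernel&#039;s):&lt;br /&gt;

```python
# Eight 8-byte counters would normally share one 64-byte cache line and
# bounce between cores on every write; padding gives each its own line.
CACHE_LINE = 64

def padded_offsets(n_cores):
    """Byte offset of each core's counter, one full cache line apart."""
    return [core * CACHE_LINE for core in range(n_cores)]

offs = padded_offsets(4)
assert offs == [0, 64, 128, 192]
```

The cost is wasted space (a full line per counter), which is why the fix is applied only to variables that are actually written often.&lt;br /&gt;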
&lt;br /&gt;
===Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;===&lt;br /&gt;
Many locks/mutexes have special cases in which they do not need to lock at all. Likewise, a single mutex over a whole data structure can be split into finer-grained locks, each protecting only part of it. Both changes remove or reduce bottlenecks.&lt;br /&gt;
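Lock splitting can be sketched as per-bucket locks on a hash table (the class and bucket count are our own illustration of the technique, not code from the paper):&lt;br /&gt;

```python
import threading

class ShardedDict:
    """One lock per bucket instead of one lock for the whole table, so
    operations on different buckets never contend."""
    def __init__(self, n_buckets=16):
        self.buckets = [{} for _ in range(n_buckets)]
        self.locks = [threading.Lock() for _ in range(n_buckets)]

    def _idx(self, key):
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        i = self._idx(key)
        with self.locks[i]:     # only this bucket's lock is held
            self.buckets[i][key] = value

    def get(self, key, default=None):
        i = self._idx(key)
        with self.locks[i]:
            return self.buckets[i].get(key, default)

d = ShardedDict()
d.put("x", 1)
assert d.get("x") == 1
```

Two threads working on keys in different buckets now proceed in parallel, where a single table-wide mutex would have serialized them.&lt;br /&gt;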
&lt;br /&gt;
Through this research on the techniques listed above, the authors determined that the Linux kernel already incorporates many techniques for improving scalability. They go on to speculate that &amp;quot;perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; This suggests that the Linux community&#039;s work has improved Linux substantially and kept it current with modern optimization techniques. It can also be read as implying that the community may benefit more from changing how applications are programmed than from changing the Linux kernel in order to improve scalability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
**************************************************************************************************************&lt;br /&gt;
**************************************************************************************************************&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critic of the paper.&lt;br /&gt;
&lt;br /&gt;
All contributions in this paper are the result of the identification and removal or marginalization of bottlenecks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;===&lt;br /&gt;
====Work in Progress====&lt;br /&gt;
&lt;br /&gt;
=====[[Rovic P.]]=====&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
This research contributes by evaluating the scalability discrepancies of applications programming and kernel programming. Key discoveries in this research show the effectiveness of the kernel in handling scaling amongst CPU cores. This has also shown that scaling in application programming should be more the focus. It has been shown that simple scaling techniques (list techniques) such as programming parallelism (look up more stuff to back this up and quotes). (Sloppy counter effectiveness, possible positive contributions, what has been used (internet search), what hasn’t been used.) Read conclusion, 2nd paragraph.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;One reason the required changes are modest is that stock Linux already incorporates many modifications to improve scalability. More speculatively, perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; (quoted from the paper&#039;s conclusion)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====[[Rannath]]=====&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required a few modest changes to remove bottlenecks, and the MIT team speculates that the trend will continue as the number of cores increases. They also state that bottlenecks not caused by the CPU are harder to fix.&lt;br /&gt;
&lt;br /&gt;
We can eliminate most of the kernel bottlenecks that the applications hit most often with minor changes. Most changes applied well-known methodology, with the exception of sloppy counters. The study is limited by its removal of the I/O bottleneck, but it does suggest that traditional implementations can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.&lt;br /&gt;
 Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of the content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
===Content(Fairness): &#039;&#039;Section 5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 5.3&#039;&#039;====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage I/O bottleneck does not affect it at all. Likewise, the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so neither is given an advantage. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ on memcached.org&#039;s wiki suggests using multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [3]. In the end memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 5.4&#039;&#039;====&lt;br /&gt;
Linux has a built-in kernel flaw whereby network packets are forced to travel through multiple queues before they arrive at the queue where the application can process them. This imposes significant costs on multi-core systems due to queue-locking overhead, and it inherently diminishes Apache&#039;s performance on multi-core systems, since threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For this experiment, Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. The patched kernel&#039;s network stack is likewise specific to the problem at hand: processing many short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. The tests were also arranged to avoid bottlenecks imposed by the network and file-storage hardware, meaning the proposed kernel modifications will not necessarily produce the same productivity increase described in the article. This is evident in the test where performance degrades past 36 cores due to limitations of the networking hardware.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Which is not a problem as the paper specifically states that they are testing what they can improve in spite of hardware limitation.&#039;&#039; - [[Rannath]]&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 5.6&#039;&#039;====&lt;br /&gt;
Since the inherent nature of gmake makes it quite parallel, the testing and updating attempted on gmake resulted in essentially the same scalability results for both the stock and the modified kernel. The only change found was that gmake spent slightly less time at the system level because of changes made to the system&#039;s caching. As stated in the paper, gmake&#039;s execution time depends heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there appear to be no fairness concerns in the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Given that all the tests are more or less fair for the purposes of the benchmarks, they support the hypothesis that Linux can be made to scale, at least to 48 cores. Thus the conclusion is fair iff the rest of the paper is fair.&lt;br /&gt;
&lt;br /&gt;
 Now you just have to fill in how fair the rest of the paper is.&lt;br /&gt;
&lt;br /&gt;
===Style===&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
Everything seems to be in logical order. I couldn&#039;t find any needless info. Nothing inherently confusing or wrong. Nothing bad on the grammar front either. - Rannath&lt;br /&gt;
&lt;br /&gt;
Some acronyms aren&#039;t explained before they are used, so some people reading the paper may get confused as to what they mean (e.g. Linux TLB). Since this paper is meant to be formal, acronyms should be explained, with some exceptions like OS and IBM. - Daniel B.&lt;br /&gt;
&lt;br /&gt;
Your example has no impact on the paper; it was in the &amp;quot;look here for more info&amp;quot; section. Most people wouldn&#039;t know what a &amp;quot;translation look-aside buffer&amp;quot; is either.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
[3] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;the paper itself doesn&#039;t need to be referenced more than once as this is a critique of the paper...&#039;&#039;&#039;&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
==Deprecated==&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its own code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system that jumps to 82%.&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6390</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6390"/>
		<updated>2010-12-02T16:31:33Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* memchached: Section 3.2 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (as it&#039;s our tutorial and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I wont be there either. that does not mean i wont/cant contribute. I&#039;ll be on msn or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*flesh out the whole lot&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note, Anil said in class Tuesday the 30th of November that we only need to explain 3 of the applications and not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
* I will work on contribution - [[Rovic]]&lt;br /&gt;
* I&#039;m gonna whip something up for 4.2 since there appears to be nothing mentioned about it. -kirill&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* So here is the claims and unclaimed section. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Essay Conclusion (also discussion) - Everyone&lt;br /&gt;
** Critic, Style - Everyone&lt;br /&gt;
** References - Everyone&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
==Paper==&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where to the do their processing)&lt;br /&gt;
&lt;br /&gt;
===memcached: &#039;&#039;Section 3.2&#039;&#039;===&lt;br /&gt;
memcached is an in-memory hash table server. One instance running on many cores is bottlenecked by an internal lock, so the MIT team ran one instance per core to avoid the problem. Clients each connect to a single instance, which allows the server to simulate parallelism. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.&lt;br /&gt;
&lt;br /&gt;
===Apache: &#039;&#039;Section 3.3&#039;&#039;===&lt;br /&gt;
Apache is a web server. In this study, Apache has been configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread accepts incoming connections and the other threads service them. On a single core processor, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
===gmake: &#039;&#039;Section 3.5&#039;&#039;===&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake reads a file called a makefile and processes its recipes for the requisite files to determine how and when to remake or recompile code. With the -j (or --jobs) option, gmake can process many of these recipes in parallel, and since it creates more processes than there are cores, it can make proper use of multiple cores.[2] Because gmake involves a great deal of reading and writing, the test cases use the in-memory filesystem tmpfs to avoid bottlenecks in the filesystem or storage hardware. gmake&#039;s scalability is limited to a small degree by the serial processing that runs at the beginning and end of its execution. It spends much of its execution time in the compiler, processing recipes and recompiling code, but still spends 7.6% of its time in system time.[1]&lt;br /&gt;
&lt;br /&gt;
[2] http://www.gnu.org/software/make/manual/make.html&lt;br /&gt;
&lt;br /&gt;
==Research problem==&lt;br /&gt;
  my references are just below because it is easier for numbering the data later.&lt;br /&gt;
&lt;br /&gt;
As technology progresses, the number of cores a main processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue. There has to be a way that a standard Linux kernel will scale with a 48-core system[1]. The problem with a standard Linux OS is that it is not designed for massive scalability, which will soon be a problem. The issue with scalability is that a solo core will perform much more work compared to a single core working with 47 other cores. Traditionally that situation makes sense because 48 cores are dividing the work, but when processing information the main goal is to finish as soon as possible, so every core should be doing as much work as possible.&lt;br /&gt;
  &lt;br /&gt;
To fix those scalability issues it is necessary to focus on three major areas: the Linux kernel, user-level design and how applications use kernel services. The Linux kernel can be improved to share better, and recent iterations are beginning to implement scalability features. At the user level, applications can be improved so that there is more focus on parallelism, since some programs have not implemented those features. The final aspect of improving scalability is how an application uses kernel services to share resources better, so that different aspects of the program are not conflicting over the same services. All of the bottlenecks found actually take only a little work to avoid.[1]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This research is based on much prior work in the development of scalability for UNIX systems. The major developments, from shared-memory machines [2] and wait-free synchronization to fast message passing, have created a base set of techniques which can be used to improve scalability. These techniques have been incorporated in all major operating systems including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update, an algorithm used to avoid locks and atomic instructions which lower scalability.[3] There is also an excellent base of research on Linux scalability studies on which to base this research paper, including a paper on scalability on a 32-core machine.[4] That research can improve the results by learning from the experiments already performed, and it also aids in identifying bottlenecks, which speeds up researching solutions for those bottlenecks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[2] J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313, 1994.&lt;br /&gt;
&lt;br /&gt;
[3] P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update. In Proceedings of the Linux Symposium 2002, pages 338–367, Ottawa, Ontario, June 2002.&lt;br /&gt;
&lt;br /&gt;
[4] C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
==Work in Progress== - Rovic P.&lt;br /&gt;
This research contributes by evaluating the scalability discrepancies of applications programming and kernel programming. Key discoveries in this research show the effectiveness of the kernel in handling scaling amongst CPU cores. This has also shown that scaling in application programming should be more the focus. It has been shown that simple scaling techniques (list techniques) such as programming parallelism (look up more stuff to back this up and quotes). (Sloppy counter effectiveness, possible positive contributions, what has been used (internet search), what hasn’t been used.) Read conclusion, 2nd paragraph.&lt;br /&gt;
&lt;br /&gt;
The paper&#039;s conclusion offers an explanation: &amp;quot;One reason the required changes are modest is that stock Linux already incorporates many modifications to improve scalability. More speculatively, perhaps it is the case that Linux&#039;s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
===Section 4.1 problems:===&lt;br /&gt;
**The percentage of serialization in a program has a lot to do with how much an application can be sped up. As seen from the example in the paper, it seems to follow Amdahl&#039;s law (e.g. 25% serialization --&amp;gt; limit of 4x speedup).&lt;br /&gt;
**Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
***Locking of shared data structures as the number of cores increases leads to an increase in lock wait time&lt;br /&gt;
***Writing to shared memory as the number of cores increases leads to an increase in the execution time of the cache coherence protocol&lt;br /&gt;
***Competing for space in the shared hardware cache as the number of cores increases leads to an increase in cache miss rate&lt;br /&gt;
***Competing for shared hardware resources as the number of cores increases leads to time lost waiting for resources&lt;br /&gt;
***Not enough tasks for cores leads to idle cores&lt;br /&gt;
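The serialization limit in the first bullet follows Amdahl&#039;s law; a small sketch (our own illustration, not code from the paper) makes the 4x bound concrete:&lt;br /&gt;

```c
/* Amdahl's law: with serial fraction s of the work, the speedup
 * achievable on n cores is bounded by 1 / (s + (1 - s) / n). */
double amdahl_speedup(double serial_fraction, int cores) {
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
}
/* With 25% serialization, 48 cores give only about a 3.76x speedup,
 * and no number of cores can push past the 1 / 0.25 = 4x limit. */
```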
&lt;br /&gt;
===Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
===Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;===&lt;br /&gt;
Bottlenecks were encountered when the applications undergoing testing were referencing and updating shared counters across multiple cores. The solution in the paper is to use sloppy counters, which have each core track its own separate count of references and use a central shared counter to keep all counts on track. This is ideal because each core updates its counts by modifying its per-core counter, usually only needing access to its own local cache, cutting down on waiting for locks or serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making their implementation much easier to accomplish. The main disadvantages of sloppy counters are that they perform poorly when object de-allocation occurs often, since reconciling the per-core counts makes de-allocation an expensive operation, and that the counters use space proportional to the number of cores.&lt;br /&gt;
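The per-core bookkeeping can be sketched in C. This is our simplified, single-threaded-per-core illustration of the idea, not the kernel code: the names and the SLACK batch size are invented, and the reconciliation needed at object de-allocation is omitted.&lt;br /&gt;

```c
#include <pthread.h>

#define NCORES 4
#define SLACK  8   /* spare references a core may stash locally */

/* Sloppy counter sketch: each core holds a private stash of spare
 * references; the locked central counter is only touched when the
 * stash runs dry. */
struct sloppy {
    pthread_mutex_t lock;   /* protects the central counter */
    long central;           /* references accounted for centrally */
    long local[NCORES];     /* spare references held by each core */
};

void sloppy_init(struct sloppy *s) {
    pthread_mutex_init(&s->lock, NULL);
    s->central = 0;
    for (int i = 0; i < NCORES; i++)
        s->local[i] = 0;
}

void sloppy_inc(struct sloppy *s, int core) {
    if (s->local[core] > 0) {       /* fast path: consume a spare */
        s->local[core]--;
        return;
    }
    pthread_mutex_lock(&s->lock);   /* slow path: grab a batch */
    s->central += SLACK;
    pthread_mutex_unlock(&s->lock);
    s->local[core] = SLACK - 1;     /* one of the batch is used now */
}

void sloppy_dec(struct sloppy *s, int core) {
    s->local[core]++;               /* release is purely local */
}

/* True count = central count minus all spares still stashed. */
long sloppy_value(struct sloppy *s) {
    long v = s->central;
    for (int i = 0; i < NCORES; i++)
        v -= s->local[i];
    return v;
}
```

In the common case both increment and decrement touch only the owning core&#039;s slot, so no lock or shared cache line is involved.&lt;br /&gt;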
&lt;br /&gt;
===Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;===&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
===Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;===&lt;br /&gt;
Three centralized data structures were causing bottlenecks: a per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each data structure was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was maintained, and per-core misses were filled from the central table into the per-core table.&lt;br /&gt;
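The pattern can be sketched (our illustration with invented names; the kernel&#039;s actual lists carry more state) as a free list where each core allocates locally and only falls back to a locked central list on a miss:&lt;br /&gt;

```c
#include <stddef.h>
#include <pthread.h>

#define NCORES 4

/* Decentralized free list sketch: the per-core lists are touched
 * only by their owning core, so the common case needs no lock. */
struct buf { struct buf *next; };

struct freelist {
    pthread_mutex_t lock;         /* protects the central list only */
    struct buf *central;
    struct buf *percore[NCORES];
};

void fl_init(struct freelist *fl) {
    pthread_mutex_init(&fl->lock, NULL);
    fl->central = NULL;
    for (int i = 0; i < NCORES; i++)
        fl->percore[i] = NULL;
}

struct buf *fl_alloc(struct freelist *fl, int core) {
    struct buf *b = fl->percore[core];
    if (b) {                          /* common case: local, no lock */
        fl->percore[core] = b->next;
        return b;
    }
    pthread_mutex_lock(&fl->lock);    /* miss: go to the central list */
    b = fl->central;
    if (b)
        fl->central = b->next;
    pthread_mutex_unlock(&fl->lock);
    return b;
}

void fl_free(struct freelist *fl, struct buf *b, int core) {
    b->next = fl->percore[core];      /* frees stay local to the core */
    fl->percore[core] = b;
}
```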
&lt;br /&gt;
===Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;===&lt;br /&gt;
Poorly placed variables can cause different cores to read and write the same cache line at the same time often enough to significantly impact performance. Moving the frequently written variable to a different cache line removed the bottleneck.&lt;br /&gt;
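A minimal sketch of the fix, assuming 64-byte cache lines (the struct and field names are ours, not the paper&#039;s): alignment gives each hot variable its own line, so writes by one core no longer invalidate the other core&#039;s cached copy.&lt;br /&gt;

```c
#include <stdalign.h>
#include <stddef.h>

/* "Bad" layout: two counters updated by different cores land on the
 * same 64-byte cache line, so every write by one core invalidates
 * the line in the other core's cache (false sharing). */
struct counters_bad {
    long core0_count;
    long core1_count;   /* shares a cache line with core0_count */
};

/* Fixed layout: aligning each hot field to a 64-byte boundary puts
 * them on separate cache lines, at the cost of some wasted space. */
struct counters_good {
    alignas(64) long core0_count;
    alignas(64) long core1_count;   /* now on its own cache line */
};
```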
&lt;br /&gt;
===Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;===&lt;br /&gt;
Many locks/mutexes have special cases where they do not actually need to lock. Likewise, a mutex that locks a whole data structure can be split into several mutexes that each lock only part of it. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
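Lock splitting can be sketched as a hash table with one mutex per bucket instead of a single global mutex; this is our illustration, not code from the paper. Operations that touch different buckets no longer contend with each other.&lt;br /&gt;

```c
#include <stddef.h>
#include <pthread.h>

#define NBUCKETS 16

struct entry { int key; int value; struct entry *next; };

/* Split-lock hash table: one mutex per bucket instead of one for
 * the whole table, so only same-bucket operations serialize. */
struct table {
    pthread_mutex_t bucket_lock[NBUCKETS];
    struct entry *bucket[NBUCKETS];
};

static unsigned bucket_of(int key) { return (unsigned)key % NBUCKETS; }

void table_init(struct table *t) {
    for (int i = 0; i < NBUCKETS; i++) {
        pthread_mutex_init(&t->bucket_lock[i], NULL);
        t->bucket[i] = NULL;
    }
}

void table_put(struct table *t, struct entry *e) {
    unsigned b = bucket_of(e->key);
    pthread_mutex_lock(&t->bucket_lock[b]);  /* this bucket only */
    e->next = t->bucket[b];
    t->bucket[b] = e;
    pthread_mutex_unlock(&t->bucket_lock[b]);
}

int table_get(struct table *t, int key, int *out) {
    unsigned b = bucket_of(key);
    int found = 0;
    pthread_mutex_lock(&t->bucket_lock[b]);
    for (struct entry *e = t->bucket[b]; e; e = e->next) {
        if (e->key == key) { *out = e->value; found = 1; break; }
    }
    pthread_mutex_unlock(&t->bucket_lock[b]);
    return found;
}
```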
&lt;br /&gt;
==Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;==&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required only a few modest changes to remove bottlenecks. The MIT team speculates that this trend will continue as the number of cores increases. They also note that bottlenecks not caused by the CPU are harder to fix.&lt;br /&gt;
&lt;br /&gt;
We can eliminate most of the kernel bottlenecks that the applications hit most often with minor changes. Most of those changes used well-known techniques, with the exception of sloppy counters. This study is limited by its removal of the IO bottleneck, but it does suggest that traditional kernel implementations can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.&lt;br /&gt;
 Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
===Content(Fairness): &#039;&#039;Section 5&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 5.3&#039;&#039;====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage IO bottleneck does not affect it at all. Likewise the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so neither is given an advantage. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ on memcached.org&#039;s wiki suggests running multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [1]. In the end memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
[1] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 5.4&#039;&#039;====&lt;br /&gt;
Linux has a built-in kernel flaw where network packets are forced to travel through multiple queues before they arrive at the queue where they can be processed by the application. This imposes significant costs on multi-core systems due to queue locking. The flaw inherently diminishes the performance of Apache on multi-core systems, since multiple threads spread across cores are forced to absorb these mutex (mutual exclusion) costs. For the sake of this experiment Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. The patched kernel&#039;s network stack is also specific to the problem at hand, which is processing many short-lived connections across multiple cores; although it provides a performance increase in the given scenario, network performance might suffer in more general applications. These tests were also arranged to avoid bottlenecks imposed by network and file storage hardware, meaning that making the proposed modifications to the kernel won&#039;t necessarily produce the same increase in throughput as described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware. &#039;&#039;Which is not a problem, as the paper specifically states that there are hardware limitations.&#039;&#039; - [[Rannath]]&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 5.6&#039;&#039;====&lt;br /&gt;
Since the inherent nature of gmake makes it quite parallel, the testing and updating attempted on gmake resulted in essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, the execution time of gmake depends quite heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns in the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Fair conclusion?&lt;br /&gt;
&lt;br /&gt;
===Style===&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
Everything seems to be in logical order. I couldn&#039;t find any needless info. Nothing inherently confusing or wrong. Nothing bad on the grammar front either. - Rannath&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
==Deprecated==&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces  extensively to provide concurrent access to the database. Due to bottlenecks introduced in its code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with addition of new cores. On a single core system PostgreSQL spends only 1.5% of its time in the kernel. On a 48 core system the execution time in the kernel jumps to 82%.&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6277</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6277"/>
		<updated>2010-12-02T14:12:20Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* Contribution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (as it&#039;s our tutorial and we know there won&#039;t be anyone there)&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want.&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note, Anil said in class on Tuesday the 30th of November that we only need to explain 3 of the applications and not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
* I will work on contribution - [[Rovic]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where they do their processing)&lt;br /&gt;
&lt;br /&gt;
====Exim: &#039;&#039;Section 3.1&#039;&#039;====&lt;br /&gt;
Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table. memcached itself is very much not parallel, but can be made parallel by running multiple instances and having clients handle synchronizing data between the different instances. With few requests memcached does most of its processing in the network stack, 80% of its time on one core.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. In this study, Apache was configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a perfect example of parallel programming): one thread accepts incoming connections and various other threads service those connections. On a single core, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;====&lt;br /&gt;
As implied by the name PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces  extensively to provide concurrent access to the database. Due to bottlenecks introduced in its code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with addition of new cores. On a single core system PostgreSQL spends only 1.5% of its time in the kernel. On a 48 core system the execution time in the kernel jumps to 82%.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake is already quite parallel, creating more processes than cores so that it can make proper use of multiple cores, and it involves much reading and writing of files. gmake is limited in scalability by the serial processes that run at the beginning and end of its execution. gmake spends much of its execution time in its compiler, but still spends 7.6% of its time in system time.&lt;br /&gt;
&lt;br /&gt;
====Psearchy: &#039;&#039;Section 3.6&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Metis: &#039;&#039;Section 3.7&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
===Research problem===&lt;br /&gt;
  my references are just below because it makes numbering the data easier later.&lt;br /&gt;
&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue, so there must be a way for a standard Linux kernel running ordinary user-level applications to scale to a 48-core system [1]. The problem is that a standard Linux system is not designed for massive scalability. The symptom of poor scalability is that a single core running alone performs much more work than a single core working alongside 47 others. By traditional logic that seems acceptable, since 48 cores are dividing the work, but the goal of processing is to finish as soon as possible, so every core should be doing as much work as it can.&lt;br /&gt;
  &lt;br /&gt;
To fix these scalability issues it is necessary to focus on three major areas: the Linux kernel, user-level application design, and how applications use kernel services. The Linux kernel can be improved to reduce unnecessary sharing, and recent iterations are already beginning to implement scalability features. At the user level, applications can be redesigned with more focus on parallelism, since some programs have not adopted these improvements. The final aspect is how an application uses kernel services: resources can be shared more carefully so that different parts of the program do not contend for the same services. All of the bottlenecks found actually take only a little work to avoid [1].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This work builds on a large body of earlier research into the scalability of UNIX systems. Major developments ranging from shared-memory machines [2] and wait-free synchronization to fast message passing have created a base set of techniques for improving scalability, and these techniques have been incorporated into all major operating systems, including Linux, Mac OS X, and Windows. Linux in particular has gained kernel subsystems such as Read-Copy-Update (RCU), an algorithm that avoids the locks and atomic instructions that limit scalability [3]. There is also an excellent base of prior Linux scalability studies on which to build this research, including one on scalability on a 32-core machine [4]. Learning from experiments already performed by other researchers improves the results, and helps identify bottlenecks more quickly, which speeds up finding solutions for them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[2] J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313, 1994.&lt;br /&gt;
&lt;br /&gt;
[3] P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update. In Proceedings of the Linux Symposium 2002, pages 338–367, Ottawa, Ontario, June 2002.&lt;br /&gt;
&lt;br /&gt;
[4] C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009.&lt;br /&gt;
&lt;br /&gt;
==Section 4.1 problems:==&lt;br /&gt;
**The percentage of serialization in a program has a lot to do with how much an application can be sped up. As from the example in the paper, it seems to follow Amdahl&#039;s law (e.g. 25% serialization --&amp;gt; limit of 4x speedup).&lt;br /&gt;
**Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
***Locking of shared data structure - increasing # of cores --&amp;gt; increase in lock wait time&lt;br /&gt;
***Writing to shared memory - increasing # of cores --&amp;gt; increase in wait for cache coherence protocol&lt;br /&gt;
***Competing for space in shared hardware cache - increasing # of cores --&amp;gt; increase in cache miss rate&lt;br /&gt;
***Competing for shared hardware resources - increasing # of cores --&amp;gt; increase in wait for resources&lt;br /&gt;
***Not enough tasks for cores --&amp;gt; idle cores&lt;br /&gt;
&lt;br /&gt;
===Contribution===&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
==Work in Progress== - Rovic P.&lt;br /&gt;
This research contributes by evaluating the scalability discrepancies of applications programming and kernel programming. Key discoveries in this research show the effectiveness of the kernel in handling scaling amongst CPU cores. This has also shown that scaling in application programming should be more the focus. It has been shown that simple scaling techniques (list techniques) such as programming parallelism (look up more stuff to back this up and quotes). (Sloppy counter effectiveness, possible positive contributions, what has been used (internet search), what hasn’t been used.) Read conclusion, 2nd paragraph.&lt;br /&gt;
&lt;br /&gt;
The paper&#039;s conclusion offers an explanation: &amp;quot;One reason the required changes are modest is that stock Linux already incorporates many modifications to improve scalability. More speculatively, perhaps it is the case that Linux&#039;s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications undergoing testing were referencing and updating shared counters across multiple cores. The solution in the paper is to use sloppy counters, which have each core track its own separate count of references and use a central shared counter to keep all counts on track. This is ideal because each core updates its counts by modifying its per-core counter, usually only needing access to its own local cache, cutting down on waiting for locks or serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making their implementation much easier to accomplish. The main disadvantages of sloppy counters are that they perform poorly when object de-allocation occurs often, since reconciling the per-core counts makes de-allocation an expensive operation, and that the counters use space proportional to the number of cores.&lt;br /&gt;
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: a per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each data structure was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was maintained, and per-core misses were filled from the central table into the per-core table.&lt;br /&gt;
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
Poorly placed variables can cause different cores to read and write the same cache line at the same time often enough to significantly impact performance. By moving the frequently written variable to a different cache line the bottleneck was removed.&lt;br /&gt;
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks/mutexes have special cases where they do not actually need to lock. Likewise, a mutex that locks a whole data structure can be split into several mutexes that each lock only part of it. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
&lt;br /&gt;
====Conclusion====&lt;br /&gt;
 Conclusion: we can make a traditional OS architecture scale (at least to 48 cores); we just have to remove bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===Critique===&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content(Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
 Fairness criterion:&lt;br /&gt;
 - does the test accurately describe real-world use-cases (or some set there-of)? (external fairness, maybe ignored for testing and benchmarking purposes, usually is too)&lt;br /&gt;
 - does the test put all tested implementations through the same test? (internal fairness)&lt;br /&gt;
&lt;br /&gt;
Both the stock and new implementations use the same benchmarks, so neither has a particular advantage; that holds true for all seven programs. In spite of this, there are also some assumptions and conditions whose inclusion or exclusion the paper fails to explain fairly. All tests ignore the storage IO bottleneck, which, while not entirely relevant for the purposes of the paper, is relevant to real-world use. It is a problem, but not nearly as bad once you consider SSD technology, which goes a long way toward reducing the storage IO bottleneck.&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached has no explicit or implicit fairness concerns with respect to real-world scenarios.&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
Linux has a built-in kernel flaw where network packets are forced to travel through multiple queues before they arrive at the queue where they can be processed by the application. This imposes significant costs on multi-core systems due to queue locking. The flaw inherently diminishes the performance of Apache on multi-core systems, since multiple threads spread across cores are forced to absorb these mutex (mutual exclusion) costs. For the sake of this experiment Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. These tests were also arranged to avoid bottlenecks imposed by network and file storage hardware, meaning that making the proposed modifications to the kernel won&#039;t necessarily produce the same increase in throughput as described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware.&lt;br /&gt;
&lt;br /&gt;
=====PostgreSQL: &#039;&#039;Section 5.5&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
Since gmake is inherently quite parallel, the tests produced essentially the same scalability results for the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the modifications to the system&#039;s caching. As the paper states, gmake&#039;s execution time depends heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there appear to be no fairness concerns in the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
=====Psearchy: &#039;&#039;Section 5.7&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====Metis: &#039;&#039;Section 5.8&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6271</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6271"/>
		<updated>2010-12-02T13:58:27Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* Contribution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (as it&#039;s our tutorial slot and we know there won&#039;t be anyone there)&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want.&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note, Anil said in class on Tuesday the 30th of November that we only need to explain 3 of the applications and not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
* I will work on contribution - [[Rovic]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where to the do their processing)&lt;br /&gt;
&lt;br /&gt;
====Exim: &#039;&#039;Section 3.1&#039;&#039;====&lt;br /&gt;
Exim is a mail server for Unix. It&#039;s fairly parallel: the server forks a new process for each connection, and forks twice to deliver each message. On a single core it spends 69% of its time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table. memcached itself is very much not parallel, but it can be made to scale: just run multiple instances and have the clients worry about partitioning data between them. With few requests, memcached does most of its processing in the network stack, about 80% of its time on one core.&lt;br /&gt;
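A minimal sketch of that client-side partitioning idea; the instance list and the hash used for routing are illustrative assumptions, not details from the paper:&lt;br /&gt;

```python
import zlib

# Sketch: clients make memcached scale by running one instance per
# core and hashing each key to a fixed instance, so the server
# processes never need to synchronize with each other.
def pick_instance(key, instances):
    # a stable hash, so every client routes a given key the same way
    return instances[zlib.crc32(key.encode()) % len(instances)]

# hypothetical deployment: four instances on consecutive ports
instances = [("10.0.0.1", 11211 + i) for i in range(4)]
```

Because the mapping is fixed, no instance ever needs to know about the others; all coordination cost is pushed to the clients.&lt;br /&gt;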
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. In this study, Apache was configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread services incoming connections and various other threads service those connections. On a single-core processor, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;====&lt;br /&gt;
As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system the execution time in the kernel jumps to 82%.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake is already quite parallel, creating more processes than cores so it can make proper use of them, and it involves much reading and writing of files, as it is used to build the Linux kernel. gmake&#039;s scalability is limited by the serial processes that run at the beginning and end of its execution. gmake spends much of its execution time in its compiler, but still spends 7.6% of its time in system time.&lt;br /&gt;
&lt;br /&gt;
====Psearchy: &#039;&#039;Section 3.6&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Metis: &#039;&#039;Section 3.7&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
===Research problem===&lt;br /&gt;
  my references are just below because it is easier for numbering the data later.&lt;br /&gt;
&lt;br /&gt;
As technology progresses, the number of cores a main processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue, so there has to be a way for a standard Linux kernel to scale to a 48-core system[1]. The problem is that a standard Linux system is not designed for massive scalability, and this will soon matter. The issue with scalability is that a solo core will perform much more work than a single core working alongside 47 others. By traditional logic that makes sense, because 48 cores are dividing the work; but since the main goal when processing is to finish as soon as possible, every core should be doing as much work as possible.&lt;br /&gt;
  &lt;br /&gt;
To fix these scalability issues it is necessary to focus on three major areas: the Linux kernel, user-level design, and how applications use kernel services. The Linux kernel can be improved to reduce unnecessary sharing, and it has the advantage that recent iterations have already begun to implement scalability features. At the user level, applications can be designed with more focus on parallelism, since some programs have not implemented such improvements. The final aspect of improving scalability is how an application uses kernel services: resources should be shared so that different parts of the program do not conflict over the same services. All of the bottlenecks found actually take only a little work to avoid.[1]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This research builds on much earlier work on scalability for UNIX systems. Major developments, from shared-memory machines [2] and wait-free synchronization to fast message passing, have created a base set of techniques that can be used to improve scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that lower scalability.[3] There is also an excellent base of Linux scalability studies on which to build, including one on scalability on a 32-core machine. [4] The present research can improve its results by learning from experiments already performed by those researchers, and the earlier work also aids in identifying bottlenecks, which speeds up finding solutions for them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[2] J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313,1994.&lt;br /&gt;
&lt;br /&gt;
[3] P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update.  In Proceedings of the Linux Symposium 2002, pages 338-367, Ottawa Ontario, June 2002&lt;br /&gt;
&lt;br /&gt;
[4] C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009&lt;br /&gt;
&lt;br /&gt;
==Section 4.1 problems:==&lt;br /&gt;
**The fraction of serialized execution in a program largely determines how much it can be sped up. As in the example from the paper, this follows Amdahl&#039;s law (e.g. 25% serialization --&amp;gt; limit of 4x speedup).&lt;br /&gt;
**Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
***Locking of shared data structure - increasing # of cores --&amp;gt; increase in lock wait time&lt;br /&gt;
***Writing to shared memory - increasing # of cores --&amp;gt; increase in wait for cache coherence protocol&lt;br /&gt;
***Competing for space in shared hardware cache - increasing # of cores --&amp;gt; increase in cache miss rate&lt;br /&gt;
***Competing for shared hardware resources - increasing # of cores --&amp;gt; increase in wait for resources&lt;br /&gt;
***Not enough tasks for cores --&amp;gt; idle cores&lt;br /&gt;
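The Amdahl&#039;s-law limit in the first point can be checked numerically with the standard formula (a generic formulation, not code from the paper):&lt;br /&gt;

```python
def amdahl_speedup(serial_fraction, cores):
    # Amdahl's law: speedup = 1 / (s + (1 - s) / N), where s is the
    # serial fraction of the work and N is the number of cores
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

print(round(amdahl_speedup(0.25, 48), 2))  # 3.76 on a 48-core machine
```

With 25% of the work serialized, no number of cores can push the speedup past 4x.&lt;br /&gt;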
&lt;br /&gt;
===Contribution===&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
==Work in Progress== - Rovic P.&lt;br /&gt;
This research contributes by evaluating the scalability discrepancies of applications programming and kernel programming. Key discoveries in this research show the effectiveness of the kernel in handling scaling amongst CPU cores. This has also shown that scaling in application programming should be more the focus. It has been shown that simple scaling techniques (list techniques) such as programming parallelism (look up more stuff to back this up and quotes). (Sloppy counter effectiveness, possible positive contributions, what has been used (internet search), what hasn’t been used.) Read conclusion, 2nd paragraph.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;One reason the required changes are modest is that stock Linux already incorporates many modifications to improve scalability. More speculatively, perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; (quoted from the paper&#039;s conclusion)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications under test referenced and updated shared counters across multiple cores. The solution in the paper is sloppy counters, where each core tracks its own separate counts of references and a central shared counter keeps the overall count on track. This is ideal because each core updates its counts by modifying its per-core counter, usually touching only its own local cache and so cutting down on waiting for locks or serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to adopt. The main disadvantages are that de-allocation becomes expensive in situations where objects are de-allocated often, and that the counters use space proportional to the number of cores.&lt;br /&gt;
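The scheme can be sketched in user-space Python; the real counters are per-core structures inside the kernel, and the batching threshold here is an invented tuning knob:&lt;br /&gt;

```python
import threading

class SloppyCounter:
    # Per-thread spare counts (standing in for per-core counters),
    # flushed to a shared central total only in batches, so the
    # common path touches no shared cache line and takes no lock.
    BATCH = 64  # invented threshold: flush every BATCH increments

    def __init__(self):
        self.total = 0                  # central shared counter
        self.lock = threading.Lock()    # protects self.total
        self.local = threading.local()  # one spare count per thread

    def incr(self):
        n = getattr(self.local, "n", 0) + 1
        if n % self.BATCH == 0:         # rare case: touch shared state
            with self.lock:
                self.total += n
            n = 0
        self.local.n = n

    def read(self):
        # a sloppy (approximate) value: unflushed locals are missing
        with self.lock:
            return self.total
```

The read is deliberately approximate, which is exactly the trade the paper makes: most counter uses only need a rough value.&lt;br /&gt;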
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: a per-superblock list of open files, the vfsmount table, and the packet buffers&#039; free list. Each was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was maintained, and on any per-core miss the entry was copied from the central table to the per-core table.&lt;br /&gt;
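In outline, the vfsmount-style scheme (central table plus per-core copies) might look like this; the class name and lookup API are invented for illustration, with threads standing in for cores:&lt;br /&gt;

```python
import threading

class PerCoreCache:
    # A central table holds the truth; each thread keeps a private
    # copy that absorbs repeat lookups without taking the lock.
    def __init__(self, central):
        self.central = central            # shared dict, rarely touched
        self.lock = threading.Lock()      # guards the central table
        self.percore = threading.local()  # one private dict per thread

    def lookup(self, key):
        cache = getattr(self.percore, "table", None)
        if cache is None:
            cache = {}
            self.percore.table = cache
        if key in cache:                  # common case: no locking
            return cache[key]
        with self.lock:                   # miss: copy from central
            value = self.central[key]
        cache[key] = value
        return value
```

After the first miss, every later lookup of the same key stays entirely in per-thread state, mirroring how the per-core vfsmount tables absorb repeated mount-point lookups.&lt;br /&gt;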
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
Variables badly placed in memory can cause different cores to read and write the same cache line at the same time, often enough to significantly impact performance. By moving an often-written variable to another cache line, the bottleneck was removed.&lt;br /&gt;
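The fix is purely a layout change. A sketch of a padded layout follows; the 64-byte line size and the field names are illustrative assumptions:&lt;br /&gt;

```python
import ctypes

CACHE_LINE = 64  # typical x86 cache-line size (assumption)

class HotCold(ctypes.Structure):
    # Keep the often-written counter on its own cache line, so writes
    # to it stop invalidating the line holding the read-mostly data.
    _fields_ = [
        ("hot_counter", ctypes.c_uint64),
        ("_pad", ctypes.c_ubyte * (CACHE_LINE - 8)),  # fill the line
        ("read_mostly", ctypes.c_uint64),
    ]

# read_mostly now starts exactly one cache line into the struct
assert HotCold.read_mostly.offset == CACHE_LINE
```

The cost is a little wasted memory per structure, which is why the kernel applies this only to fields shown to be contended.&lt;br /&gt;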
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock. Likewise, a mutex over a whole data structure can be split into mutexes over parts of it. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
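Lock splitting can be sketched as a hash table that swaps one table-wide mutex for a mutex per bucket; the bucket count and the names are arbitrary illustrative choices:&lt;br /&gt;

```python
import threading
import zlib

class StripedTable:
    # One lock per bucket instead of one lock over the whole table:
    # threads touching different buckets no longer contend at all.
    def __init__(self, nbuckets=16):
        self.buckets = [{} for _ in range(nbuckets)]
        self.locks = [threading.Lock() for _ in range(nbuckets)]

    def _index(self, key):
        return zlib.crc32(key.encode()) % len(self.buckets)

    def put(self, key, value):
        i = self._index(key)
        with self.locks[i]:  # only this one bucket is serialized
            self.buckets[i][key] = value

    def get(self, key):
        i = self._index(key)
        with self.locks[i]:
            return self.buckets[i].get(key)
```

With enough buckets relative to cores, the chance of two cores needing the same lock at once becomes small, which is the whole gain.&lt;br /&gt;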
&lt;br /&gt;
====Conclusion====&lt;br /&gt;
 Conclusion: we can make a traditional OS architecture scale (at least to 48 cores), we just have to remove bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===Critique===&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of its content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content(Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
 Fairness criterion:&lt;br /&gt;
 - does the test accurately describe real-world use-cases (or some set there-of)? (external fairness, maybe ignored for testing and benchmarking purposes, usually is too)&lt;br /&gt;
 - does the test put all tested implementations through the same test? (internal fairness)&lt;br /&gt;
&lt;br /&gt;
Both the stock and modified kernels are run against the same benchmarks, so neither has a built-in advantage; this holds for all seven programs. Even so, some assumptions and exclusions are never adequately justified. All tests ignore the storage IO bottleneck, which, while not strictly relevant to the purposes of the paper, is relevant to real-world use. The problem is less severe than it once was, since SSD technology goes a long way toward reducing the storage IO bottleneck.&lt;br /&gt;
&lt;br /&gt;
=====Exim: &#039;&#039;Section 5.2&#039;&#039;=====&lt;br /&gt;
The test uses a relatively small number of connections, but that is also explicitly stated to be a non-issue - &amp;quot;as long as there are enough clients to keep Exim busy, the number of clients has little effect on performance.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This test is explicitly stated to be ignoring the real-world constraint of the IO bottleneck, thus is unfair when compared to real-world scenarios. The purpose was not to test the IO bottleneck. Therefore the unfairness to real-world scenarios is unimportant.&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached has no explicit or implicit fairness concerns with respect to real-world scenarios.&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
Linux has a built-in kernel flaw whereby network packets must travel through multiple queues before reaching the queue from which the application can process them. This imposes significant costs on multi-core systems because of queue locking. The flaw inherently diminishes Apache&#039;s performance on multi-core systems, since threads spread across cores must all pay these mutex (mutual exclusion) costs. For this experiment, Apache ran a separate instance on every core, each listening on a different port; this is not a practical real-world deployment but merely an attempt to achieve better parallel execution on a traditional kernel. The tests were also arranged to avoid the bottlenecks imposed by network and file storage hardware, meaning the proposed kernel modifications won&#039;t necessarily produce the same gains described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware.&lt;br /&gt;
&lt;br /&gt;
=====PostgreSQL: &#039;&#039;Section 5.5&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
Since gmake is inherently quite parallel, the tests produced essentially the same scalability results for the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the modifications to the system&#039;s caching. As the paper states, gmake&#039;s execution time depends heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there appear to be no fairness concerns in the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
=====Psearchy: &#039;&#039;Section 5.7&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====Metis: &#039;&#039;Section 5.8&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6266</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6266"/>
		<updated>2010-12-02T13:42:15Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* Class and Notices */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (as it&#039;s our tutorial slot and we know there won&#039;t be anyone there)&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want.&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note, Anil said in class on Tuesday the 30th of November that we only need to explain 3 of the applications and not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
* I will work on contribution - [[Rovic]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where to the do their processing)&lt;br /&gt;
&lt;br /&gt;
====Exim: &#039;&#039;Section 3.1&#039;&#039;====&lt;br /&gt;
Exim is a mail server for Unix. It&#039;s fairly parallel: the server forks a new process for each connection, and forks twice to deliver each message. On a single core it spends 69% of its time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table. memcached itself is very much not parallel, but it can be made to scale: just run multiple instances and have the clients worry about partitioning data between them. With few requests, memcached does most of its processing in the network stack, about 80% of its time on one core.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. In this study, Apache was configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread services incoming connections and various other threads service those connections. On a single-core processor, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;====&lt;br /&gt;
As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system the execution time in the kernel jumps to 82%.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake is already quite parallel, creating more processes than cores so it can make proper use of them, and it involves much reading and writing of files, as it is used to build the Linux kernel. gmake&#039;s scalability is limited by the serial processes that run at the beginning and end of its execution. gmake spends much of its execution time in its compiler, but still spends 7.6% of its time in system time.&lt;br /&gt;
&lt;br /&gt;
====Psearchy: &#039;&#039;Section 3.6&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Metis: &#039;&#039;Section 3.7&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
===Research problem===&lt;br /&gt;
  my references are just below because it is easier for numbering the data later.&lt;br /&gt;
&lt;br /&gt;
As technology progresses, the number of cores a main processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue, so there has to be a way for a standard Linux kernel to scale to a 48-core system[1]. The problem is that a standard Linux system is not designed for massive scalability, and this will soon matter. The issue with scalability is that a solo core will perform much more work than a single core working alongside 47 others. By traditional logic that makes sense, because 48 cores are dividing the work; but since the main goal when processing is to finish as soon as possible, every core should be doing as much work as possible.&lt;br /&gt;
  &lt;br /&gt;
To fix those scalability issues it is necessary to focus on three major areas: the Linux kernel, user-level application design, and how applications use kernel services. The Linux kernel can be improved to share data more efficiently, and it has the advantage that recent iterations are already beginning to implement scalability features. At the user level, applications can be improved to focus more on parallelism, since some programs have not implemented those improvements. The final aspect of improving scalability is how an application uses kernel services: by sharing resources better, different parts of the program do not conflict over the same services. All of the bottlenecks found actually take only a little work to avoid. [1]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This research builds on much earlier work on scalability for UNIX systems. The major developments, from shared-memory machines [2] and wait-free synchronization to fast message passing, have created a base set of techniques that can be used to improve scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that lower scalability. [3] There is also an excellent base of prior Linux scalability studies on which to build this research, including one carried out on a 32-core machine. [4] This research can improve its results by learning from the experiments already performed by those researchers, which also aids in identifying bottlenecks and so speeds up finding solutions for them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[2] J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313,1994.&lt;br /&gt;
&lt;br /&gt;
[3] P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update. In Proceedings of the Linux Symposium 2002, pages 338–367, Ottawa, Ontario, June 2002.&lt;br /&gt;
&lt;br /&gt;
[4] C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009&lt;br /&gt;
&lt;br /&gt;
==Section 4.1 problems:==&lt;br /&gt;
**The percentage of serialization in a program largely determines how much an application can be sped up. As seen in the example in the paper, it is an inverse relationship (e.g. 25% serialization --&amp;gt; limit of 4x speedup).&lt;br /&gt;
**Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
***Locking of shared data structure - increasing # of cores --&amp;gt; increase in lock wait time&lt;br /&gt;
***Writing to shared memory - increasing # of cores --&amp;gt; increase in wait for cache coherence protocol&lt;br /&gt;
***Competing for space in shared hardware cache - increasing # of cores --&amp;gt; increase in cache miss rate&lt;br /&gt;
***Competing for shared hardware resources - increasing # of cores --&amp;gt; increase in wait for resources&lt;br /&gt;
***Not enough tasks for cores --&amp;gt; idle cores&lt;br /&gt;
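The speedup limit in the first bullet is Amdahl&#039;s law. As a toy sketch (in Python, purely illustrative; the paper itself works with C kernel code), with serial fraction s the best possible speedup on n cores is 1/(s + (1-s)/n):&lt;br /&gt;

```python
def max_speedup(serial_fraction, cores):
    # Amdahl's law: speedup is capped by the serial fraction s.
    # With s = 0.25 the limit is 1/s = 4x, no matter how many cores.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

print(max_speedup(0.25, 48))      # roughly 3.76x on 48 cores
print(max_speedup(0.25, 10**9))   # approaches the 4x ceiling
```

This is why even a small serialized portion dominates once the core count gets large.&lt;br /&gt;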
&lt;br /&gt;
===Contribution===&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
   - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications under test were referencing and updating shared counters across multiple cores. The solution in the paper is to use sloppy counters, which have each core track its own separate count of references and use a central shared counter to keep all counts on track. This is ideal because each core updates its count by modifying its per-core counter, usually needing access only to its own local cache, cutting down on waiting for locks or serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to implement. The main disadvantages of sloppy counters are that they perform poorly in situations where object de-allocation occurs often, because the de-allocation itself is an expensive operation, and that the counters use up space proportional to the number of cores.&lt;br /&gt;
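A minimal sketch of the sloppy-counter idea (a toy Python analogy, not the kernel&#039;s actual C implementation; the `batch` threshold and class name are our own invention):&lt;br /&gt;

```python
import threading

class SloppyCounter:
    # Each core counts locally, lock-free in the common case, and only
    # touches the shared central counter in batches.
    def __init__(self, ncores, batch=64):
        self.batch = batch
        self.central = 0
        self.lock = threading.Lock()
        self.local = [0] * ncores   # per-core counts, no lock needed

    def inc(self, core):
        self.local[core] += 1
        if self.local[core] == self.batch:   # flush only every `batch` increments
            with self.lock:
                self.central += self.batch
            self.local[core] = 0

    def value(self):
        # Reconciling an exact total is the slow path.
        with self.lock:
            return self.central + sum(self.local)
```

The common-case `inc` touches only core-local state; an exact total is reconciled only when `value` is called, mirroring the paper&#039;s point that reconciliation (e.g. at de-allocation) is the expensive path.&lt;br /&gt;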
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: a per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each data structure was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was maintained, and any per-core misses were filled from the central table into the per-core table.&lt;br /&gt;
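The per-core pattern with a central fallback might be sketched like this (hypothetical Python analogy; the names and the `refill` batch size are assumptions, not the kernel&#039;s code):&lt;br /&gt;

```python
import threading

class PerCoreFreeList:
    # Decentralizing a free list: each core pops from its own list and
    # falls back to the shared central list in bulk on a miss.
    def __init__(self, items, ncores, refill=8):
        self.refill = refill
        self.lock = threading.Lock()
        self.central = list(items)
        self.percore = [[] for _ in range(ncores)]

    def alloc(self, core):
        if not self.percore[core]:          # local miss: refill from central
            with self.lock:
                self.percore[core] = self.central[:self.refill]
                del self.central[:self.refill]
        return self.percore[core].pop()

    def free(self, core, item):
        self.percore[core].append(item)     # common case touches no lock
```

The central lock is taken only on a per-core miss, so most allocations stay core-local.&lt;br /&gt;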
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
Misplaced variables in the cache cause different cores to request the same cache line for reading and writing at the same time, often enough to significantly impact performance. By moving the often-written variable to another cache line the bottleneck was removed.&lt;br /&gt;
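A rough illustration of why placement matters (assuming 64-byte cache lines, which is typical but an assumption here; `hits` and `config` are made-up field names):&lt;br /&gt;

```python
LINE = 64  # bytes per cache line (an assumed, typical size)

def cache_line(offset):
    # Which cache line a field at this byte offset lands on.
    return offset // LINE

# Two fields packed next to each other share a line, so a core writing
# `hits` invalidates the line that a reader of `config` is using:
print(cache_line(0), cache_line(8))    # same line: false sharing
# Padding the hot field out to its own line removes the conflict:
print(cache_line(0), cache_line(64))   # different lines: no sharing
```

No data is actually shared in the first case; only the cache line is, hence "false" sharing.&lt;br /&gt;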
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock. Likewise a mutex can be split so that it locks only part of a data structure rather than the whole thing. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
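Lock splitting can be sketched as sharding one big mutex into many smaller ones (toy Python analogy, not the kernel&#039;s implementation):&lt;br /&gt;

```python
import threading

class ShardedMap:
    # Instead of one mutex over the whole table, each shard gets its own,
    # so operations on different keys rarely contend.
    def __init__(self, nshards=16):
        self.shards = [{} for _ in range(nshards)]
        self.locks = [threading.Lock() for _ in range(nshards)]

    def _shard(self, key):
        return hash(key) % len(self.shards)

    def put(self, key, value):
        i = self._shard(key)
        with self.locks[i]:
            self.shards[i][key] = value

    def get(self, key, default=None):
        i = self._shard(key)
        with self.locks[i]:
            return self.shards[i].get(key, default)
```

Two cores touching keys in different shards never wait on each other, which is the whole point of splitting the lock.&lt;br /&gt;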
&lt;br /&gt;
====Conclusion====&lt;br /&gt;
 Conclusion: we can make a traditional OS architecture scale (at least to 48 cores), we just have to remove bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===Critique===&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of its content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content(Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
 Fairness criterion:&lt;br /&gt;
 - does the test accurately describe real-world use-cases (or some set there-of)? (external fairness, maybe ignored for testing and benchmarking purposes, usually is too)&lt;br /&gt;
 - does the test put all tested implementations through the same test? (internal fairness)&lt;br /&gt;
&lt;br /&gt;
Both the stock and new implementations use the same benchmarks, so neither of them has a particular advantage; that holds true for all seven programs. In spite of this, there are also some assumptions and conditions whose inclusion or exclusion the paper fails to explain fairly. All tests ignore the storage I/O bottleneck, which, while not entirely relevant for the purposes of the paper, is relevant to real-world use. It is a problem, but not nearly as bad once you consider SSD technology, which goes a long way toward reducing the storage I/O bottleneck.&lt;br /&gt;
&lt;br /&gt;
=====Exim: &#039;&#039;Section 5.2&#039;&#039;=====&lt;br /&gt;
The test uses a relatively small number of connections, but that is also explicitly stated to be a non-issue - &amp;quot;as long as there are enough clients to keep Exim busy, the number of clients has little effect on performance.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This test is explicitly stated to be ignoring the real-world constraint of the IO bottleneck, thus is unfair when compared to real-world scenarios. The purpose was not to test the IO bottleneck. Therefore the unfairness to real-world scenarios is unimportant.&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached has no explicit or implicit fairness concerns with respect to real-world scenarios.&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
Linux has a built-in kernel flaw where network packets are forced to travel through multiple queues before they arrive at the queue where they can be processed by the application. This imposes significant costs on multi-core systems due to queue locking. This flaw inherently diminishes the performance of Apache on multi-core systems, because multiple threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For the sake of this experiment, Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world deployment but merely an attempt to achieve better parallel execution on a traditional kernel. These tests were also arranged to avoid the bottlenecks imposed by network and file storage hardware, meaning that making the proposed modifications to the kernel won&#039;t necessarily produce the same increase in throughput as described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware.&lt;br /&gt;
&lt;br /&gt;
=====PostgreSQL: &#039;&#039;Section 5.5&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
Since the inherent nature of gmake makes it quite parallel, the testing and updating attempted on gmake resulted in essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, the execution time of gmake relies quite heavily on the compiler that is used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns when it comes to the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
=====Psearchy: &#039;&#039;Section 5.7&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====Metis: &#039;&#039;Section 5.8&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6020</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6020"/>
		<updated>2010-12-02T00:12:20Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* Claim Sections */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Research problem -fii&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note, Anil said in class on Tuesday the 30th of November that we only need to explain 3 of the applications and not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
* I will work on contribution - [[Rovic]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where to the do their processing)&lt;br /&gt;
&lt;br /&gt;
====Exim: &#039;&#039;Section 3.1&#039;&#039;====&lt;br /&gt;
Exim is a mail server for Unix. It&#039;s fairly parallel: the server forks a new process for each connection, and forks twice more to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table. memcached itself is very much not parallel, but it can be made parallel by running multiple instances and having clients synchronize data between them. With few requests, memcached does most of its processing in the network stack, spending 80% of its time there on one core.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. In this study, Apache was configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread accepts incoming connections and various other threads service those connections. On a single-core processor, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;====&lt;br /&gt;
As implied by its name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its own code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system the execution time in the kernel jumps to 82%.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Psearchy: &#039;&#039;Section 3.6&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Metis: &#039;&#039;Section 3.7&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
===Research problem===&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
 Problem being addressed: scalability of current generation OS architecture, using Linux as an example. (?)&lt;br /&gt;
&lt;br /&gt;
 Summarize related works (Section 2, include links, expand information to have at least a summary of some related work)&lt;br /&gt;
&lt;br /&gt;
===Contribution===&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
   - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
 Conclusion: we can make a traditional OS architecture scale (at least to 48 cores), we just have to remove bottlenecks.&lt;br /&gt;
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications under test were referencing and updating shared counters across multiple cores. The solution in the paper is to use sloppy counters, which have each core track its own separate count of references and use a central shared counter to keep all counts on track. This is ideal because each core updates its count by modifying its per-core counter, usually needing access only to its own local cache, cutting down on waiting for locks or serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to implement. The main disadvantages of sloppy counters are that they perform poorly in situations where object de-allocation occurs often, because the de-allocation itself is an expensive operation, and that the counters use up space proportional to the number of cores.&lt;br /&gt;
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: a per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each data structure was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was maintained, and any per-core misses were filled from the central table into the per-core table.&lt;br /&gt;
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
Misplaced variables in the cache cause different cores to request the same cache line for reading and writing at the same time, often enough to significantly impact performance. By moving the often-written variable to another cache line the bottleneck was removed.&lt;br /&gt;
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock. Likewise a mutex can be split so that it locks only part of a data structure rather than the whole thing. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===Critique===&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of its content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content(Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
 Fairness criterion:&lt;br /&gt;
 - does the test accurately describe real-world use-cases (or some set there-of)? (external fairness, maybe ignored for testing and benchmarking purposes, usually is too)&lt;br /&gt;
 - does the test put all tested implementations through the same test? (internal fairness)&lt;br /&gt;
Both the stock and new implementations use the same benchmarks, therefore neither of them has a particular advantage. That holds true for all seven programs.&lt;br /&gt;
&lt;br /&gt;
=====Exim: &#039;&#039;Section 5.2&#039;&#039;=====&lt;br /&gt;
The test uses a relatively small number of connections, but that is also explicitly stated to be a non-issue - &amp;quot;as long as there are enough clients to keep Exim busy, the number of clients has little effect on performance.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This test is explicitly stated to be ignoring the real-world constraint of the IO bottleneck, thus is unfair when compared to real-world scenarios. The purpose was not to test the IO bottleneck. Therefore the unfairness to real-world scenarios is unimportant.&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached has no explicit or implicit fairness concerns with respect to real-world scenarios.&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
Linux has a built-in kernel flaw where network packets are forced to travel through multiple queues before they arrive at the queue where they can be processed by the application. This imposes significant costs on multi-core systems due to queue locking. This flaw inherently diminishes the performance of Apache on multi-core systems, because multiple threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For the sake of this experiment, Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world deployment but merely an attempt to achieve better parallel execution on a traditional kernel. These tests were also arranged to avoid the bottlenecks imposed by network and file storage hardware, meaning that making the proposed modifications to the kernel won&#039;t necessarily produce the same increase in throughput as described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware.&lt;br /&gt;
&lt;br /&gt;
=====PostgreSQL: &#039;&#039;Section 5.5&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====Psearchy: &#039;&#039;Section 5.7&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====Metis: &#039;&#039;Section 5.8&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=5463</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=5463"/>
		<updated>2010-11-23T12:46:07Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* Group members */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Research problem -fii&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
 The paper&#039;s title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.&lt;br /&gt;
&lt;br /&gt;
Authors in order presented: Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich&lt;br /&gt;
&lt;br /&gt;
affiliation: MIT CSAIL&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
Ideas to explain:&lt;br /&gt;
#thread (maybe)&lt;br /&gt;
#Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
#Summarize scalability tutorial (Section 4.1 of the paper)&lt;br /&gt;
#Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where to the do their processing)&lt;br /&gt;
&lt;br /&gt;
=====Exim: &#039;&#039;Section 3.1&#039;&#039;=====&lt;br /&gt;
Exim is a mail server for Unix. It&#039;s fairly parallel: the server forks a new process for each connection, and forks twice more to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 3.2&#039;&#039;=====&lt;br /&gt;
memcached is an in-memory hash table. memcached itself is very much not parallel, but it can be made parallel by running multiple instances and having clients synchronize data between them. With few requests, memcached does most of its processing in the network stack, spending 80% of its time there on one core.&lt;br /&gt;
&lt;br /&gt;
===Research problem===&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
Problem being addressed: scalability of current generation OS architecture, using Linux as an example. (?)&lt;br /&gt;
&lt;br /&gt;
Summarize related works (Section 2, include links, expand information to have at least a summary of some related work)&lt;br /&gt;
&lt;br /&gt;
===Contribution===&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
&lt;br /&gt;
Conclusion: we can make a traditional OS architecture scale (at least to 48 cores), we just have to remove bottlenecks.&lt;br /&gt;
&lt;br /&gt;
=====Per-Core Data Structures=====&lt;br /&gt;
Three centralized data structures were causing bottlenecks - a per-superblock list of open files, vfsmount table, the packet buffers free list. Each data structure was decentralized to per-core versions of itself. In the case of vfsmount the central data structure was maintained, and any per-core misses got written from the central table to the per-core table.&lt;br /&gt;
&lt;br /&gt;
=====Eliminating false sharing=====&lt;br /&gt;
Misplaced variables on the cache cause different cores to request the same line to be read and written at the same time often enough to significantly impact performance. By moving the often written variable to another line the bottleneck was removed.&lt;br /&gt;
&lt;br /&gt;
=====Avoiding unnecessary locking=====&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock. Likewise mutexes can be split from locking the whole data structure to locking a part of it. Both these changes remove or reduce bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===Critique===&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of its content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
Fairness criteria:&lt;br /&gt;
#does the test accurately describe real-world use-cases (or some set thereof)? (external fairness; can be ignored for testing and benchmarking purposes, and usually is)&lt;br /&gt;
#does the test put all tested implementations through the same test? (internal fairness)&lt;br /&gt;
&lt;br /&gt;
Style criteria (feel free to add more; I have no idea what should go here):&lt;br /&gt;
#does the paper present information out of order?&lt;br /&gt;
#does the paper present needless information?&lt;br /&gt;
#does the paper have any sections that are inherently confusing?&lt;br /&gt;
&lt;br /&gt;
=====Testing Method: &#039;&#039;Section 5&#039;&#039;=====&lt;br /&gt;
Both the stock and new implementations use the same benchmarks, so internal fairness is preserved for all seven programs.&lt;br /&gt;
&lt;br /&gt;
=====Exim: &#039;&#039;Section 5.2&#039;&#039;=====&lt;br /&gt;
The test uses a relatively small number of connections, but that is also implicitly stated to be a non-issue - &amp;quot;as long as there are enough clients to keep Exim busy, the number of clients has little effect on performance.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This test is explicitly stated to ignore the real-world constraint of the I/O bottleneck, and is thus unfair when compared to real-world scenarios. Since the purpose was not to test the I/O bottleneck, that unfairness is unimportant.&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached has no explicit or implicit fairness concerns with respect to real-world scenarios.&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3878</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3878"/>
		<updated>2010-10-14T17:48:31Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Moving from a process to the kernel to user space and back again happens a lot, and this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* the failure of one program does not impact other programs [7]&lt;br /&gt;
* can support more than one API or strategy since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems, and every subsystem can rely on this guarantee from the others [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that associates each virtual page with a physical page frame [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* built on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides three operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s [7]&lt;br /&gt;
*** the page must be available to the owner [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* map and flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
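&lt;br /&gt;
A toy model of the three operations above, sketched from the description in [7] (address spaces are just sets of page IDs here; a real kernel tracks full mapping trees, which the recursive flush mimics): &lt;br /&gt;

```python
class AddressSpace:
    """Toy address space supporting the L4-style grant/map/flush operations."""

    def __init__(self):
        self.pages = set()
        self.mapped_to = {}                  # page id mapped to recipient spaces

    def grant(self, page, recipient):
        self.pages.remove(page)              # the page leaves the granter entirely
        recipient.pages.add(page)

    def map(self, page, recipient):
        assert page in self.pages            # the owner keeps the page
        recipient.pages.add(page)
        self.mapped_to.setdefault(page, []).append(recipient)

    def flush(self, page):
        for r in self.mapped_to.pop(page, []):
            r.flush(page)                    # revoke recursively from recipients
            r.pages.discard(page)
```

Grant moves a page, map shares it, and flush walks the recipients to take it back, matching the bullet points above.&lt;br /&gt;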
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** grant and map also need IPC (so by the principle above, IPC has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subsystems to communicate [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads that are empty except for their unique sender IDs [7]&lt;br /&gt;
** transforming an interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupt handling and does not understand the interrupt [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is executed implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among guest OSs running on top of a host OS&lt;br /&gt;
* Each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualizing the hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through process-level protection mechanisms [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the hostOS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the hostOS for device-driver access [6]&lt;br /&gt;
** Provide a virtualization that is similar to hardware [From the paper posted, no citation yet]&lt;br /&gt;
** GuestOS and Hypervisor work together to improve performance&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
==== ====&lt;br /&gt;
(Not complete but most of article 9)&lt;br /&gt;
Classical Virtualization&lt;br /&gt;
* VMMs allow programs in virtual environments to run essentially natively, aside from resource usage&lt;br /&gt;
** dominant instructions executed directly on the CPU&lt;br /&gt;
** the VMM completely controls system resources&lt;br /&gt;
** emulating every native instruction instead would severely affect performance&lt;br /&gt;
** sensitive instructions are those that violate safety and encapsulation&lt;br /&gt;
** the VMM traps and handles them as privileged instructions&lt;br /&gt;
&lt;br /&gt;
x86 Virtualization&lt;br /&gt;
* virtualization in personal workstations rather than mainframes&lt;br /&gt;
** rings that allow isolation between virtual machines&lt;br /&gt;
** most privileged in ring 0 and least in ring 3. The operating system runs in ring 0 and user apps in ring 3&lt;br /&gt;
*** vmm in ring 0 and vms in lesser privilege rings (1 or 3)&lt;br /&gt;
*** guestOS believes its in ring 0&lt;br /&gt;
* address space compression, where to run the VMM&lt;br /&gt;
** if run using guest address space, guest can find out its virtualized or compromise the isolation&lt;br /&gt;
* x86 does not trap all sensitive instructions, though they can still be handled; this violates the classical virtualization description&lt;br /&gt;
* some privileged accesses fail without faulting&lt;br /&gt;
* interrupt virtualization - both the VMM AND the guestOS handle interrupts&lt;br /&gt;
* binary translation - improves performance&lt;br /&gt;
* rewriting instructions and trapping before problems arise&lt;br /&gt;
&lt;br /&gt;
Paravirtualization&lt;br /&gt;
* the guestOS is exposed to VM information, so the guest is aware that it is virtualized and can make decisions based on this&lt;br /&gt;
* allows to avoid problem instructions&lt;br /&gt;
* Xen&lt;br /&gt;
* guestOS must be modified and is not natively running&lt;br /&gt;
**works with the hostOS to run efficiently&lt;br /&gt;
&lt;br /&gt;
VMM types&lt;br /&gt;
* hostedVMM - executes in hostOS and uses the drivers and support of the OS&lt;br /&gt;
* Stand-aloneVMM - runs directly on hardware and uses its own drivers and services&lt;br /&gt;
* hybridVMM - runs a serviceOS where requests to hardware go through (I/O)&lt;br /&gt;
&lt;br /&gt;
Device Emulation&lt;br /&gt;
* implement real hardware in software&lt;br /&gt;
* completely virtual device that the guest interacts with&lt;br /&gt;
* mapped to physical hardware that handles the interactions but the emulation allows conversion&lt;br /&gt;
* allows the vm to be easily migrated between machines as it does not rely on the physical hardware&lt;br /&gt;
* allows having multiple vms and simplifies sharing (multiplexing)&lt;br /&gt;
* poor performance, as the VMM needs to do a lot of work to virtualize the machine&lt;br /&gt;
&lt;br /&gt;
Paravirtualization&lt;br /&gt;
* modified guestOS to cooperate with VMM &lt;br /&gt;
* VMM does not have to do everything to handle device drivers&lt;br /&gt;
* not everything can be paravirtualized&lt;br /&gt;
* proprietary OSs and device drivers can&#039;t be paravirtualized&lt;br /&gt;
* still allows an increase in performance&lt;br /&gt;
* eventing or callback mechanism&lt;br /&gt;
** the guestOS modifies its interrupt mechanisms&lt;br /&gt;
* modifications are not applicable to all guestOS&lt;br /&gt;
&lt;br /&gt;
Dedicated Devices&lt;br /&gt;
* does not virtualize device but assigns directly to guest vm&lt;br /&gt;
* uses guest&#039;s drivers instead of host&lt;br /&gt;
* simplifies the VMM by removing the need to handle I/O securely&lt;br /&gt;
* limited physical devices that can be dedicated&lt;br /&gt;
* difficult to migrate the VM as it depends on the pairing with this resource&lt;br /&gt;
* eliminates the overhead of virtualization and keeps the VMM simple&lt;br /&gt;
* direct memory access not supported&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Micro-kernel architecture with limited abstractions, ask for resource, get resource not resource abstraction&lt;br /&gt;
* Less functionality provided by kernel, security and handling of resource sharing&lt;br /&gt;
* Once application receives resource, it can use it as it wishes/in control&lt;br /&gt;
* Keep the basic kernel to handle allocating resources and sharing rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* The goal of the exokernel is to give LibOSs maximum freedom without allowing them to interfere with each other. To do this the exokernel separates protection from management, which involves three important tasks [1]:&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* Library OS (LibOS)&lt;br /&gt;
** Reduces the number of kernel crossings[1]&lt;br /&gt;
** not trusted by the exokernel, so it only needs to be trusted by the application; the example given is a bad parameter passed to the LibOS, where only the application is affected [1] (so the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** any application running on the exokernel can change its LibOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except where required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose allocation[1]&lt;br /&gt;
** allow LibOs to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names[1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** Physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* LibOS handle resource policy decisions&lt;br /&gt;
* Exokernels have a policy to decide between competing LibOSs (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* only checks authorization at bind time [1]&lt;br /&gt;
** applications with complex resource needs are only authorized at bind time [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand the complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts) --[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
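&lt;br /&gt;
A minimal sketch of the bind-time/access-time split described above (illustrative names, not the actual exokernel interface): the expensive authorization check runs once at bind, and every later access is only a cheap lookup that needs no understanding of the resource.&lt;br /&gt;

```python
class Exokernel:
    """Toy secure-binding model: authorize once at bind, check cheaply at access."""

    def __init__(self, acl):
        self.acl = acl                       # resource id mapped to allowed apps
        self.bindings = set()

    def bind(self, app, resource):
        # costly policy evaluation, done once per binding
        if app not in self.acl.get(resource, ()):
            raise PermissionError("bind refused")
        self.bindings.add((app, resource))

    def access(self, app, resource):
        # fast path: a set lookup, no policy evaluation
        return (app, resource) in self.bindings
```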
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** the LibOS is able to garner which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to simply delete the resource, as the LibOS might need to write some system-critical data to it [1]&lt;br /&gt;
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt;Microkernel versus monolithic kernel&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  - Roch&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I will cite/reference it better later&lt;br /&gt;
&lt;br /&gt;
[9]Fisher-Ogden J. 2006. Hardware Support for Efficient Virtualization. University of California, San Diego. http://cseweb.ucsd.edu/~jfisherogden/hardwareVirt.pdf&lt;br /&gt;
&lt;br /&gt;
Not completely sure of the citation style used above.&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels,virtual machines, microkernels *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview](Power Point)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source but an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels: an exokernel can give developers low-level access, similar to direct hardware access but through a protecting layer, and at the same time can contain enough hardware abstraction to give application programs a similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions to developer&lt;br /&gt;
Microkernel - is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of any or all devices requested by an application program&lt;br /&gt;
Exokernel – I&#039;ve got a sound card&lt;br /&gt;
Virtual Machine – I&#039;ve got the sound card you&#039;re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I&#039;ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
Microkernel - very small, very predictable, good for scheduling (QNX is a microkernel - POSIX compatible, with the benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
This is some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware and without abstraction. They share the same basic architecture, with minimal functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machines. There is a similar notion of partitioning the resources among applications/OSs and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space, but one major problem with this is the large amount of moving from a process to the kernel to user space and back again, which is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefit is that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each others instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though.. . Oh well. Their loss.  I will do some before or after work today. Ill start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah i think there was more like 7 of us btw if any one has any more information feel free to add it would be nice if you add the references so that way citing is really easy on  acm.org it will auto give you the citation info (where it says Display Formats click on ACM Ref  and new window with the citation info auto pop&#039;s up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no big words.  Sorry about the Microkernels not done yet.  Working on an outline now.  Finally found how to access the ACM through carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference ( Whomever did that == hero: true) about the scheduling and the Memory management.  Should that be included in kernels in general and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro is chosen, and intro is done.  Microkernel and kernel will be done in a hour I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in a hour and we can double check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down and we can go over it.  CLing, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
Aaron .L&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break the essay down into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add them here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
 What do you mean by &amp;quot;weaker&amp;quot;? (I think you mean exokernels take the best of both worlds.) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on. That&#039;s not to say it&#039;ll be the only part you do; rather, we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge that lets applications access the hardware, and it is responsible for managing the system&#039;s resources such as memory, disk storage, task management, and networking. We compare exokernels to microkernels and virtual machines by looking at how each design goes about such management. In the exokernel conceptual model, exokernels are much smaller than microkernels: they are tiny and strive to limit their functionality to protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it also adds a layer of complexity, and it is less efficient than a real machine because hardware is accessed indirectly. The exokernel, by contrast, provides low-level hardware access and lets applications build custom abstractions over those devices, which improves program performance compared to a VM&#039;s implementation. The exokernel design can take the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system. An operating system could not function without the kernel.  &lt;br /&gt;
&lt;br /&gt;
A kernel is the lowest-level section of an operating system, and within a system it has the most privileges.  It runs alongside ‘user space’, which is where the user has access and can run applications and libraries.[8]  This leaves the kernel to manage the other necessary processes; for example, the kernel could manage the file systems and handle process scheduling.  The kernel is layered, with the most authoritative process on its lowest level.[8]  A monolithic kernel, which is a kernel that contains all mandatory processes within itself, was the common kernel type in earlier versions of today’s operating systems.  However, this architecture had problems. [8]  If the kernel needed to be updated with more code, or a change to the system, the entire kernel would need to be recompiled, and due to the number of processes within it, that would take an impractical amount of time.  Here, a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel: code is included in the kernel only if moving it outside would adversely affect the system. Implementing a microkernel can affect the system in a variety of ways; for example, it can increase performance and efficiency. [7] So, a microkernel is a kernel that has a reduced amount of mandatory software within itself.  This means that it contains less software that it has to manage, and has a reduced size.  The microkernel structure that emerged from the end of the 1980’s to the early 1990’s removes processes like the file systems and the drivers from the kernel, leaving it with process control, input/output control, and interrupts.  [8] This new structure makes the system much more modular and makes solutions easier to provide.  If a driver must be patched or upgraded, the kernel does not need to be recompiled.  [7] The old driver can be removed, and while the device waits for the system to recognize it, the operating system replaces the driver.  This allows real-time updating, and it can be done while the computer is still functional, which reduces the chance of a complete crash of the system.  If a device fails, the kernel will not crash itself like a monolithic kernel would: the microkernel can reload the driver of the device that failed and continue functioning.  [7]  &lt;br /&gt;
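The driver-isolation idea in the paragraph above can be sketched in a few lines. This is a toy model with invented names, not any real microkernel&#039;s API: the &amp;quot;kernel&amp;quot; only routes messages, drivers live outside it as replaceable servers, and a crashing server can be swapped without touching the kernel.&lt;br /&gt;

```python
# Toy model of the microkernel idea: drivers are user-space servers that
# the kernel only routes messages to. All names are illustrative.

class Microkernel:
    """Kernel keeps only IPC routing and a registry of user-space servers."""
    def __init__(self):
        self.servers = {}          # name -> handler function ("user space")

    def register(self, name, handler):
        # installing or replacing a driver does NOT rebuild the kernel
        self.servers[name] = handler

    def send(self, name, message):
        # IPC: the kernel only routes; a crashing server is isolated
        handler = self.servers.get(name)
        if handler is None:
            return "error: no such server"
        try:
            return handler(message)
        except Exception:
            # the server failed; the kernel survives and can restart it
            del self.servers[name]
            return "error: server crashed, restart it"

kernel = Microkernel()
kernel.register("disk", lambda msg: "read ok: " + msg)
print(kernel.send("disk", "block 7"))                      # served in user space
kernel.register("disk", lambda msg: "v2 read ok: " + msg)  # live driver upgrade
print(kernel.send("disk", "block 7"))
```

The point of the sketch is the contrast with a monolithic kernel: here the &amp;quot;kernel&amp;quot; never changes when a driver does, which is the no-recompile property described above.&lt;br /&gt;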
&lt;br /&gt;
Want more on the scheduling?  I can do that if wanted. - Key note on the exokernel&#039;s multiplexing vs. the microkernel&#039;s messaging: exo is more efficient, so perhaps run with the idea that messaging between processes is not necessarily the ideal way. We also need to start laying out weaknesses in the design, in order to play up the idea that an exokernel just does it better. -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. This entails virtualization of the physical machine&#039;s resources in order to share them among the OSes run in the VM. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is actually running in a virtualized environment on top of a hostOS, the OS actually running on the machine, sharing the resources.&lt;br /&gt;
&lt;br /&gt;
Virtual Machines generally contain two key components: the Virtual Machine Monitor, or VMM, and the VM itself.&lt;br /&gt;
&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. [4] In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows the VM to operate as if it were on its own machine by handling any requests to resources and maintaining these requests with what has actually been provided to the VM by the hostOS.  The hostOS provides management for the VMM as well as allowing physical access to devices, hardware and drivers. [6]&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS we are running through virtualization. [6] This OS is called the guestOS and it will only be able to access any resources that have been made available to the VM by the hostOS. [6] Otherwise, the guestOS will not know about any other resources and does not have direct access to physical hardware. This will be taken care of by the VMM but the guestOS will execute as its own machine, unaware of this mediator.&lt;br /&gt;
&lt;br /&gt;
There are various ways of implementing hardware virtualization in a system to allow VMs to run. This includes device emulation, paravirtualization and dedicated devices. [9]&lt;br /&gt;
&lt;br /&gt;
In device emulation, the VMM provides a complete virtualization of a device, in software, for the guestOS to interact with. [9] The VMM will map this virtualized device to the physical resource and handle any interactions between them. This will usually include converting instructions from the guestOS into instructions that are compatible with the device. [9] Device emulation allows the VM to be migrated easily to another machine, as it is not dependent on the physical devices but on the software emulations instead. [9] It also allows for simpler multiplexing between multiple virtual machines, as sharing can be handled through these virtualized devices. [9] A drawback of emulation, however, is poor performance, as the VMM must handle every request and convert it to be compatible with the physical device. [9] Despite its poor performance, emulation is the most common form of virtualization.&lt;br /&gt;
&lt;br /&gt;
Paravirtualization allows for a boost in performance by having the guestOS and the hostOS work together. [9] In paravirtualization, the guestOS is not a native OS; it must be modified so that it is aware that it is a virtualized system. [9] Because the guestOS is aware of this, it can make better decisions about how it accesses devices, and the VMM&#039;s responsibility is reduced, as it no longer has to translate between the guestOS and the physical devices. [9] Though the performance boost is a great advantage, there are many disadvantages. You can only use paravirtualization if you can implement the modifications to the guestOS. Not everything can be paravirtualized, and as such, this limits the cases in which this method can be used. [9] Also, every guestOS must be modified in order to be used in paravirtualization. The modifications differ across OSes, so there is also the task of implementing these changes to make a guestOS compatible. [9]&lt;br /&gt;
&lt;br /&gt;
Instead of virtualizing the hardware and mediating through the VMM, dedicated devices are mapped directly to the guestOS. [9] In this method, the device uses the guestOS&#039;s drivers instead of the hostOS&#039;s. [9] This allows the guestOS to use the hardware to its full extent without having to deal with the VMM, and it simplifies the VMM by eliminating the overhead of virtualizing the hardware and handling requests to devices. [9] However, there are limited physical resources to dedicate to a guestOS, and this also makes migration difficult, as the guestOS is dependent on the physical device. [9]&lt;br /&gt;
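To make the emulation vs. paravirtualization trade-off above concrete, here is a toy sketch (all class and method names are invented for illustration, not taken from any real hypervisor): with emulation the VMM pays a translation cost on every request, while a paravirtualized guest already issues requests the device understands.&lt;br /&gt;

```python
# Illustrative contrast between device emulation and paravirtualization.
# Names are made up for the sketch; no real VMM works exactly like this.

class VMM:
    def __init__(self):
        self.translations = 0

    def emulated_write(self, guest_instruction):
        # emulation: the VMM must convert every guest instruction into
        # one the physical device understands -- a per-request cost
        self.translations += 1
        return "device: " + guest_instruction.replace("guest", "native")

    def para_write(self, native_request):
        # paravirtualization: the modified guestOS already speaks the
        # device's language, so the VMM just passes the request through
        return "device: " + native_request

vmm = VMM()
print(vmm.emulated_write("guest write"))   # VMM translated this one
print(vmm.para_write("native write"))      # no translation needed
print("translations paid:", vmm.translations)
```

The counter makes the performance argument from the paragraphs above visible: only the emulated path accumulates translation work in the VMM.&lt;br /&gt;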
&lt;br /&gt;
&#039;&#039;&#039;notes&#039;&#039;&#039;&lt;br /&gt;
- It ended up being quite lengthy. I mainly focused on the device virtualization rather than the architecture of a VM (like x86 virtualization). I&#039;ll put up my notes for the paper I found for virtualization. I didn&#039;t talk about Xen or VMware, though. If any of that is needed, I can try to continue working on it tonight, but I have another priority.&lt;br /&gt;
&lt;br /&gt;
-Try focusing on the emulation side of VM: emulation&#039;s weaknesses vs. the direct hardware access or custom abstraction that exokernels provide. -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L &lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet.  Looks like we got it covered.  We should read each other&#039;s parts and put up suggestions and edits. One of us should try to change it to one style if there are contradictions, and put it on the main page.  We can figure that out tomorrow.  - Jon S&lt;br /&gt;
&lt;br /&gt;
Once the other parts are up, if you know of a good reference to back anything up, post the link so we can use it. -Slade&lt;br /&gt;
&lt;br /&gt;
I made some edits to the first two paragraphs. I just reworded some of the unclear sentences and fixed some grammatical errors. I&#039;ll work on editing more of it after COMP 3007. Also, when all the parts are up I can go through it and link the paragraphs together so it can be read more like an essay.  --[[User:Aellebla|Aellebla]] 15:18, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
So far so good. If you find some sentences that are off, go ahead and correct them; just note in here that you&#039;ve made changes. Almost done, guys! -Slade&lt;br /&gt;
&lt;br /&gt;
==Potential Test Questions==&lt;br /&gt;
&lt;br /&gt;
Add potential test questions here:&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3720</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3720"/>
		<updated>2010-10-14T13:31:30Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to Kernel to user space and back again, this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* a failure in one program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy, since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system [7]&lt;br /&gt;
* any subsystem program created must be independent of all other subsystems; any subsystem that is used can expect this guarantee from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates virtual pages to physical pages [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion; each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides 3 operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s [7]&lt;br /&gt;
*** the page must be available to the owner [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant? --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
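A toy model of the grant/map/flush operations listed above (illustrative only; a real microkernel works on hardware page tables, not Python sets): grant moves a page between address spaces, map shares it, and flush revokes it from every recipient.&lt;br /&gt;

```python
# Toy sketch of grant/map/flush over address spaces. Pages are just ids;
# address spaces are sets. Purely illustrative, not a real kernel API.

class AddressSpace:
    def __init__(self, name):
        self.name = name
        self.pages = set()
        self.mapped_out = {}   # page -> recipients we shared it with

def grant(owner, recipient, page):
    # the page MOVES: removed from the owner, inserted into the recipient
    owner.pages.remove(page)
    recipient.pages.add(page)

def map_page(owner, recipient, page):
    # the page is SHARED: the owner keeps it, the recipient also sees it
    assert page in owner.pages
    recipient.pages.add(page)
    owner.mapped_out.setdefault(page, []).append(recipient)

def flush(owner, page):
    # the owner revokes the page from everyone it mapped it to
    for r in owner.mapped_out.pop(page, []):
        r.pages.discard(page)

a, b = AddressSpace("pager"), AddressSpace("app")
a.pages.add(3)
map_page(a, b, 3)      # both see page 3 now
flush(a, 3)            # the app loses it again; the pager keeps it
print(3 in a.pages, 3 in b.pages)   # True False
```

This also hints at an answer to the Grant question above: grant transfers ownership outright, so there is nothing left for the original owner to flush afterwards.&lt;br /&gt;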
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** grant and map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subprocesses to communicate [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads which are empty except for their unique sender id [7]&lt;br /&gt;
** transformation of the interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OS virtualization running on top of host OS&lt;br /&gt;
* Virtualized OS believe running on full machine on its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualization of the hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM is using for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through a process-level protection mechanism [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** provides a virtualization interface that is similar to the hardware [From the paper posted, no citation yet]&lt;br /&gt;
** guestOS and hypervisor work together to improve performance&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource itself, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: security and handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes / it is in control&lt;br /&gt;
* Keep a basic kernel to handle allocating and sharing resources, rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* the goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this, the exokernel separates protection from management; in doing so it performs 3 important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel, so it only needs to be trusted by the application; the example given is a bad parameter passed to the LibOS affecting only the application [1] (So the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except when required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose allocation [1]&lt;br /&gt;
** allow the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names [1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* LibOSes handle resource policy decisions&lt;br /&gt;
* Exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* authorization is only checked at bind time [1]&lt;br /&gt;
** applications with complex resource needs are only authorized during bind [1]&lt;br /&gt;
* the check done at access time is simple, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts) --[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application having to be scheduled [2]&lt;br /&gt;
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** the LibOS is able to learn which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to delete the resource immediately, as the LibOS might need to write some system-critical data to it [1]&lt;br /&gt;
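The secure-binding idea summarized above can be sketched as follows (a toy model with invented names, not the real exokernel interface): the expensive authorization check runs once at bind time, each later access is just a cheap token lookup, and revocation is visible to the LibOS.&lt;br /&gt;

```python
# Toy sketch of exokernel secure bindings. All names are illustrative.
import secrets

class Exokernel:
    def __init__(self):
        self.bindings = {}   # token -> (libos, resource)

    def bind(self, libos, resource, credentials):
        # the complex authorization logic runs ONCE, here at bind time
        if credentials != "authorized":
            raise PermissionError("bind refused")
        token = secrets.token_hex(8)
        self.bindings[token] = (libos, resource)
        return token

    def access(self, token):
        # the per-access check is a cheap table lookup; the kernel does
        # not need to understand the resource or re-evaluate policy
        if token not in self.bindings:
            raise PermissionError("no such binding")
        return "using " + self.bindings[token][1]

    def revoke(self, token):
        # visible revocation: the LibOS is told which binding is going away
        libos, resource = self.bindings.pop(token)
        print(libos, "notified: releasing", resource)

ek = Exokernel()
t = ek.bind("libos-1", "disk block 42", "authorized")
print(ek.access(t))
ek.revoke(t)
```

The separation of protection from management shows up in `access`: the kernel guards the binding point without knowing anything about what the resource means to the LibOS.&lt;br /&gt;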
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt;Microkernel verses monolithic kernel&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  - Roch&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I will cite it/reference it better later&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview](Power Point)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source, just as an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels, in the sense that exokernels can give developers low-level access, similar to direct access, through a protected layer, and at the same time can contain enough hardware abstraction to give application programs a similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions exposed to the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of any devices requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays Kazakhstan sound format only&lt;br /&gt;
Microkernel – Very small, very predictable, good for scheduling (QNX is a microkernel – POSIX compatible, benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
This is some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel is an even lower-level design than the microkernel, closer to the hardware, without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is a similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 from the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel developers tried to keep lots of things in user space. But one major problem with this is the large amount of moving from a process to the kernel to user space and back again, which is a costly, inefficient operation. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each other&#039;s instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today, but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though... Oh well. Their loss.  I will do some before or after work today. I&#039;ll start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah i think there were more like 7 of us. btw, if anyone has any more information feel free to add it. it would be nice if you add the references so that citing is really easy; on acm.org it will auto-give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info auto pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no big words.  Sorry about the Microkernels not done yet.  Working on an outline now.  Finally found how to access the ACM through carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference ( Whomever did that == hero: true) about the scheduling and the Memory management.  Should that be included in kernels in general and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro is chosen, and intro is done.  Microkernel and kernel will be done in a hour I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in a hour and we can double check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down, and we can go over it.  Cling, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
 what do you mean by &amp;quot;weaker&amp;quot;? (i think you mean exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge for applications to access the hardware level. It is responsible for managing the system&#039;s resources such as memory, disk storage, task management and networking. It is on how the kernel goes about such management that we are comparing exokernels to microkernels and virtual machines. In the exokernel conceptual model, exokernels are much smaller than microkernels since, by design, they are tiny and strive to keep their functionality limited to protection and multiplexing of resources. The virtual machine implementation of virtualizing all devices on the system may provide compatibility, but it adds a layer of complexity that makes the system less efficient than a real machine, as it accesses the hardware indirectly. It can be observed how the exokernel provides low-level hardware access and custom abstraction of those devices to improve program performance, as opposed to a VM&#039;s implementation. The exokernel concept has a design that can take the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system.  Without the kernel, an operating system could not function.  &lt;br /&gt;
&lt;br /&gt;
A kernel is the lowest-level section of an operating system.  It has the most privileges in the system.  It runs alongside &#039;user space&#039;, which is where the user has access and can run applications and libraries.[8]  This leaves the kernel with the need to manage the other necessary processes such as the file systems and process scheduling.  The kernel is layered, with the most authoritative process on its lowest level.[8]  A monolithic kernel, a kernel that contains all mandatory processes within itself, was the common kernel type utilized by the earlier versions of today&#039;s operating systems.  However, this architecture had problems.[8]  If the kernel needed to be updated with more code, or a fix for the system, the entire kernel would need to be recompiled, and due to the number of processes within it, this would take an inefficient amount of time.  This is where a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel: code is only included in the kernel if moving it outside would adversely affect the system, for example for performance and efficiency reasons.[7] So, a microkernel is a kernel that has a reduced amount of mandatory software within itself.  This means that it contains less software that it has to manage, and has a reduced size.  The microkernel that emerged from the end of the 1980&#039;s to the early 1990&#039;s has a structure in which processes like the file systems and the drivers are removed from the kernel, leaving it with process control, input/output control, and interrupts.[8] This new structure makes the system much more modular, and makes it easier to provide solutions.  If a driver must be patched or upgraded, the kernel does not need to be recompiled.[7] The old driver can be removed, and while the device waits for the system to recognize it, the operating system replaces the driver.  This allows real-time updating, and it can be done while the computer is still functional.  This can prevent a complete crash of the system.  If a device fails, the kernel will not crash itself, like a monolithic kernel would.  The microkernel can reload the driver of the failed device and continue functioning.[7]  &lt;br /&gt;
&lt;br /&gt;
Want more on the scheduling?  I can do that if wanted. Key note on exokernel&#039;s multiplexing vs microkernel&#039;s messaging: exo is more efficient, so perhaps run with the idea that messaging between processes is not necessarily the ideal way. We also need to start laying out weaknesses in the design in order to play up the idea that an exokernel just does it better -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. This entails virtualization of the physical machine&#039;s resources in order to share them among the OSes run in the VM. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is actually running in a virtualized environment on top of a hostOS, the OS actually running on the machine, sharing the resources.&lt;br /&gt;
&lt;br /&gt;
Virtual Machines generally contain two key components, the Virtual Machine Monitor, or VMM and the VM. &lt;br /&gt;
&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows the VM to operate as if it were on its own machine, by handling any requests for resources and reconciling these requests with what has actually been provided to the VM by the hostOS. The hostOS provides management for the VMM as well as physical access to devices, hardware and drivers.&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS we are running through virtualization. This OS is called the guestOS and it will only be able to access any resources that have been made available to the VM by the hostOS. Otherwise, the guestOS will not know about any other resources and does not have direct access to physical hardware. This will be taken care of by the VMM.&lt;br /&gt;
&lt;br /&gt;
[ I&#039;ll need to do some more research on the types of virtualization though before I can discuss that. If anyone has more information on them to put up in the points that would be helpful but I&#039;ll get to it right after class tomorrow morning. ]&lt;br /&gt;
&lt;br /&gt;
-try focusing on the emulation side of VMs: emulation&#039;s weaknesses vs the direct hardware access or custom abstraction that exokernels provide -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L &lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet.  Looks like we got it covered.  We should read each other&#039;s parts and put in suggestions and edits. One of us should try to change it to one style if there are contradictions, and put it on the main page.  We can figure that out tomorrow.  - Jon S&lt;br /&gt;
&lt;br /&gt;
Once the other parts are up and you see anything you know of as a good reference to back it up, put the link so we can use it. -Slade&lt;br /&gt;
&lt;br /&gt;
Potential Test Questions&lt;br /&gt;
&lt;br /&gt;
Add potential test questions here:&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3719</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3719"/>
		<updated>2010-10-14T13:31:02Z</updated>

		<summary type="html">&lt;p&gt;Slade: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* a failure of one program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system. [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems; any subsystem that is used is guaranteed this independence from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates the physical page to the virtual page. [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides 3 operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s. [7]&lt;br /&gt;
*** must be available to the owner. [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space. [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and Flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
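To make the three operations concrete, here is a rough Python sketch of the grant/map/flush idea from [7]. The names and the dict-based page table are invented for illustration; a real microkernel manipulates hardware page tables, not Python objects.&lt;br /&gt;

```python
# Rough sketch of the grant/map/flush address-space operations.
# Everything here is invented for illustration.

class AddressSpace:
    def __init__(self, name):
        self.name = name
        self.pages = {}  # virtual page number mapped to physical frame

    def grant(self, other, vpage):
        # Grant: the page leaves the owner and enters the recipient.
        frame = self.pages.pop(vpage)  # must be present in the owner
        other.pages[vpage] = frame

    def map(self, other, vpage):
        # Map: the recipient gets the page, the owner keeps it too.
        other.pages[vpage] = self.pages[vpage]

    def flush(self, recipients, vpage):
        # Flush: remove the page from every recipient; the owner keeps it.
        for space in recipients:
            space.pages.pop(vpage, None)

pager = AddressSpace("pager")
app = AddressSpace("app")
pager.pages[0x10] = 0x8000
pager.map(app, 0x10)      # shared: both now see frame 0x8000
pager.flush([app], 0x10)  # revoked from app; pager still has it
```

This also shows why Map and Flush are enough to build user-level memory managers and pagers: a pager maps pages into clients and flushes them to reclaim memory, all without the kernel understanding the policy.&lt;br /&gt;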
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** grant and map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** basic way for subprocesses to communicate. [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads which are empty except for their unique sender id [7]&lt;br /&gt;
** transformation of the interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt. [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OSes running on top of a host OS&lt;br /&gt;
* Virtualized OSes believe they are running on a full machine of their own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for the virtualization of hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device / hardware access [4]&lt;br /&gt;
** the OS that the VMM is using for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM is running is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== three approaches ====&lt;br /&gt;
* Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through process-level protection mechanisms [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
* Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** provides a virtualization interface that is similar to the hardware [From the paper posted, no citation yet]&lt;br /&gt;
** guestOS and hypervisor work together to improve performance&lt;br /&gt;
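As a toy illustration of the ring-based control flow above: the guest runs deprivileged, so privileged instructions trap into the VMM, which emulates them against the virtual hardware platform it built for that VM at boot. All names here are invented and have nothing to do with the real Xen or VMware code.&lt;br /&gt;

```python
# Toy model of Type I isolation. Purely illustrative.

PRIVILEGED = {"out", "hlt", "load_cr3"}  # a made-up privileged set

class VMM:
    def __init__(self):
        self.virtual_hw = {}  # per-VM virtual hardware state

    def boot(self, vm_id):
        # on boot the VMM creates a hardware platform for the VM
        self.virtual_hw[vm_id] = {"devices": ["disk0", "nic0"]}

    def execute(self, vm_id, instruction):
        # unprivileged instructions run directly on the CPU;
        # privileged ones trap from ring 1 into the VMM at ring 0
        if instruction in PRIVILEGED:
            return "trapped: VMM emulates {} for {}".format(instruction, vm_id)
        return "runs directly on hardware"
```

The cost of these traps is exactly the overhead paravirtualization tries to reduce, by having the guestOS cooperate with the hypervisor instead of trapping on every privileged instruction.&lt;br /&gt;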
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by kernel, security and handling of resource sharing&lt;br /&gt;
* Once application receives resource, it can use it as it wishes/in control&lt;br /&gt;
* Keep the basic kernel to handle allocating resources and sharing rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* The goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this the exokernel separates protection from management; in doing so it performs 3 important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel; an error affects only the application using it. Example given is a bad parameter passed to the LibOS: only the application is affected. [1] (So the LibOS can&#039;t interact with the kernel ???)&lt;br /&gt;
** any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** applications that use a LibOS that implements standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
* Securely expose hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except when required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-based resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
* Expose allocation [1]&lt;br /&gt;
** allow the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
* Expose names [1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are, I think it is as simple as what the hardware is called)--[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
* Expose revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows well-behaved LibOSes to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* the LibOS handles resource policy decisions&lt;br /&gt;
* exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* allows the separation of protection and resource use [1]&lt;br /&gt;
* authorization is only checked at bind time [1]&lt;br /&gt;
** applications with complex needs for resources are only authorized during bind. [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
** hardware mechanisms [1]&lt;br /&gt;
** software caching [1]&lt;br /&gt;
** downloading application code [1]&lt;br /&gt;
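A rough sketch of the bind-time vs. access-time split described above, with invented names (a real exokernel implements this with hardware mechanisms such as TLB entries, not a Python set):&lt;br /&gt;

```python
# Sketch of secure bindings: the expensive authorization decision is
# made once, at bind time; every later access is a cheap lookup, so
# the kernel can protect a resource without understanding what it is.
# All names here are invented for illustration.

class Exokernel:
    def __init__(self):
        self.bindings = set()  # (libos, resource) pairs approved at bind time

    def bind(self, libos, resource, credentials):
        # complex, one-time authorization check
        if credentials.get("owner") != libos:
            raise PermissionError("bind refused")
        self.bindings.add((libos, resource))

    def access(self, libos, resource):
        # cheap per-access check: was a secure binding established?
        return (libos, resource) in self.bindings

    def revoke(self, libos, resource):
        # visible revocation would also notify the LibOS first
        self.bindings.discard((libos, resource))
```

Note how the access path never inspects the credentials again; that is the whole point of separating protection (the binding) from management (what the LibOS does with the resource).&lt;br /&gt;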
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings, and to improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
==== Visible Resource Revocation ====&lt;br /&gt;
* used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** LibOSes are able to learn which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to delete data, as the LibOS might need to write some system-critical data to the resource [1]&lt;br /&gt;
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt;Microkernel versus monolithic kernel&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  - Roch&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I will cite it/reference it properly later&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&lt;br /&gt;
Should be used only as an overview, not as a source.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that exokernels give developers low-level access, similar to direct access but through a protected layer, while at the same time containing enough hardware abstraction to give the similar benefit of hiding the hardware resources from application programs.&lt;br /&gt;
Exokernel – the fewest hardware abstractions exposed to the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of whatever devices are requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays Kazakhstan sound format only&lt;br /&gt;
MicroKernel - Very small, very predictable, good for scheduling (QNX is a microkernel - POSIX compatible, with the benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if each were running on its own machine. There is this similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep as much as possible in user space. But one major problem with this is that there is a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient operation. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something, you hide a lot of the actual information, because you would not be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each other&#039;s instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today, but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though... Oh well. Their loss.  I will do some before or after work today. I&#039;ll start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah i think there were more like 7 of us. btw if anyone has any more information, feel free to add it. it would be nice if you add the references so that citing is really easy; on acm.org it will auto-give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info auto pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today, and for VM he said we should focus on implementations such as Xen and VMware; he also said to talk about para-virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no big words.  Sorry about the Microkernels not done yet.  Working on an outline now.  Finally found how to access the ACM through carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference ( Whomever did that == hero: true) about the scheduling and the Memory management.  Should that be included in kernels in general and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro is chosen, and intro is done.  Microkernel and kernel will be done in a hour I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in an hour and we can double check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write. I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section, could you post your word count so we can aim to be around there? It would obviously be just a recommendation, but it&#039;s better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay, but I could be wrong; I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down, and we can go over it.  Cling, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add them here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
 what do you mean by &amp;quot;weaker&amp;quot;? (i think you mean exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based off an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part please. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge for applications to access the hardware level. It is responsible for managing the system&#039;s resources, such as memory, disk storage, task management and networking. It is on how the kernel goes about such management that we are comparing exokernels to microkernels and virtual machines. In the exokernel conceptual model, exokernels are much smaller than microkernels since, by design, they are tiny and strive to keep their functionality limited to protection and multiplexing of resources. The virtual machine implementation of virtualizing all devices on the system may provide compatibility, but it adds a layer of complexity that makes the system less efficient than a real machine, as it accesses the hardware indirectly. In contrast to a VM&#039;s implementation, the exokernel provides low-level hardware access and lets applications build custom abstractions over those devices to improve program performance. The exokernel concept has a design that can take the better ideas from microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system.  Without the kernel, an operating system could not function.  &lt;br /&gt;
&lt;br /&gt;
A kernel is the lowest-level section of an operating system.  It has the most privileges in the system.  It runs alongside the ‘user space’. It is in the ‘user space’ that a user has access and can run applications and libraries.[8]  This leaves the kernel with the need to manage the other necessary processes, such as the file systems and process scheduling.  The kernel is layered, with the most authoritative process on its lowest level.[8]  A monolithic kernel, a kernel that contains all mandatory processes within itself, was the common kernel type of the earlier versions of today’s operating systems.  However, this architecture had problems. [8]  If the kernel needed to be updated with more code, or a fix for the system, the entire kernel would need to be recompiled, and due to the number of processes within it, this would take an inefficient amount of time.  This is where a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel; code is only included in the kernel if moving it out would affect the system in some way, for example for performance or efficiency reasons. [7] So, a microkernel is a kernel that has a reduced amount of mandatory software within itself.  This means that it contains less software that it has to manage, and has a reduced size.  The microkernel that emerged from the end of the 1980’s to the early 1990’s has a structure in which processes like the file systems and the drivers are removed from the kernel, leaving it with process control, input/output control, and interrupts.  [8] This structure makes the system much more modular, and makes it easier to provide solutions.  If a driver must be patched or upgraded, the kernel does not need to be recompiled.  [7] The old driver can be removed, and while the device waits for the system to recognize it, the operating system replaces the driver.  This allows real-time updating, and it can be done while the computer is still functional.  This reduces the risk of a complete crash of the system.  If a device fails, the kernel will not crash, as a monolithic kernel would.  The microkernel can reload the driver of the device that failed and continue functioning.  [7]  &lt;br /&gt;
&lt;br /&gt;
Want more on the scheduling?  I can do that if wanted. - Key note on the exokernel&#039;s multiplexing vs the microkernel&#039;s messaging: exo is more efficient, so perhaps run with the idea that messaging between processes is not necessarily the ideal way. We also need to start laying out weaknesses in the design in order to play up the idea that an exokernel just does it better -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. This entails virtualization of the physical machine&#039;s resources in order to share them among the OSes run in the VM. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is actually running in a virtualized environment on top of a hostOS, the OS actually running on the machine, sharing the resources.&lt;br /&gt;
&lt;br /&gt;
Virtual Machines generally contain two key components: the Virtual Machine Monitor, or VMM, and the VM. &lt;br /&gt;
&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows the VM to operate as if it were on its own machine, by handling any requests for resources and reconciling these requests with what has actually been provided to the VM by the hostOS. The hostOS provides management for the VMM as well as physical access to devices, hardware and drivers.&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS we are running through virtualization. This OS is called the guestOS, and it will only be able to access the resources that have been made available to the VM by the hostOS. The guestOS will not know about any other resources and does not have direct access to physical hardware; this is taken care of by the VMM.&lt;br /&gt;
&lt;br /&gt;
[ I&#039;ll need to do some more research on the types of virtualization though before I can discuss that. If anyone has more information on them to put up in the points that would be helpful but I&#039;ll get to it right after class tomorrow morning. ]&lt;br /&gt;
&lt;br /&gt;
-try focusing on the emulation side of VMs: emulation&#039;s weaknesses vs the direct hardware access or custom abstraction that exokernels provide -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L &lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet.  Looks like we got it covered.  We should read each others parts and put suggestions and edits. One of us should try and change it to one style if there are contradictions. And to put it on the main page.  We can figure that out tomorrow.  - Jon S&lt;br /&gt;
&lt;br /&gt;
Once the other parts are up and you see anything you know of as a good reference to back it up, put the link so we can use it. -Slade&lt;br /&gt;
&lt;br /&gt;
Potential Test Questions&lt;br /&gt;
&lt;br /&gt;
Add potential test questions here:&lt;br /&gt;
&lt;br /&gt;
Potential Test Questions&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3718</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3718"/>
		<updated>2010-10-14T13:30:05Z</updated>

		<summary type="html">&lt;p&gt;Slade: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* a failure of one program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system. [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems, and any subsystem that is used can rely on this guarantee from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates the physical page to the virtual page [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides 3 operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s [7]&lt;br /&gt;
*** the page must be available to the owner [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant? --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and Flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
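The grant/map/flush operations above can be sketched as a toy Python model. This is purely illustrative — a sketch, not L4's actual API: the class and method names are invented here, and a real kernel implements these operations on hardware page tables.

```python
# Toy model of the three microkernel address-space operations from [7]:
# grant, map, and flush. All names here are invented for illustration.

class AddressSpace:
    def __init__(self, name):
        self.name = name
        self.pages = {}  # virtual page number -> physical frame number

    def grant(self, other, vpage, dest_vpage):
        """Give a page away: it leaves this space and enters the other."""
        frame = self.pages.pop(vpage)        # must be present in the owner
        other.pages[dest_vpage] = frame

    def map(self, other, vpage, dest_vpage):
        """Share a page: the owner keeps it, the recipient also gets it."""
        other.pages[dest_vpage] = self.pages[vpage]

    def flush(self, vpage, recipients):
        """Withdraw a shared page from every recipient's address space."""
        frame = self.pages[vpage]
        for space in recipients:
            stale = [v for v, f in space.pages.items() if f == frame]
            for v in stale:
                del space.pages[v]

pager = AddressSpace("pager")
app = AddressSpace("app")
pager.pages[0] = 42          # pager owns physical frame 42 at vpage 0
pager.map(app, 0, 7)         # share it with the app at its vpage 7
assert app.pages[7] == 42
pager.flush(0, [app])        # revoke the shared mapping
assert 7 not in app.pages
pager.grant(app, 0, 3)       # give the page away entirely
assert 0 not in pager.pages and app.pages[3] == 42
```

This also shows why map and flush suffice to build user-level memory managers and pagers: a pager maps pages into clients and flushes them to reclaim frames, all without kernel policy.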
&lt;br /&gt;
===== Threads/IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** grant and map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subsystem processes to communicate [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads which are empty except for their unique sender id [7]&lt;br /&gt;
** transformation of the interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OS virtualization running on top of host OS&lt;br /&gt;
* Virtualized OS believe running on full machine on its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualization of the hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM is using for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM is running is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== three approaches ====&lt;br /&gt;
* Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through process-level protection mechanisms [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all privileged instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
* Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** provides a virtualization interface that is similar to the hardware [From the paper posted, no citation yet]&lt;br /&gt;
** guestOS and hypervisor work together to improve performance&lt;br /&gt;
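The ring-0/ring-1 mediation above can be illustrated with a toy sketch. This is an assumption-laden illustration, not a real hypervisor: all names are invented, and a real VMM traps privileged instructions in hardware. The idea shown is only that every privileged operation a guest attempts is routed through the VMM, which checks it against the resources actually allocated to that VM.

```python
# Toy sketch of Type I mediation: the VMM sits at "ring 0" and decides
# every privileged operation against the VM's allocation. Invented names.

class VMM:
    def __init__(self):
        self.allocations = {}   # vm name -> set of resources it may touch

    def create_vm(self, name, resources):
        """On boot, the VMM builds a hardware platform for the VM [6]."""
        self.allocations[name] = set(resources)

    def privileged_op(self, vm, resource):
        """Trap handler: permit the operation only if the resource was
        allocated to this guest; the guest never sees anything else."""
        return resource in self.allocations.get(vm, set())

vmm = VMM()
vmm.create_vm("guest1", {"nic0", "disk0"})
assert vmm.privileged_op("guest1", "disk0")      # allocated: permitted
assert not vmm.privileged_op("guest1", "disk1")  # not allocated: denied
```

In para-virtualization the guest would call into the VMM explicitly (a hypercall) instead of being trapped, which is where the guestOS/hypervisor cooperation improves performance.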
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Micro-kernel architecture with limited abstractions, ask for resource, get resource not resource abstraction&lt;br /&gt;
* Less functionality provided by kernel, security and handling of resource sharing&lt;br /&gt;
* Once application receives resource, it can use it as it wishes/in control&lt;br /&gt;
* Keep the basic kernel to handle allocating resources and sharing rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* the goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this the exokernel separates protection from management; in doing this it performs 3 important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel, so it only needs to be trusted by the application; the example given is a bad parameter passed to the LibOS, where only the application is affected [1] (so the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** any application running on the exokernel can change its LibraryOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
* Securely expose hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except when required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
* Expose allocation [1]&lt;br /&gt;
** allow the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
* Expose names [1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
* Expose revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* the LibOS handles resource policy decisions&lt;br /&gt;
* exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** this is enforced through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* only checks authorization during bind time [1]&lt;br /&gt;
** Application&#039;s with complex needs for resources only authorized during bind.[1]&lt;br /&gt;
* access checking is done during access time and there is no need to understand complex resources needs during access[1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization once approved, when the application tries to use the resource the exokernel is only concerned about policy conflict&#039;s)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources with out understanding what the resource is [1]&lt;br /&gt;
*three way&#039;s to implement&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and to improve performance[1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
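The bind-time/access-time split described above can be sketched in Python. This is a toy model for illustration only, not code from the exokernel paper; all names (Exokernel, bind, access) are invented:&lt;br /&gt;

```python
# Toy model of exokernel secure bindings: the expensive, application-
# specific authorization check happens once, at bind time; each later
# access is a cheap ownership-table lookup with no need to understand
# the resource's semantics. All names here are invented for illustration.

class Exokernel:
    def __init__(self):
        self.owner = {}  # resource -> LibOS that holds a secure binding

    def bind(self, libos, resource, credentials):
        # Complex authorization happens only here, at bind time.
        if not self.authorized(libos, resource, credentials):
            raise PermissionError("bind refused")
        self.owner[resource] = libos  # record the secure binding

    def access(self, libos, resource):
        # Access-time check is simple and fast: just consult the table.
        return self.owner.get(resource) == libos

    def authorized(self, libos, resource, credentials):
        return credentials == "valid"  # stand-in for a real policy check

kernel = Exokernel()
kernel.bind("libos-a", "disk-block-7", "valid")
assert kernel.access("libos-a", "disk-block-7")
assert not kernel.access("libos-b", "disk-block-7")
```

The point of the split is that the kernel can guard resources it does not understand: only the binding table is consulted on the fast path.&lt;br /&gt;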
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** the LibOS can learn which resources are scarce [1]&lt;br /&gt;
* Slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** e.g. invisible revocation is used for processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* The exokernel must be careful not to destroy the resource outright, as the LibOS might need to write some system-critical data to it [1]&lt;br /&gt;
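The visible-revocation-then-abort sequence above can be sketched as follows. This is a hypothetical simplification (LibOS, please_release and revoke are invented names), not the paper&#039;s protocol:&lt;br /&gt;

```python
# Toy sketch of visible revocation with an abort fallback. The kernel
# first asks the LibOS to give the resource back (visible revocation);
# only if the LibOS fails to respond does it seize the resource by
# force (abort protocol). All names are invented for illustration.

class LibOS:
    def __init__(self, cooperative=True):
        self.cooperative = cooperative
        self.held = {"page-3", "page-9"}

    def please_release(self, resource):
        # A well-behaved LibOS saves critical state, then releases.
        if self.cooperative:
            self.held.discard(resource)
            return True
        return False  # an unresponsive LibOS ignores the request

def revoke(kernel_free_list, libos, resource):
    if libos.please_release(resource):   # visible revocation succeeded
        kernel_free_list.append(resource)
    else:                                # abort protocol: take by force
        libos.held.discard(resource)
        kernel_free_list.append(resource)

free = []
revoke(free, LibOS(cooperative=True), "page-3")
revoke(free, LibOS(cooperative=False), "page-9")
assert free == ["page-3", "page-9"]
```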
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to the hardware, which would create a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt;Microkernel versus monolithic kernel&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  - Roch&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I will cite it/reference it better later&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, microkernels *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source but an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that an exokernel can give developers low-level access, similar to direct access but through a protected layer, while at the same time containing enough hardware abstraction to give application programs a similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions to developer&lt;br /&gt;
Microkernel - is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of the machine or devices requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
MicroKernel - Very small, very predictable, good for scheduling (QNX is a microkernel - POSIX compatible, with the benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
This is some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is a similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space for efficiency. But one major problem with this is that there would be a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux, and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each other&#039;s instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though.. . Oh well. Their loss.  I will do some before or after work today. I&#039;ll start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah i think there was more like 7 of us. btw if any one has any more information feel free to add it. it would be nice if you add the references so that citing is really easy; on acm.org it will auto give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info auto pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no big words.  Sorry about the Microkernels not done yet.  Working on an outline now.  Finally found how to access the ACM through carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference ( Whomever did that == hero: true) about the scheduling and the Memory management.  Should that be included in kernels in general and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro is chosen, and intro is done.  Microkernel and kernel will be done in a hour I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in a hour and we can double check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down, and we can go over it.  Cling, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually breakdown the essay into components then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
 what do you mean by &amp;quot;weaker&amp;quot;? (i think you mean exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part please. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge for applications to access the hardware level. It is responsible for managing the system&#039;s resources such as memory, disk storage, task management and networking. It is in how the kernel goes about such management that we are comparing exokernels to microkernels and virtual machines. In the exokernel conceptual model, exokernels are much smaller than microkernels since, by design, they are tiny and strive to keep functionality limited to protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it adds a layer of complexity that makes the system less efficient than a real machine, as it accesses the hardware indirectly. In contrast to a VM&#039;s implementation, the exokernel provides low-level hardware access and custom abstractions over those devices to improve program performance. The exokernel concept has a design that can take the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system.  Without the kernel, an operating system could not function.  &lt;br /&gt;
&lt;br /&gt;
A kernel is the lowest-level section of an operating system.  It has the most privileges of the system.  It runs alongside the ‘user space’. It is in the ‘user space’ where a user has access and where the user can run applications and libraries.[8]  This leaves the kernel with the need to manage the other necessary processes such as the file systems and process scheduling.  The kernel is layered, with the most authoritative process on its lowest level.[8]  A monolithic kernel, a kernel that contains all mandatory processes within itself, was the common kernel type utilized by the earlier versions of today’s operating systems.  However, this architecture had problems. [8]  If the kernel needed to be updated with more code, or a fix for the system, the entire kernel would need to be recompiled, and due to the amount of processes within it, this would take an inefficient amount of time.  This is where a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel: code is only included in the kernel if leaving it out would affect the system in some way, for example for performance and efficiency reasons. [7] So, a microkernel is a kernel that has a reduced amount of mandatory software within itself.  This means that it contains less software that it has to manage, and has a reduced size.  The microkernel structure that emerged from the end of the 1980s to the early 1990s removes processes like the file systems and the drivers from the kernel, leaving it with process control, input/output control, and interrupts.  [8] This new structure makes the system much more modular, and it is easier to provide solutions.  If a driver must be patched or upgraded, the kernel does not need to be recompiled.  [7] The old driver can be removed, and while the device waits for the system to recognize it, the operating system replaces the driver.  This allows real-time updating, and it can be done while the computer is still functional.  This can prevent a complete crash of the system.  If a device fails, the kernel will not crash itself, like a monolithic kernel would.  The microkernel can reload the driver of the device that failed and continue functioning.  [7]  &lt;br /&gt;
&lt;br /&gt;
Want more on the scheduling?  I can do that if wanted. - Key note on the exokernel&#039;s multiplexing vs the microkernel&#039;s messaging: exo is more efficient, so perhaps run with the idea that messaging between processes is not necessarily the ideal way. We also need to start laying out weaknesses in the design in order to play up the idea that an exokernel just does it better -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. This entails virtualization of the physical machine&#039;s resources in order to share them among the OSes run in the VM. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is actually running in a virtualized environment on top of a hostOS (the OS actually running on the machine), sharing the resources.&lt;br /&gt;
&lt;br /&gt;
Virtual Machines generally contain two key components: the Virtual Machine Monitor, or VMM, and the VM. &lt;br /&gt;
&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows the VM to operate as if it were on its own machine, by handling any requests for resources and reconciling these requests with what has actually been provided to the VM by the hostOS. The hostOS provides management for the VMM as well as physical access to devices, hardware and drivers.&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS we are running through virtualization. This OS is called the guestOS and it will only be able to access any resources that have been made available to the VM by the hostOS. Otherwise, the guestOS will not know about any other resources and does not have direct access to physical hardware. This will be taken care of by the VMM.&lt;br /&gt;
&lt;br /&gt;
[ I&#039;ll need to do some more research on the types of virtualization though before I can discuss that. If anyone has more information on them to put up in the points that would be helpful but I&#039;ll get to it right after class tomorrow morning. ]&lt;br /&gt;
&lt;br /&gt;
-try focusing on the emulation side of VMs: emulation&#039;s weaknesses vs the direct hardware access or custom abstraction that exokernels provide -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L &lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet.  Looks like we got it covered.  We should read each others parts and put suggestions and edits. One of us should try and change it to one style if there are contradictions. And to put it on the main page.  We can figure that out tomorrow.  - Jon S&lt;br /&gt;
&lt;br /&gt;
Once the other parts are up and you see anything you know of as a good reference to back it up, put the link so we can use it. -Slade&lt;br /&gt;
&lt;br /&gt;
Potential Test Questions&lt;br /&gt;
&lt;br /&gt;
Add potential test questions here:&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3704</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3704"/>
		<updated>2010-10-14T13:01:49Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* one failure of a program does not impact any other programs [7]&lt;br /&gt;
** can support more than one API or strategy since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system. [7]&lt;br /&gt;
* any subsystem program created must be independent of all other subsystems; any subsystem that is used can expect this guarantee from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates the physical page to the virtual page. [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides 3 operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s. [7]&lt;br /&gt;
*** must be available to the owner. [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** page is not removed from the owner&#039;s address space. [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
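The three address-space operations above can be sketched as set manipulations. This is a toy simplification for illustration (spaces, grant, map_page, flush are invented names), not L4/microkernel code:&lt;br /&gt;

```python
# Toy model of the three microkernel address-space operations described
# above: grant moves a page, map shares it, flush withdraws it from all
# recipients. Each address space is modeled as a set of page names.
# All names are invented for illustration.

spaces = {"pager": {"p1", "p2"}, "app": set(), "cache": set()}

def grant(owner, recipient, page):
    # Page moves: removed from the owner, added to the recipient.
    spaces[owner].remove(page)
    spaces[recipient].add(page)

def map_page(owner, recipient, page):
    # Page is shared: stays in the owner, also added to the recipient.
    assert page in spaces[owner]
    spaces[recipient].add(page)

def flush(owner, page):
    # Page is withdrawn from every address space except the owner's.
    for name in spaces:
        if name != owner:
            spaces[name].discard(page)

grant("pager", "app", "p1")
map_page("pager", "cache", "p2")
flush("pager", "p2")
assert spaces == {"pager": {"p2"}, "app": {"p1"}, "cache": set()}
```

Map and flush together are enough to build user-level memory managers and pagers, which is why the notes single them out.&lt;br /&gt;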
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** Since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** grant and map also need IPC (so by the principle above, this has to be in the kernel)[7]&lt;br /&gt;
** the basic way for subprocesses to communicate. [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads which are empty except for their unique sender id [7]&lt;br /&gt;
** transformation of the interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt. [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged operation is needed, it is done implicitly the next time an IPC is sent from the device [7]&lt;br /&gt;
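The interrupts-as-IPC idea above can be sketched as follows. This is a hypothetical toy model (register_driver, kernel_interrupt_entry and the mailbox layout are invented), not the L4 mechanism itself:&lt;br /&gt;

```python
# Toy sketch of the microkernel interrupt model described above: the
# kernel turns a hardware interrupt into an IPC message from a unique
# hardware sender id, delivered to a user-level driver thread; the
# kernel never interprets the interrupt. All names are invented.

from queue import Queue

driver_mailboxes = {}  # interrupt number -> driver thread's message queue

def register_driver(irq):
    # A user-level driver registers to receive messages for one IRQ.
    driver_mailboxes[irq] = Queue()
    return driver_mailboxes[irq]

def kernel_interrupt_entry(irq):
    # All the kernel does: wrap the event as a message from a unique
    # sender id and deliver it; device-specific handling and interrupt
    # resetting happen at user level.
    driver_mailboxes[irq].put({"sender": f"hw-thread-{irq}"})

mbox = register_driver(14)      # user-level disk driver registers
kernel_interrupt_entry(14)      # hardware raises interrupt 14
msg = mbox.get_nowait()         # driver receives it as ordinary IPC
assert msg["sender"] == "hw-thread-14"
```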
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OSes running on top of a host OS&lt;br /&gt;
* Each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor[4]&lt;br /&gt;
* responsible for virtualization of hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the hostOS [6]&lt;br /&gt;
*the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the vm is running is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== Three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** Isolation of the guestOS from the hardware is done through process-level protection mechanisms[6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware , QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** Similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** Provide a virtualization that is similar to hardware [From the paper posted, no citation yet]&lt;br /&gt;
** GuestOS and Hypervisor work together to improve performance&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by kernel, security and handling of resource sharing&lt;br /&gt;
* Once application receives resource, it can use it as it wishes/in control&lt;br /&gt;
* Keep the basic kernel to handle allocating resources and sharing rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings[1]&lt;br /&gt;
* The goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this, the exokernel separates protection from management; in doing so it performs 3 important tasks[1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are)[1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibrayOS (LibOs)&lt;br /&gt;
** Reduces the number of kernel crossings[1]&lt;br /&gt;
** Not trusted by the exokernel, so it only needs to be trusted by the application; the example given is a bad parameter passed to the LibOS, which affects only that application. [1] (So the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** Any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** Applications that use a LibOS implementing standard interfaces (e.g. POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** A LibOS can be made portable if it is designed to interact with a low-level, machine-independent interface that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except where required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose allocation[1]&lt;br /&gt;
** allows the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names[1]&lt;br /&gt;
** Use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called)--[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** Physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
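A minimal sketch of the visible-revocation idea (class and method names are my own, not from [1]): the exokernel notifies the LibOS and lets it pick which instances to give back; a forced fallback covers an unresponsive LibOS.

```python
# Toy visible revocation: the exokernel asks the LibOS which
# resource instances to release; an unresponsive LibOS falls
# back to forced revocation. Names are illustrative only.

class LibOS:
    def __init__(self):
        self.pages = set()

    def on_revoke(self, n):
        # Application-level policy: give up the highest-numbered
        # (least valuable, in this toy) pages first.
        victims = sorted(self.pages)[-n:]
        for p in victims:
            self.pages.discard(p)
        return victims

class Exokernel:
    def __init__(self):
        self.owner = {}  # page -> owning LibOS (ownership tracking)

    def allocate(self, libos, page):
        self.owner[page] = libos
        libos.pages.add(page)

    def revoke(self, libos, n):
        victims = libos.on_revoke(n)         # visible: LibOS chooses
        if victims is None:                  # LibOS failed to respond:
            victims = sorted(libos.pages)[:n]  # take by force (abort path)
            for p in victims:
                libos.pages.discard(p)
        for p in victims:
            del self.owner[p]
        return victims

exo = Exokernel()
lib = LibOS()
for p in range(4):
    exo.allocate(lib, p)

released = exo.revoke(lib, 2)   # LibOS picks pages 2 and 3
print(sorted(released), sorted(lib.pages))
```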
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* LibOSes handle resource policy decisions&lt;br /&gt;
* Exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* only checks authorization during bind time [1]&lt;br /&gt;
** Applications with complex resource needs are only authorized at bind time. [1]&lt;br /&gt;
* simple access checks are done at access time; there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks authorization once; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
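A hedged sketch of the downloaded-code idea (the function names are mine; the classic instance in [1] is packet filters): the application hands the kernel a small predicate at bind time, and the kernel then demultiplexes packets without scheduling the application.

```python
# Toy "download code into the kernel": at bind time an application
# registers a small filter predicate; the kernel runs it on each
# packet without scheduling the application. Illustrative names only.

class Kernel:
    def __init__(self):
        self.filters = []   # (filter_fn, queue) pairs, one per binding

    def bind_filter(self, filter_fn):
        # Secure binding: authorization/safety checks would happen
        # once, here, at bind time.
        queue = []
        self.filters.append((filter_fn, queue))
        return queue

    def deliver(self, packet):
        # Access time: just run the downloaded predicates; no kernel
        # crossing back into the application is needed.
        for filter_fn, queue in self.filters:
            if filter_fn(packet):
                queue.append(packet)

kernel = Kernel()
# Application-supplied filter, "downloaded" into the kernel at bind time:
web_queue = kernel.bind_filter(lambda pkt: pkt["port"] == 80)

kernel.deliver({"port": 80, "data": "GET /"})
kernel.deliver({"port": 22, "data": "ssh"})
print(len(web_queue))   # only the port-80 packet was queued
```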
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** LibOSes are able to learn which resources are scarce [1]&lt;br /&gt;
* Slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to delete the resource outright, as the LibOS might need to write some system-critical data to the resource [1]&lt;br /&gt;
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt;Microkernel versus monolithic kernel&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  - Roch&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I will cite it/reference it better later&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview](Power Point)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source but an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that an exokernel gives developers low-level access, similar to direct access through a protected layer, while at the same time containing enough hardware abstraction to give application programs the usual benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions for the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of the machine or devices requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays Kazakhstan sound format only&lt;br /&gt;
MicroKernel - Very small, very predictable, good for scheduling (QNX is a microkernel - POSIX compatible, with the benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
This is some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel is an even lower-level design than the microkernel, closer to the hardware, without abstraction. They have the same basic architecture, with the core functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machines. There is a similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space for efficiency. But one major problem with this is there would be a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a virtual box setup running Linux, and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each others instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though.. . Oh well. Their loss.  I will do some before or after work today. Ill start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah i think there was more like 7 of us btw if any one has any more information feel free to add it would be nice if you add the references so that way citing is really easy on  acm.org it will auto give you the citation info (where it says Display Formats click on ACM Ref  and new window with the citation info auto pop&#039;s up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no big words.  Sorry about the Microkernels not done yet.  Working on an outline now.  Finally found how to access the ACM through carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference ( Whomever did that == hero: true) about the scheduling and the Memory management.  Should that be included in kernels in general and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro is chosen, and intro is done.  Microkernel and kernel will be done in a hour I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in a hour and we can double check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesnt have much to write about.  Get something down, and we can go over it.  CLing, Just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is a exokernel?  exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (As said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually breakdown the essay into components then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
 what do you mean by &amp;quot;weaker&amp;quot;(i think you mean exokernels&#039; takes the best of both worlds ) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part please. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge for applications to access the hardware. It is responsible for managing the system&#039;s resources such as memory, disk storage, task management and networking. It is on how the kernel goes about such management that we are comparing exokernels to microkernels and virtual machines. In the exokernel conceptual model, exokernels are even smaller than microkernels since, by design, they are tiny and strive to keep functionality limited to the protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it adds a layer of complexity that makes the system less efficient than a real machine, as it accesses the hardware indirectly. In contrast, the exokernel provides low-level hardware access and allows custom abstractions over those devices to improve program performance. The exokernel concept has a design that can take the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system.  Without the kernel, an operating system could not function.  &lt;br /&gt;
&lt;br /&gt;
A kernel is the lowest-level section of an operating system. It has the most privileges of the system. It runs alongside the ‘user space’. It is in the ‘user space’ that a user has access and can run applications and libraries. [8] This leaves the kernel with the need to manage the other necessary processes, such as the file systems and process scheduling. The kernel is layered, with the most authoritative process on its lowest level. [8] A monolithic kernel, a kernel that contains all mandatory processes within itself, was the common kernel type utilized by earlier versions of today’s operating systems. However, this architecture had problems. [8] If the kernel needed to be updated with more code, or a fix for the system, the entire kernel would need to be recompiled, and due to the number of processes within it, this would take an inefficient amount of time. This is where a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel; code is only included in the kernel if leaving it out would affect the system in some way, for example for performance and efficiency reasons. [7] So, a microkernel is a kernel that has a reduced amount of mandatory software within itself. This means that it contains less software that it has to manage, and has a reduced size. The microkernel that emerged from the end of the 1980s to the early 1990s has a structure in which processes like the file systems and the drivers are removed from the kernel, leaving it with process control, input/output control, and interrupts. [8] This new structure makes the system much more modular, and it is easier to provide solutions. If a driver must be patched or upgraded, the kernel does not need to be recompiled. [7] The old driver can be removed, and while the device waits for the system to recognize it, the operating system replaces the driver. This allows real-time updating, and it can be done while the computer is still functional. This can prevent a complete crash of the system. If a device fails, the kernel will not crash itself, like a monolithic kernel would. The microkernel can reload the driver of the failed device and continue functioning. [7]  &lt;br /&gt;
&lt;br /&gt;
Want more on the scheduling? I can do that if wanted. - Key note on the exokernel&#039;s multiplexing vs the microkernel&#039;s messaging: exo is more efficient, so perhaps run with the idea that messaging between processes is not necessarily the ideal way. We need to also start laying out weaknesses in the design, in order to play up the idea that an exokernel just does it better -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. This entails virtualization of the physical machine&#039;s resources in order to share them among the OSes run in the VM. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is actually running in a virtualized environment on top of a hostOS (the OS actually running on the machine), sharing the resources.&lt;br /&gt;
&lt;br /&gt;
Virtual Machines generally contain two key components: the Virtual Machine Monitor, or VMM, and the VM. &lt;br /&gt;
&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows the VM to operate as if it were on its own machine, by handling any requests for resources and reconciling these requests with what has actually been provided to the VM by the hostOS. The hostOS provides management for the VMM as well as physical access to devices, hardware and drivers.&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS we are running through virtualization. This OS is called the guestOS, and it will only be able to access resources that have been made available to the VM by the hostOS. Otherwise, the guestOS does not know about any other resources and does not have direct access to the physical hardware; this is taken care of by the VMM.&lt;br /&gt;
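The mediation described above can be sketched in a few lines (a toy model under stated assumptions; the class and device names are mine, not any real hypervisor's API): the guestOS sees only virtual resource names, and the VMM translates them to whatever the hostOS actually granted.

```python
# Toy sketch of VMM mediation: the guestOS sees only virtual
# resource names; the VMM translates them to the host devices
# actually granted to the VM. Names are illustrative only.

class HostOS:
    def __init__(self):
        # Devices the host actually controls.
        self.devices = {"disk0": "physical disk", "nic0": "physical NIC"}

class VMM:
    def __init__(self, host, granted):
        self.host = host
        self.vmap = granted   # virtual name -> granted host device

    def access(self, virtual_name):
        phys = self.vmap.get(virtual_name)
        if phys is None:
            # The guestOS never learns about ungranted resources.
            raise PermissionError("not granted to this VM")
        return self.host.devices[phys]

host = HostOS()
vmm = VMM(host, {"vdisk": "disk0"})   # only the disk is granted

print(vmm.access("vdisk"))            # guest reaches its virtual disk
try:
    vmm.access("vnic")                # never granted: invisible to guest
except PermissionError:
    print("denied")
```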
&lt;br /&gt;
[ I&#039;ll need to do some more research on the types of virtualization though before I can discuss that. If anyone has more information on them to put up in the points that would be helpful but I&#039;ll get to it right after class tomorrow morning. ]&lt;br /&gt;
&lt;br /&gt;
- try focusing on the emulation side of VMs: emulation&#039;s weaknesses vs the direct hardware access or custom abstraction that exokernels provide -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L &lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet.  Looks like we got it covered.  We should read each others parts and put suggestions and edits. One of us should try and change it to one style if there are contradictions. And to put it on the main page.  We can figure that out tomorrow.  - Jon S&lt;br /&gt;
&lt;br /&gt;
Once the other parts are up and you see anything you know of as a good reference to back it up, put the link so we can use it. -Slade&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3702</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3702"/>
		<updated>2010-10-14T12:58:14Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* There is a large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* one failure of a program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system. [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems, and any subsystem that is used can expect this guarantee from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates virtual pages to physical pages. [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides 3 operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s. [7]&lt;br /&gt;
*** the page must be available to the owner. [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** page is not removed from the owner&#039;s address space. [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* map and flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
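The three operations above can be sketched as a toy model (my own class and method names, not from [7]): grant transfers a page, map shares it, and flush recursively removes it from every address space that received it.

```python
# Toy model of the microkernel address-space operations
# grant, map, and flush. Names are illustrative, not from [7].

class AddressSpace:
    def __init__(self, name):
        self.name = name
        self.pages = set()      # pages currently mapped here
        self.mapped_to = {}     # page -> list of recipient spaces

    def grant(self, page, recipient):
        # Ownership transfer: the page leaves this space entirely.
        self.pages.discard(page)
        recipient.pages.add(page)

    def map(self, page, recipient):
        # Sharing: the page stays here and also appears in the recipient.
        recipient.pages.add(page)
        self.mapped_to.setdefault(page, []).append(recipient)

    def flush(self, page):
        # Recursively remove the page from every recipient
        # (and from anyone they mapped it to in turn).
        for r in self.mapped_to.pop(page, []):
            r.flush(page)
            r.pages.discard(page)

root = AddressSpace("root")     # initial space owning the page
pager = AddressSpace("pager")
app = AddressSpace("app")

root.pages.add(7)
root.grant(7, pager)            # pager now owns page 7; root loses it
pager.map(7, app)               # app shares page 7
pager.flush(7)                  # page 7 vanishes from app, stays in pager

print(7 in pager.pages, 7 in app.pages, 7 in root.pages)
```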
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** grant and map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subsystems to communicate. [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hard ware is a set of thread&#039;s which are empty except for there unique sender id [7]&lt;br /&gt;
** transformation of the message to the interrupt is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt. [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is issued implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
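The interrupts-as-messages idea can be sketched as follows (a pedagogical sketch; all names are invented, and the real kernel-side transformation is far more involved):&lt;br /&gt;

```python
# Sketch: hardware interrupts delivered as empty IPC messages whose only
# content is the unique sender id of the (virtual) hardware thread.
from collections import deque

class DriverThread:
    """User-level driver waiting for interrupt messages."""
    def __init__(self):
        self.inbox = deque()
        self.handled = []

    def deliver_interrupt(self, irq_sender_id):
        # Kernel side: turn the interrupt into a message; the kernel attaches
        # the sender id but does not interpret the interrupt itself.
        self.inbox.append({"sender": irq_sender_id})

    def wait_for_interrupt(self):
        # User side: the driver receives the message, identifies the device
        # by its sender id, and does the device-specific handling itself.
        msg = self.inbox.popleft()
        self.handled.append(msg["sender"])
        return msg["sender"]
```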
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OSes running on top of a host OS&lt;br /&gt;
* A virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualization of the hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the vm is running is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** Isolation of the guestOS from the hardware is done through processor-level protection mechanisms [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** Similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** Provides a virtualization interface that is similar to the hardware [From the paper posted, no citation yet]&lt;br /&gt;
** GuestOS and Hypervisor work together to improve performance&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Micro-kernel-like architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: just security and the handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes / is in control&lt;br /&gt;
* Keep a basic kernel to handle allocating and sharing resources rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* The goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this the exokernel separates protection from management, which involves 3 important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** Reduces the number of kernel crossings[1]&lt;br /&gt;
** Not trusted by the exokernel, so it can be trusted by the application; the example given is a bad parameter passed to the LibOS, where only the application is affected. [1] (So the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** Any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** Applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** A LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except where required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose allocation[1]&lt;br /&gt;
** allows a LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names[1]&lt;br /&gt;
** Use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** Physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** uses a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** Visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* The LibOS handles resource policy decisions&lt;br /&gt;
* Exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** this is enforced through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* only checks authorization at bind time [1]&lt;br /&gt;
** applications with complex resource needs are only authorized at bind time [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
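The bind-time vs. access-time split can be sketched in Python (illustrative names only, not the actual exokernel interface):&lt;br /&gt;

```python
# Sketch of a secure binding: the expensive authorization check happens once
# at bind time; access-time checks are cheap lookups that require no
# understanding of the resource's semantics.

class Exokernel:
    def __init__(self, acl):
        self.acl = acl          # resource -> set of apps allowed to bind
        self.bindings = set()   # (app, resource) pairs established at bind time

    def bind(self, app, resource):
        # Complex check, performed once, at bind time.
        if app not in self.acl.get(resource, set()):
            raise PermissionError("bind denied")
        self.bindings.add((app, resource))

    def access(self, app, resource):
        # Cheap check: the kernel only consults the binding it recorded,
        # not what the resource is or how the application uses it.
        return (app, resource) in self.bindings
```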
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** the LibOS is able to learn which resources are scarce [1]&lt;br /&gt;
* Slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful when deleting, as the LibOS might need to write some system-critical data to the resource [1]&lt;br /&gt;
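Visible revocation backed by the abort protocol might be sketched like this (hypothetical interface for illustration):&lt;br /&gt;

```python
# Sketch: the exokernel first asks the LibOS to release a resource (visible
# revocation); if the LibOS fails to comply, the abort protocol reclaims the
# resource by force.

class LibOS:
    def __init__(self, cooperative=True):
        self.resources = set()
        self.cooperative = cooperative

    def on_revoke(self, resource):
        """Visible revocation request: a well-behaved LibOS chooses what to
        release (and could write out critical state first)."""
        if self.cooperative:
            self.resources.discard(resource)
            return True
        return False    # misbehaving LibOS ignores the request

def revoke(libos, resource):
    if libos.on_revoke(resource):
        return "released"               # visible revocation succeeded
    libos.resources.discard(resource)   # abort protocol: forcible reclaim
    return "aborted"
```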
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt;Roch. Microkernel versus monolithic kernel.&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I will cite/reference it better later&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source, just an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel -&lt;br /&gt;
Minimalistic abstractions for developers.&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that an exokernel can give developers low-level access, similar to direct access through a protected layer, while at the same time containing enough hardware abstraction to give application programs a similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions exposed to the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of the machine or devices requested by an application program&lt;br /&gt;
Exokernel – I&#039;ve got a sound card&lt;br /&gt;
Virtual Machine – I&#039;ve got the sound card you&#039;re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I&#039;ve got a sound card that plays Kazakhstan sound format only&lt;br /&gt;
Microkernel – very small, very predictable, good for scheduling (QNX is a microkernel: POSIX compatible, with the benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
This is some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware, without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machines. There is a similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space for efficiency. But one major problem with this is that there is a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each other&#039;s instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though, but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though... Oh well. Their loss.  I will do some before or after work today. I&#039;ll start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah i think there was more like 7 of us. btw if anyone has any more information feel free to add it. it would be nice if you add the references so that citing is really easy; on acm.org it will auto give you the citation info (where it says Display Formats click on ACM Ref and a new window with the citation info auto pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no, big words.  Sorry about the Microkernels section not being done yet.  Working on an outline now.  Finally found how to access the ACM through Carleton.  Gawd. &lt;br /&gt;
I am planning an outline: a quick bit about kernels in general (maybe mention monolithic kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference (whomever did that == hero: true) about the scheduling and the memory management.  Should that be included in kernels in general, and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro is chosen, and intro is done.  Microkernel and kernel will be done in a hour I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in a hour and we can double check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down, and we can go over it.  Cling, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
 what do you mean by &amp;quot;weaker&amp;quot;? (i think you mean exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part please. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge for applications to access the hardware level. It is responsible for managing the system&#039;s resources, such as memory, disk storage, task management and networking. It is on how the kernel goes about such management, and its connections, that we are comparing exokernels to microkernels and virtual machines. In the exokernel conceptual model, we can see that exokernels become much smaller than microkernels since, by design, they are tiny and strive to keep functionality limited to protection and multiplexing of resources. The virtual machine implementation of virtualizing all devices on the system may provide compatibility, but it adds a layer of complexity that makes the system less efficient than a real machine, as it accesses the hardware indirectly. It can be observed how the exokernel provides low-level hardware access and custom abstractions to those devices to improve program performance, as opposed to a VM&#039;s implementation. The exokernel concept has a design that can take the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system.  Without the kernel, an operating system could not function.  &lt;br /&gt;
&lt;br /&gt;
A kernel is the lowest-level section of an operating system.  It has the most privileges in the system.  It runs alongside the &#039;user space&#039;. It is in the &#039;user space&#039; that a user has access and can run applications and libraries.[8]  This leaves the kernel with the need to manage the other necessary processes, such as the file systems and process scheduling.  The kernel is layered, with the most authoritative process on its lowest level.[8]  A monolithic kernel, a kernel that contains all mandatory processes within itself, was the common kernel type utilized by the earlier versions of today&#039;s operating systems.  However, this architecture had problems. [8]  If the kernel needed to be updated with more code, or a fix for the system, the entire kernel would need to be recompiled, and due to the amount of processes within it, this would take an inefficient amount of time.  This is where a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel; code is only included in the kernel if moving it out would affect the system in some way, for example for performance and efficiency reasons. [7] So, a microkernel is a kernel that has a reduced amount of mandatory software within itself.  This means that it contains less software that it has to manage, and has a reduced size.  The microkernel structure that emerged at the end of the 1980s and the early 1990s removes processes like the file systems and the drivers from the kernel, leaving it with process control, input/output control, and interrupts. [8] This new structure makes the system much more modular, and easier to provide solutions for.  If a driver must be patched or upgraded, the kernel does not need to be recompiled. [7] The old driver can be removed, and while the device waits for the system to recognize it, the operating system replaces the driver.  This allows real-time updating, and it can be done while the computer is still functional.  This can reduce complete crashes of the system.  If a device fails, the kernel will not crash itself, like a monolithic kernel would.  The microkernel can reload the driver of the device that failed and continue functioning. [7]&lt;br /&gt;
&lt;br /&gt;
Want more on the scheduling?  I can do that if wanted. - Key note on the exokernel&#039;s multiplexing vs the microkernel&#039;s messaging: exo is more efficient, so perhaps run with the idea that messaging between processes is not necessarily the ideal way. We need to also start laying out weaknesses in the design in order to play up the idea that an exokernel just does it better.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. This entails virtualization of the physical machine&#039;s resources in order to share them among the OSes run in the VM. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is actually running in a virtualized environment on top of a hostOS (the OS actually running on the machine), sharing the resources.&lt;br /&gt;
&lt;br /&gt;
Virtual Machines generally involve two key components: the Virtual Machine Monitor, or VMM, and the VM. &lt;br /&gt;
&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows the VM to operate as if it were on its own machine, by handling any requests for resources and reconciling these requests with what has actually been provided to the VM by the hostOS. The hostOS provides management for the VMM as well as physical access to devices, hardware and drivers.&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS we are running through virtualization. This OS is called the guestOS, and it will only be able to access the resources that have been made available to the VM by the hostOS. Otherwise, the guestOS will not know about any other resources, and it does not have direct access to physical hardware; that is taken care of by the VMM.&lt;br /&gt;
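As a toy illustration of this mediation (all names invented), a VMM can be thought of as a lookup from the guest&#039;s virtual resource names to the physical resources the hostOS allocated to the VM:&lt;br /&gt;

```python
# Toy model: the VMM maps the guestOS's virtual resource names onto the
# subset of physical resources the hostOS actually allocated to this VM.

class VMM:
    def __init__(self, allocated):
        # allocated: virtual name -> physical resource granted by the hostOS
        self.allocated = dict(allocated)

    def handle_request(self, virtual_name):
        """The guestOS asks for a resource by its virtual name; the VMM
        either forwards the request to the real resource or denies it.
        The guestOS never sees resources outside its allocation."""
        if virtual_name not in self.allocated:
            raise LookupError("resource not allocated to this VM")
        return self.allocated[virtual_name]
```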
&lt;br /&gt;
[ I&#039;ll need to do some more research on the types of virtualization though before I can discuss that. If anyone has more information on them to put up in the points that would be helpful but I&#039;ll get to it right after class tomorrow morning. ]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet.  Looks like we got it covered.  We should read each others parts and put suggestions and edits. One of us should try and change it to one style if there are contradictions. And to put it on the main page.  We can figure that out tomorrow.  - Jon S&lt;br /&gt;
&lt;br /&gt;
Once the other parts are up and you see anything you know of as a good reference to back it up, put the link so we can use it. -Slade&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3590</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3590"/>
		<updated>2010-10-14T03:49:13Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security; corruption in one does not necessarily cause failure in the system&lt;br /&gt;
* Large amount of moving from a process to kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of the microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* one failure of a program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system. [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems, and any subsystem that is used can expect this guarantee from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates each virtual page to a physical page frame. [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides three operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s. [7]&lt;br /&gt;
*** the page must be available to the owner. [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** page is not removed from the owner&#039;s address space. [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
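To make the three operations concrete, here is a toy Python sketch of our own (purely illustrative, not code from [7]) that models address spaces as page mappings:&lt;br /&gt;

```python
# Toy model: an address space maps virtual page numbers to physical frames.
# grant moves a page from owner to recipient, map shares it, and flush
# revokes it from every address space that received it from the flusher.

class AddressSpace:
    def __init__(self):
        self.pages = {}          # virtual page number -> physical frame
        self.mapped_out = {}     # virtual page -> list of (space, vpage) shares

    def grant(self, vpage, recipient, rpage):
        # the page must be present in the owner; it is removed, then installed
        frame = self.pages.pop(vpage)
        recipient.pages[rpage] = frame

    def map(self, vpage, recipient, rpage):
        # the page stays in the owner; the recipient gets a share of it
        frame = self.pages[vpage]
        recipient.pages[rpage] = frame
        self.mapped_out.setdefault(vpage, []).append((recipient, rpage))

    def flush(self, vpage):
        # revoke the page from all direct and indirect recipients
        for space, rpage in self.mapped_out.pop(vpage, []):
            space.pages.pop(rpage, None)
            space.flush(rpage)   # recipients may have mapped it onward
```

Note this sketch only tracks pages passed on by map; the real model also lets a flush higher up the mapping chain reach pages that were later granted onward, which this simplification omits.&lt;br /&gt;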
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel IPC&lt;br /&gt;
** grant and map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subsystems to communicate. [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads which are empty except for their unique sender ids [7]&lt;br /&gt;
** transformation of the message to the interrupt is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt. [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
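The interrupts-as-IPC idea above can be sketched in Python (our own toy illustration; names like DiskDriver and IRQ_DISK are invented, not from [7]):&lt;br /&gt;

```python
# Toy sketch: the kernel turns a hardware interrupt into an IPC message whose
# only content is the unique sender id of the (virtual) hardware thread;
# a user-level driver then does the device-specific handling.

from collections import deque

IRQ_DISK = 11   # hypothetical sender id for the disk "hardware thread"

class Kernel:
    def __init__(self):
        self.queues = {}   # receiver name -> pending message queue

    def register_driver(self, name):
        self.queues[name] = deque()

    def hardware_interrupt(self, irq, driver):
        # the kernel only wraps the interrupt as an empty message;
        # it does not understand or handle the interrupt itself
        self.queues[driver].append({"sender": irq})

class DiskDriver:
    def __init__(self, kernel):
        self.kernel = kernel
        kernel.register_driver("disk")
        self.handled = 0

    def run_once(self):
        msg = self.kernel.queues["disk"].popleft()
        if msg["sender"] == IRQ_DISK:
            self.handled += 1   # device-specific work happens at user level
```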
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among guest OSes running on top of a host OS&lt;br /&gt;
* The virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualization of hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM is running is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through a process-level protection mechanism [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** Similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** Provide a virtualization that is similar to hardware [From the paper posted, no citation yet]&lt;br /&gt;
** GuestOS and Hypervisor work together to improve performance&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Micro-kernel architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: security and handling of resource sharing&lt;br /&gt;
* Once application receives resource, it can use it as it wishes/in control&lt;br /&gt;
* Keep the basic kernel to handle allocating resources and sharing rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* the goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this the exokernel separates protection from management; in doing so it performs 3 important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel, so it does not have to be trusted by the application either; the example given is a bad parameter passed to the LibOS affecting only that application [1] (so the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** any application running on the exokernel can change its LibraryOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except when required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose allocation[1]&lt;br /&gt;
** allow LibOs to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic, the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names[1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called)--[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** Physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use visible revocation protocol [1]&lt;br /&gt;
** allows well-behaved LibOSes to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* LibOSes handle resource policy decisions&lt;br /&gt;
* exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* only checks authorization at bind time [1]&lt;br /&gt;
** applications with complex needs for resources are only authorized during bind. [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand the complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
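A toy illustration (our own sketch, not code from [1]) of the bind-time vs. access-time split described above:&lt;br /&gt;

```python
# Toy sketch of a secure binding: the expensive authorization check happens
# once, at bind time; each later access is only a cheap token lookup, and the
# kernel never has to understand what the resource means to the application.

import secrets

class Exokernel:
    def __init__(self, acl):
        self.acl = acl          # resource -> set of authorized applications
        self.bindings = {}      # binding token -> resource

    def bind(self, app, resource):
        # bind time: the only point where credentials are examined
        if app not in self.acl.get(resource, set()):
            raise PermissionError("not authorized")
        token = secrets.token_hex(8)
        self.bindings[token] = resource
        return token

    def access(self, token):
        # access time: a cheap table lookup, no credential logic at all
        return self.bindings[token]
```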
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** LibOSes are able to learn which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to simply delete, as the LibOS might need to write some system-critical data to the resource [1]&lt;br /&gt;
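A toy sketch (ours, not from [1]) of visible revocation backed by the abort protocol:&lt;br /&gt;

```python
# Toy model: the exokernel asks the LibOS to release a resource (visible
# revocation); if the LibOS fails to comply within the allowed number of
# requests, the abort protocol takes the resource away by force.

class LibOS:
    def __init__(self, cooperative):
        self.cooperative = cooperative
        self.resources = set()

    def please_release(self, r):
        # visible revocation: the LibOS decides how (and whether) to comply,
        # e.g. writing critical state out before letting the resource go
        if self.cooperative:
            self.resources.discard(r)
            return True
        return False

class Exokernel:
    def __init__(self, retries=3):
        self.retries = retries

    def revoke(self, libos, r):
        for _ in range(self.retries):
            if libos.please_release(r):
                return "released"
        # abort protocol: forcibly break the binding
        libos.resources.discard(r)
        return "aborted"
```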
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt;Roch, Microkernel versus monolithic kernel.&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I will cite/reference it better later&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, microkernels *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview](Power Point)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source but an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels: an exokernel can give developers low-level access, similar to direct access through a protected layer, and at the same time can contain enough hardware abstraction to give application programs the same benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions to developer&lt;br /&gt;
Microkernel - the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine - a simulation of any OS or devices requested by an application program&lt;br /&gt;
Exokernel - I&#039;ve got a sound card&lt;br /&gt;
Virtual Machine - I&#039;ve got the sound card you&#039;re looking for, a perfect virtual match&lt;br /&gt;
Microkernel - I&#039;ve got a sound card that plays Kazakhstan sound format only&lt;br /&gt;
Microkernel - Very small, very predictable, good for scheduling (QNX is a microkernel - POSIX compatible, benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
This is some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction. They have the same basic architecture, with the core functionality contained in the kernel to manage everyone. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is this similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space for modularity. But one major problem with this is that there is a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
BTW, on my page (I guess you can call it that) I have some resources I&#039;ve found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each other&#039;s instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though... Oh well. Their loss.  I will do some before or after work today. I&#039;ll start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Yeah, I think there were more like 7 of us. BTW, if anyone has any more information feel free to add it; it would be nice if you add the references so that citing is really easy. On acm.org it will auto-give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info auto pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no, big words.  Sorry about the Microkernels not being done yet.  Working on an outline now.  Finally found how to access the ACM through Carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference (whoever did that == hero: true) about the scheduling and the memory management.  Should that be included in kernels in general, and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do the Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro are chosen, and the intro is done.  Microkernel and kernel will be done in an hour, I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in an hour and we can double-check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down, and we can go over it.  Cling, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel is an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add them here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
 What do you mean by &amp;quot;weaker&amp;quot;? (I think you mean exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge for applications to access the hardware level. It is responsible for managing the system&#039;s resources such as memory, disk storage, task management and networking. It is on how the kernel goes about such management that we are comparing exokernels to microkernels and virtual machines. In the exokernel conceptual model, exokernels are much smaller than microkernels since, by design, they are tiny and strive to keep functionality limited to protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it adds a layer of complexity that makes the system less efficient than a real machine, as it accesses the hardware indirectly. In contrast to a VM&#039;s implementation, the exokernel provides low-level hardware access and custom abstractions over those devices to improve program performance. The exokernel concept has a design that can take the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet.  Looks like we got it covered.  We should read each other&#039;s parts and put up suggestions and edits. One of us should try to change it to one style if there are contradictions, and to put it on the main page.  We can figure that out tomorrow.  - Jon S&lt;br /&gt;
&lt;br /&gt;
Once the other parts are up and you see anything you know of as a good reference to back it up, put the link so we can use it. -Slade&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3575</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3575"/>
		<updated>2010-10-14T03:08:28Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security; corruption in one does not necessarily cause failure in the system&lt;br /&gt;
* Large amount of moving from a process to kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of the microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* one failure of a program does not impact any other programs [7]&lt;br /&gt;
* can support more than one api or strategies since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system. [7]&lt;br /&gt;
* any subsystem program created must be independent of all other subsystem&#039;s, any subsystem that is used can guarantee this from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates virtual pages to physical page frames [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides three operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s [7]&lt;br /&gt;
*** the page must be available to the owner [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and Flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
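&lt;br /&gt;
The Grant/Map/Flush operations above can be sketched as a toy Python model (class and method names are mine for illustration, not the actual interface from [7]):&lt;br /&gt;

```python
# Toy model of the microkernel address-space operations described
# above. Purely illustrative: a real microkernel works on page tables,
# not Python sets.

class AddressSpace:
    def __init__(self, name):
        self.name = name
        self.pages = set()       # pages this subsystem can access
        self.shared_to = {}      # page -> set of spaces it was mapped into

    def grant(self, page, recipient):
        # Grant: the page leaves the owner entirely and moves to the
        # recipient. Raises KeyError if the owner does not hold it.
        self.pages.remove(page)
        recipient.pages.add(page)

    def map(self, page, recipient):
        # Map: the page is shared; the owner keeps it too.
        assert page in self.pages
        recipient.pages.add(page)
        self.shared_to.setdefault(page, set()).add(recipient)

    def flush(self, page):
        # Flush: recursively remove the page from every recipient it
        # was mapped into, while the owner keeps it.
        for r in self.shared_to.pop(page, set()):
            r.pages.discard(page)
            r.flush(page)
```

This is enough to show why Map plus Flush suffice to build user-level memory managers and pagers: a pager maps pages into clients and can always reclaim them with a flush.&lt;br /&gt;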
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread belongs to an address space, all changes to a thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** Grant and Map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subsystems to communicate [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads which are empty except for their unique sender IDs [7]&lt;br /&gt;
** the transformation of the interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged operation is needed, it is done implicitly the next time an IPC is sent from the device [7]&lt;br /&gt;
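&lt;br /&gt;
The interrupt-as-IPC idea above could be sketched like this (all names are illustrative, not an actual kernel API):&lt;br /&gt;

```python
# Toy illustration: the kernel does not interpret a hardware interrupt;
# it only wraps it in an IPC message whose sender id identifies the
# per-device "hardware thread". Device-specific handling, including
# resetting the interrupt, happens in a user-level driver.

import queue

ipc_mailbox = queue.Queue()  # the user-level driver waits on this

def kernel_interrupt_entry(irq_number):
    # Kernel side: transform the interrupt into a message. Nothing
    # device-specific happens here.
    ipc_mailbox.put({"sender": f"hw-thread-{irq_number}"})

def user_level_driver():
    # User side: receive the message and do the real work.
    msg = ipc_mailbox.get()
    return f"handled interrupt from {msg['sender']}"
```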
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among guest OSes running on top of a host OS&lt;br /&gt;
* Each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualization of the hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM is using for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through a process-level protection mechanism [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all privileged instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the hostOS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to the types above but uses the hostOS for device driver access [6]&lt;br /&gt;
** provides a virtualization interface that is similar to the hardware [From the paper posted, no citation yet]&lt;br /&gt;
** guestOS and hypervisor work together to improve performance&lt;br /&gt;
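&lt;br /&gt;
The ring-based isolation above might be sketched as follows (purely illustrative; the names and the dict-based VM are mine, not how any real VMM is written):&lt;br /&gt;

```python
# Toy sketch of Type I isolation: every privileged instruction issued
# by a guest (ring 1) traps into the VMM (ring 0), which validates and
# emulates it on behalf of the guest.

def vmm_trap(vm, instruction):
    # Ring 0: the VMM decides whether to emulate the instruction.
    if instruction in vm["allowed"]:
        return f"VMM emulates {instruction} for {vm['name']}"
    raise PermissionError(instruction)

def guest_execute(vm, instruction):
    # Ring 1: the guest cannot touch the hardware directly; every
    # privileged instruction goes through the VMM.
    return vmm_trap(vm, instruction)
```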
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource itself, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: security and handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes / is in control&lt;br /&gt;
* Keep a basic kernel to handle allocating and sharing resources rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* The goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this, the exokernel separates protection from management; in doing so, it performs 3 important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel, so a fault affects only the application using it; the example given is a bad parameter passed to the LibOS affecting only that application [1] (So the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** any application running on the exokernel can change its LibraryOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except when required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose Allocation [1]&lt;br /&gt;
** allow the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names [1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are, I think it is as simple as what the hardware is called)--[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* The LibOS handles resource policy decisions&lt;br /&gt;
* Exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* allows the separation of protection and resource use [1]&lt;br /&gt;
* authorization is only checked at bind time [1]&lt;br /&gt;
** applications with complex resource needs are only authorized at bind time [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
*three ways to implement:&lt;br /&gt;
* Hardware mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
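&lt;br /&gt;
The bind-once / check-cheaply idea behind secure bindings could be modelled like this (hypothetical names and capability scheme, not the interface from [1]):&lt;br /&gt;

```python
# Toy model of a secure binding: the expensive authorization check
# happens exactly once, at bind time, and yields a capability. Later
# accesses only check the capability, so the kernel never needs to
# understand what the resource actually is.

import secrets

_bindings = {}  # capability token -> bound resource

def bind(app, resource, acl):
    # Bind time: the only place the complex authorization logic runs.
    if app not in acl:
        raise PermissionError(app)
    cap = secrets.token_hex(8)
    _bindings[cap] = resource
    return cap

def access(cap):
    # Access time: a cheap table lookup, no policy reasoning at all.
    return _bindings[cap]
```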
==== Visible Resource Revocation ====&lt;br /&gt;
* used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** LibOSes are able to learn which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful when deleting, as the LibOS might need to write some system-critical data to the resource first [1]&lt;br /&gt;
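&lt;br /&gt;
The visible-revocation-then-abort flow above can be sketched as a toy model (class and method names are mine, not from [1]):&lt;br /&gt;

```python
# Toy model: the exokernel first asks the LibOS to give a resource
# back (visible revocation), and only repossesses it by force (abort
# protocol) if the LibOS fails to respond.

class LibOS:
    def __init__(self, cooperative=True):
        self.cooperative = cooperative
        self.resources = set()

    def release(self, resource):
        # A well-behaved LibOS picks what to release, e.g. after
        # writing out any critical state it still needs.
        if self.cooperative:
            self.resources.discard(resource)
            return True
        return False  # ignores the revocation request

def revoke(libos, resource):
    if libos.release(resource):
        return "released voluntarily"
    # Abort protocol: take the resource away by force.
    libos.resources.discard(resource)
    return "repossessed by force"
```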
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source, just an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that exokernels can give developers low-level access, similar to direct access, through a protected layer, while at the same time containing enough hardware abstraction to offer application programs a similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions to developer&lt;br /&gt;
Microkernel - the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine - a simulation of any or all devices requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
MicroKernel - Very small, very predictable, good for scheduling (QNX is a microkernel - POSIX compatible, benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
This is some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware, without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is this similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space. But one major problem with this is that there would be a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux, and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each others instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though... Oh well. Their loss.  I will do some before or after work today. I&#039;ll start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah, I think there were more like 7 of us. BTW, if anyone has any more information, feel free to add it. It would be nice if you add the references so that citing is really easy; on acm.org it will auto-generate the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info automatically pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no big words.  Sorry about the Microkernels not done yet.  Working on an outline now.  Finally found how to access the ACM through carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference (whoever did that == hero: true) about the scheduling and the memory management.  Should that be included in kernels in general, and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro is chosen, and intro is done.  Microkernel and kernel will be done in a hour I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in a hour and we can double check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write. I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section, could you post your word count so we can aim to be around there? It would obviously be just a recommendation, but it&#039;s better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay, but I could be wrong; I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down, and we can go over it.  Cling, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel was an even lower-level design than the microkernel, closer to the hardware, without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
what do you mean by &amp;quot;weaker&amp;quot;? (I think you mean that exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part please. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge for applications to access the hardware level. It is responsible for managing the system&#039;s resources such as memory, disk storage, task management and networking. It is on how the kernel goes about such management and its connections that we are comparing exokernels to microkernels and virtual machines. In the exokernel conceptual model, we can see that exokernels become much smaller than microkernels since, by design, they are tiny and strive to keep functionality limited to protection and multiplexing of resources. The virtual machine implementation of virtualizing all devices on the system may provide compatibility, but it adds a layer of complexity that makes the system less efficient than a real machine, as it accesses the hardware indirectly. It can be observed how the exokernel provides low-level hardware access and custom abstractions of those devices to improve program performance, as opposed to a VM&#039;s implementation. The exokernel concept has a design that can take the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3470</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3470"/>
		<updated>2010-10-14T00:30:50Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* Unsorted */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of switching from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* a failure in one program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems, and any subsystem that is used can expect this guarantee from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates virtual pages to physical page frames [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides three operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s [7]&lt;br /&gt;
*** the page must be available to the owner [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and Flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
&lt;br /&gt;
===== Thread&#039;s IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** Since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** Grant and Map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subsystems to communicate [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads that are empty except for their unique sender IDs [7]&lt;br /&gt;
** transformation of the message to the interrupt is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged operation is needed, it is performed implicitly the next time an IPC operation is sent from the device [7]&lt;br /&gt;
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OSs running on top of a host OS&lt;br /&gt;
* Each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for the virtualization of hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== Three Approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs directly on the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through a process-level protection mechanism [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs on top of the hostOS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** Provide a virtualization that is similar to hardware [From the paper posted, no citation yet]&lt;br /&gt;
** GuestOS and Hypervisor work together to improve performance&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource itself, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: just security and the handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes / is in control&lt;br /&gt;
* Keep a basic kernel to handle allocating and sharing resources rather than developing straight against the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* The goal of the exokernel is to give each LibOS maximum freedom without allowing them to interfere with each other. To do this, the exokernel separates protection from management; in doing so it performs three important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** Reduces the number of kernel crossings[1]&lt;br /&gt;
** Not trusted by the exokernel, so it only needs to be trusted by the application; the example given is a bad parameter passed to the LibOS, where only the application is affected [1] (So the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** Any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** Applications that use a LibOS implementing standard interfaces (e.g. POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** A LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except where required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build efficient, flexible systems [1]&lt;br /&gt;
*Expose allocation[1]&lt;br /&gt;
** allows the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic, the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names[1]&lt;br /&gt;
** Use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** Physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** Visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* LibOSes handle resource policy decisions&lt;br /&gt;
* Exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* only checks authorization during bind time [1]&lt;br /&gt;
** applications with complex resource needs are only authorized at bind time [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization once approved, when the application tries to use the resource the exokernel is only concerned about policy conflict&#039;s)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
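The bind-time vs access-time split described above can be sketched as a toy Python model (the Resource class, token scheme, and credential check are all invented for illustration, not the exokernel&#039;s real mechanism):&lt;br /&gt;

```python
import secrets

# Toy model of exokernel-style secure bindings: authorization is checked
# once when the binding is created; later accesses only validate the
# binding token, without re-examining the application's complex needs.

class Resource:
    def __init__(self, owner):
        self.owner = owner
        self.bindings = {}          # token -> bound application name

    def bind(self, app, credentials):
        """Expensive authorization check, done once at bind time."""
        if credentials != self.owner:   # stand-in for a real policy check
            raise PermissionError("bind refused")
        token = secrets.token_hex(8)
        self.bindings[token] = app
        return token

    def access(self, token):
        """Cheap check, done on every access."""
        if token not in self.bindings:
            raise PermissionError("no binding")
        return "data"

disk = Resource(owner="libos-a")
t = disk.bind("app1", credentials="libos-a")   # authorized once
disk.access(t)                                  # fast path thereafter
```

The point of the split is the one in the notes: the kernel can guard every access while only understanding the resource&#039;s semantics at bind time.&lt;br /&gt;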
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** the LibOS is able to learn which resources are scarce [1]&lt;br /&gt;
* Slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used: processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to simply delete state, as the LibOS might need to write some system-critical data to the resource [1]&lt;br /&gt;
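The visible-revocation and abort sequence described in the last two sections can be sketched as follows (a hypothetical model; the class and method names are invented):&lt;br /&gt;

```python
# Toy model of visible revocation with an abort fallback: the exokernel
# first asks the LibOS to give a resource back; if the LibOS does not
# comply, the abort protocol forcibly breaks the binding.

class LibOS:
    def __init__(self, name, well_behaved=True):
        self.name = name
        self.well_behaved = well_behaved
        self.resources = set()

    def please_release(self, resource):
        """Visible revocation: the LibOS chooses to release (and could
        first save critical state held in the resource)."""
        if self.well_behaved:
            self.resources.discard(resource)
            return True
        return False                    # ignores the request

class Exokernel:
    def revoke(self, libos, resource):
        if libos.please_release(resource):
            return "released voluntarily"
        # Abort protocol: take the resource back by force.
        libos.resources.discard(resource)
        return "aborted"

ek = Exokernel()
good, bad = LibOS("good"), LibOS("bad", well_behaved=False)
good.resources.add("frame7"); bad.resources.add("frame9")
ek.revoke(good, "frame7")   # -> "released voluntarily"
ek.revoke(bad, "frame9")    # -> "aborted"
```

This mirrors the trade-off in the notes: the visible path is slower because the application participates, but it lets a well-behaved LibOS decide what to give up; the abort path is the safety net.&lt;br /&gt;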
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* The exokernel provides minimal abstractions in the kernel; the microkernel provides more&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source but an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels, in the sense that an exokernel gives developers low-level access, similar to direct access through a protected layer, while at the same time containing enough hardware abstraction to offer a similar benefit of hiding the hardware resources from application programs.&lt;br /&gt;
Exokernel – fewest hardware abstractions exposed to the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of a machine or devices requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
Microkernel – very small, very predictable, good for scheduling (QNX is a microkernel: POSIX compatible, with the benefits of running Linux software like modern browsers)&lt;br /&gt;
&lt;br /&gt;
This is some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is this similar notion of partitioning the resources among applications/OSs and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individuel assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space. But one major problem with this is there would be a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, however, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux, and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each others instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though.. . Oh well. Their loss.  I will do some before or after work today. Ill start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah i think there was more like 7 of us btw if any one has any more information feel free to add it would be nice if you add the references so that way citing is really easy on  acm.org it will auto give you the citation info (where it says Display Formats click on ACM Ref  and new window with the citation info auto pop&#039;s up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no big words.  Sorry about the Microkernels not done yet.  Working on an outline now.  Finally found how to access the ACM through carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference ( Whomever did that == hero: true) about the scheduling and the Memory management.  Should that be included in kernels in general and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro is chosen, and intro is done.  Microkernel and kernel will be done in a hour I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in a hour and we can double check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about. Get something down, and we can go over it. Cling, just write what you think. There is not a lot to go over if I write kernel/microkernel well enough. What is an exokernel? The exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually breakdown the essay into components then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
 what do you mean by &amp;quot;weaker&amp;quot;(i think you mean exokernels&#039; takes the best of both worlds ) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather that we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -unassigned&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
Conclusion -unassigned&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3441</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3441"/>
		<updated>2010-10-14T00:01:29Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* the failure of one program does not impact other programs [7]&lt;br /&gt;
* can support more than one API or strategy, since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system [7]&lt;br /&gt;
* every subsystem must be independent of all other subsystems, and each subsystem can rely on this guarantee from all the others [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates physical pages to virtual pages [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides three operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s [7]&lt;br /&gt;
*** the page must be available to the owner [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and Flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** Since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** Grant and Map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subsystems to communicate [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads that are empty except for their unique sender IDs [7]&lt;br /&gt;
** transformation of the message to the interrupt is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged operation is needed, it is performed implicitly the next time an IPC operation is sent from the device [7]&lt;br /&gt;
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OSs running on top of a host OS&lt;br /&gt;
* Each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for the virtualization of hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== Three Approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs directly on the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through a process-level protection mechanism [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs on top of the hostOS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** Provide a virtualization that is similar to hardware [From the paper posted, no citation yet]&lt;br /&gt;
** GuestOS and Hypervisor work together to improve performance&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource itself, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: just security and the handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes / is in control&lt;br /&gt;
* Keep a basic kernel to handle allocating and sharing resources rather than developing straight against the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* The goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this the exokernel separates protection from management, which involves 3 important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel, so it does not have to be trusted by the application either; the example given is a bad parameter passed to the LibOS, where only that application is affected [1] (So a buggy LibOS can&#039;t harm the kernel?)&lt;br /&gt;
** any application running on the exokernel can change its LibraryOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing a standard interface (e.g. POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed against a low-level, machine-independent interface that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except where required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose Allocation [1]&lt;br /&gt;
** allow the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names [1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware calls the resource) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* LibOSes handle resource policy decisions&lt;br /&gt;
* the exokernel has a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and so on)&lt;br /&gt;
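A minimal sketch of that last point: the exokernel enforces policy purely through allocate/deallocate decisions while leaving management of what the blocks hold to the LibOS. The class name and the share-as-quota scheme are invented for illustration.&lt;br /&gt;

```python
# Illustrative only: an exokernel-style allocator that decides BETWEEN
# competing LibOSes (each has a share of the physical blocks) but never
# manages what a LibOS does with a block it owns.

class ExokernelAllocator:
    def __init__(self, total_blocks, shares):
        # shares: libos name -> max fraction of blocks it may hold
        self.free = list(range(total_blocks))
        self.total = total_blocks
        self.shares = shares
        self.owned = {name: set() for name in shares}  # track ownership

    def allocate(self, libos):
        limit = int(self.shares[libos] * self.total)
        if not self.free or len(self.owned[libos]) >= limit:
            return None  # policy decision: deny; no management involved
        block = self.free.pop()
        self.owned[libos].add(block)
        return block

    def deallocate(self, libos, block):
        self.owned[libos].remove(block)  # guard: caller must own it
        self.free.append(block)
```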
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* authorization is checked only at bind time [1]&lt;br /&gt;
** applications with complex resource needs are only authorized during bind [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts) --[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
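The bind-time vs access-time split can be illustrated with a toy capability table (the names here are assumptions, not the paper&#039;s actual interface): the potentially expensive, application-specific authorization runs once at bind time, while the access path is a cheap lookup that needs no understanding of the resource.&lt;br /&gt;

```python
# Toy secure-binding table: expensive checks at bind time, a single
# dictionary lookup at access time.

class SecureBindings:
    def __init__(self):
        self.bindings = {}  # (app, resource) -> capability token
        self.next_token = 0

    def bind(self, app, resource, authorize):
        # complex, application-specific check done ONCE, here
        if not authorize(app, resource):
            raise PermissionError("bind refused")
        self.next_token += 1
        self.bindings[(app, resource)] = self.next_token
        return self.next_token

    def access(self, app, resource, token):
        # fast path: one table lookup; the kernel protects the resource
        # without knowing what the resource actually is
        return self.bindings.get((app, resource)) == token
```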
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** the LibOS is able to learn which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to simply delete the resource, as the LibOS might need to write some system-critical data to it first [1]&lt;br /&gt;
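The revocation/abort interplay can be sketched as follows; the LibOS classes and method names are invented for illustration and the real exokernel interface differs.&lt;br /&gt;

```python
# Sketch: visible revocation with an abort fallback. The exokernel asks
# the LibOS to give a resource back; a well-behaved LibOS picks which
# instance to release (e.g. after flushing critical data). If it fails
# to respond, the abort protocol reclaims a resource by force.

def revoke(libos, kernel_held):
    released = libos.please_release()   # visible: the LibOS is told
    if released is not None:
        kernel_held.add(released)
        return "visible"
    victim = libos.resources.pop()      # abort: reclaim forcibly
    kernel_held.add(victim)
    return "aborted"

class WellBehavedLibOS:
    def __init__(self, resources):
        self.resources = list(resources)

    def please_release(self):
        # application-level policy: choose the least valuable instance
        self.resources.sort()
        return self.resources.pop(0)

class UnresponsiveLibOS(WellBehavedLibOS):
    def please_release(self):
        return None  # ignores the revocation request
```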
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to the hardware, which would create a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* minimal abstractions are provided by the exokernel&lt;br /&gt;
** applications are given more power in the exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resources is given to the application/OS&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* an exokernel runs applications; a VM runs a whole OS&lt;br /&gt;
* a VM system uses a hostOS, with guestOSes running on top&lt;br /&gt;
* VMs virtualize resources; the exokernel deals with real resources&lt;br /&gt;
* a VM hides a lot of information because it emulates; an exokernel does not&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source, just as an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel – minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that an exokernel can give developers low-level access, similar to direct hardware access but through a protected layer, while at the same time containing enough hardware abstraction to give application programs the usual benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions presented to the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of the machine or devices requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
Microkernel – very small, very predictable, good for scheduling (QNX is a microkernel – POSIX compatible, with the benefit of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question; please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, it can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is this similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space for modularity. But one major problem with this is that there would be a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each others instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though.. . Oh well. Their loss.  I will do some before or after work today. Ill start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah i think there was more like 7 of us btw if any one has any more information feel free to add it would be nice if you add the references so that way citing is really easy on  acm.org it will auto give you the citation info (where it says Display Formats click on ACM Ref  and new window with the citation info auto pop&#039;s up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no big words.  Sorry about the Microkernels not done yet.  Working on an outline now.  Finally found how to access the ACM through carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference ( Whomever did that == hero: true) about the scheduling and the Memory management.  Should that be included in kernels in general and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro is chosen, and intro is done.  Microkernel and kernel will be done in a hour I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in a hour and we can double check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually breakdown the essay into components then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
What do you mean by &amp;quot;weaker&amp;quot;? (i think you mean exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based off an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part please. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -unassigned&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -unassigned&lt;br /&gt;
&lt;br /&gt;
Conclusion -unassigned&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3440</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3440"/>
		<updated>2010-10-13T23:58:56Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* one failure of a program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy, since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems, and a subsystem must be able to guarantee this independence against all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates virtual pages to physical pages [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides 3 operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s [7]&lt;br /&gt;
*** the page must be available to the owner [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant? --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* map and flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
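A toy model of the three address-space operations (grant, map, flush) from [7]; the dict-based representation is purely illustrative, not a real MMU interface.&lt;br /&gt;

```python
# Toy address-space model: maps[page] is the set of address spaces
# that can currently see that page.

class AddressSpaces:
    def __init__(self):
        self.maps = {}

    def install(self, page, owner):
        self.maps[page] = {owner}

    def grant(self, page, owner, recipient):
        # the page moves: removed from the owner, added to the recipient
        self.maps[page].discard(owner)
        self.maps[page].add(recipient)

    def map(self, page, owner, recipient):
        # the page is shared: the owner keeps it, the recipient also sees it
        assert owner in self.maps[page]
        self.maps[page].add(recipient)

    def flush(self, page, owner):
        # the page is removed from every recipient, only the owner keeps it
        self.maps[page] = {owner}
```

Because grant and map let a memory manager or pager build address spaces out of its own pages, paging can live entirely outside the kernel, which is the point of the last bullets above.&lt;br /&gt;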
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread runs in an address space, all changes to a thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** grant and map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subsystems to communicate [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads whose messages are empty except for their unique sender id [7]&lt;br /&gt;
** the transformation of an interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupt handling and does not understand the interrupt [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC operation is performed for the device [7]&lt;br /&gt;
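The interrupts-as-IPC idea can be sketched as a toy (function names invented): the kernel only wraps the interrupt in a message carrying the sender id, and all device-specific handling happens at user level.&lt;br /&gt;

```python
# Toy interrupt-as-IPC pipeline: the kernel side knows nothing about the
# device; it just turns an interrupt into a message on an IPC queue.

import queue

ipc = queue.Queue()

def kernel_deliver_interrupt(sender_id):
    # kernel: transform the interrupt into a message; the message is
    # empty except for the unique sender id
    ipc.put({"sender": sender_id})

def user_level_driver(handled):
    # user level: device-specific handling, and the privileged re-enable
    # happens implicitly via the next IPC involving the device
    msg = ipc.get()
    handled.append(msg["sender"])
    return {"reenable": msg["sender"]}
```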
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* partitions or virtualizes resources among OSes, with the virtualization running on top of a host OS&lt;br /&gt;
* each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualizing the hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distribution that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== Three Approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs directly on the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through a process-level protection mechanism [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs on top of the hostOS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the hostOS for device-driver access [6]&lt;br /&gt;
** provides virtualization that is similar to the real hardware [From the paper posted, no citation yet]&lt;br /&gt;
** guestOS and hypervisor work together to improve performance&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Micro-kernel-like architecture with limited abstractions: ask for a resource, get the resource itself, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: just security and the handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes; it is in control&lt;br /&gt;
* Keep a basic kernel to handle resource allocation and sharing rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* The goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this the exokernel separates protection from management, which involves 3 important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel, so it does not have to be trusted by the application either; the example given is a bad parameter passed to the LibOS, where only that application is affected [1] (So a buggy LibOS can&#039;t harm the kernel?)&lt;br /&gt;
** any application running on the exokernel can change its LibraryOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing a standard interface (e.g. POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed against a low-level, machine-independent interface that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except where required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose Allocation [1]&lt;br /&gt;
** allow the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names [1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware calls the resource) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* LibOSes handle resource policy decisions&lt;br /&gt;
* the exokernel has a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and so on)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* only checks authorization at bind time [1]&lt;br /&gt;
** Applications with complex resource needs are only authorized at bind time.[1]&lt;br /&gt;
* access checking is done at access time, with no need to understand complex resource needs during access[1]&lt;br /&gt;
** (this means the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource, the exokernel is only concerned about policy conflicts)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance[1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
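The bind-time vs access-time split above can be sketched in a few lines of C. This is my own toy illustration, not code from [1]; all names (bind_page, access_page) are made up.&lt;br /&gt;

```c
/* Toy illustration (my own, not code from [1]) of a secure binding:
   the authorization/policy check happens once, at bind time; every
   later access is only a cheap ownership comparison. */

enum { NPAGES = 8, FREE = -1 };

static int page_owner[NPAGES] = {
    FREE, FREE, FREE, FREE, FREE, FREE, FREE, FREE
};

/* Bind time: the only point where policy is consulted. */
int bind_page(int app_id, int page)
{
    if (page_owner[page] != FREE)
        return 0;                /* already bound to someone: denied */
    page_owner[page] = app_id;   /* record the secure binding */
    return 1;
}

/* Access time: no policy logic, just an ownership check. */
int access_page(int app_id, int page)
{
    return page_owner[page] == app_id;
}
```

The point is that the (potentially expensive) authorization decision runs once at bind time; every later access only pays for an integer comparison.&lt;br /&gt;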
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** the LibOS can discern which resources are scarce [1]&lt;br /&gt;
* Slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to delete data, as the LibOS might need to write some system-critical data to the resource [1]&lt;br /&gt;
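A rough sketch of how visible revocation falls back to the abort protocol (hypothetical names and data layout, not the actual interface from [1]):&lt;br /&gt;

```c
/* Toy model (hypothetical names, not the interface from [1]) of
   visible revocation with the abort protocol as a fallback. */

enum { NRES = 4, FREE = -1 };

static int owner[NRES]      = { FREE, FREE, FREE, FREE };
static int responsive[NRES] = { 1, 1, 1, 1 }; /* does the holder answer? */
static int forced_aborts    = 0;

void allocate(int res, int libos) { owner[res] = libos; }

/* Returns 1 if the LibOS released the resource voluntarily,
   0 if the abort protocol had to reclaim it by force. */
int revoke(int res)
{
    if (responsive[res]) {  /* visible: the LibOS is told and complies, */
        owner[res] = FREE;  /* choosing what to save before releasing   */
        return 1;
    }
    forced_aborts = forced_aborts + 1; /* abort protocol kicks in */
    owner[res] = FREE;
    return 0;
}
```

A real exokernel would also have to repossess state safely, per the caveat above about system-critical data.&lt;br /&gt;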
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* In an exokernel, the kernel provides minimal abstractions&lt;br /&gt;
** Applications are given more power in an exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications; a VM runs an OS&lt;br /&gt;
* A VM uses a hostOS, and guestOSs run on top&lt;br /&gt;
* VMs virtualize resources; the exokernel deals with real resources&lt;br /&gt;
* A VM hides a lot of information because it emulates; an exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source, just an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels: they give developers low-level access, similar to direct access through a protected layer, while still containing enough hardware abstraction to hide the hardware resources from application programs.&lt;br /&gt;
Exokernel – fewest hardware abstractions exposed to the developer&lt;br /&gt;
Microkernel - the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
A virtual machine is a simulation of a machine or of devices requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
MicroKernel - Very small, very predictable, good for scheduling (QNX is a microkernel - POSIX compatible, with the benefit of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question; please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everyone. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is a similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space. But one major problem with this is that there would be a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux, and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each others instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though... Oh well. Their loss.  I will do some before or after work today. I&#039;ll start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Yeah, I think there were more like 7 of us. Btw, if anyone has any more information, feel free to add it; it would be nice if you add the references so that citing is really easy. On acm.org it will auto-give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info auto pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no, big words.  Sorry about the Microkernels not being done yet.  Working on an outline now.  Finally found how to access the ACM through Carleton.  Gawd. &lt;br /&gt;
I am planning an outline: a quick bit about kernels in general (maybe mention monolithic kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference (whoever did that == hero: true) about the scheduling and the memory management.  Should that be included in kernels in general, and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro is chosen, and intro is done.  Microkernel and kernel will be done in a hour I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in a hour and we can double check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually breakdown the essay into components then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
What do you mean by &amp;quot;weaker&amp;quot;? (I think you mean exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based off an exokernel design, and then focus on how an exokernel can take the best of both worlds. &lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
We have our intro/thesis statement&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro, should have it ready by tonight. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
and conclusion&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3138</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3138"/>
		<updated>2010-10-13T00:48:56Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of microkernels:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* failure of one program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy, since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system. [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems, and any subsystem that is used can guarantee this of all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates virtual pages to physical pages. [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides 3 operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s. [7]&lt;br /&gt;
*** the page must be available to the owner. [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space. [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
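The grant/map/flush operations above can be modelled on a toy visibility table. This is only an illustration of the semantics described in [7], not Liedtke's actual interface; all names here are made up.&lt;br /&gt;

```c
/* Toy model of the three address-space operations described in [7]
   (grant, map, flush); names and layout are illustrative only. */

enum { NSPACES = 4, NPAGES = 8 };

static int mapped[NSPACES][NPAGES]; /* 1 = page visible in that space */

/* Grant: the page moves from the granter into the recipient. */
int grant(int from, int to, int page)
{
    if (mapped[from][page] == 0)
        return 0;             /* page must be in the granter's space */
    mapped[from][page] = 0;   /* removed from the granter ... */
    mapped[to][page] = 1;     /* ... and placed in the recipient */
    return 1;
}

/* Map: the page is shared; the owner keeps it. */
int map_page(int from, int to, int page)
{
    if (mapped[from][page] == 0)
        return 0;
    mapped[to][page] = 1;
    return 1;
}

/* Flush: the owner withdraws the page from every other space. */
void flush(int own, int page)
{
    int s;
    for (s = 0; s != NSPACES; s++)
        if (s != own)
            mapped[s][page] = 0;
}
```

On the question above about flush and grant: in this toy model, at least, a granted page leaves the granter's space entirely, so the granter can no longer flush it.&lt;br /&gt;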
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** Since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel IPC&lt;br /&gt;
** grant and map also need IPC (so by the principle above, this has to be in the kernel)[7]&lt;br /&gt;
** basic way for subprocesses to communicate. [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is a set of threads which are empty except for their unique sender id [7]&lt;br /&gt;
** transformation of the message to the interrupt is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt. [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OSes, with virtualization running on top of a host OS&lt;br /&gt;
* The virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor[4]&lt;br /&gt;
* responsible for virtualization of hardware (mapping physical to virtual) and the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM is using for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM is running is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== Three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** Isolation of the guestOS from the hardware is done through a process-level protection mechanism[6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, scheduling is done by the VMM [6]&lt;br /&gt;
** on boot, the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** Similar to Type I, but uses the hostOS for device driver access [6]&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by kernel, security and handling of resource sharing&lt;br /&gt;
* Once application receives resource, it can use it as it wishes/in control&lt;br /&gt;
* Keep the basic kernel to handle allocating resources and sharing rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings[1]&lt;br /&gt;
* The goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this, the exokernel separates protection from management; in doing so it provides 3 important tasks[1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are)[1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** Reduces the number of kernel crossings[1]&lt;br /&gt;
** Not trusted by the exokernel, so it can be trusted by the application; the example given is a bad parameter passed to the LibOS, where only the application is affected.[1] (So the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** Any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** Applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** A LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces)[1]&lt;br /&gt;
** Exokernels should avoid resource management except where required for protection (allocation, revocation, ownership)[1]&lt;br /&gt;
** application-level resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose allocation[1]&lt;br /&gt;
** allow the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names[1]&lt;br /&gt;
** Use physical names whenever possible[3] (not too sure what physical names are; I think it is as simple as what the hardware is called)--[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** Physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed[3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** Visible revocation allows the LibOS to choose which instance of the resource to release[1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* The LibOS handles resource policy decisions&lt;br /&gt;
* Exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* only checks authorization at bind time [1]&lt;br /&gt;
** Applications with complex resource needs are only authorized at bind time.[1]&lt;br /&gt;
* access checking is done at access time, with no need to understand complex resource needs during access[1]&lt;br /&gt;
** (this means the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource, the exokernel is only concerned about policy conflicts)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance[1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** the LibOS can discern which resources are scarce [1]&lt;br /&gt;
* Slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to delete data, as the LibOS might need to write some system-critical data to the resource [1]&lt;br /&gt;
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* In an exokernel, the kernel provides minimal abstractions&lt;br /&gt;
** Applications are given more power in an exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications; a VM runs an OS&lt;br /&gt;
* A VM uses a hostOS, and guestOSs run on top&lt;br /&gt;
* VMs virtualize resources; the exokernel deals with real resources&lt;br /&gt;
* A VM hides a lot of information because it emulates; an exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source, only as an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels, in the sense that exokernels can give developers low-level access, similar to direct access through a protected layer, while at the same time containing enough hardware abstraction to give application programs the similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions exposed to the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of the hardware or devices requested by an application program&lt;br /&gt;
Exokernel – I&#039;ve got a sound card&lt;br /&gt;
Virtual Machine – I&#039;ve got the sound card you&#039;re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I&#039;ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
MicroKernel - Very small, very predictable, good for scheduling (QNX is a microkernel: POSIX compatible, with the benefit of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question; please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware, without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is a similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel developers tried to keep as much as possible in user space. One major problem with this is that there is a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps, as with a microkernel, but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something, you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
Btw, on my page (I guess you can call it that) I have some resources I have found --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each other&#039;s instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today, but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy. That sounds good. There should be 5 or 6 of us though... Oh well. Their loss. I will do some before or after work today. I&#039;ll start with the microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah I think there were more like 7 of us. Btw, if anyone has any more information, feel free to add it; it would be nice if you add the references so that citing is really easy. On acm.org it will automatically give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info automatically pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today, and for VMs he said we should focus on implementations such as Xen and VMware; he also said to talk about para-virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add them here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
We have our intro/thesis statement&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro, should have it ready by tonight. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
and conclusion&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=2910</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=2910"/>
		<updated>2010-10-11T15:43:32Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among guest OSes running on top of a host OS&lt;br /&gt;
* Each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; VMM &#039;&#039;&#039;&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor[4]&lt;br /&gt;
* responsible for virtualization of the hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware, as well as management for the VMM [6]&lt;br /&gt;
&#039;&#039;&#039; VM &#039;&#039;&#039;&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
&#039;&#039;&#039; three approaches &#039;&#039;&#039;&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** Isolation of the guestOS from the hardware is done through a process-level protection mechanism[6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** Similar to Type I, but uses the hostOS for device driver access [6]&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Micro-kernel-like architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: security and handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes / is in control&lt;br /&gt;
* Keeps a basic kernel to handle allocating and sharing resources, rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings[1]&lt;br /&gt;
* The goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this, the exokernel separates protection from management; in doing this it performs 3 important tasks[1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are)[1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** Reduces the number of kernel crossings[1]&lt;br /&gt;
** Not trusted by the exokernel, so errors are contained: the example given is that a bad parameter passed to the LibOS affects only that application.[1] (So the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** Any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** Applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** A LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces)[1]&lt;br /&gt;
** exokernels should avoid resource management except when required for protection (allocation, revocation, ownership)[1]&lt;br /&gt;
** application-based resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose allocation[1]&lt;br /&gt;
** allow the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names[1]&lt;br /&gt;
** Use physical names whenever possible[3] (not too sure what physical names are; I think it is as simple as what the hardware is called)--[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** Physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed[3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** Visible revocation allows the LibOS to choose which instance of the resource to release[1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* The LibOS handles resource policy decisions&lt;br /&gt;
* Exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* only checks authorization at bind time [1]&lt;br /&gt;
** Applications with complex needs for resources are only authorized during bind.[1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access[1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource, the exokernel is only concerned about policy conflicts)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
*three ways to implement:&lt;br /&gt;
* Hardware mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance[1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can run without the application being scheduled [2]&lt;br /&gt;
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** the LibOS is able to learn which resources are scarce [1]&lt;br /&gt;
* Slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to delete the resource, as the LibOS might need to write some system-critical data to it [1]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels, in the sense that exokernels can give developers low-level access, similar to direct access through a protected layer, while at the same time containing enough hardware abstraction to give application programs the similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions exposed to the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of the hardware or devices requested by an application program&lt;br /&gt;
Exokernel – I&#039;ve got a sound card&lt;br /&gt;
Virtual Machine – I&#039;ve got the sound card you&#039;re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I&#039;ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
MicroKernel - Very small, very predictable, good for scheduling (QNX is a microkernel: POSIX compatible, with the benefit of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question; please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware, without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is a similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
Btw, on my page (I guess you can call it that) I have some resources I have found --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each other&#039;s instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today, but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy. That sounds good. There should be 5 or 6 of us though... Oh well. Their loss. I will do some before or after work today. I&#039;ll start with the microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah I think there were more like 7 of us. Btw, if anyone has any more information, feel free to add it; it would be nice if you add the references so that citing is really easy. On acm.org it will automatically give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info automatically pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
We have our intro/thesis statement&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro, should have it ready by tonight. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
and conclusion&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=2909</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=2909"/>
		<updated>2010-10-11T15:38:19Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* The Essay */ new section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among guest OSes running on top of a host OS&lt;br /&gt;
* Each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; VMM &#039;&#039;&#039;&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor[4]&lt;br /&gt;
* responsible for virtualization of the hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware, as well as management for the VMM [6]&lt;br /&gt;
&#039;&#039;&#039; VM &#039;&#039;&#039;&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
&#039;&#039;&#039; three approaches &#039;&#039;&#039;&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** Isolation of the guestOS from the hardware is done through a process-level protection mechanism[6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** Similar to Type I, but uses the hostOS for device driver access [6]&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Micro-kernel-like architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: security and handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes / is in control&lt;br /&gt;
* Keeps a basic kernel to handle allocating and sharing resources, rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings[1]&lt;br /&gt;
* The goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this, the exokernel separates protection from management; in doing this it performs 3 important tasks[1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are)[1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** Reduces the number of kernel crossings[1]&lt;br /&gt;
** Not trusted by the exokernel, so errors are contained: the example given is that a bad parameter passed to the LibOS affects only that application.[1] (So the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** Any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** Applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** A LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces)[1]&lt;br /&gt;
** exokernels should avoid resource management except when required for protection (allocation, revocation, ownership)[1]&lt;br /&gt;
** application-based resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose allocation[1]&lt;br /&gt;
** allow the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names[1]&lt;br /&gt;
** Use physical names whenever possible[3] (not too sure what physical names are; I think it is as simple as what the hardware is called)--[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** Physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed[3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** Visible revocation allows the LibOS to choose which instance of the resource to release[1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* The LibOS handles resource policy decisions&lt;br /&gt;
* Exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* only checks authorization during bind time [1]&lt;br /&gt;
** Applications with complex resource needs are only authorized at bind time [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
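A toy sketch of the bind-time vs. access-time split described above (the class and method names here are my own illustration, not the exokernel&#039;s actual interface): the expensive authorization check happens once at bind time, and each later access is only a cheap membership test, so the kernel never needs to understand the resource&#039;s semantics.&lt;br /&gt;

```python
# Hypothetical sketch of secure bindings (illustrative names, not the
# real exokernel API): authorize once at bind time, check cheaply at
# access time.
class Exokernel:
    def __init__(self):
        self.owners = {}       # resource id -> owning LibOS (allocation table)
        self.bindings = set()  # (libos, resource) pairs authorized at bind time

    def allocate(self, libos, resource):
        # policy decision: grant the resource only if it is free
        if resource in self.owners:
            raise PermissionError("resource already allocated")
        self.owners[resource] = libos

    def bind(self, libos, resource):
        # the expensive authorization check happens ONCE, at bind time
        if self.owners.get(resource) != libos:
            raise PermissionError("not the owner")
        self.bindings.add((libos, resource))

    def access(self, libos, resource):
        # access-time check is a simple membership test; the kernel does
        # not need to know what the resource actually is
        if (libos, resource) not in self.bindings:
            raise PermissionError("no secure binding")
        return "ok"

ek = Exokernel()
ek.allocate("libos_a", "disk_block_42")
ek.bind("libos_a", "disk_block_42")
print(ek.access("libos_a", "disk_block_42"))  # ok
```

The point of the split is that `access` stays fast and generic even when the ownership policy behind `bind` is complicated.&lt;br /&gt;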
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
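A toy sketch of the downloading idea (my own illustration, loosely in the spirit of the paper&#039;s packet filters; none of these names are the real interface): the LibOS hands the kernel a small handler, and the kernel runs it directly when an event arrives, without crossing back into or scheduling the owning application.&lt;br /&gt;

```python
# Hypothetical sketch of "downloading code into the kernel": each LibOS
# registers a small handler; the kernel runs all handlers in place when
# an event (here, a packet) arrives, with no kernel crossing back into
# the application and no scheduling of it.
class Kernel:
    def __init__(self):
        self.handlers = {}  # libos name -> downloaded handler function

    def download(self, libos, handler):
        # a real exokernel would check/sandbox the code before accepting it
        self.handlers[libos] = handler

    def on_packet(self, packet):
        # run every downloaded handler directly in the kernel; return the
        # names of the LibOSes whose handler claimed the packet
        return [name for name, h in self.handlers.items() if h(packet)]

k = Kernel()
k.download("web_libos", lambda pkt: pkt.get("port") == 80)
print(k.on_packet({"port": 80}))  # ['web_libos']
print(k.on_packet({"port": 22}))  # []
```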
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** the LibOS is able to learn which resources are scarce [1]&lt;br /&gt;
* Slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** e.g., invisible revocation is used for processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* The exokernel must be careful not to simply delete the resource&#039;s contents, as the LibOS might need to write some system-critical data to the resource [1]&lt;br /&gt;
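A toy sketch of how visible revocation and the abort protocol fit together (again my own illustration, not the paper&#039;s actual protocol): the exokernel first asks the LibOS to pick a resource to give up; only if the LibOS fails to respond does the kernel repossess one by force.&lt;br /&gt;

```python
# Hypothetical sketch: visible revocation lets the LibOS choose WHICH
# resource instance to release; the abort protocol is the forced
# fallback when the LibOS does not cooperate.
class LibOS:
    def __init__(self, name, cooperative=True):
        self.name = name
        self.cooperative = cooperative
        self.resources = []

    def please_release(self):
        # visible revocation: the LibOS picks its own victim (e.g. its
        # least valuable page) and can save critical state first
        if self.cooperative and self.resources:
            return self.resources.pop()  # LibOS chooses the victim
        return None                      # request ignored

def revoke(libos):
    victim = libos.please_release()
    if victim is not None:
        return ("visible", victim)
    # abort protocol: repossess a resource by force and inform the LibOS
    victim = libos.resources.pop(0)
    return ("aborted", victim)

good = LibOS("good")
good.resources = ["page_1", "page_2"]
print(revoke(good))  # ('visible', 'page_2')

bad = LibOS("bad", cooperative=False)
bad.resources = ["page_9"]
print(revoke(bad))   # ('aborted', 'page_9')
```

Note the asymmetry: in the visible case the LibOS chose `page_2`; in the aborted case the kernel simply took what it needed.&lt;br /&gt;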
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels: they can give developers low-level access, similar to direct access, through a protected layer, while at the same time containing enough hardware abstraction to offer the similar benefit of hiding hardware resources from application programs.&lt;br /&gt;
Exokernel – fewest hardware abstractions to developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of the devices requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays Kazakhstan sound format only&lt;br /&gt;
MicroKernel - Very small, very predictable, good for scheduling (QNX is a microkernel - POSIX compatible, with the benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question; please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction. They have the same basic architecture, with the core functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is a similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw on my page (I guess you can call it that) I have some resources I have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each other&#039;s instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today, but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy. That sounds good. There should be 5 or 6 of us though... Oh well. Their loss. I will do some before or after work today. I&#039;ll start with the microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah, I think there were more like 7 of us. btw, if anyone has any more information, feel free to add it. It would be nice if you add the references so that citing is really easy; on acm.org it will auto-generate the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into its components&lt;br /&gt;
&lt;br /&gt;
We have our intro/thesis statement&lt;br /&gt;
&lt;br /&gt;
3 parts that prove it&lt;br /&gt;
&lt;br /&gt;
and conclusion&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=2461</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=2461"/>
		<updated>2010-10-07T13:02:48Z</updated>

		<summary type="html">&lt;p&gt;Slade: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels: they can give developers low-level access, similar to direct access, through a protected layer, while at the same time containing enough hardware abstraction to offer the similar benefit of hiding hardware resources from application programs.&lt;br /&gt;
Exokernel – fewest hardware abstractions to developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of the devices requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays Kazakhstan sound format only&lt;br /&gt;
MicroKernel - Very small, very predictable, good for scheduling (QNX is a microkernel - POSIX compatible, with the benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question; please contribute below&lt;br /&gt;
-Rovic&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=2453</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=2453"/>
		<updated>2010-10-07T12:50:38Z</updated>

		<summary type="html">&lt;p&gt;Slade: Created page with &amp;#039;Exokernel- Minimalistic abstractions for developers Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that exokernels can give th…&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels: they can give developers low-level access, similar to microkernels, through a protected layer, while at the same time containing enough hardware abstraction to offer the similar benefit of hiding hardware resources from application programs.&lt;br /&gt;
Exokernel – fewest hardware abstractions to developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of the devices requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays Kazakhstan sound format only&lt;br /&gt;
MicroKernel - Very small, very predictable, good for scheduling (QNX is a microkernel - POSIX compatible, with the benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question; please contribute below&lt;br /&gt;
-Rovic&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_1501_Presentations&amp;diff=2068</id>
		<title>COMP 1501 Presentations</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_1501_Presentations&amp;diff=2068"/>
		<updated>2008-12-03T19:37:18Z</updated>

		<summary type="html">&lt;p&gt;Slade: /* 14:30-15:00, Wednesday, December 3, 2008 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Please add your names to a presentation time slot below.  To sign up, click &amp;quot;edit&amp;quot; beside the time you want and add your names. (Please do not edit this entire page; that is a recipe for editing conflicts.)&lt;br /&gt;
&lt;br /&gt;
All presentations are in 5151 HP (the Gaming lab).&lt;br /&gt;
&lt;br /&gt;
==Monday, December 1, 2008==&lt;br /&gt;
&lt;br /&gt;
===10:00-10:30, Monday, December 1, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau, Oda: Murray Christopherson, Daniel Kelly - Raven&#039;s Trinity&lt;br /&gt;
&lt;br /&gt;
===10:30-11:00, Monday, December 1, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau, Oda: Henry Irving, Dan Peeler&lt;br /&gt;
&lt;br /&gt;
===11:00-11:30, Monday, December 1, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau, Oda:&lt;br /&gt;
Robert Wolfe and Paul Mauviel&lt;br /&gt;
&lt;br /&gt;
===11:30-12:00, Monday, December 1, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau, Oda: Drew Martin, Eva Demers-Brett&lt;br /&gt;
&lt;br /&gt;
===13:00-13:30, Monday, December 1, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau, Gail: Nicolas Porter and Idris Cameron  [LawnMaster]&lt;br /&gt;
&lt;br /&gt;
===13:30-14:00, Monday, December 1, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau, Gail: Daniel Hockey and Anthony D&#039;Angelo&lt;br /&gt;
&lt;br /&gt;
===14:00-14:30, Monday, December 1, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau, Gail: Neil Prowse and David Xiong&lt;br /&gt;
&lt;br /&gt;
===14:30-15:00, Monday, December 1, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau, Gail: JP Landry, Derek Langlois&lt;br /&gt;
&lt;br /&gt;
==Tuesday, December 2, 2008==&lt;br /&gt;
&lt;br /&gt;
===10:00-10:30, Tuesday, December 2, 2008===&lt;br /&gt;
&lt;br /&gt;
Somayaji, Doherty: Jeff Francom, Spencer Elliott&lt;br /&gt;
&lt;br /&gt;
===10:30-11:00, Tuesday, December 2, 2008===&lt;br /&gt;
&lt;br /&gt;
Somayaji, Doherty: Spencer Winson &amp;amp; Sebastian Podlesny&lt;br /&gt;
&lt;br /&gt;
===11:00-11:30, Tuesday, December 2, 2008===&lt;br /&gt;
&lt;br /&gt;
Somayaji, Gail: Nick Tierney and Jake Hendrick&lt;br /&gt;
&lt;br /&gt;
===11:30-12:00, Tuesday, December 2, 2008===&lt;br /&gt;
&lt;br /&gt;
Somayaji, Doherty: Erik Bitmanis&lt;br /&gt;
&lt;br /&gt;
===13:00-13:30, Tuesday, December 2, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau: Michael Evans, Robert Theoret&lt;br /&gt;
&lt;br /&gt;
Somayaji, Doherty: Andrew Erdeg and Sebastian Schneider - Blood Hospital&lt;br /&gt;
&lt;br /&gt;
===13:30-14:00, Tuesday, December 2, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau, Gail: Jonathon Slonosky, Shawn Reichert&lt;br /&gt;
&lt;br /&gt;
Somayaji, Doherty: Adam Saunders and Brendan Cooper (Bob The Tesseract)&lt;br /&gt;
&lt;br /&gt;
===14:00-14:30, Tuesday, December 2, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau, Gail: Tom Goldsmith and Rob Lavoie - The Fall of the Risen&lt;br /&gt;
&lt;br /&gt;
Somayaji, Oda: Trevor Malone and Wesley Lawrence - Rise of the Fallen&lt;br /&gt;
&lt;br /&gt;
===14:30-15:00, Tuesday, December 2, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau, Gail: Bruno Colantonio, Kevin McCreedy - Tuk&#039;s Revenge&lt;br /&gt;
&lt;br /&gt;
Somayaji, Oda: Daniel Sont, Mattieu Colverson&lt;br /&gt;
&lt;br /&gt;
===15:00-15:30, Tuesday, December 2, 2008===&lt;br /&gt;
&lt;br /&gt;
Somayaji, Gail: Jeffery Luangpakham and Armando Taucer&lt;br /&gt;
&lt;br /&gt;
===15:30-16:00, Tuesday, December 2, 2008===&lt;br /&gt;
&lt;br /&gt;
Somayaji, Gail: Matthew Chou and Yingfan Peng&lt;br /&gt;
&lt;br /&gt;
==Wednesday, December 3, 2008==&lt;br /&gt;
&lt;br /&gt;
===13:30-14:00, Wednesday, December 3, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau, Oda: sami zabarah ,yuanbin tang and  song chang&lt;br /&gt;
&lt;br /&gt;
Somayaji, Doherty: Julie Powers and Austin Chamney&lt;br /&gt;
&lt;br /&gt;
===14:00-14:30, Wednesday, December 3, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau, Oda:Mira Richard-Fioramore et Joel Theresine&lt;br /&gt;
&lt;br /&gt;
Somayaji, Doherty:    Scott Bennett and Nick Shires&lt;br /&gt;
&lt;br /&gt;
===14:30-15:00, Wednesday, December 3, 2008===&lt;br /&gt;
&lt;br /&gt;
Barbeau, Oda: David Krutsko and Stephany Lay&lt;br /&gt;
&lt;br /&gt;
Somayaji, Doherty: Rovic Perdon, Rodrick McDonald, Clayton Shier&lt;br /&gt;
&lt;br /&gt;
===15:00-15:30, Wednesday, December 3, 2008===&lt;br /&gt;
&lt;br /&gt;
Somayaji, Doherty: Daniel Beimers and Arthur Marshall&lt;br /&gt;
&lt;br /&gt;
===15:30-16:00, Wednesday, December 3, 2008===&lt;br /&gt;
&lt;br /&gt;
Somayaji, Doherty: Martin Kugler &amp;amp; Paul MacDonald&lt;/div&gt;</summary>
		<author><name>Slade</name></author>
	</entry>
</feed>