<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Abown</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Abown"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Abown"/>
	<updated>2026-04-22T08:57:12Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_1&amp;diff=6445</id>
		<title>COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_1&amp;diff=6445"/>
		<updated>2010-12-02T17:42:23Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]=&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris and Nickolai Zeldovich.&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039; MIT CSAIL&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
===memcached: &#039;&#039;Section 3.2&#039;&#039;===&lt;br /&gt;
memcached is an in-memory hash table server. A single instance of memcached running across many cores is bottlenecked by an internal lock, which the MIT team avoided by running one instance per core. Each client connects to a single instance of memcached, allowing the server to simulate parallelism without major changes to the application or kernel. With few requests, memcached on one core spends 80% of its time in the kernel, mostly processing packets.&amp;lt;sup&amp;gt;[[Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
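As a rough illustration of the setup described above, the one-instance-per-core arrangement could be launched with a loop like the following. This is a hypothetical sketch, not the paper&#039;s actual scripts: the core count, port numbering, and taskset pinning are illustrative assumptions, and the loop only prints the launch commands rather than starting servers.

```shell
# Hypothetical sketch: one memcached instance pinned per core, each on its
# own port. Dry run: the commands are printed, not executed.
ncores=4                                # stand-in for the machine's core count
cmds=""
i=0
while [ "$i" -lt "$ncores" ]; do
    # -d daemonizes, -p sets the port, -u sets the run-as user
    cmd="taskset -c $i memcached -d -p $((11211 + i)) -u nobody"
    echo "$cmd"                         # print the launch command for core i
    cmds="$cmds$cmd
"
    i=$((i + 1))
done
```

With each client tied to one instance, no lock is ever shared between cores, which is what lets the aggregate behave as if memcached itself scaled.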
&lt;br /&gt;
===Apache: &#039;&#039;Section 3.3&#039;&#039;===&lt;br /&gt;
Apache is a web server that has been used in previous Linux scalability studies. In this study, Apache is configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread accepts incoming connections, and the other threads process them. On a single-core processor, Apache spends 60% of its execution time in the kernel.&amp;lt;sup&amp;gt;[[Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
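The process-per-core, multi-threaded arrangement described above corresponds to an Apache worker-style MPM configuration. The directives below are real httpd settings, but the values are illustrative assumptions, not the paper&#039;s actual configuration.

```apache
# Hypothetical worker-MPM sketch: many child processes, each with its own
# small thread pool (values illustrative, not taken from the paper).
ServerLimit         48
StartServers        48
ThreadsPerChild      8
MaxRequestWorkers  384   # ServerLimit x ThreadsPerChild
```

In the worker MPM each child process keeps one listener thread that accepts connections and hands them to its worker threads, matching the split described above.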
&lt;br /&gt;
===gmake: &#039;&#039;Section 3.5&#039;&#039;===&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake reads a file called a makefile and processes its recipes for the requisite files to determine how and when to remake or recompile code. With the -j (or --jobs) option, gmake can process many of these recipes in parallel, and since it creates more processes than there are cores, it can make full use of multiple cores.[2] Because gmake involves a great deal of reading and writing, the test cases use the in-memory filesystem tmpfs, which sidesteps filesystem and hardware bottlenecks for testing purposes. gmake&#039;s scalability is also limited, to a small degree, by the serial phases that run at the beginning and end of its execution. gmake spends much of its execution time in the compiler, processing recipes and recompiling code, but still spends 7.6% of its time in system time.&amp;lt;sup&amp;gt;[[Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
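The parallel build behaviour described above can be seen with a toy makefile. This is a minimal sketch under assumed paths (a throwaway temporary directory), not the paper&#039;s benchmark harness.

```shell
# Minimal sketch: three independent recipes that make -j can run in parallel,
# analogous in miniature to gmake building the kernel with one job per core.
dir=$(mktemp -d)
# Makefile: "all" depends on a, b, c; each recipe just creates its target.
printf 'all: a b c\na b c:\n\ttouch $@\n' > "$dir/Makefile"
make -C "$dir" -j3 all      # -j3 allows up to three recipes at once
ls "$dir"                   # targets a, b, c now exist
```

Because a, b, and c have no dependencies on each other, make is free to schedule all three recipes concurrently; the serial parts (parsing the makefile, the final link step in a real kernel build) are what cap the speedup.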
&lt;br /&gt;
[2] http://www.gnu.org/software/make/manual/make.html&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
:&amp;lt;span id=&amp;quot;Foot1&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich. An Analysis of Linux Scalability to Many Cores. MIT CSAIL, 2010, http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf&amp;lt;/span&amp;gt;&lt;br /&gt;
:&amp;lt;span id=&amp;quot;Foot2&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313, 1994.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;span id=&amp;quot;Foot3&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update. In Proceedings of the Linux Symposium 2002, pages 338–367, Ottawa, Ontario, June 2002.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;span id=&amp;quot;Foot4&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;4&amp;lt;/sup&amp;gt;C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009.&amp;lt;/span&amp;gt;&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_1&amp;diff=6444</id>
		<title>COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_1&amp;diff=6444"/>
		<updated>2010-12-02T17:42:00Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]=&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris and Nickolai Zeldovich.&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039; MIT CSAIL&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
===memcached: &#039;&#039;Section 3.2&#039;&#039;===&lt;br /&gt;
memcached is an in-memory hash table server. A single instance of memcached running across many cores is bottlenecked by an internal lock, which the MIT team avoided by running one instance per core. Each client connects to a single instance of memcached, allowing the server to simulate parallelism without major changes to the application or kernel. With few requests, memcached on one core spends 80% of its time in the kernel, mostly processing packets.&amp;lt;sup&amp;gt;[[Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Apache: &#039;&#039;Section 3.3&#039;&#039;===&lt;br /&gt;
Apache is a web server that has been used in previous Linux scalability studies. In this study, Apache is configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread accepts incoming connections, and the other threads process them. On a single-core processor, Apache spends 60% of its execution time in the kernel.&amp;lt;sup&amp;gt;[[Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===gmake: &#039;&#039;Section 3.5&#039;&#039;===&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake reads a file called a makefile and processes its recipes for the requisite files to determine how and when to remake or recompile code. With the -j (or --jobs) option, gmake can process many of these recipes in parallel, and since it creates more processes than there are cores, it can make full use of multiple cores.[2] Because gmake involves a great deal of reading and writing, the test cases use the in-memory filesystem tmpfs, which sidesteps filesystem and hardware bottlenecks for testing purposes. gmake&#039;s scalability is also limited, to a small degree, by the serial phases that run at the beginning and end of its execution. gmake spends much of its execution time in the compiler, processing recipes and recompiling code, but still spends 7.6% of its time in system time.&amp;lt;sup&amp;gt;[[Foot1|1]]&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2] http://www.gnu.org/software/make/manual/make.html&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
:&amp;lt;span id=&amp;quot;Foot1&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich. An Analysis of Linux Scalability to Many Cores. MIT CSAIL, http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf&amp;lt;/span&amp;gt;&lt;br /&gt;
:&amp;lt;span id=&amp;quot;Foot2&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313, 1994.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;span id=&amp;quot;Foot3&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update. In Proceedings of the Linux Symposium 2002, pages 338–367, Ottawa, Ontario, June 2002.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;span id=&amp;quot;Foot4&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;4&amp;lt;/sup&amp;gt;C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009&amp;lt;/span&amp;gt;&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_1&amp;diff=6435</id>
		<title>COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_1&amp;diff=6435"/>
		<updated>2010-12-02T17:34:04Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]=&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris and Nickolai Zeldovich.&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039; MIT CSAIL&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;span id=&amp;quot;Foot2&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313, 1994.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;span id=&amp;quot;Foot3&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update. In Proceedings of the Linux Symposium 2002, pages 338–367, Ottawa, Ontario, June 2002.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;span id=&amp;quot;Foot4&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;4&amp;lt;/sup&amp;gt;C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009&amp;lt;/span&amp;gt;&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_1&amp;diff=6429</id>
		<title>COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_1&amp;diff=6429"/>
		<updated>2010-12-02T17:17:07Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* An Analysis of Linux Scalability to Many Cores */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]=&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris and Nickolai Zeldovich.&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039; MIT CSAIL&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_1&amp;diff=6424</id>
		<title>COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_1&amp;diff=6424"/>
		<updated>2010-12-02T17:11:47Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* An Analysis of Linux Scalability to Many Cores */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=An Analysis of Linux Scalability to Many Cores=&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris and Nickolai Zeldovich.&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039;MIT CSAIL&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Link to Paper:&#039;&#039;&#039; [http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_1&amp;diff=6423</id>
		<title>COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_1&amp;diff=6423"/>
		<updated>2010-12-02T17:11:34Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Paper */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=An Analysis of Linux Scalability to Many Cores=&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris and Nickolai Zeldovich. &lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039;MIT CSAIL&lt;br /&gt;
&#039;&#039;&#039;Link to Paper:&#039;&#039;&#039; [http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6353</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6353"/>
		<updated>2010-12-02T15:55:37Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Paper */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (as it&#039;s our tutorial and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either. That does not mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note, Anil in class Tuesday the 30th of November said that we only need to explain 3 of the applications and not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
* I will work on contribution - [[Rovic]]&lt;br /&gt;
* I&#039;m gonna whip something up for 4.2 since there appears to be nothing mentioned about it. -kirill&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* So here are the claimed and unclaimed sections. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Critique, Style - Everyone&lt;br /&gt;
** Conclusion (also discussion) - Rannath, but I need someone to help flesh it out, I got the salient points down.&lt;br /&gt;
** References - Everyone&lt;br /&gt;
** Essay Conclusion - [[Nobody]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
==Paper==&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where to the do their processing)&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table server. One instance running on many cores is bottlenecked by an internal lock, so the MIT team ran one instance per core to avoid the problem. Clients each connect to a single instance, which allows the server to simulate parallelism. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. In this study, Apache has been configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread accepts incoming connections and the other threads service them. On a single-core processor, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community which is used in this paper to build the Linux kernel. gmake is already quite parallel, creating more processes than cores so that it can make full use of multiple cores, and it involves a great deal of reading and writing of files, as it is used to build the Linux kernel. gmake is limited in scalability by the serial phases that run at the beginning and end of its execution. It spends much of its execution time in the compiler, but still spends 7.6% of its time in system time. [1]&lt;br /&gt;
&lt;br /&gt;
==Research problem==&lt;br /&gt;
  My references are just below because it is easier to number them later.&lt;br /&gt;
&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue. The question is whether a standard Linux kernel and user-level applications can scale on a 48-core system[1]. The problem is that a standard Linux system was not designed for massive scalability. The symptom is that a core working alone performs much more work per core than a core working alongside 47 others, even though, intuitively, 48 cores dividing the work should finish faster. Since the main goal is to finish the job as quickly as possible, every core should be doing as much useful work as possible.&lt;br /&gt;
  &lt;br /&gt;
To fix these scalability issues it is necessary to focus on three major areas: the Linux kernel, user-level application design, and how applications use kernel services. The kernel can be improved to share data less aggressively, and recent iterations are already beginning to implement scalability features. At the user level, applications can be improved to focus more on parallelism, since some programs have not adopted these features. The final aspect is how an application uses kernel services: sharing resources more carefully keeps different parts of the program from contending over the same services. All of the bottlenecks found actually take only a little work to avoid.[1]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This research builds on a long line of prior work on UNIX scalability. Major developments, from shared-memory machines [2] and wait-free synchronization to fast message passing, have created a base set of techniques for improving scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X, and Windows. Linux in particular has gained kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that limit scalability.[3] There is also an excellent base of earlier Linux scalability studies on which to build this paper, including one performed on a 32-core machine.[4] This research can improve its results by learning from the experiments those researchers already performed, and prior work also aids in identifying bottlenecks, which speeds up the search for solutions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[2] J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313, 1994.&lt;br /&gt;
&lt;br /&gt;
[3] P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update. In Proceedings of the Linux Symposium 2002, pages 338–367, Ottawa, Ontario, June 2002.&lt;br /&gt;
&lt;br /&gt;
[4] C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critic of the paper.&lt;br /&gt;
&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
-==Work in Progress==-- -Rovic P.&lt;br /&gt;
This research contributes by evaluating the scalability discrepancies between application programming and kernel programming. Key discoveries in this research show the effectiveness of the kernel in handling scaling across CPU cores, and suggest that scaling in application programming should be more of a focus. It has been shown that simple scaling techniques (list techniques), such as programming for parallelism, are effective (look up more material to back this up, with quotes). (Sloppy counter effectiveness, possible positive contributions, what has been used (internet search), what hasn’t been used.) Read conclusion, 2nd paragraph.&lt;br /&gt;
&lt;br /&gt;
As the authors put it: &amp;quot;One reason the required changes are modest is that stock Linux already incorporates many modifications to improve scalability. More speculatively, perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; [1]&lt;br /&gt;
&lt;br /&gt;
===Section 4.1 problems:===&lt;br /&gt;
**The percentage of serialization in a program largely determines how much an application can be sped up. As the example in the paper shows, this follows Amdahl&#039;s law (e.g. 25% serialization --&amp;gt; limit of 4x speedup).&lt;br /&gt;
**Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
***Locking of shared data structure - increasing # of cores --&amp;gt; increase in lock wait time&lt;br /&gt;
***Writing to shared memory - increasing # of cores --&amp;gt; increase in wait for cache coherence protocol&lt;br /&gt;
***Competing for space in shared hardware cache - increasing # of cores --&amp;gt; increase in cache miss rate&lt;br /&gt;
***Competing for shared hardware resources - increasing # of cores --&amp;gt; increase in wait for resources&lt;br /&gt;
***Not enough tasks for cores --&amp;gt; idle cores&lt;br /&gt;
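The speedup ceiling from serialization can be checked against Amdahl&#039;s law directly. A minimal sketch (the 25% figure is the example from the paper; the function name is ours):&lt;br /&gt;

```python
def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup when serial_fraction of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# 25% serialization on 48 cores is already close to the 4x ceiling.
print(round(amdahl_speedup(0.25, 48), 2))     # 3.76
print(round(amdahl_speedup(0.25, 10**9), 2))  # limit approaches 4.0
```

Even with arbitrarily many cores the serial 25% dominates, which is why the paper focuses on shrinking the serialized interactions listed above.&lt;br /&gt;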
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications undergoing testing were referencing and updating shared counters across multiple cores. The solution in the paper is to use sloppy counters, which let each core track its own separate count of references while a central shared counter keeps the overall count consistent. This is ideal because each core updates its count by modifying its per-core counter, usually needing access only to its own local cache, which cuts down on waiting for locks and serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making their implementation much easier to accomplish. The main disadvantages of sloppy counters are that they perform poorly when object de-allocation occurs often, because de-allocation must reconcile all the per-core counts and is therefore an expensive operation, and that the counters use space proportional to the number of cores.&lt;br /&gt;
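As a rough illustration of the idea (not the kernel&#039;s actual code), a sloppy counter can be sketched in a few lines: each core holds a batch of spare references and only touches the shared central counter when its spares run out. The class and batch size below are our own invention for illustration.&lt;br /&gt;

```python
import threading

class SloppyCounter:
    """Toy sloppy counter: per-core spare counts, one shared central count.
    Each 'core' grabs spare references from the central counter in
    batches, so most increments touch only core-local state."""

    BATCH = 8  # references taken from the central counter at a time

    def __init__(self, ncores):
        self.central = 0
        self.central_lock = threading.Lock()
        self.local = [0] * ncores  # spare references held per core

    def incref(self, core):
        if self.local[core] == 0:  # out of spares: slow path via central
            with self.central_lock:
                self.central += self.BATCH
            self.local[core] += self.BATCH
        self.local[core] -= 1  # fast path: core-local only

    def decref(self, core):
        self.local[core] += 1  # return a spare reference locally

    def true_count(self):
        # Expensive: reconciles every per-core count (as on de-allocation).
        return self.central - sum(self.local)
```

A core contends on `central_lock` only once every `BATCH` operations, which is exactly why the de-allocation path, which must reconcile every per-core count, is the expensive case.&lt;br /&gt;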
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: a per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each data structure was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was maintained, and on a per-core miss the entry was copied from the central table to the per-core table.&lt;br /&gt;
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
Variables placed together on the same cache line can cause different cores to request the same line for reading and writing at the same time, often enough to significantly impact performance. Moving the often-written variable to a different cache line removed the bottleneck.&lt;br /&gt;
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock at all. Likewise, a single mutex protecting a whole data structure can be split into several, each locking only a part of it. Both changes remove or reduce bottlenecks.&lt;br /&gt;
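The second change, splitting one coarse lock into several finer ones, can be sketched as follows: a toy hash table with per-bucket locks, so operations on different buckets never contend. The names here are ours, not the kernel&#039;s.&lt;br /&gt;

```python
import threading

class ShardedTable:
    """Toy hash table protected by per-bucket locks instead of one
    global lock; operations on different buckets proceed in parallel."""

    def __init__(self, nbuckets=16):
        self.buckets = [{} for _ in range(nbuckets)]
        self.locks = [threading.Lock() for _ in range(nbuckets)]

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        i = self._index(key)
        with self.locks[i]:  # only this bucket is serialized
            self.buckets[i][key] = value

    def get(self, key, default=None):
        i = self._index(key)
        with self.locks[i]:
            return self.buckets[i].get(key, default)
```

With one global lock every put/get would serialize; with per-bucket locks only accesses that hash to the same bucket ever wait on each other.&lt;br /&gt;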
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required only a few modest changes to remove bottlenecks. The MIT team speculates that this trend will continue as the number of cores increases. They also note that bottlenecks not caused by the CPU are harder to fix.&lt;br /&gt;
&lt;br /&gt;
Most of the kernel bottlenecks that the applications hit most often can be eliminated with minor changes. Most of the changes applied well-known techniques, with the exception of sloppy counters. The study is limited by its removal of the I/O bottleneck, but it does suggest that traditional implementations can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of the content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content(Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage I/O bottleneck does not affect it at all. Likewise the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so neither is given an advantage. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ on memcached&#039;s wiki suggests running multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [1]. In the end memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
[1] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
Linux has a built-in kernel flaw whereby network packets must travel through multiple queues before they arrive at the queue where the application can process them. This imposes significant costs on multi-core systems due to queue locking. The flaw inherently diminishes Apache&#039;s performance on multi-core systems, because threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For this experiment Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. The patched kernel&#039;s network stack is also specific to the problem at hand: processing many short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. The tests were also arranged to avoid bottlenecks imposed by the network and file-storage hardware, meaning that making the proposed modifications to the kernel won&#039;t necessarily produce the same improvement described in the article. This is evident in the test where performance degrades past 36 cores due to limitations of the networking hardware. &#039;&#039;This is not a problem per se, as the paper specifically states that there are hardware limitations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
Since gmake is inherently quite parallel, the testing and updating attempted on it produced essentially the same scalability results on both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, gmake&#039;s execution time depends quite heavily on the compiler it is used with, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there appear to be no fairness concerns with the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong? or use bad methodology?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===Conclusion===&lt;br /&gt;
[[Of the entire essay...]]&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
===Deprecated===&lt;br /&gt;
====Background Concepts====&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its own code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system kernel execution time jumps to 82%.&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6352</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6352"/>
		<updated>2010-12-02T15:55:24Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Background Concepts */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115 since there wont be a class in there (as its our tutorial and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If its all the same to you guys mind if I just join you via msn or iirc? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I wont be there either. that does not mean i wont/cant contribute. I&#039;ll be on msn or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note, Anil said in class on Tuesday the 30th of November that we only need to explain 3 of the applications and not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
* I will work on contribution - [[Rovic]]&lt;br /&gt;
* I&#039;m gonna whip something up for 4.2 since there appears to be nothing mentioned about it. -kirill&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* So here is the claims and unclaimed section. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Critic, Style - Everyone&lt;br /&gt;
** Conclusion (also discussion) - Rannath, but I need someone to help flesh it out, I got the salient points down.&lt;br /&gt;
** References - Everyone&lt;br /&gt;
** Essay Conclusion - [[Nobody]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where to the do their processing)&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table server. One instance running on many cores is bottlenecked by an internal lock. The MIT team ran one instance per-core to avoid the problem. Clients each connect to a single instance. This allows the server to simulate parallelism. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. In this study, Apache has been configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread accepts incoming connections and the other threads service them. On a single-core processor, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake is already quite parallel, creating more processes than there are cores so that it can make proper use of multiple cores, and it involves much reading and writing of files, as it is used to build the Linux kernel. gmake&#039;s scalability is limited by the serial processes that run at the beginning and end of its execution. gmake spends most of its execution time in the compiler, but still spends 7.6% of its time in system time. [1]&lt;br /&gt;
&lt;br /&gt;
==Research problem==&lt;br /&gt;
  my references are just below because it is easier for numbering the data later.&lt;br /&gt;
&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will become an issue. The question is whether a standard Linux kernel and standard user-level applications can scale to a 48-core system [1]. The problem is that a standard Linux system was not designed for massive scalability. The symptom of poor scalability is that a core working alone performs much more work per core than a single core working alongside 47 others. Some per-core slowdown is expected, since the 48 cores divide the work between them, but ideally every core should be doing as much useful work as possible so that the job finishes sooner.&lt;br /&gt;
  &lt;br /&gt;
To fix these scalability issues it is necessary to focus on three major areas: the Linux kernel, user-level application design, and how applications use kernel services. The Linux kernel can be improved to reduce unnecessary sharing, building on the scalability features that recent kernel versions have begun to implement. At the user level, applications can be redesigned with more focus on parallelism, since some programs do not yet take advantage of these features. The final aspect of improving scalability is changing how an application uses kernel services, so that different parts of the program are not contending for the same services. All of the bottlenecks found in the study actually take only a little work to avoid. [1]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This research builds on a large body of earlier work on scalability for UNIX systems. Major developments, from shared-memory machines [2] and wait-free synchronization to fast message passing, have created a base set of techniques that can be used to improve scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux in particular has been improved with kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that lower scalability. [3] There is also an excellent base of prior Linux scalability studies on which to build this research, including one on scalability on a 32-core machine [4]. The present research can improve on those results by learning from the experiments already performed, and the earlier work also aids in identifying bottlenecks, which speeds up finding solutions for them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[2] J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313, 1994.&lt;br /&gt;
&lt;br /&gt;
[3] P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update. In Proceedings of the Linux Symposium 2002, pages 338–367, Ottawa, Ontario, June 2002.&lt;br /&gt;
&lt;br /&gt;
[4] C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critic of the paper.&lt;br /&gt;
&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
-==Work in Progress==-- -Rovic P.&lt;br /&gt;
This research contributes by evaluating the scalability discrepancies between application programming and kernel programming. Key discoveries in this research show the effectiveness of the kernel in handling scaling across CPU cores, and suggest that scaling in application programming should be more of a focus. It has been shown that simple scaling techniques (list techniques), such as programming for parallelism, are effective (look up more material to back this up, with quotes). (Sloppy counter effectiveness, possible positive contributions, what has been used (internet search), what hasn’t been used.) Read conclusion, 2nd paragraph.&lt;br /&gt;
&lt;br /&gt;
As the authors put it: &amp;quot;One reason the required changes are modest is that stock Linux already incorporates many modifications to improve scalability. More speculatively, perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; [1]&lt;br /&gt;
&lt;br /&gt;
===Section 4.1 problems:===&lt;br /&gt;
**The percentage of serialization in a program largely determines how much an application can be sped up. As the example in the paper shows, this follows Amdahl&#039;s law (e.g. 25% serialization --&amp;gt; limit of 4x speedup).&lt;br /&gt;
**Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
***Locking of shared data structure - increasing # of cores --&amp;gt; increase in lock wait time&lt;br /&gt;
***Writing to shared memory - increasing # of cores --&amp;gt; increase in wait for cache coherence protocol&lt;br /&gt;
***Competing for space in shared hardware cache - increasing # of cores --&amp;gt; increase in cache miss rate&lt;br /&gt;
***Competing for shared hardware resources - increasing # of cores --&amp;gt; increase in wait for resources&lt;br /&gt;
***Not enough tasks for cores --&amp;gt; idle cores&lt;br /&gt;
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications undergoing testing were referencing and updating shared counters across multiple cores. The solution in the paper is to use sloppy counters, which let each core track its own separate count of references while a central shared counter keeps the overall count consistent. This is ideal because each core updates its count by modifying its per-core counter, usually needing access only to its own local cache, which cuts down on waiting for locks and serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making their implementation much easier to accomplish. The main disadvantages of sloppy counters are that they perform poorly when object de-allocation occurs often, because de-allocation must reconcile all the per-core counts and is therefore an expensive operation, and that the counters use space proportional to the number of cores.&lt;br /&gt;
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: a per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each data structure was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was maintained, and on a per-core miss the entry was copied from the central table to the per-core table.&lt;br /&gt;
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
Variables placed together on the same cache line can cause different cores to request the same line for reading and writing at the same time, often enough to significantly impact performance. By moving the often-written variable to a different cache line the bottleneck was removed.&lt;br /&gt;
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock at all. Likewise, a single mutex protecting a whole data structure can be split into several, each locking only a part of it. Both changes remove or reduce bottlenecks.&lt;br /&gt;
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required only a few modest changes to remove bottlenecks. The MIT team speculates that this trend will continue as the number of cores increases. They also note that bottlenecks not caused by the CPU are harder to fix.&lt;br /&gt;
&lt;br /&gt;
Most of the kernel bottlenecks that the applications hit most often can be eliminated with minor changes. Most of the changes applied well-known techniques, with the exception of sloppy counters. The study is limited by its removal of the I/O bottleneck, but it does suggest that traditional implementations can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of the content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content(Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage I/O bottleneck does not affect it at all. Likewise the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so neither is given an advantage. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ on memcached&#039;s wiki suggests running multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [1]. In the end memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
[1] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
Linux has a built-in kernel flaw whereby network packets must travel through multiple queues before they arrive at the queue where the application can process them. This imposes significant costs on multi-core systems due to queue locking. The flaw inherently diminishes Apache&#039;s performance on multi-core systems, because threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For this experiment Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. The patched kernel&#039;s network stack is also specific to the problem at hand: processing many short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. The tests were also arranged to avoid bottlenecks imposed by the network and file-storage hardware, meaning that making the proposed modifications to the kernel won&#039;t necessarily produce the same improvement described in the article. This is evident in the test where performance degrades past 36 cores due to limitations of the networking hardware. &#039;&#039;This is not a problem per se, as the paper specifically states that there are hardware limitations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
Since gmake is inherently quite parallel, testing produced essentially the same scalability results for both the stock and modified kernels. The only difference found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, gmake&#039;s execution time depends heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns in the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style criteria (feel free to add; I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are confusing, wrong, or use bad methodology?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just adding extra words to seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===Conclusion===&lt;br /&gt;
[[Of the entire essay...]]&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
===Deprecated===&lt;br /&gt;
====Background Concepts====&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks in its own code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system, kernel time jumps to 82%.&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6351</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6351"/>
		<updated>2010-12-02T15:55:10Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Research problem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115 since there won&#039;t be a class in there (as it&#039;s our tutorial and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either. That does not mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note, in class on Tuesday, November 30th, Anil said that we only need to explain 3 of the applications and not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
* I will work on contribution - [[Rovic]]&lt;br /&gt;
* I&#039;m gonna whip something up for 4.2 since there appears to be nothing mentioned about it. -kirill&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* So here are the claimed and unclaimed sections. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Critique, Style - Everyone&lt;br /&gt;
** Conclusion (also discussion) - Rannath, but I need someone to help flesh it out; I&#039;ve got the salient points down.&lt;br /&gt;
** References - Everyone&lt;br /&gt;
** Essay Conclusion - [[Nobody]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
This paper was authored by Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where they do their processing)&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table server. One instance running on many cores is bottlenecked by an internal lock, so the MIT team ran one instance per core to avoid the problem. Clients each connect to a single instance, which allows the server to simulate parallelism. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. In this study, Apache was configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread accepts incoming connections and various other threads service them. On a single-core processor, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake is already quite parallel, creating more processes than cores so that it can make proper use of multiple cores, and it involves much reading and writing of files. gmake&#039;s scalability is limited by the serial processes that run at the beginning and end of its execution. gmake spends much of its execution time in the compiler, but still spends 7.6% of its time in system time. [1]&lt;br /&gt;
&lt;br /&gt;
==Research problem==&lt;br /&gt;
  My references are just below because it makes numbering the data easier later.&lt;br /&gt;
&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue, so there has to be a way for a standard Linux kernel to scale on a 48-core system [1]. The problem is that standard Linux systems were not designed for massive scalability. The issue with scalability is that a core working alone performs much more work than a single core working alongside 47 others. By traditional logic that seems acceptable, because 48 cores are dividing the work; but since the main goal is to finish as quickly as possible, every core should be doing as much work as possible.&lt;br /&gt;
  &lt;br /&gt;
To fix these scalability issues it is necessary to focus on three major areas: the Linux kernel, user-level application design, and how applications use kernel services. The Linux kernel can be improved to share data structures more efficiently, and recent iterations have already begun to implement scalability features. At the user level, applications can be improved to focus more on parallelism, since some programs have not yet adopted these techniques. The final aspect is how an application uses kernel services: sharing resources better so that different parts of the program are not conflicting over the same services. All of the bottlenecks that were found actually take only a little work to avoid. [1]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This research builds on a large body of earlier work on the scalability of UNIX systems. Major developments, from shared-memory machines [2] and wait-free synchronization to fast message passing, have created a base set of techniques that can be used to improve scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that lower scalability [3]. There is also an excellent base of Linux scalability studies on which to build this research, including one on scalability on a 32-core machine [4]. The present research improves on those results by learning from experiments already performed, which also aids in identifying bottlenecks and speeds up the search for solutions to those bottlenecks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[2] J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313,1994.&lt;br /&gt;
&lt;br /&gt;
[3] P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update.  In Proceedings of the Linux Symposium 2002, pages 338-367, Ottawa Ontario, June 2002&lt;br /&gt;
&lt;br /&gt;
[4] C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are OK, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
-==Work in Progress==-- -Rovic P.&lt;br /&gt;
This research contributes by evaluating the scalability discrepancies of applications programming and kernel programming. Key discoveries in this research show the effectiveness of the kernel in handling scaling amongst CPU cores. This has also shown that scaling in application programming should be more the focus. It has been shown that simple scaling techniques (list techniques) such as programming parallelism (look up more stuff to back this up and quotes). (Sloppy counter effectiveness, possible positive contributions, what has been used (internet search), what hasn’t been used.) Read conclusion, 2nd paragraph.&lt;br /&gt;
&lt;br /&gt;
One reason the required changes are modest is that stock Linux already incorporates many modifications to improve scalability. More speculatively, perhaps it is the case that Linux&#039;s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&lt;br /&gt;
&lt;br /&gt;
===Section 4.1 problems:===&lt;br /&gt;
**The percentage of serialization in a program has a lot to do with how much an application can be sped up. As in the example from the paper, it seems to follow Amdahl&#039;s law (e.g. 25% serialization --&amp;gt; limit of 4x speedup).&lt;br /&gt;
**Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
***Locking of shared data structure - increasing # of cores --&amp;gt; increase in lock wait time&lt;br /&gt;
***Writing to shared memory - increasing # of cores --&amp;gt; increase in wait for cache coherence protocol&lt;br /&gt;
***Competing for space in shared hardware cache - increasing # of cores --&amp;gt; increase in cache miss rate&lt;br /&gt;
***Competing for shared hardware resources - increasing # of cores --&amp;gt; increase in wait for resources&lt;br /&gt;
***Not enough tasks for cores --&amp;gt; idle cores&lt;br /&gt;
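The 4x-speedup limit cited above can be checked directly. A minimal sketch in Python (not the paper&#039;s code; the helper name amdahl_speedup is invented here):&lt;br /&gt;

```python
def amdahl_speedup(serial_fraction, cores):
    # Amdahl's law: speedup = 1 / (s + (1 - s)/n), where s is the
    # serial fraction of the work and n is the number of cores.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# 25% serialization caps the speedup at 4x no matter how many cores run.
print(round(amdahl_speedup(0.25, 48), 2))     # prints 3.76
print(round(amdahl_speedup(0.25, 10**9), 2))  # prints 4.0 (the limit)
```

Even the paper&#039;s 48-core machine gets only about 3.76x out of a 25%-serial workload, which is why removing serializing interactions matters so much.&lt;br /&gt;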
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications under test referenced and updated shared counters from multiple cores. The paper&#039;s solution is sloppy counters: each core tracks its own separate count of references, and a central shared counter keeps all the counts on track. This is ideal because each core updates its count by modifying its per-core counter, usually touching only its own local cache, which cuts down on waiting for locks and serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to adopt. Their main disadvantages are that de-allocation becomes expensive in situations where objects are freed often, and that the counters use up space proportional to the number of cores.&lt;br /&gt;
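The scheme can be sketched in a few lines. This is a hypothetical Python illustration of the idea, not the kernel&#039;s C implementation; the class and method names are invented:&lt;br /&gt;

```python
import threading

class SloppyCounter:
    def __init__(self, ncores, batch=8):
        self.central = 0              # shared counter, guarded by a lock
        self.lock = threading.Lock()
        self.batch = batch
        self.spare = [0] * ncores     # per-core spare references

    def get_ref(self, core):
        if self.spare[core] == 0:
            # Rare slow path: refill this core's pool from the
            # central counter under the lock.
            with self.lock:
                self.central += self.batch
            self.spare[core] = self.batch
        self.spare[core] -= 1         # common case: core-local only

    def put_ref(self, core):
        # Returning a reference touches only the local pool.
        # (A real implementation would flush excess spares back.)
        self.spare[core] += 1

    def value(self):
        # True count = central minus all unused per-core spares.
        with self.lock:
            return self.central - sum(self.spare)
```

Most get_ref/put_ref calls touch only the caller&#039;s per-core slot, which is the point: the shared lock is taken only on the rare refill, not on every reference.&lt;br /&gt;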
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the packet-buffer free list. Each data structure was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was maintained, and any per-core misses were filled from the central table into the per-core table.&lt;br /&gt;
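The decentralize-with-central-fallback pattern described above can be illustrated as follows. A hedged Python sketch, not the kernel code; all names are invented:&lt;br /&gt;

```python
import collections

class PerCoreFreeList:
    # Each core allocates from its own local list; only on a local
    # miss does it fall back to the shared central list.
    def __init__(self, ncores, central_items):
        self.central = collections.deque(central_items)  # shared structure
        self.local = [collections.deque() for _ in range(ncores)]

    def alloc(self, core):
        if self.local[core]:
            return self.local[core].pop()  # common case: no sharing at all
        return self.central.pop()          # miss: hit the central list

    def free(self, core, item):
        self.local[core].append(item)      # frees always stay core-local
```

After a warm-up period each core mostly recycles its own items, so the shared structure (and whatever lock would protect it in the kernel) is rarely touched.&lt;br /&gt;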
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
Variables placed on the same cache line can cause different cores to request that line for reading and writing at the same time, often enough to significantly impact performance. Moving the often-written variable to another cache line removed the bottleneck.&lt;br /&gt;
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock. Likewise, a mutex can be split from locking the whole data structure to locking only a part of it. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
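The lock-splitting idea can be sketched as a table with one lock per shard instead of a single lock over the whole structure. This is an illustrative Python sketch with invented names, not the kernel&#039;s implementation:&lt;br /&gt;

```python
import threading

class ShardedTable:
    # Splitting one global lock into per-shard locks means two cores
    # contend only when they touch keys in the same shard.
    def __init__(self, nshards=16):
        self.nshards = nshards
        self.shards = [dict() for _ in range(nshards)]
        self.locks = [threading.Lock() for _ in range(nshards)]

    def _shard(self, key):
        return hash(key) % self.nshards

    def put(self, key, value):
        i = self._shard(key)
        with self.locks[i]:   # locks one shard, not the whole table
            self.shards[i][key] = value

    def get(self, key, default=None):
        i = self._shard(key)
        with self.locks[i]:
            return self.shards[i].get(key, default)
```

With a reasonable shard count, the probability that two cores collide on the same lock drops sharply, which is exactly the bottleneck reduction the section describes.&lt;br /&gt;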
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required a few modest changes to remove bottlenecks. The MIT team speculates that this trend will continue as the number of cores increases. They also state that bottlenecks not caused by the CPU are harder to fix. &lt;br /&gt;
&lt;br /&gt;
Most of the kernel bottlenecks that the applications hit most often can be eliminated with minor changes. Most of the changes used well-known techniques, with the exception of sloppy counters. This study is limited by the removal of the I/O bottleneck, but it does suggest that traditional implementations can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of its content can be impartially judged by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content(Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage I/O bottleneck does not affect it at all. Likewise, the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; kernels are given the same test suite, so neither is given an advantage. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ on memcached&#039;s wiki suggests running multiple instances per server as a workaround for another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [1]. In the end, memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
[1] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
Linux has a built-in kernel flaw in which network packets are forced to travel through multiple queues before they arrive at the queue where the application can process them. This imposes significant costs on multi-core systems because each queue must be locked. The flaw inherently diminishes Apache&#039;s performance on multi-core systems, since threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For this experiment, Apache ran a separate instance on every core, each listening on a different port; this is not a practical real-world configuration, merely an attempt to achieve better parallel execution on a traditional kernel. The patched kernel&#039;s network stack is also specific to the problem at hand: processing many short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general workloads. The tests were also configured to avoid bottlenecks imposed by the network and file-storage hardware, meaning that making the proposed kernel modifications won&#039;t necessarily produce the same improvement described in the article. This is evident in the test where performance degrades past 36 cores due to limitations of the networking hardware, &#039;&#039;which is not a problem, as the paper specifically states that there are hardware limitations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
Since gmake is inherently quite parallel, testing produced essentially the same scalability results for both the stock and modified kernels. The only difference found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, gmake&#039;s execution time depends heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns in the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style criteria (feel free to add; I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are confusing, wrong, or use bad methodology?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just adding extra words to seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===Conclusion===&lt;br /&gt;
[[Of the entire essay...]]&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
===Deprecated===&lt;br /&gt;
====Background Concepts====&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks in its own code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system, kernel time jumps to 82%.&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6350</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6350"/>
		<updated>2010-12-02T15:54:57Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Contribution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115 since there won&#039;t be a class in there (as it&#039;s our tutorial and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either. That does not mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note, in class on Tuesday, November 30th, Anil said that we only need to explain 3 of the applications and not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
* I will work on contribution - [[Rovic]]&lt;br /&gt;
* I&#039;m gonna whip something up for 4.2 since there appears to be nothing mentioned about it. -kirill&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* So here are the claimed and unclaimed sections. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Critique, Style - Everyone&lt;br /&gt;
** Conclusion (also discussion) - Rannath, but I need someone to help flesh it out; I&#039;ve got the salient points down.&lt;br /&gt;
** References - Everyone&lt;br /&gt;
** Essay Conclusion - [[Nobody]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
This paper was authored by Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where they do their processing)&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table server. One instance running on many cores is bottlenecked by an internal lock, so the MIT team ran one instance per core to avoid the problem. Clients each connect to a single instance, which allows the server to simulate parallelism. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. For this study, Apache was configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread accepts incoming connections and the other threads service them. On a single-core processor, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake is already quite parallel, creating more processes than cores so that it can make proper use of multiple cores, and it involves much reading and writing of files. gmake is limited in scalability by the serial processes that run at the beginning and end of its execution. gmake spends much of its execution time in the compiler, but still spends 7.6% of its time in system time. [1]&lt;br /&gt;
&lt;br /&gt;
===Research problem===&lt;br /&gt;
  my references are just below because it is easier for numbering the data later.&lt;br /&gt;
&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue, so there has to be a way for a standard Linux kernel to scale on a 48-core system [1]. The problem is that a standard Linux system is not designed for massive scalability. The symptom is that a solo core performs much more work than a single core working alongside 47 others. By traditional logic that seems fine, since 48 cores are dividing the work; but the main goal of processing is to finish as soon as possible, so every core should be doing as much work as possible.&lt;br /&gt;
  &lt;br /&gt;
To fix those scalability issues it is necessary to focus on three major areas: the Linux kernel, user-level design, and how applications use kernel services. The Linux kernel can be improved to reduce unnecessary sharing, and recent iterations are already beginning to implement scalability features. At the user level, applications can be improved to focus more on parallelism, since some programs have not implemented those features. The final aspect is how an application uses kernel services: resources should be shared so that different parts of the program are not contending for the same services. All of the bottlenecks found actually take only a little work to avoid. [1]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This research builds on much prior work on scalability for UNIX systems. Major developments, from shared-memory machines [2] and wait-free synchronization to fast message passing, have created a base set of techniques for improving scalability, and these techniques have been incorporated into all major operating systems, including Linux, Mac OS X, and Windows. Linux in particular has been improved with kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that lower scalability. [3] There is also an excellent base of prior Linux scalability studies on which to build, including one on scalability on a 32-core machine. [4] This paper can improve its results by learning from those earlier experiments, which also aid in identifying bottlenecks and so speed up the search for solutions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[2] J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313,1994.&lt;br /&gt;
&lt;br /&gt;
[3] P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update.  In Proceedings of the Linux Symposium 2002, pages 338-367, Ottawa Ontario, June 2002&lt;br /&gt;
&lt;br /&gt;
[4] C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Work in Progress&#039;&#039;&#039; -Rovic P.&lt;br /&gt;
This research contributes by evaluating the scalability discrepancies of applications programming and kernel programming. Key discoveries in this research show the effectiveness of the kernel in handling scaling amongst CPU cores. This has also shown that scaling in application programming should be more the focus. It has been shown that simple scaling techniques (list techniques) such as programming parallelism (look up more stuff to back this up and quotes). (Sloppy counter effectiveness, possible positive contributions, what has been used (internet search), what hasn’t been used.) Read conclusion, 2nd paragraph.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;One reason the required changes are modest is that stock Linux already incorporates many modifications to improve scalability. More speculatively, perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; (quoted from the paper [1])&lt;br /&gt;
&lt;br /&gt;
===Section 4.1 problems:===&lt;br /&gt;
**The percentage of serialization in a program has a lot to do with how much an application can be sped up. As seen in the example in the paper, it follows Amdahl&#039;s law (e.g. 25% serialization --&amp;gt; limit of 4x speedup).&lt;br /&gt;
**Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
***Locking of shared data structure - increasing # of cores --&amp;gt; increase in lock wait time&lt;br /&gt;
***Writing to shared memory - increasing # of cores --&amp;gt; increase in wait for cache coherence protocol&lt;br /&gt;
***Competing for space in shared hardware cache - increasing # of cores --&amp;gt; increase in cache miss rate&lt;br /&gt;
***Competing for shared hardware resources - increasing # of cores --&amp;gt; increase in wait for resources&lt;br /&gt;
***Not enough tasks for cores --&amp;gt; idle cores&lt;br /&gt;
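The Amdahl&#039;s-law limit above can be checked directly. A minimal Python sketch (the function name is ours; the 25% serial fraction is the paper&#039;s example):&lt;br /&gt;

```python
def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup for a program in which serial_fraction
    of the work cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# 25% serialization caps speedup at 4x no matter how many cores:
for cores in (1, 4, 48, 10**6):
    print(cores, round(amdahl_speedup(0.25, cores), 3))
```

Even at 48 cores the speedup stays below the 4x asymptote, which is why the paper focuses on shrinking the serialized portions.&lt;br /&gt;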
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications under test were referencing and updating shared counters across multiple cores. The solution in the paper is sloppy counters: each core tracks its own separate count of references, and a central shared counter keeps all the counts on track. This is ideal because each core updates its count by modifying its per-core counter, usually needing access only to its own local cache, which cuts down on waiting for locks and serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to implement. Their main disadvantages are that de-allocating a counted object is expensive (a problem where de-allocation occurs often), and that the counters use space proportional to the number of cores.&lt;br /&gt;
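The cache-locality idea can be illustrated with a Python sketch. This is not the kernel&#039;s actual C reference-counting scheme; the class name and spill threshold are made up for this example:&lt;br /&gt;

```python
import threading

class SloppyCounter:
    """Illustrative sketch: each core updates its own local count and
    only touches the shared central counter (under a lock) when the
    local count reaches a threshold, so most updates stay core-local."""
    def __init__(self, ncores, threshold=8):
        self.local = [0] * ncores        # per-core counts
        self.central = 0                 # shared counter, lock-protected
        self.lock = threading.Lock()
        self.threshold = threshold

    def inc(self, core):
        self.local[core] += 1
        if self.local[core] >= self.threshold:
            with self.lock:              # rare: spill local count to central
                self.central += self.local[core]
            self.local[core] = 0

    def value(self):
        """Exact total: central plus whatever is still held per-core."""
        with self.lock:
            return self.central + sum(self.local)
```

In the common case `inc` touches only one core&#039;s slot; the shared lock is taken once per `threshold` increments rather than on every update.&lt;br /&gt;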
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each was decentralized into per-core versions of itself. In the case of vfsmount, the central data structure was kept, and on a per-core miss the entry was copied from the central table to the per-core table.&lt;br /&gt;
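A minimal sketch of the decentralization idea, using the packet buffer free list as the example (the interface and names here are ours, not the kernel&#039;s):&lt;br /&gt;

```python
class PerCoreFreeList:
    """Sketch: each core allocates from its own free list and falls
    back to the central list only when its local list is empty, so
    the common alloc/free path causes no cross-core traffic."""
    def __init__(self, ncores, buffers):
        self.central = list(buffers)
        self.local = [[] for _ in range(ncores)]

    def alloc(self, core):
        if self.local[core]:
            return self.local[core].pop()   # common case: core-local
        if self.central:
            return self.central.pop()       # rare refill touches shared state
        return None

    def free(self, core, buf):
        self.local[core].append(buf)        # freed buffers stay core-local
```

A real implementation would also rebalance buffers back to the central list so one core cannot hoard them; that is omitted here for brevity.&lt;br /&gt;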
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
When variables are misplaced so that an often-written variable shares a cache line with an often-read one, different cores request the same line for reading and writing frequently enough to significantly impact performance. Moving the often-written variable to another cache line removed the bottleneck.&lt;br /&gt;
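The fix can be shown with a toy offset calculation (the field offsets are hypothetical, chosen only to show the before/after layout):&lt;br /&gt;

```python
CACHE_LINE = 64  # bytes; typical x86 cache-line size

def line(offset):
    """Which cache line a byte offset within a structure falls on."""
    return offset // CACHE_LINE

# Hypothetical structure layout: a read-mostly field and an
# often-written field end up on the same 64-byte line.
read_mostly_offset = 0
often_written_offset = 8
assert line(read_mostly_offset) == line(often_written_offset)  # false sharing

# The fix: pad or move the often-written field onto its own line,
# so writes no longer invalidate the readers' cached copies.
often_written_offset = 64
assert line(read_mostly_offset) != line(often_written_offset)
```
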
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock. Likewise, a mutex can be split so that it locks only part of a data structure rather than the whole thing. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
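Lock splitting can be sketched as per-bucket locks on a hash table (a Python illustration of the general technique; the kernel&#039;s actual changes are in C and structure-specific):&lt;br /&gt;

```python
import threading

class StripedTable:
    """Sketch of splitting one mutex over a whole structure into
    per-part locks: one lock per hash bucket, so updates to
    different buckets no longer contend with each other."""
    def __init__(self, nbuckets=16):
        self.buckets = [{} for _ in range(nbuckets)]
        self.locks = [threading.Lock() for _ in range(nbuckets)]

    def put(self, key, val):
        i = hash(key) % len(self.buckets)
        with self.locks[i]:              # only this bucket is locked
            self.buckets[i][key] = val

    def get(self, key):
        i = hash(key) % len(self.buckets)
        with self.locks[i]:
            return self.buckets[i].get(key)
```
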
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required only a few modest changes to remove bottlenecks. The MIT team speculates that this trend will continue as the number of cores increases. They also state that bottlenecks not caused by the CPU are harder to fix. &lt;br /&gt;
&lt;br /&gt;
Most of the kernel bottlenecks that the applications hit most often can be eliminated with minor changes. Most of the changes used well-known techniques, with the exception of sloppy counters. This study is limited by its deliberate removal of the I/O bottleneck, but it does suggest that traditional kernel implementations can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of the content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content(Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage I/O bottleneck does not affect it at all. Likewise, the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so neither is given an advantage. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ on memcached.org&#039;s wiki suggests running multiple instances per server as a workaround for another problem, which implies that running multiple instances is the same, or nearly the same, as running one larger server [1]. In the end, memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
[1] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
Linux has a built-in kernel flaw whereby network packets are forced to travel through multiple queues before arriving at the queue where the application can process them. This imposes significant costs on multi-core systems due to queue locking, and it inherently diminishes Apache&#039;s performance on multi-core systems because threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For this experiment, Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. The patched kernel&#039;s network stack is also specific to the problem at hand: processing many short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. These tests were also arranged to avoid bottlenecks imposed by the network and file-storage hardware, meaning that making the proposed kernel modifications won&#039;t necessarily produce the same increase in throughput described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware. &#039;&#039;Which is not a problem, as the paper specifically states that there are hardware limitations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
Since the inherent nature of gmake makes it quite parallel, the testing and updating attempted on gmake produced essentially the same scalability results for both the stock and the modified kernel. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, the execution time of gmake depends quite heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns with the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong? or use bad methodology?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===Conclusion===&lt;br /&gt;
[[Of the entire essay...]]&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
===Deprecated===&lt;br /&gt;
====Background Concepts====&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its own code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system the execution time in the kernel jumps to 82%.&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6348</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6348"/>
		<updated>2010-12-02T15:54:28Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Section 4.1 problems: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest-to-understand parts in Section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (as it&#039;s our tutorial slot and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either, but that does not mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note: Anil said in class on Tuesday, November 30th, that we only need to explain 3 of the applications and not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
* I will work on contribution - [[Rovic]]&lt;br /&gt;
* I&#039;m gonna whip something up for 4.2 since there appears to be nothing mentioned about it. -kirill&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* So here are the claimed and unclaimed sections. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Critique, Style - Everyone&lt;br /&gt;
** Conclusion (also discussion) - Rannath, but I need someone to help flesh it out, I got the salient points down.&lt;br /&gt;
** References - Everyone&lt;br /&gt;
** Essay Conclusion - [[Nobody]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where they do their processing)&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table server. One instance running on many cores is bottlenecked by an internal lock, so the MIT team ran one instance per core to avoid the problem. Clients each connect to a single instance, which allows the server to simulate parallelism. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. For this study, Apache was configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread accepts incoming connections and the other threads service them. On a single-core processor, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake is already quite parallel, creating more processes than cores so that it can make proper use of multiple cores, and it involves much reading and writing of files. gmake is limited in scalability by the serial processes that run at the beginning and end of its execution. gmake spends much of its execution time in the compiler, but still spends 7.6% of its time in system time. [1]&lt;br /&gt;
&lt;br /&gt;
===Research problem===&lt;br /&gt;
  my references are just below because it is easier for numbering the data later.&lt;br /&gt;
&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue, so there has to be a way for a standard Linux kernel to scale on a 48-core system [1]. The problem is that a standard Linux system is not designed for massive scalability. The symptom is that a solo core performs much more work than a single core working alongside 47 others. By traditional logic that seems fine, since 48 cores are dividing the work; but the main goal of processing is to finish as soon as possible, so every core should be doing as much work as possible.&lt;br /&gt;
  &lt;br /&gt;
To fix those scalability issues it is necessary to focus on three major areas: the Linux kernel, user-level design, and how applications use kernel services. The Linux kernel can be improved to reduce unnecessary sharing, and recent iterations are already beginning to implement scalability features. At the user level, applications can be improved to focus more on parallelism, since some programs have not implemented those features. The final aspect is how an application uses kernel services: resources should be shared so that different parts of the program are not contending for the same services. All of the bottlenecks found actually take only a little work to avoid. [1]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This research builds on much prior work on scalability for UNIX systems. Major developments, from shared-memory machines [2] and wait-free synchronization to fast message passing, have created a base set of techniques for improving scalability, and these techniques have been incorporated into all major operating systems, including Linux, Mac OS X, and Windows. Linux in particular has been improved with kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that lower scalability. [3] There is also an excellent base of prior Linux scalability studies on which to build, including one on scalability on a 32-core machine. [4] This paper can improve its results by learning from those earlier experiments, which also aid in identifying bottlenecks and so speed up the search for solutions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[2] J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313,1994.&lt;br /&gt;
&lt;br /&gt;
[3] P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update.  In Proceedings of the Linux Symposium 2002, pages 338-367, Ottawa Ontario, June 2002&lt;br /&gt;
&lt;br /&gt;
[4] C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Work in Progress&#039;&#039;&#039; -Rovic P.&lt;br /&gt;
This research contributes by evaluating the scalability discrepancies of applications programming and kernel programming. Key discoveries in this research show the effectiveness of the kernel in handling scaling amongst CPU cores. This has also shown that scaling in application programming should be more the focus. It has been shown that simple scaling techniques (list techniques) such as programming parallelism (look up more stuff to back this up and quotes). (Sloppy counter effectiveness, possible positive contributions, what has been used (internet search), what hasn’t been used.) Read conclusion, 2nd paragraph.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;One reason the required changes are modest is that stock Linux already incorporates many modifications to improve scalability. More speculatively, perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; (quoted from the paper [1])&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications under test were referencing and updating shared counters across multiple cores. The solution in the paper is sloppy counters: each core tracks its own separate count of references, and a central shared counter keeps all the counts on track. This is ideal because each core updates its count by modifying its per-core counter, usually needing access only to its own local cache, which cuts down on waiting for locks and serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to implement. Their main disadvantages are that de-allocating a counted object is expensive (a problem where de-allocation occurs often), and that the counters use space proportional to the number of cores.&lt;br /&gt;
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
This section describes a specific instance of unnecessary locking: name lookups in the directory entry (dentry) cache, which the authors changed to use a lock-free comparison so that lookups no longer serialize on a shared lock.&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was kept, and any per-core miss was filled by copying the entry from the central table into the per-core table.&lt;br /&gt;
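The packet-buffer free list change can be sketched as follows (a hypothetical user-space analogue in Python, with threads standing in for cores; the function names and buffer size are invented for the example):&lt;br /&gt;

```python
import threading

_central = []                      # shared fallback pool
_central_lock = threading.Lock()   # only taken on a local miss
_local = threading.local()         # per-thread ("per-core") free list

def alloc_buffer():
    """Take a buffer from the thread-local pool when possible; fall
    back to the locked shared pool only when the local pool is empty."""
    pool = getattr(_local, "pool", None)
    if pool is None:
        pool = _local.pool = []
    if pool:
        return pool.pop()          # common case: no lock needed
    with _central_lock:
        if _central:
            return _central.pop()
    return bytearray(2048)         # both pools empty: allocate fresh

def free_buffer(buf):
    """Return a buffer to the local pool without taking any lock."""
    pool = getattr(_local, "pool", None)
    if pool is None:
        pool = _local.pool = []
    pool.append(buf)
```

In the common case a core allocates and frees from its own list and never touches the shared lock, which is exactly what removes the contention.&lt;br /&gt;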
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
Variables that happen to share a cache line can cause false sharing: one core repeatedly writing its variable forces other cores reading a neighbouring variable to re-fetch the whole line. This happened often enough to significantly impact performance; moving the often-written variable onto a different cache line removed the bottleneck.&lt;br /&gt;
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks have special cases where they do not actually need to be taken. Likewise, a single mutex over a whole data structure can be split into finer-grained locks over its parts. Both changes remove or reduce bottlenecks.&lt;br /&gt;
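Splitting one big lock into finer-grained ones can be sketched like this (an illustrative Python sketch, not kernel code; the class name and shard count are invented for the example):&lt;br /&gt;

```python
import threading

class ShardedDict:
    """Instead of one mutex over the whole table, take a per-bucket
    lock, so threads touching different buckets do not contend."""

    def __init__(self, nshards=16):
        self.shards = [({}, threading.Lock()) for _ in range(nshards)]

    def _shard(self, key):
        # Pick the (table, lock) pair responsible for this key.
        return self.shards[hash(key) % len(self.shards)]

    def put(self, key, value):
        table, lock = self._shard(key)
        with lock:                 # only this shard is locked
            table[key] = value

    def get(self, key, default=None):
        table, lock = self._shard(key)
        with lock:
            return table.get(key, default)
```

With one global lock, every operation serializes; with per-shard locks, operations on different shards proceed in parallel, which is the effect the kernel changes aim for.&lt;br /&gt;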
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required only a few modest changes to remove bottlenecks. The MIT team speculate that this trend will continue as the number of cores increases. They also note that bottlenecks not caused by the CPU are harder to fix.&lt;br /&gt;
&lt;br /&gt;
Most of the kernel bottlenecks that the applications hit most often can be eliminated with minor changes. Most of the changes used well-known techniques, with the exception of sloppy counters. The study is limited by its deliberate removal of the I/O bottleneck, but it does suggest that traditional kernel designs can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of the content can be impartially judged by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content(Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage I/O bottleneck does not affect it at all. Likewise, the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so neither is given an advantage. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ on memcached&#039;s wiki suggests running multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [1]. In the end memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
[1] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
Linux has a built-in kernel flaw in which network packets are forced to travel through multiple queues before they arrive at the queue where the application can process them. This imposes significant costs on multi-core systems because of queue locking. The flaw inherently diminishes Apache&#039;s performance on multi-core systems, since threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For this experiment Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to get better parallel execution on a traditional kernel. The patched kernel&#039;s network stack is also specific to the problem at hand: processing many short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. The tests were also arranged to avoid the bottlenecks imposed by network and file storage hardware, meaning that making the proposed kernel modifications won&#039;t necessarily produce the same increase in throughput as described in the article. This is evident in the test where performance degrades past 36 cores due to limitations of the networking hardware, &#039;&#039;which is not a problem, as the paper specifically states that there are hardware limitations&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
Since gmake is inherently quite parallel, the testing and updating attempted on it produced essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, gmake&#039;s execution time depends quite heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns in the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong? or use bad methodology?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===Conclusion===&lt;br /&gt;
[[Of the entire essay...]]&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
===Deprecated===&lt;br /&gt;
====Background Concepts====&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system, kernel execution time jumps to 82%.&lt;br /&gt;
&lt;br /&gt;
* Psearchy: &#039;&#039;Section 3.6&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Metis: &#039;&#039;Section 3.7&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6346</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6346"/>
		<updated>2010-12-02T15:54:01Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Contribution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115 since there won&#039;t be a class in there (as it&#039;s our tutorial and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either. That does not mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note: in class on Tuesday, November 30th, Anil said that we only need to explain 3 of the applications and not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
* I will work on contribution - [[Rovic]]&lt;br /&gt;
* I&#039;m gonna whip something up for 4.2 since there appears to be nothing mentioned about it. -kirill&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* So here is the claims and unclaimed section. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Critic, Style - Everyone&lt;br /&gt;
** Conclusion (also discussion) - Rannath, but I need someone to help flesh it out, I got the salient points down.&lt;br /&gt;
** References - Everyone&lt;br /&gt;
** Essay Conclusion - [[Nobody]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where to the do their processing)&lt;br /&gt;
&lt;br /&gt;
====memchached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table server. One instance running on many cores is bottlenecked by an internal lock. The MIT team ran one instance per-core to avoid the problem. Clients each connect to a single instance. This allows the server to simulate parallelism. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. In the case of this study, Apache has been configured to run a separate process on each core. Each process, in turn, has multiple threads (Making it a perfect example of parallel programming). One thread to service incoming connections and various other threads to service those connections. On a single core processor, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community, used in this paper to build the Linux kernel. gmake is already quite parallel, creating more processes than cores so that it can make proper use of multiple cores, and it involves much reading and writing of files, as it is used to build the kernel. gmake&#039;s scalability is limited by the serial processes that run at the beginning and end of its execution. gmake spends much of its execution time in its compiler, but still spends 7.6% of its time in system time. [1]&lt;br /&gt;
&lt;br /&gt;
===Research problem===&lt;br /&gt;
  my references are just below because it is easier for numbering the data later.&lt;br /&gt;
&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue, so the question is whether a standard Linux kernel can scale on a 48-core system [1]. The problem is that a standard Linux system is not designed for massive scalability. The issue with scalability is that a core working alone performs much more work than a single core working alongside 47 others. By traditional logic that situation makes sense, because 48 cores are dividing the work; but since the main goal of processing is to finish as quickly as possible, every core should be doing as much work as possible.&lt;br /&gt;
  &lt;br /&gt;
To fix these scalability issues it is necessary to focus on three major areas: the Linux kernel, user-level design, and how applications use kernel services. The Linux kernel can be improved to reduce sharing, and recent iterations have already begun to implement scalability features. At the user level, applications can be designed with more focus on parallelism, since some programs have not implemented such improvements. The final aspect of improving scalability is how an application uses kernel services: resources can be shared better so that different parts of the program are not contending for the same services. All of the bottlenecks found actually take only a little work to avoid. [1]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This research builds on much earlier work on scalability for UNIX systems. Major developments, from shared-memory machines [2] and wait-free synchronization to fast message passing, have created a base set of techniques that can be used to improve scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that lower scalability. [3] There is also an excellent base of prior research on Linux scalability on which to build, including a study on a 32-core machine. [4] Such prior work improves the results by letting the researchers learn from experiments already performed, and it aids in identifying bottlenecks, which speeds up the search for solutions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[2] J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313,1994.&lt;br /&gt;
&lt;br /&gt;
[3] P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update.  In Proceedings of the Linux Symposium 2002, pages 338-367, Ottawa Ontario, June 2002&lt;br /&gt;
&lt;br /&gt;
[4] C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009&lt;br /&gt;
&lt;br /&gt;
==Section 4.1 problems:==&lt;br /&gt;
**The fraction of a program that must run serially limits how much the program can be sped up. As in the paper&#039;s example, this follows Amdahl&#039;s law (e.g. 25% serialization --&amp;gt; limit of 4x speedup).&lt;br /&gt;
**Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
***Locking of shared data structure - increasing # of cores --&amp;gt; increase in lock wait time&lt;br /&gt;
***Writing to shared memory - increasing # of cores --&amp;gt; increase in wait for cache coherence protocol&lt;br /&gt;
***Competing for space in shared hardware cache - increasing # of cores --&amp;gt; increase in cache miss rate&lt;br /&gt;
***Competing for shared hardware resources - increasing # of cores --&amp;gt; increase in wait for resources&lt;br /&gt;
***Not enough tasks for cores --&amp;gt; idle cores&lt;br /&gt;
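The 25% serialization limit follows directly from Amdahl&#039;s law, which this small sketch evaluates (the function name is invented for the example):&lt;br /&gt;

```python
def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup when serial_fraction of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)
```

With 25% serial work the speedup approaches, but never reaches, 4x no matter how many cores are added.&lt;br /&gt;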
&lt;br /&gt;
==Contribution==&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are OK, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
-==Work in Progress==-- -Rovic P.&lt;br /&gt;
This research contributes by evaluating the scalability gap between application-level and kernel-level programming. Its key finding is that the Linux kernel already handles scaling across CPU cores reasonably well, which suggests that scaling effort should focus more on application design. Simple techniques, such as running one application instance per core and the paper's sloppy counters, were enough to remove most of the bottlenecks encountered.&lt;br /&gt;
&lt;br /&gt;
As the authors put it: &amp;quot;One reason the required changes are modest is that stock Linux already incorporates many modifications to improve scalability. More speculatively, perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; [1]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications under test were referencing and updating shared counters from multiple cores. The paper&#039;s solution is sloppy counters: each core tracks its own private count of references, and a central shared counter keeps the overall total. This works well because each core updates its count by modifying its per-core counter, usually touching only its own local cache, which cuts down on lock waiting and serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to adopt. Their main disadvantages are that de-allocation becomes expensive when objects are freed often, since the per-core counts must be reconciled first, and that the counters use space proportional to the number of cores.&lt;br /&gt;
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
This section describes a specific instance of unnecessary locking: name lookups in the directory entry (dentry) cache, which the authors changed to use a lock-free comparison so that lookups no longer serialize on a shared lock.&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was kept, and any per-core miss was filled by copying the entry from the central table into the per-core table.&lt;br /&gt;
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
Variables that happen to share a cache line can cause false sharing: one core repeatedly writing its variable forces other cores reading a neighbouring variable to re-fetch the whole line. This happened often enough to significantly impact performance; moving the often-written variable onto a different cache line removed the bottleneck.&lt;br /&gt;
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks have special cases where they do not actually need to be taken. Likewise, a single mutex over a whole data structure can be split into finer-grained locks over its parts. Both changes remove or reduce bottlenecks.&lt;br /&gt;
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required only a few modest changes to remove bottlenecks. The MIT team speculate that this trend will continue as the number of cores increases. They also note that bottlenecks not caused by the CPU are harder to fix.&lt;br /&gt;
&lt;br /&gt;
Most of the kernel bottlenecks that the applications hit most often can be eliminated with minor changes. Most of the changes used well-known techniques, with the exception of sloppy counters. The study is limited by its deliberate removal of the I/O bottleneck, but it does suggest that traditional kernel designs can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of the content can be impartially judged by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content(Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage I/O bottleneck does not affect it at all. Likewise, the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so neither is given an advantage. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ on memcached&#039;s wiki suggests running multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [1]. In the end memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
[1] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
Linux has a built-in kernel flaw in which network packets are forced to travel through multiple queues before they arrive at the queue where the application can process them. This imposes significant costs on multi-core systems because of queue locking. The flaw inherently diminishes Apache&#039;s performance on multi-core systems, since threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For this experiment Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to get better parallel execution on a traditional kernel. The patched kernel&#039;s network stack is also specific to the problem at hand: processing many short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. The tests were also arranged to avoid the bottlenecks imposed by network and file storage hardware, meaning that making the proposed kernel modifications won&#039;t necessarily produce the same increase in throughput as described in the article. This is evident in the test where performance degrades past 36 cores due to limitations of the networking hardware, &#039;&#039;which is not a problem, as the paper specifically states that there are hardware limitations&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
Since gmake is inherently quite parallel, the testing and updating attempted on it produced essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, gmake&#039;s execution time depends quite heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns in the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong? or use bad methodology?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
===Deprecated===&lt;br /&gt;
====Background Concepts====&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system, kernel execution time jumps to 82%.&lt;br /&gt;
&lt;br /&gt;
* Psearchy: &#039;&#039;Section 3.6&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Metis: &#039;&#039;Section 3.7&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6344</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6344"/>
		<updated>2010-12-02T15:53:47Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Critique */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (as it&#039;s our tutorial and we know there won&#039;t be anyone there)&lt;br /&gt;
-- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath&lt;br /&gt;
&lt;br /&gt;
- I&#039;m working today, but I&#039;ll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I won&#039;t be there either. That does not mean I won&#039;t/can&#039;t contribute. I&#039;ll be on MSN or you can just email me. -kirill&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note: Anil said in class on Tuesday, November 30th, that we only need to explain 3 of the applications, not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
* I will work on contribution - [[Rovic]]&lt;br /&gt;
* I&#039;m gonna whip something up for 4.2 since there appears to be nothing mentioned about it. -kirill&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* So here are the claimed and unclaimed sections. Add your name next to one if you want to take it on.&lt;br /&gt;
** gmake - Daniel B.&lt;br /&gt;
** memcached - Rannath&lt;br /&gt;
** Apache - Kirill&lt;br /&gt;
** [[(Exim, PostgreSQL, Metis, and Psearchy will not be needed as the professor said we only need to explain 3)]]&lt;br /&gt;
** Research Problem - Andrew&lt;br /&gt;
** Contribution - Rovic&lt;br /&gt;
** Critique, Style - Everyone&lt;br /&gt;
** Conclusion (also discussion) - Rannath, but I need someone to help flesh it out, I got the salient points down.&lt;br /&gt;
** References - Everyone&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where they do their processing)&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table server. A single instance running on many cores is bottlenecked by an internal lock, so the MIT team ran one instance per core to avoid the problem. Each client connects to a single instance, which allows the server to simulate parallelism. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.&lt;br /&gt;
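The one-instance-per-core setup relies on clients partitioning the key space themselves. A minimal sketch of such client-side partitioning (the function name and hash choice are ours for illustration, not from the paper or memcached):&lt;br /&gt;

```c
#include <assert.h>

/* Hypothetical sketch: with one memcached instance per core, a client
   hashes each key to pick the single instance responsible for it, so
   the instances never need to coordinate with each other. */
unsigned pick_instance(const char *key, unsigned ninstances) {
    unsigned h = 5381;                      /* djb2 string hash */
    while (*key)
        h = h * 33 + (unsigned char)*key++;
    return h % ninstances;
}
```

Because the same key always maps to the same instance, each client can treat many small servers as one large cache.&lt;br /&gt;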
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. In this study, Apache was configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread accepts incoming connections and the other threads service them. On a single-core processor, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
gmake is the unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake is already quite parallel, creating more processes than cores so that it can make proper use of them, and it reads and writes many files while building the kernel. gmake is limited in scalability by the serial processes that run at the beginning and end of its execution. gmake spends much of its execution time in the compiler, but still spends 7.6% of its time in system time. [1]&lt;br /&gt;
&lt;br /&gt;
===Research problem===&lt;br /&gt;
  my references are just below because it is easier for numbering the data later.&lt;br /&gt;
&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue, so there must be a way for a standard Linux kernel to scale to a 48-core system [1]. The problem is that standard Linux systems are not designed for massive scalability, and this will soon matter. The issue with scalability is that a solo core performs much more work than a single core working alongside 47 others. By traditional logic that seems acceptable, since 48 cores are dividing the work; but the main goal of processing is to finish as soon as possible, so every core should be doing as much work as possible.&lt;br /&gt;
  &lt;br /&gt;
To fix these scalability issues it is necessary to focus on three major areas: the Linux kernel, user-level application design, and how applications use kernel services. The Linux kernel can be improved to reduce unnecessary sharing, and recent iterations have the advantage of already beginning to implement scalability features. At the user level, applications can be redesigned to focus more on parallelism, since some programs have not adopted these improvements. The final aspect of improving scalability is how an application uses kernel services: resources can be shared better so that different parts of the program do not conflict over the same services. All of the bottlenecks that were found actually take only a little work to avoid. [1]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This research builds on a large body of earlier work on scalability for UNIX systems. Major developments, from shared-memory machines [2] and wait-free synchronization to fast message passing, have created a base set of techniques that can be used to improve scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that lower scalability. [3] There is also an excellent base of Linux scalability studies on which to build this paper, including one performed on a 32-core machine. [4] This research can improve its results by learning from the experiments already performed by those researchers, which also aids in identifying bottlenecks and so speeds up the search for solutions to them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[2] J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313, 1994.&lt;br /&gt;
&lt;br /&gt;
[3] P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update. In Proceedings of the Linux Symposium 2002, pages 338–367, Ottawa, Ontario, June 2002.&lt;br /&gt;
&lt;br /&gt;
[4] C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009.&lt;br /&gt;
&lt;br /&gt;
==Section 4.1 problems:==&lt;br /&gt;
**The percentage of serialization in a program has a lot to do with how much an application can be sped up. As the example in the paper shows, it follows Amdahl&#039;s law (e.g. 25% serialization --&amp;gt; limit of 4x speedup).&lt;br /&gt;
**Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
***Locking of shared data structure - increasing # of cores --&amp;gt; increase in lock wait time&lt;br /&gt;
***Writing to shared memory - increasing # of cores --&amp;gt; increase in wait for cache coherence protocol&lt;br /&gt;
***Competing for space in shared hardware cache - increasing # of cores --&amp;gt; increase in cache miss rate&lt;br /&gt;
***Competing for shared hardware resources - increasing # of cores --&amp;gt; increase in wait for resources&lt;br /&gt;
***Not enough tasks for cores --&amp;gt; idle cores&lt;br /&gt;
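The serialization limit in the first bullet can be checked directly against Amdahl&#039;s law. The function below is just an illustration of the formula, not code from the paper:&lt;br /&gt;

```c
#include <assert.h>

/* Amdahl's law: with a fraction s of the work serialized, n cores give
   a speedup of 1 / (s + (1 - s)/n), which approaches 1/s as n grows.
   With s = 0.25 the speedup can therefore never exceed 4x, matching
   the paper's example. */
double amdahl_speedup(double s, int n) {
    return 1.0 / (s + (1.0 - s) / n);
}
```

For example, with 25% serialization, even the paper&#039;s 48-core machine is limited to roughly a 3.76x speedup, and no core count reaches 4x.&lt;br /&gt;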
&lt;br /&gt;
===Contribution===&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
  &lt;br /&gt;
 - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
-I&#039;m just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP&lt;br /&gt;
&lt;br /&gt;
-==Work in Progress==-- -Rovic P.&lt;br /&gt;
This research contributes by evaluating the scalability discrepancies of applications programming and kernel programming. Key discoveries in this research show the effectiveness of the kernel in handling scaling amongst CPU cores. This has also shown that scaling in application programming should be more the focus. It has been shown that simple scaling techniques (list techniques) such as programming parallelism (look up more stuff to back this up and quotes). (Sloppy counter effectiveness, possible positive contributions, what has been used (internet search), what hasn’t been used.) Read conclusion, 2nd paragraph.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;One reason the required changes are modest is that stock Linux already incorporates many modifications to improve scalability. More speculatively, perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.&amp;quot; [1]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications under test referenced and updated shared counters across multiple cores. The solution in the paper is to use sloppy counters, in which each core tracks its own separate count of references and a central shared counter keeps the counts on track. This is ideal because each core updates its count by modifying its per-core counter, usually needing access only to its own local cache, which cuts down on waiting for locks or serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to adopt. Their main disadvantages are that they perform poorly when object de-allocation occurs often, because de-allocating a sloppy counter is an expensive operation, and that the counters use space proportional to the number of cores.&lt;br /&gt;
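The idea above can be sketched in a few lines of C. This is a simplified model (the names, the spill threshold, and the spill-on-increment policy are ours, not the kernel&#039;s), but it shows why most updates stay core-local:&lt;br /&gt;

```c
#include <assert.h>

#define NCORES 4   /* illustrative; the paper's machine had 48 */
#define SLOPP  3   /* per-core slack before spilling to the central count */

/* Each core counts in its own slot (in the kernel, its own cache line)
   and only occasionally touches the shared central count. The true
   value is the central count plus all per-core counts. */
struct sloppy_counter {
    long central;
    long percore[NCORES];
};

void sloppy_inc(struct sloppy_counter *c, int core) {
    c->percore[core]++;
    if (c->percore[core] >= SLOPP) {     /* spill: the only shared write */
        c->central += c->percore[core];  /* would be locked/atomic for real */
        c->percore[core] = 0;
    }
}

long sloppy_read(struct sloppy_counter *c) {
    long total = c->central;             /* reads must sum every slot */
    for (int i = 0; i < NCORES; i++)
        total += c->percore[i];
    return total;
}
```

Note the read side has to visit every per-core slot, which is why the space (and read cost) grows with the number of cores, as the paper points out.&lt;br /&gt;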
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
This section describes a specific instance of unnecessary locking.&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the packet buffers free list. Each data structure was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was maintained, and any per-core misses were filled from the central table into the per-core table.&lt;br /&gt;
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
Misplaced variables in the cache cause different cores to request the same cache line for reading and writing at the same time, often enough to significantly impact performance. By moving the often-written variable to another cache line the bottleneck was removed.&lt;br /&gt;
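A minimal sketch of the fix, assuming a typical 64-byte cache line (the struct names are ours, not the kernel&#039;s actual structures):&lt;br /&gt;

```c
#include <assert.h>
#include <stddef.h>

#define CACHE_LINE 64   /* typical x86 line size; an assumption here */

/* False sharing: if a field written constantly by one core shares a
   cache line with a field read by the others, every write invalidates
   the readers' cached copies even though they never touch `hot`. */
struct shared_line {
    long hot;    /* written constantly by one core    */
    long warm;   /* read by others; same 64-byte line */
};

/* The fix: pad so the hot field has the line to itself. */
struct padded_pair {
    long hot;
    char pad[CACHE_LINE - sizeof(long)];  /* push `warm` to the next line */
    long warm;
};
```

The padding trades a little memory for the removal of the coherence traffic, which is exactly the trade the paper makes.&lt;br /&gt;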
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks/mutexes have special cases where they do not actually need to lock. Likewise, a mutex can be split from locking a whole data structure into locking only a part of it. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
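Lock splitting can be sketched as follows (a hypothetical hash table, not code from the paper): instead of one mutex guarding the whole structure, each bucket gets its own, so operations on different buckets never contend.&lt;br /&gt;

```c
#include <assert.h>
#include <pthread.h>

#define NBUCKETS 16

/* One mutex per bucket instead of one for the whole table. */
struct table {
    pthread_mutex_t bucket_lock[NBUCKETS];
    long            bucket_val[NBUCKETS];
};

void table_init(struct table *t) {
    for (int i = 0; i < NBUCKETS; i++) {
        pthread_mutex_init(&t->bucket_lock[i], NULL);
        t->bucket_val[i] = 0;
    }
}

void table_add(struct table *t, unsigned key, long delta) {
    unsigned b = key % NBUCKETS;
    pthread_mutex_lock(&t->bucket_lock[b]);   /* only this bucket is held */
    t->bucket_val[b] += delta;
    pthread_mutex_unlock(&t->bucket_lock[b]);
}
```

Two cores updating different buckets now take different mutexes, so the single-lock bottleneck disappears without changing what callers see.&lt;br /&gt;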
&lt;br /&gt;
====Conclusion: &#039;&#039;Sections 6 &amp;amp; 7&#039;&#039;====&lt;br /&gt;
Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required only a few modest changes to remove bottlenecks. The MIT team speculates that this trend will continue as the number of cores increases. They also state that applications not bottlenecked by the CPU are harder to fix.&lt;br /&gt;
&lt;br /&gt;
Most of the kernel bottlenecks that the applications hit most often can be eliminated with minor changes. Most of the changes used well-known methodology, with the exception of sloppy counters. This study is limited by its removal of the IO bottleneck, but it does suggest that traditional kernel implementations can be made scalable.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of its content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content(Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached is treated with near-perfect fairness in the paper. It&#039;s an in-memory service, so the ignored storage IO bottleneck does not affect it at all. Likewise, the &amp;quot;stock&amp;quot; and &amp;quot;PK&amp;quot; implementations are given the same test suite, so no advantage is given to either. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ on memcached.org&#039;s wiki suggests running multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server [1]. In the end memcached was bottlenecked by the network card.&lt;br /&gt;
&lt;br /&gt;
[1] memcached&#039;s wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
Linux has a built-in kernel flaw whereby network packets are forced to travel through multiple queues before they arrive at the queue where they can be processed by the application. This imposes significant costs on multi-core systems because of queue locking. This flaw inherently diminishes the performance of Apache on multi-core systems, since multiple threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For the sake of this experiment Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. The patched kernel&#039;s implementation of the network stack is also specific to the problem at hand, which is processing many short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. These tests were also arranged to avoid the bottlenecks imposed by network and file-storage hardware, meaning that making the proposed modifications to the kernel won&#039;t necessarily produce the same increase in throughput as described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware. &#039;&#039;This is not a problem, as the paper specifically states that there are hardware limitations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
Since the inherent nature of gmake makes it quite parallel, the testing performed on gmake produced essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system&#039;s caching. As stated in the paper, the execution time of gmake depends quite heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns in the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing? Wrong? or use bad methodology?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
[1] Silas Boyd-Wickizer et al. &amp;quot;An Analysis of Linux Scalability to Many Cores&amp;quot;. In &#039;&#039;OSDI &#039;10, 9th USENIX Symposium on OS Design and Implementation&#039;&#039;, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.&lt;br /&gt;
&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;br /&gt;
&lt;br /&gt;
===Deprecated===&lt;br /&gt;
====Background Concepts====&lt;br /&gt;
* Exim: &#039;&#039;Section 3.1&#039;&#039;: &lt;br /&gt;
**Exim is a mail server for Unix. It&#039;s fairly parallel: the server forks a new process for each connection, and forks twice for each message delivered. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
* PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;: &lt;br /&gt;
**As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its own code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly as cores are added. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system the execution time in the kernel jumps to 82%.&lt;br /&gt;
&lt;br /&gt;
* Psearchy: &#039;&#039;Section 3.6&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Metis: &#039;&#039;Section 3.7&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6207</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6207"/>
		<updated>2010-12-02T05:39:55Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Research problem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
- HP 3115, since there won&#039;t be a class in there (as it&#039;s our tutorial and we know there won&#039;t be anyone there)&lt;br /&gt;
&lt;br /&gt;
- If it&#039;s all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want.&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Research problem -fii&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note: Anil said in class on Tuesday, November 30th, that we only need to explain 3 of the applications, not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
* I will work on contribution - [[Rovic]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where they do their processing)&lt;br /&gt;
&lt;br /&gt;
====Exim: &#039;&#039;Section 3.1&#039;&#039;====&lt;br /&gt;
Exim is a mail server for Unix. It&#039;s fairly parallel: the server forks a new process for each connection, and forks twice for each message delivered. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table. memcached itself is very much not parallel, but it can be made to act parallel by running multiple instances and having clients synchronize data between the different instances. With few requests, memcached does most of its processing in the network stack, spending 80% of its time in the kernel on one core.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. In this study, Apache was configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread accepts incoming connections and the other threads service them. On a single-core processor, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;====&lt;br /&gt;
As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its own code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly as cores are added. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system the execution time in the kernel jumps to 82%.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
gmake is the unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake is already quite parallel, creating more processes than cores so that it can make proper use of them, and it reads and writes many files while building the kernel. gmake is limited in scalability by the serial processes that run at the beginning and end of its execution. gmake spends much of its execution time in the compiler, but still spends 7.6% of its time in system time.&lt;br /&gt;
&lt;br /&gt;
====Psearchy: &#039;&#039;Section 3.6&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Metis: &#039;&#039;Section 3.7&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
===Research problem===&lt;br /&gt;
  my references are just below because it is easier for numbering the data later.&lt;br /&gt;
&lt;br /&gt;
As technology progresses, the number of cores a processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue, so there must be a way for a standard Linux kernel to scale to a 48-core system [1]. The problem is that standard Linux systems are not designed for massive scalability, and this will soon matter. The issue with scalability is that a solo core performs much more work than a single core working alongside 47 others. By traditional logic that seems acceptable, since 48 cores are dividing the work; but the main goal of processing is to finish as soon as possible, so every core should be doing as much work as possible.&lt;br /&gt;
  &lt;br /&gt;
To fix these scalability issues it is necessary to focus on three major areas: the Linux kernel, user-level application design, and how applications use kernel services. The Linux kernel can be improved to reduce unnecessary sharing, and recent iterations have the advantage of already beginning to implement scalability features. At the user level, applications can be redesigned to focus more on parallelism, since some programs have not adopted these improvements. The final aspect of improving scalability is how an application uses kernel services: resources can be shared better so that different parts of the program do not conflict over the same services. All of the bottlenecks that were found actually take only a little work to avoid. [1]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This research builds on much earlier work on scalability for UNIX systems. Major developments, from shared-memory machines [2] and wait-free synchronization to fast message passing, have created a base set of techniques for improving scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X, and Windows. Linux in particular has gained kernel subsystems such as Read-Copy-Update (RCU), an algorithm used to avoid the locks and atomic instructions that lower scalability [3]. There is also an excellent base of prior Linux scalability studies on which to build, including one performed on a 32-core machine [4]. This research improves on those results by learning from the experiments already performed, and that earlier work also aids in identifying bottlenecks, which speeds up finding solutions for them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[2] J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313, 1994.&lt;br /&gt;
&lt;br /&gt;
[3] P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update. In Proceedings of the Linux Symposium 2002, pages 338–367, Ottawa, Ontario, June 2002.&lt;br /&gt;
&lt;br /&gt;
[4] C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009.&lt;br /&gt;
&lt;br /&gt;
==Section 4.1 problems:==&lt;br /&gt;
**The percentage of serialization in a program largely determines how much an application can be sped up. As the example in the paper shows, it is an inverse relationship (e.g. 25% serialization --&amp;gt; limit of 4x speedup).&lt;br /&gt;
**Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
***Locking of shared data structure - increasing # of cores --&amp;gt; increase in lock wait time&lt;br /&gt;
***Writing to shared memory - increasing # of cores --&amp;gt; increase in wait for cache coherence protocol&lt;br /&gt;
***Competing for space in shared hardware cache - increasing # of cores --&amp;gt; increase in cache miss rate&lt;br /&gt;
***Competing for shared hardware resources - increasing # of cores --&amp;gt; increase in wait for resources&lt;br /&gt;
***Not enough tasks for cores --&amp;gt; idle cores&lt;br /&gt;
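The serialization limit described above is Amdahl&#039;s law. As a rough sketch (our own illustration, not code from the paper), with serial fraction s the speedup on n cores is bounded by 1 / (s + (1 - s) / n):&lt;br /&gt;

```python
# Amdahl's law: upper bound on speedup for a program with serial fraction s.
def speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# 25% serialization caps the speedup near 4x no matter how many cores we add:
print(round(speedup(0.25, 48), 2))     # prints 3.76
print(round(speedup(0.25, 10**6), 2))  # prints 4.0
```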
&lt;br /&gt;
===Contribution===&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
   - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications undergoing testing were referencing and updating shared counters across multiple cores. The solution in the paper is to use sloppy counters, which have each core track its own separate count of references and use a central shared counter to keep all counts on track. This is ideal because each core updates its counts by modifying its per-core counter, usually only needing access to its own local cache, cutting down on waiting for locks or serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making their implementation much easier to accomplish. The main disadvantages of sloppy counters are that they are expensive in situations where object de-allocation occurs often, since de-allocation requires gathering all of the per-core counts, and that the counters use up space proportional to the number of cores.&lt;br /&gt;
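As a hypothetical sketch of the idea (the names and batch size here are ours, not the kernel&#039;s), each core keeps a private reserve of references and only touches the shared counter when that reserve runs out:&lt;br /&gt;

```python
# Sloppy counter sketch: per-core reserves backed by one central counter.
class SloppyCounter:
    def __init__(self, ncores, batch=8):
        self.shared = 0              # central count of references handed out
        self.local = [0] * ncores    # per-core unused (spare) references
        self.batch = batch

    def incref(self, core):
        if self.local[core] == 0:
            # Rare slow path: take a batch of references from the center.
            self.shared += self.batch
            self.local[core] = self.batch
        self.local[core] -= 1        # common fast path: core-local only

    def decref(self, core):
        self.local[core] += 1        # return the reference locally

    def true_count(self):
        # Live references = handed out minus spares still held per core.
        return self.shared - sum(self.local)
```

On real hardware the per-core fields would also be padded onto separate cache lines; this sketch only shows the counting logic.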
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: a per-superblock list of open files, the vfsmount table, and the packet buffers free list. Each was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was maintained, and any per-core misses were filled from the central table into the per-core table.&lt;br /&gt;
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
Variables poorly placed in memory can cause different cores to read and write the same cache line at the same time, often enough to significantly impact performance. By moving an often-written variable to a different cache line, the bottleneck was removed.&lt;br /&gt;
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks/mutexes have special cases where they do not need to lock. Likewise, a mutex over a whole data structure can be split into mutexes over parts of it. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
&lt;br /&gt;
====Conclusion====&lt;br /&gt;
 Conclusion: we can make a traditional OS architecture scale (at least to 48 cores), we just have to remove bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===Critique===&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of the content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content(Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
 Fairness criterion:&lt;br /&gt;
 - does the test accurately describe real-world use-cases (or some set there-of)? (external fairness, maybe ignored for testing and benchmarking purposes, usually is too)&lt;br /&gt;
 - does the test put all tested implementations through the same test? (internal fairness)&lt;br /&gt;
&lt;br /&gt;
Both the stock and new implementations use the same benchmarks, so neither has a particular advantage. That holds true for all seven programs. Even so, there are some assumptions and conditions whose inclusion or exclusion the paper fails to explain fairly.&lt;br /&gt;
&lt;br /&gt;
=====Exim: &#039;&#039;Section 5.2&#039;&#039;=====&lt;br /&gt;
The test uses a relatively small number of connections, but that is also implicitly stated to be a non-issue - &amp;quot;as long as there are enough clients to keep Exim busy, the number of clients has little effect on performance.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This test is explicitly stated to ignore the real-world constraint of the IO bottleneck, and is thus unfair when compared to real-world scenarios. However, the purpose was not to test the IO bottleneck, so this unfairness is unimportant.&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached has no explicit or implicit fairness concerns with respect to real-world scenarios.&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
Linux has a built-in kernel flaw whereby network packets are forced to travel through multiple queues before they arrive at the queue where they can be processed by the application. This imposes significant costs on multi-core systems due to queue locking. The flaw inherently diminishes the performance of Apache on a multi-core system, because multiple threads spread across cores are forced to deal with these mutex (mutual exclusion) costs. For this experiment Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. The tests were also arranged to avoid the bottlenecks imposed by network and file storage hardware, meaning that making the proposed modifications to the kernel won&#039;t necessarily produce the same increase in performance as described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware.&lt;br /&gt;
&lt;br /&gt;
=====PostgreSQL: &#039;&#039;Section 5.5&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
Since gmake is inherently quite parallel, the testing that was attempted on it produced essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level, because of the changes made to the system&#039;s caching. As stated in the paper, the execution time of gmake relies quite heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns with the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
=====Psearchy: &#039;&#039;Section 5.7&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====Metis: &#039;&#039;Section 5.8&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
gmake:&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/manual/make.html gmake Manual]&lt;br /&gt;
&lt;br /&gt;
[http://www.gnu.org/software/make/ gmake Main Page]&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6069</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6069"/>
		<updated>2010-12-02T01:43:39Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Research problem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
- [[I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions?]] - Daniel B.&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Research problem -fii&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note, Anil said in class on Tuesday the 30th of November that we only need to explain 3 of the applications and not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
* I will work on contribution - [[Rovic]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where they do their processing)&lt;br /&gt;
&lt;br /&gt;
====Exim: &#039;&#039;Section 3.1&#039;&#039;====&lt;br /&gt;
Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table. memcached is very much not parallel, but it can be made parallel by simply running multiple instances and having clients worry about synchronizing data between the different instances. With few requests, memcached does most of its processing in the network stack, spending 80% of its time there on one core.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. In this study, Apache has been configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a perfect example of parallel programming): one thread services incoming connections and various other threads process those connections. On a single-core processor, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;====&lt;br /&gt;
As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel. On a 48-core system the execution time in the kernel jumps to 82%.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
gmake is an unofficial default benchmark in the Linux community, used to build the Linux kernel. gmake is already quite parallel, creating more processes than there are cores so that it can make proper use of multiple cores, and because it builds the kernel it involves much reading and writing of files. gmake is limited in scalability by the serial processes that run at the beginning and end of its execution. gmake spends much of its execution time in its compiler, but still spends 7.6% of its time in system time.&lt;br /&gt;
&lt;br /&gt;
====Psearchy: &#039;&#039;Section 3.6&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Metis: &#039;&#039;Section 3.7&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
===Research problem===&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
 Problem being addressed: scalability of current generation OS architecture, using Linux as an example. (?)&lt;br /&gt;
&lt;br /&gt;
 Summarize related works (Section 2, include links, expand information to have at least a summary of some related work)&lt;br /&gt;
&lt;br /&gt;
==Section 4.1 problems:==&lt;br /&gt;
**The percentage of serialization in a program largely determines how much an application can be sped up. As the example in the paper shows, it is an inverse relationship (e.g. 25% serialization --&amp;gt; limit of 4x speedup).&lt;br /&gt;
**Types of serializing interactions found in the MOSBENCH apps:&lt;br /&gt;
***Locking of shared data structure - increasing # of cores --&amp;gt; increase in lock wait time&lt;br /&gt;
***Writing to shared memory - increasing # of cores --&amp;gt; increase in wait for cache coherence protocol&lt;br /&gt;
***Competing for space in shared hardware cache - increasing # of cores --&amp;gt; increase in cache miss rate&lt;br /&gt;
***Competing for shared hardware resources - increasing # of cores --&amp;gt; increase in wait for resources&lt;br /&gt;
***Not enough tasks for cores --&amp;gt; idle cores&lt;br /&gt;
&lt;br /&gt;
===Contribution===&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
   - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are ok, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications undergoing testing were referencing and updating shared counters across multiple cores. The solution in the paper is to use sloppy counters, which have each core track its own separate count of references and use a central shared counter to keep all counts on track. This is ideal because each core updates its counts by modifying its per-core counter, usually only needing access to its own local cache, cutting down on waiting for locks or serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making their implementation much easier to accomplish. The main disadvantages of sloppy counters are that they are expensive in situations where object de-allocation occurs often, since de-allocation requires gathering all of the per-core counts, and that the counters use up space proportional to the number of cores.&lt;br /&gt;
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: a per-superblock list of open files, the vfsmount table, and the packet buffers free list. Each was decentralized into per-core versions of itself. In the case of vfsmount the central data structure was maintained, and any per-core misses were filled from the central table into the per-core table.&lt;br /&gt;
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
Variables poorly placed in memory can cause different cores to read and write the same cache line at the same time, often enough to significantly impact performance. By moving an often-written variable to a different cache line, the bottleneck was removed.&lt;br /&gt;
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks/mutexes have special cases where they do not need to lock. Likewise, a mutex over a whole data structure can be split into mutexes over parts of it. Both of these changes remove or reduce bottlenecks.&lt;br /&gt;
&lt;br /&gt;
====Conclusion====&lt;br /&gt;
 Conclusion: we can make a traditional OS architecture scale (at least to 48 cores), we just have to remove bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===Critique===&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content;&lt;br /&gt;
 be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of the content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content(Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
 Fairness criterion:&lt;br /&gt;
 - does the test accurately describe real-world use-cases (or some set there-of)? (external fairness, maybe ignored for testing and benchmarking purposes, usually is too)&lt;br /&gt;
 - does the test put all tested implementations through the same test? (internal fairness)&lt;br /&gt;
&lt;br /&gt;
Both the stock and new implementations use the same benchmarks, so neither has a particular advantage. That holds true for all seven programs. Even so, there are some assumptions and conditions whose inclusion or exclusion the paper fails to explain fairly.&lt;br /&gt;
&lt;br /&gt;
=====Exim: &#039;&#039;Section 5.2&#039;&#039;=====&lt;br /&gt;
The test uses a relatively small number of connections, but that is also implicitly stated to be a non-issue - &amp;quot;as long as there are enough clients to keep Exim busy, the number of clients has little effect on performance.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This test is explicitly stated to ignore the real-world constraint of the IO bottleneck, and is thus unfair when compared to real-world scenarios. However, the purpose was not to test the IO bottleneck, so this unfairness is unimportant.&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached has no explicit or implicit fairness concerns with respect to real-world scenarios.&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
Linux has a built-in kernel flaw whereby network packets are forced to travel through multiple queues before they arrive at the queue where they can be processed by the application. This imposes significant costs on multi-core systems due to queue locking. The flaw inherently diminishes the performance of Apache on a multi-core system, because multiple threads spread across cores are forced to deal with these mutex (mutual exclusion) costs. For this experiment Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to achieve better parallel execution on a traditional kernel. The tests were also arranged to avoid the bottlenecks imposed by network and file storage hardware, meaning that making the proposed modifications to the kernel won&#039;t necessarily produce the same increase in performance as described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware.&lt;br /&gt;
&lt;br /&gt;
=====PostgreSQL: &#039;&#039;Section 5.5&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
Since gmake is inherently quite parallel, the testing that was attempted on it produced essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level, because of the changes made to the system&#039;s caching. As stated in the paper, the execution time of gmake relies quite heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns with the scalability testing of gmake, as the same application load-out was used for all of the tests.&lt;br /&gt;
&lt;br /&gt;
=====Psearchy: &#039;&#039;Section 5.7&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====Metis: &#039;&#039;Section 5.8&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6007</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6007"/>
		<updated>2010-12-01T23:52:11Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* To Do */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Research problem -fii&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note, Anil said in class on Tuesday the 30th of November that we only need to explain 3 of the applications and not all 7 - [[Andrew]]&lt;br /&gt;
* I&#039;ll do the Research problem and contribution sections. - [[Andrew]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
This paper was authored by - Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize scalability tutorial (Section 4.1 of the paper) focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where they do their processing)&lt;br /&gt;
&lt;br /&gt;
====Exim: &#039;&#039;Section 3.1&#039;&#039;====&lt;br /&gt;
Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table server. A single instance is not very parallel, since it is bottlenecked by an internal lock, but parallelism can be achieved by running one instance per core and having clients partition the data between the instances. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.&lt;br /&gt;
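The per-core-instance approach described above amounts to client-side partitioning. A minimal sketch (hypothetical, not the paper&#039;s code): each key is hashed to exactly one instance, so instances never need to coordinate with each other.&lt;br /&gt;

```python
class ShardedClient:
    """Toy sketch of client-side partitioning across per-core
    memcached instances (plain dicts stand in for the servers)."""
    def __init__(self, instances):
        self.instances = instances

    def _owner(self, key):
        # Every key deterministically maps to exactly one instance.
        return self.instances[hash(key) % len(self.instances)]

    def set(self, key, value):
        self._owner(key)[key] = value

    def get(self, key):
        return self._owner(key).get(key)
```

Because a key always hashes to the same instance, no cross-instance synchronization is needed; the trade-off is that every client must know the full set of instances.&lt;br /&gt;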
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. In this study, Apache was configured to run a separate process on each core. Each process, in turn, has multiple threads, making it a good example of parallel programming: one thread accepts incoming connections and the other threads service those connections. On a single core, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;====&lt;br /&gt;
As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to synchronize concurrent access to the database. Due to bottlenecks introduced in its own code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly as cores are added. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system that jumps to 82%.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Psearchy: &#039;&#039;Section 3.6&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Metis: &#039;&#039;Section 3.7&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
===Research problem===&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
 Problem being addressed: scalability of current generation OS architecture, using Linux as an example. (?)&lt;br /&gt;
&lt;br /&gt;
 Summarize related works (Section 2, include links, expand information to have at least a summary of some related work)&lt;br /&gt;
&lt;br /&gt;
===Contribution===&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
   - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are OK, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
 Conclusion: we can make a traditional OS architecture scale (at least to 48 cores), we just have to remove bottlenecks.&lt;br /&gt;
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications under test referenced and updated shared counters from multiple cores. The solution in the paper is sloppy counters: each core keeps its own local count of references, and a central shared counter keeps the totals consistent. This works well because each core updates its own per-core counter, usually needing access only to its local cache, which cuts down on waiting for locks and on serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to adopt. Their main disadvantages are that they perform poorly in situations where object de-allocation occurs often, because de-allocation itself becomes an expensive operation, and that the counters use space proportional to the number of cores.&lt;br /&gt;
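As a rough illustration of the idea (a hypothetical sketch, not the kernel code), a sloppy counter can be modelled as per-core counts that spill into a central count only when a local batch fills up:&lt;br /&gt;

```python
import threading

class SloppyCounter:
    """Sketch of a sloppy counter: per-core counts absorb most
    updates; the shared central count is touched only when a
    local batch fills up."""
    def __init__(self, ncores, batch=8):
        self.central = 0                 # shared count, behind a lock
        self.lock = threading.Lock()
        self.local = [0] * ncores        # per-core counts, no lock
        self.batch = batch

    def inc(self, core):
        # Common case: update only this core's count, which
        # normally stays in that core's local cache.
        self.local[core] += 1
        if self.local[core] == self.batch:
            # Rare case: spill the full batch into the central count.
            with self.lock:
                self.central += self.local[core]
            self.local[core] = 0

    def value(self):
        # The exact total is the central count plus all remainders.
        with self.lock:
            return self.central + sum(self.local)
```

The common-case increment touches one core&#039;s slot only; the central count and its lock are needed just on spills and on exact reads, which is cheap as long as exact reads are rare.&lt;br /&gt;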
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the free list of packet buffers. Each was decentralized into per-core versions of itself. In the case of the vfsmount table the central data structure was maintained, and any per-core misses were filled in from the central table.&lt;br /&gt;
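A hypothetical sketch of the same pattern (illustrative only, not the kernel&#039;s implementation): frees and allocations stay core-local, and the shared list with its lock is touched only on a local miss.&lt;br /&gt;

```python
import threading

class PerCoreFreeList:
    """Sketch of decentralizing a shared free list: each core
    allocates from its own list and only falls back to the
    global one on a miss."""
    def __init__(self, ncores):
        self.per_core = [[] for _ in range(ncores)]
        self.global_list = []
        self.global_lock = threading.Lock()

    def free(self, core, obj):
        # Freed objects go back to the local list: no lock, no sharing.
        self.per_core[core].append(obj)

    def alloc(self, core):
        local = self.per_core[core]
        if local:
            return local.pop()           # fast path: purely core-local
        with self.global_lock:           # slow path: refill from global
            if self.global_list:
                return self.global_list.pop()
        return None                      # caller would allocate fresh
```

In the common case a core recycles objects it freed itself, so no other core ever writes the same list and the global lock stays cold.&lt;br /&gt;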
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
Variables that happened to share a cache line caused different cores to request the same line for reading and writing at the same time, often enough to significantly impact performance. Moving the often-written variable to another cache line removed the bottleneck.&lt;br /&gt;
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock at all. Likewise, a mutex that locks a whole data structure can be split into mutexes that each lock a part of it. Both changes remove or reduce bottlenecks.&lt;br /&gt;
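The lock-splitting idea can be sketched as follows (a hypothetical example, not code from the paper): one mutex per hash bucket instead of one for the whole table, so cores working in different buckets never contend.&lt;br /&gt;

```python
import threading

class SplitLockTable:
    """Sketch of lock splitting: instead of one mutex over the
    whole table, each bucket gets its own, so threads touching
    different buckets never contend."""
    def __init__(self, nbuckets=16):
        self.nbuckets = nbuckets
        self.buckets = [dict() for _ in range(nbuckets)]
        self.locks = [threading.Lock() for _ in range(nbuckets)]

    def _index(self, key):
        return hash(key) % self.nbuckets

    def put(self, key, value):
        i = self._index(key)
        with self.locks[i]:              # locks one bucket, not the table
            self.buckets[i][key] = value

    def get(self, key):
        i = self._index(key)
        with self.locks[i]:
            return self.buckets[i].get(key)
```

With a single table-wide mutex every operation serializes; with per-bucket locks, contention drops roughly in proportion to the number of buckets.&lt;br /&gt;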
&lt;br /&gt;
===Critique===&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of the content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content (Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
 Fairness criterion:&lt;br /&gt;
 - does the test accurately describe real-world use-cases (or some set thereof)? (external fairness; this can be ignored for testing and benchmarking purposes, and usually is)&lt;br /&gt;
 - does the test put all tested implementations through the same test? (internal fairness)&lt;br /&gt;
Both the stock and new implementations use the same benchmarks, therefore neither of them has a particular advantage. That holds true for all seven programs.&lt;br /&gt;
&lt;br /&gt;
=====Exim: &#039;&#039;Section 5.2&#039;&#039;=====&lt;br /&gt;
The test uses a relatively small number of connections, but that is also implicitly stated to be a non-issue - &amp;quot;as long as there are enough clients to keep Exim busy, the number of clients has little effect on performance.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This test is explicitly stated to ignore the real-world constraint of the IO bottleneck, and is thus unfair when compared to real-world scenarios. However, since the purpose was not to test the IO bottleneck, that unfairness is unimportant.&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached has no explicit or implicit fairness concerns with respect to real-world scenarios.&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====PostgreSQL: &#039;&#039;Section 5.5&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====Psearchy: &#039;&#039;Section 5.7&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====Metis: &#039;&#039;Section 5.8&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6005</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=6005"/>
		<updated>2010-12-01T23:50:19Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Claim Sections */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Class and Notices=&lt;br /&gt;
(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.&lt;br /&gt;
&lt;br /&gt;
- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning&#039;s class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 630pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.&lt;br /&gt;
&lt;br /&gt;
=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Kirill Kashigin [mailto:k.kashigin@gmail.com k.kashigin@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Rovic Perdon [mailto:rperdon@gmail.com rperdon@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Sont [mailto:dan.sont@gmail.com dan.sont@gmail.com]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section &amp;amp; add links to supplementary info&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Research problem -fii&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
* For starters I will take the Scalability Tutorial and gmake. Since the part for gmake is short in the paper, I will grab a few more sections later on. - [[Daniel B.]]&lt;br /&gt;
* Also, I will take sloppy counters as well - [[Daniel B.]] &lt;br /&gt;
* I&#039;m gonna put some work into the apache and postgresql sections - kirill&lt;br /&gt;
* Just as a note: Anil said in class on Tuesday the 30th of November that we only need to explain 3 of the applications, not all 7 - [[Andrew]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
This paper was authored by Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.&lt;br /&gt;
&lt;br /&gt;
They all work at MIT CSAIL.&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf The paper: An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
 Ideas to explain:&lt;br /&gt;
 - thread (maybe)&lt;br /&gt;
 - Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
 - Summarize the scalability tutorial (Section 4.1 of the paper); focus on what makes something (non-)scalable&lt;br /&gt;
 - Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where they do their processing)&lt;br /&gt;
&lt;br /&gt;
====Exim: &#039;&#039;Section 3.1&#039;&#039;====&lt;br /&gt;
Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
====memcached: &#039;&#039;Section 3.2&#039;&#039;====&lt;br /&gt;
memcached is an in-memory hash table server. A single instance is not very parallel, since it is bottlenecked by an internal lock, but parallelism can be achieved by running one instance per core and having clients partition the data between the instances. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.&lt;br /&gt;
&lt;br /&gt;
====Apache: &#039;&#039;Section 3.3&#039;&#039;====&lt;br /&gt;
Apache is a web server. In this study, Apache was configured to run a separate process on each core. Each process, in turn, has multiple threads, making it a good example of parallel programming: one thread accepts incoming connections and the other threads service those connections. On a single core, Apache spends 60% of its execution time in the kernel.&lt;br /&gt;
&lt;br /&gt;
====PostgreSQL: &#039;&#039;Section 3.4&#039;&#039;====&lt;br /&gt;
As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to synchronize concurrent access to the database. Due to bottlenecks introduced in its own code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly as cores are added. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system that jumps to 82%.&lt;br /&gt;
&lt;br /&gt;
====gmake: &#039;&#039;Section 3.5&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Psearchy: &#039;&#039;Section 3.6&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Metis: &#039;&#039;Section 3.7&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
===Research problem===&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
 Problem being addressed: scalability of current generation OS architecture, using Linux as an example. (?)&lt;br /&gt;
&lt;br /&gt;
 Summarize related works (Section 2, include links, expand information to have at least a summary of some related work)&lt;br /&gt;
&lt;br /&gt;
===Contribution===&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
 Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
   - So long as we cite the paper and don&#039;t pretend the graphs are ours, we are OK, since we are writing an explanation/critique of the paper.&lt;br /&gt;
&lt;br /&gt;
 Conclusion: we can make a traditional OS architecture scale (at least to 48 cores), we just have to remove bottlenecks.&lt;br /&gt;
&lt;br /&gt;
====Multicore packet processing: &#039;&#039;Section 4.2&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Sloppy counters: &#039;&#039;Section 4.3&#039;&#039;====&lt;br /&gt;
Bottlenecks were encountered when the applications under test referenced and updated shared counters from multiple cores. The solution in the paper is sloppy counters: each core keeps its own local count of references, and a central shared counter keeps the totals consistent. This works well because each core updates its own per-core counter, usually needing access only to its local cache, which cuts down on waiting for locks and on serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to adopt. Their main disadvantages are that they perform poorly in situations where object de-allocation occurs often, because de-allocation itself becomes an expensive operation, and that the counters use space proportional to the number of cores.&lt;br /&gt;
&lt;br /&gt;
====Lock-free comparison: &#039;&#039;Section 4.4&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
====Per-Core Data Structures: &#039;&#039;Section 4.5&#039;&#039;====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the free list of packet buffers. Each was decentralized into per-core versions of itself. In the case of the vfsmount table the central data structure was maintained, and any per-core misses were filled in from the central table.&lt;br /&gt;
&lt;br /&gt;
====Eliminating false sharing: &#039;&#039;Section 4.6&#039;&#039;====&lt;br /&gt;
Variables that happened to share a cache line caused different cores to request the same line for reading and writing at the same time, often enough to significantly impact performance. Moving the often-written variable to another cache line removed the bottleneck.&lt;br /&gt;
&lt;br /&gt;
====Avoiding unnecessary locking: &#039;&#039;Section 4.7&#039;&#039;====&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock at all. Likewise, a mutex that locks a whole data structure can be split into mutexes that each lock a part of it. Both changes remove or reduce bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===Critique===&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of the content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
====Content (Fairness): &#039;&#039;Section 5&#039;&#039;====&lt;br /&gt;
 Fairness criterion:&lt;br /&gt;
 - does the test accurately describe real-world use-cases (or some set thereof)? (external fairness; this can be ignored for testing and benchmarking purposes, and usually is)&lt;br /&gt;
 - does the test put all tested implementations through the same test? (internal fairness)&lt;br /&gt;
Both the stock and new implementations use the same benchmarks, therefore neither of them has a particular advantage. That holds true for all seven programs.&lt;br /&gt;
&lt;br /&gt;
=====Exim: &#039;&#039;Section 5.2&#039;&#039;=====&lt;br /&gt;
The test uses a relatively small number of connections, but that is also implicitly stated to be a non-issue - &amp;quot;as long as there are enough clients to keep Exim busy, the number of clients has little effect on performance.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This test is explicitly stated to ignore the real-world constraint of the IO bottleneck, and is thus unfair when compared to real-world scenarios. However, since the purpose was not to test the IO bottleneck, that unfairness is unimportant.&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached has no explicit or implicit fairness concerns with respect to real-world scenarios.&lt;br /&gt;
&lt;br /&gt;
=====Apache: &#039;&#039;Section 5.4&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====PostgreSQL: &#039;&#039;Section 5.5&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====gmake: &#039;&#039;Section 5.6&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====Psearchy: &#039;&#039;Section 5.7&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
=====Metis: &#039;&#039;Section 5.8&#039;&#039;=====&lt;br /&gt;
&lt;br /&gt;
====Style====&lt;br /&gt;
 Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
 - does the paper present information out of order?&lt;br /&gt;
 - does the paper present needless information?&lt;br /&gt;
 - does the paper have any sections that are inherently confusing?&lt;br /&gt;
 - is the paper easy to read through, or does it change subjects repeatedly?&lt;br /&gt;
 - does the paper have too many &amp;quot;long-winded&amp;quot; sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.&lt;br /&gt;
 - Check for grammar&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=5342</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_1&amp;diff=5342"/>
		<updated>2010-11-22T17:26:03Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Group members */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
Patrick Young [mailto:Rannath@gmail.com Rannath@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Daniel Beimers [mailto:demongyro@gmail.com demongyro@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Andrew Bown [mailto:abown2@connect.carleton.ca abown2@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
=Methodology=&lt;br /&gt;
We should probably have our work verified by at least one group member before posting to the actual page&lt;br /&gt;
&lt;br /&gt;
=To Do=&lt;br /&gt;
*Improve the grammar/structure of the paper section&lt;br /&gt;
*Background Concepts -fill in info (fii)&lt;br /&gt;
*Research problem -fii&lt;br /&gt;
*Contribution -fii&lt;br /&gt;
*Critique -fii&lt;br /&gt;
*References -fii&lt;br /&gt;
&lt;br /&gt;
===Claim Sections===&lt;br /&gt;
* I claim Exim and memcached for background and critique -[[Rannath]]&lt;br /&gt;
* also per-core data structures, false sharing and unnecessary locking for contribution -[[Rannath]]&lt;br /&gt;
&lt;br /&gt;
=Essay=&lt;br /&gt;
===Paper===&lt;br /&gt;
 The paper&#039;s title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.&lt;br /&gt;
&lt;br /&gt;
Authors in order presented: Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich&lt;br /&gt;
&lt;br /&gt;
affiliation: MIT CSAIL&lt;br /&gt;
&lt;br /&gt;
[http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf An Analysis of Linux Scalability to Many Cores]&lt;br /&gt;
&lt;br /&gt;
===Background Concepts===&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
Ideas to explain:&lt;br /&gt;
#thread (maybe)&lt;br /&gt;
#Linux&#039;s move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)&lt;br /&gt;
#Summarize scalability tutorial (Section 4.1 of the paper)&lt;br /&gt;
#Describe the programs tested (what they do, how they&#039;re programmed (serial vs parallel), where they do their processing)&lt;br /&gt;
&lt;br /&gt;
=====Exim: &#039;&#039;Section 3.1&#039;&#039;=====&lt;br /&gt;
Exim is a mail server for Unix. It&#039;s fairly parallel. The server forks a new process for each connection and twice to deliver each message. It spends 69% of its time in the kernel on a single core.&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 3.2&#039;&#039;=====&lt;br /&gt;
memcached is an in-memory hash table server. A single instance is not very parallel, since it is bottlenecked by an internal lock, but parallelism can be achieved by running one instance per core and having clients partition the data between the instances. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.&lt;br /&gt;
&lt;br /&gt;
===Research problem===&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
Problem being addressed: scalability of current generation OS architecture, using Linux as an example. (?)&lt;br /&gt;
&lt;br /&gt;
Summarize related works (Section 2, include links, expand information to have at least a summary of some related work)&lt;br /&gt;
&lt;br /&gt;
===Contribution===&lt;br /&gt;
 What was implemented? Why is it any better than what came before?&lt;br /&gt;
&lt;br /&gt;
Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn&#039;t unethical/illegal)?&lt;br /&gt;
&lt;br /&gt;
Conclusion: we can make a traditional OS architecture scale (at least to 48 cores), we just have to remove bottlenecks.&lt;br /&gt;
&lt;br /&gt;
=====Per-Core Data Structures=====&lt;br /&gt;
Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the free list of packet buffers. Each was decentralized into per-core versions of itself. In the case of the vfsmount table the central data structure was maintained, and any per-core misses were filled in from the central table.&lt;br /&gt;
&lt;br /&gt;
=====Eliminating false sharing=====&lt;br /&gt;
Variables that happened to share a cache line caused different cores to request the same line for reading and writing at the same time, often enough to significantly impact performance. Moving the often-written variable to another cache line removed the bottleneck.&lt;br /&gt;
&lt;br /&gt;
=====Avoiding unnecessary locking=====&lt;br /&gt;
Many locks/mutexes have special cases where they don&#039;t need to lock at all. Likewise, a mutex that locks a whole data structure can be split into mutexes that each lock a part of it. Both changes remove or reduce bottlenecks.&lt;br /&gt;
&lt;br /&gt;
===Critique===&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
Since this is a &amp;quot;my implementation is better than your implementation&amp;quot; paper, the &amp;quot;goodness&amp;quot; of the content can be impartially determined by its fairness and the honesty of the authors.&lt;br /&gt;
&lt;br /&gt;
Fairness criterion:&lt;br /&gt;
#does the test accurately describe real-world use-cases (or some set thereof)? (external fairness; this can be ignored for testing and benchmarking purposes, and usually is)&lt;br /&gt;
#does the test put all tested implementations through the same test? (internal fairness)&lt;br /&gt;
&lt;br /&gt;
Style Criterion (feel free to add I have no idea what should go here):&lt;br /&gt;
#does the paper present information out of order?&lt;br /&gt;
#does the paper present needless information?&lt;br /&gt;
#does the paper have any sections that are inherently confusing?&lt;br /&gt;
&lt;br /&gt;
=====Testing Method: &#039;&#039;Section 5&#039;&#039;=====&lt;br /&gt;
Both the stock and new implementations use the same benchmarks, therefore internal fairness is preserved for all seven programs.&lt;br /&gt;
&lt;br /&gt;
=====Exim: &#039;&#039;Section 5.2&#039;&#039;=====&lt;br /&gt;
The test uses a relatively small number of connections, but that is also implicitly stated to be a non-issue - &amp;quot;as long as there are enough clients to keep Exim busy, the number of clients has little effect on performance.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This test is explicitly stated to ignore the real-world constraint of the IO bottleneck, and is thus unfair when compared to real-world scenarios. However, since the purpose was not to test the IO bottleneck, that unfairness is unimportant.&lt;br /&gt;
&lt;br /&gt;
=====memcached: &#039;&#039;Section 5.3&#039;&#039;=====&lt;br /&gt;
memcached has no explicit or implicit fairness concerns with respect to real-world scenarios.&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_8&amp;diff=5039</id>
		<title>COMP 3000 Essay 2 2010 Question 8</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_8&amp;diff=5039"/>
		<updated>2010-11-16T13:44:51Z</updated>

		<summary type="html">&lt;p&gt;Abown: added skeleton&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Paper=&lt;br /&gt;
The paper&#039;s title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=4878</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=4878"/>
		<updated>2010-11-09T14:33:37Z</updated>

		<summary type="html">&lt;p&gt;Abown: added one weakness of windows clusters&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;This essay does a good job of discussing how Windows provides mainframe-class capabilities; it does not do a good job of addressing where it falls short.  --Anil&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Throughout the years, Windows has undergone some rather radical changes by modernizing existing technologies and providing innovation to existing features; this resulted in having functionality equivalent to that of a mainframe computer. However, although these changes have been extensive, Windows has not been particularly dominant when it comes to replacing modern mainframe systems.&lt;br /&gt;
&lt;br /&gt;
== Mainframes ==&lt;br /&gt;
&lt;br /&gt;
Mainframe systems have long had a reputation for being used by large organizations to process thousands of small transactions. Whether these systems are used by a bank or by a police department, they possess several key features which make them considerably more powerful than other systems. One of these features is extensive, prolonged stability. This is a result of tremendous redundancy and exception handling, which prevents the entire system from shutting down even if some components become inactive due to unforeseen circumstances. Because of this, mainframe computers are incredibly reliable when it comes to data storage and interoperability.&lt;br /&gt;
&lt;br /&gt;
With this in mind, another notable feature of a mainframe is the ability to hot swap components without taking the system offline. Consequently, components that are malfunctioning or require an upgrade can safely be replaced without endangering system stability. As a result, mainframes gain a long service life, since components can be upgraded individually without having to replace the entire system. Additionally, software written for these machines is extremely backwards compatible. The reason is that mainframe computers are fully virtualized: a mainframe can run software written decades ago alongside modern software and hardware. This is also part of the reason mainframe computers are so secure; they can combine newer and older software and hardware, folding years of innovation into one secure platform.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, all these features would mean nothing if the mainframe could not keep up with the data being sent and received. As a result, computers of this calibre must have good I/O resource management and protect against bottlenecks. They do this with powerful schedulers which ensure the fastest possible throughput for transaction processing [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html]. Without this, you could continuously upgrade components yet suffer diminishing returns.&lt;br /&gt;
&lt;br /&gt;
With so many features, how is Windows expected to keep up? The reality is that Windows already supports most of these features, and when coupled with add-on software such as VMware virtualization and EMC storage solutions, its capabilities are even more impressive.&lt;br /&gt;
&lt;br /&gt;
==Redundancy ==&lt;br /&gt;
&lt;br /&gt;
A major feature of mainframes is their redundancy. Mainframes provide redundancy by using the provider&#039;s off-site redundancy feature, which lets the customer move all of their processes and applications onto the provider&#039;s mainframe while the provider makes repairs on the customer&#039;s system. Another way that mainframes create redundancy is their use of multiple processors that share the same memory: if one processor dies, the remaining processors still retain the cache. &lt;br /&gt;
&lt;br /&gt;
There are multiple ways Windows systems can recreate the redundancy that mainframes have. The first is a Windows cluster server, which uses the same approach as the mainframe&#039;s multi-processor design. This service allows multiple Windows machines to be connected as nodes in a cluster, where each node has the same applications and only one node is online at any point in time. If a node in the cluster fails, another will take over; the failing node can then be restarted or replaced without serious downtime. However, this service does not offer fault tolerance to the same extent as actual mainframes. Another way Windows systems can create redundancy is with virtual machines. VMware supports Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or multiple physical machines). The virtual machines set up two networks: a private network for communication between the virtual machines and a public network to handle I/O services. The virtual machines also share storage so that if one fails, the others still have all of the data. But if the failure is on the Windows host machine, then they will all fail. The virtual cluster can be maintained across multiple machines, allowing multiple users to have the reliability of clusters on fewer machines.&lt;br /&gt;
&lt;br /&gt;
== No downtime upgrades ==&lt;br /&gt;
&lt;br /&gt;
A useful feature that mainframes have is the ability to hot-swap. Hot-swapping is the ability to swap out components of a machine for new components with no downtime (i.e. the system continues to run through the process). For example, when there is faulty hardware in one of the processors inside the mainframe, technicians are able to swap out the component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors, memory and storage inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as needed by adding and removing components such as hard drives and processors. &lt;br /&gt;
&lt;br /&gt;
Using VMware on a Windows system allows users to hot-add RAM and hot-plug new virtual CPUs into the virtualized system. Using these hot-adding and hot-plugging techniques, the virtual machine can grow to accept loads of varying size. With some combinations of CPUs and guest OSes, however, the virtual machine is unable to hot-add/hot-plug and must restart instead. For example, a virtual machine running Windows Server 2008 ENT x64 allows you to hot-add memory, but you must restart it to remove memory or to add/remove CPUs. &lt;br /&gt;
&lt;br /&gt;
In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular. Windows Server 2008 R2 Datacenter, released in 2009, supports dynamic hardware partitioning, meaning its hardware can be divided into separate partitions with their own processors and other components, allowing hot-swapping/hot-adding of these partitions where needed.&lt;br /&gt;
&lt;br /&gt;
== Backwards-Compatibility ==&lt;br /&gt;
Backwards-compatibility means that a newer software version can recognize what the old version wrote and how it worked; it is a relationship between the two versions. If a new component provides all the functionality of the old one, we say that the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can run on the latest mainframes (the zSeries, the System/390 family, System z9, etc.). This is because mainframe models provide a combination of special hardware, special microcode and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backward compatible with all models of the IBM 705 vacuum tube computer.)&lt;br /&gt;
&lt;br /&gt;
In Windows, one method of implementing backwards-compatibility is to add applications such as the Microsoft Windows Application Compatibility Toolkit, which makes the platform compatible with most software from earlier versions. A second method is the subsystems that Windows operating systems usually provide: software originally designed for older versions or other OSes can run in these subsystems. Windows NT, for instance, has MS-DOS and Win16 subsystems. Virtualization (running an older version of the OS on top of the new one) can also be used to enable old applications to run. (Windows 7 can do this with its virtualized XP subsystem.) A third method is to use shims to create backwards-compatibility. Shims are small libraries that intercept API calls, change the parameters passed, and handle or redirect operations. In Windows, shims can simulate the behaviour of an older OS version for legacy software.&lt;br /&gt;
&lt;br /&gt;
== I/O and Resource Management ==&lt;br /&gt;
Throughput, unlike input and output, is a measurement of the number of calculations per second that a machine can perform, usually measured in FLOPS (floating point operations per second). It is impossible for one lone Windows machine to compete with a mainframe&#039;s throughput: not only do mainframe processors run at extremely high frequencies, they also have a considerable number of cores. This all changes, however, when computer clustering is introduced. In recent years, IBM has constructed a clustered system called Roadrunner that ranks third in the TOP500 supercomputer list as of June 2010.[http://hubpages.com/hub/Most-Powerful-Computers-In-The-World] It has a total of 60 connected units, over a thousand processors, and the capability of computing at a rate of 1.7 petaflops. &lt;br /&gt;
&lt;br /&gt;
The question is, with such complex hardware, how is it possible for any software to use this clustered system? Microsoft has introduced an OS called Windows Compute Cluster Server, which provides the software needed for the main computer to utilize the computing power of its cluster nodes. Windows mainly uses MS-MPI (Microsoft Message Passing Interface) to send messages over Ethernet to the other nodes.[http://webcache.googleusercontent.com/search?q=cache:EPlDExBxmDYJ:download.microsoft.com/download/9/e/d/9edcdeab-f1fb-4670-8914-c08c5c6f22a5/HPC_Overview.doc+Windows+Compute+Cluster+Server&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=ca&amp;amp;client=firefox-a] Developers can rely on it to automatically connect a given process to each node. Windows can then use its scheduler to determine which node receives each job; it keeps track of each node and shuts the job down once the output is received. &lt;br /&gt;
&lt;br /&gt;
Today, clustering computers together with the intent of optimizing throughput is accomplished using grid computing. Grid computing shares the same basic principles as cluster computing; however, grids have the sole job of computing massive-scale problems.[http://searchdatacenter.techtarget.com/definition/grid-computing] Each subsection of a problem is passed out to a compute node in the grid to be calculated. The one clear limitation of this computational model is that the problem must be divisible into pieces for each compute node to work on. This style of high-throughput computing can be used for problems such as high-energy physics or biology models.&lt;br /&gt;
&lt;br /&gt;
In general, however, the most popular solution for problems that require large throughput is to construct a cluster model. Most businesses require the reliability of clusters even though it sacrifices performance; there is no competition to the high availability of a cluster server as compared to the grid model.[http://www.dba-oracle.com/real_application_clusters_rac_grid/grid_vs_clusters.htm]&lt;br /&gt;
&lt;br /&gt;
== Weaknesses of Cluster Computing ==&lt;br /&gt;
Although clusters are strong at duplicating mainframe capability, they have their own weaknesses. One such weakness is that adding extra computers to a cluster may not cause the system to run faster. Depending on the type of process being run, there may be no increase in speed, and in some cases a decrease. When an OSEM (ordered subset expectation-maximization) algorithm or a Katsevich algorithm is run, the ideal number of computers in the cluster differs. The OSEM algorithm shows a linear speed-up for each added node until the 10th node and then plateaus, whereas the Katsevich algorithm shows a linear speed increase until the 16th node added[http://www.cs.uiowa.edu/~jni/publications/2006a%20Tao%20He%20IMSCCS06.pdf]. So, depending on the process, clusters can handle the operation better or worse. &lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
Windows has gone from an operating system specialized for personal computers to a platform for building a mainframe replacement, steadily matching the advantages that a traditional mainframe holds. But the largest threat Windows poses to mainframes is cost: it gives anyone the ability to create a mainframe-equivalent system from stock parts at a comparatively cheap price. Does this mean that the mainframe&#039;s time is running out? Only time will tell.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
=== Redundancy ===&lt;br /&gt;
&amp;quot;Setup for Failover Clustering and Microsoft Cluster Service&amp;quot; &amp;lt;http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Introducing Microsoft Cluster Service (MSCS) in the Windows Server 2003 Family&amp;quot; &amp;lt;http://msdn.microsoft.com/en-us/library/ms952401.aspx&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== No downtime upgrades ===&lt;br /&gt;
Davis, David. &amp;quot;VMware vSphere hot-add RAM and hot-plug CPU.&amp;quot; TechTarget. N.p., 09.15.2009. Web. 14 Oct 2010. &amp;lt;http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631_mem1,00.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Windows Server 2008 R2 Datacenter.&amp;quot; Windows Server 2008 R2. N.p., n.d. Web. 14 Oct 2010. &amp;lt;http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Go-HotSwap: CompactPCI Hot Swap.&amp;quot; Jungo. Jungo Ltd, n.d. Web. 14 Oct 2010. &amp;lt;http://www.jungo.com/st/hotswap.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Backwards-Compatibility ===&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Windows 7 To Break Backwards Compatibility&amp;quot; &amp;lt;http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Mainframe computers&amp;quot; &amp;lt;http://computersight.com/computers/mainframe-computers/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Mainframe Features&amp;quot; &amp;lt;http://www.scribd.com/doc/6895677/Mainframe-Features&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Mapping the Mainframe to Windows: A Reference Architecture&amp;quot; &amp;lt;http://www.microsoft.com/windowsserver/mainframe/papers.mspx&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== I/O and Resource Management  ===&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Most Powerful Computers In The World&amp;quot; &amp;lt;http://hubpages.com/hub/Most-Powerful-Computers-In-The-World&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Windows Server 2003 Administrator’s Companion (MS Press, 2003)  &amp;quot;Overview of Microsoft Windows Compute Cluster Server 2003&amp;quot; &amp;lt;http://webcache.googleusercontent.com/search?q=cache:EPlDExBxmDYJ:download.microsoft.com/download/9/e/d/9edcdeab-f1fb-4670-8914-c08c5c6f22a5/HPC_Overview.doc+Windows+Compute+Cluster+Server&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=ca&amp;amp;client=firefox-a&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;grid computing&amp;quot; &amp;lt;http://searchdatacenter.techtarget.com/definition/grid-computing&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Clusters vs. Grids&amp;quot; &amp;lt;http://www.dba-oracle.com/real_application_clusters_rac_grid/grid_vs_clusters.htm&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=/com.ibm.ztpf-ztpfdf.doc_put.cur/gtpc3/c3thru.html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci213140,00.html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Weaknesses of Cluster Computing ===&lt;br /&gt;
Tao He, Jun Ni and Ge Wang, &amp;quot;A Heterogeneous Windows Cluster System for Medical Image Reconstruction&amp;quot;, Proceedings of the 35th annual ACM/IEEE international symposium on Microarchitecture, IEEE, 2008, http://www.cs.uiowa.edu/~jni/publications/2006a%20Tao%20He%20IMSCCS06.pdf, (2010-11-09)&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Lab_4_2010&amp;diff=4852</id>
		<title>COMP 3000 Lab 4 2010</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Lab_4_2010&amp;diff=4852"/>
		<updated>2010-11-02T14:39:50Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Pro-tip */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;All of the following should be done with an Ubuntu 10.04 distribution or equivalent.  We recommend experimenting in a virtual environment because some of the exercises could make your system unbootable.  (In fact, take a snapshot of your working system before starting these exercises so you can easily revert.)&lt;br /&gt;
&lt;br /&gt;
==Questions==&lt;br /&gt;
&lt;br /&gt;
# Change the grub command line at boot to limit the total available RAM to 256M.  You&#039;ll need to select an entry and edit it from within grub.&lt;br /&gt;
# Add a new grub menu item which limits the standard kernel to 256M.&lt;br /&gt;
# Add a second virtual disk and make it bootable: put the kernel and initial ram disk on it and then install grub.  Can you boot off of this disk?  What does it do?  &lt;br /&gt;
# Examine the standard kernel&#039;s initial ram disk (initrd).  What program is first run in this environment?  What does it do?&lt;br /&gt;
# Modify the standard initial RAM disk so it pauses for 10 seconds and prints a message to the console on boot.&lt;br /&gt;
# What programs does upstart start on boot?&lt;br /&gt;
&lt;br /&gt;
==Hints==&lt;br /&gt;
&lt;br /&gt;
Please add your hints below to help your fellow students!&lt;br /&gt;
&lt;br /&gt;
=== Pro-tip ===&lt;br /&gt;
sudo update-grub (important!!! guess what it does!!!)&lt;br /&gt;
&lt;br /&gt;
===Kernel command line options===&lt;br /&gt;
&lt;br /&gt;
===GRUB configuration===&lt;br /&gt;
&lt;br /&gt;
*On Ubuntu the user configuration is stored in /etc/default/grub, while system grub configuration is in /etc/grub.d.  The main grub files are stored in /boot/grub.  You can update grub&#039;s config with the update-grub command.&lt;br /&gt;
*to limit RAM, go to /etc/default/grub, and change default boot options so that mem=256M is included.&lt;br /&gt;
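The two bullets above can be combined into one sequence. A minimal sketch, assuming the stock Ubuntu 10.04 default line (GRUB_CMDLINE_LINUX_DEFAULT=&amp;quot;quiet splash&amp;quot;); it needs root and edits your boot config, so keep the backup:&lt;br /&gt;

```shell
# Back up, append mem=256M to the default kernel command line, regenerate.
# Assumes /etc/default/grub contains: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
sudo cp /etc/default/grub /etc/default/grub.bak
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"$/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem=256M"/' /etc/default/grub
# update-grub rewrites /boot/grub/grub.cfg from /etc/default/grub and /etc/grub.d
sudo update-grub
```

After rebooting, &#039;free -m&#039; should report roughly 256M of total RAM.&lt;br /&gt;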
&lt;br /&gt;
===How GRUB works===&lt;br /&gt;
&lt;br /&gt;
===Making a disk bootable===&lt;br /&gt;
&lt;br /&gt;
===Examining RAM disks===&lt;br /&gt;
&lt;br /&gt;
*Ubuntu (Debian) stores initial RAM disks in the cpio format.  &#039;zcat &amp;lt;file&amp;gt; | cpio -i&#039; will extract the contents.&lt;br /&gt;
&lt;br /&gt;
===Upstart/init===&lt;br /&gt;
&lt;br /&gt;
Upstart &amp;quot;jobs&amp;quot; are config (.conf) files in /etc/init that require one of two options: an &amp;quot;exec&amp;quot; line or a &amp;quot;script&amp;quot; stanza.&lt;br /&gt;
&lt;br /&gt;
The exec line lets upstart simply execute a command or script elsewhere, while script lets you embed a shell script in the upstart job itself.&lt;br /&gt;
&lt;br /&gt;
A special upstart job is rc.conf, which maintains the original runlevel init.d scripts. You will see that rc.conf simply executes all the /etc/init.d scripts.&lt;br /&gt;
&lt;br /&gt;
See more on upstart jobs at http://upstart.ubuntu.com/getting-started.html&lt;br /&gt;
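As an illustration of the two job forms, here is a sketch that writes both styles of job file to /tmp. The job names and commands are made-up examples; real jobs live in /etc/init/, so this only shows the syntax.&lt;br /&gt;

```shell
# exec form: a single command on one line.
printf 'description "exec-style demo"\nstart on runlevel [2345]\nexec /usr/bin/logger hello\n' > /tmp/hello.conf
# script form: an embedded shell fragment between script / end script.
printf 'description "script-style demo"\nstart on runlevel [2345]\nscript\n    echo hello\nend script\n' > /tmp/hello-script.conf
cat /tmp/hello.conf /tmp/hello-script.conf
```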
&lt;br /&gt;
For those using newer versions of Ubuntu, and thus Grub2, see:&lt;br /&gt;
https://help.ubuntu.com/community/Grub2&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Lab_4_2010&amp;diff=4851</id>
		<title>COMP 3000 Lab 4 2010</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Lab_4_2010&amp;diff=4851"/>
		<updated>2010-11-02T14:39:37Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Hints */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;All of the following should be done with an Ubuntu 10.04 distribution or equivalent.  We recommend experimenting in a virtual environment because some of the exercises could make your system unbootable.  (In fact, take a snapshot of your working system before starting these exercises so you can easily revert.)&lt;br /&gt;
&lt;br /&gt;
==Questions==&lt;br /&gt;
&lt;br /&gt;
# Change the grub command line at boot to limit the total available RAM to 256M.  You&#039;ll need to select an entry and edit it from within grub.&lt;br /&gt;
# Add a new grub menu item which limits the standard kernel to 256M.&lt;br /&gt;
# Add a second virtual disk and make it bootable: put the kernel and initial ram disk on it and then install grub.  Can you boot off of this disk?  What does it do?  &lt;br /&gt;
# Examine the standard kernel&#039;s initial ram disk (initrd).  What program is first run in this environment?  What does it do?&lt;br /&gt;
# Modify the standard initial RAM disk so it pauses for 10 seconds and prints a message to the console on boot.&lt;br /&gt;
# What programs does upstart start on boot?&lt;br /&gt;
&lt;br /&gt;
==Hints==&lt;br /&gt;
&lt;br /&gt;
Please add your hints below to help your fellow students!&lt;br /&gt;
&lt;br /&gt;
=== Pro-tip ===&lt;br /&gt;
sudo update-grub (important!!! guess what it does!!!)&lt;br /&gt;
&lt;br /&gt;
===Kernel command line options===&lt;br /&gt;
&lt;br /&gt;
===GRUB configuration===&lt;br /&gt;
&lt;br /&gt;
*On Ubuntu the user configuration is stored in /etc/default/grub, while system grub configuration is in /etc/grub.d.  The main grub files are stored in /boot/grub.  You can update grub&#039;s config with the update-grub command.&lt;br /&gt;
*to limit RAM, go to /etc/default/grub, and change default boot options so that mem=256M is included.&lt;br /&gt;
&lt;br /&gt;
===How GRUB works===&lt;br /&gt;
&lt;br /&gt;
===Making a disk bootable===&lt;br /&gt;
&lt;br /&gt;
===Examining RAM disks===&lt;br /&gt;
&lt;br /&gt;
*Ubuntu (Debian) stores initial RAM disks in the cpio format.  &#039;zcat &amp;lt;file&amp;gt; | cpio -i&#039; will extract the contents.&lt;br /&gt;
&lt;br /&gt;
===Upstart/init===&lt;br /&gt;
&lt;br /&gt;
Upstart &amp;quot;jobs&amp;quot; are config (.conf) files in /etc/init that require one of two options: an &amp;quot;exec&amp;quot; line or a &amp;quot;script&amp;quot; stanza.&lt;br /&gt;
&lt;br /&gt;
The exec line lets upstart simply execute a command or script elsewhere, while script lets you embed a shell script in the upstart job itself.&lt;br /&gt;
&lt;br /&gt;
A special upstart job is rc.conf, which maintains the original runlevel init.d scripts. You will see that rc.conf simply executes all the /etc/init.d scripts.&lt;br /&gt;
&lt;br /&gt;
See more on upstart jobs at http://upstart.ubuntu.com/getting-started.html&lt;br /&gt;
&lt;br /&gt;
For those using newer versions of Ubuntu, and thus Grub2, see:&lt;br /&gt;
https://help.ubuntu.com/community/Grub2&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=4540</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=4540"/>
		<updated>2010-10-15T05:19:33Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Group 3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email I&#039;ll add some of the stuff I find soon I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-First time sharing system CTSS (Compatible Time Sharing System) in the 1950s. Created at MIT&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact info (qzhang13@connect.carleton.ca)&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.mspx&lt;br /&gt;
&lt;br /&gt;
-[[User:Zhangqi|Zhangqi]] 15:02, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, Here&#039;s my contact info, nshires@connect.carleton.ca, I&#039;ll have some sources posted by the weekend hopefully&lt;br /&gt;
&lt;br /&gt;
Hey guys, I&#039;m not in your group but I found some useful information that could help you: &lt;br /&gt;
http://en.wikipedia.org/wiki/Mainframe_computer - I know we&#039;re not supposed to use wiki references but it&#039;s a good place to start&lt;br /&gt;
&lt;br /&gt;
Okay, found a paper titled &amp;quot;Mainframe Scalability in the Windows Environment&amp;quot;&lt;br /&gt;
http://new.cmg.org/proceedings/2003/3023.pdf (requires registration to access, but is free) ~ Andrew (abown2@connect.carleton.ca), sometime Friday.&lt;br /&gt;
&lt;br /&gt;
Folks, remember to do your discussions here.  Use four tildes to sign your entries, that adds time and date.  Email discussions won&#039;t count towards your participation grade...&lt;br /&gt;
[[User:Soma|Anil]] 15:43, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Okay, going to break the essay into paragraphs on the main page, so people can each choose one paragraph to write. Then, after all paragraphs are written, we will communally edit it to have a cohesive voice. It is the only way I can viably think of to properly distribute the work. ~Andrew (abown2@connect.carleton.ca) 11:00 am, 10 October 2010.&lt;br /&gt;
&lt;br /&gt;
Link to IBMs info on their mainframes --[[User:Lmundt|Lmundt]] 19:58, 7 October 2010 (UTC)&lt;br /&gt;
http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zmainframe/zconc_valueofmf.htm&lt;br /&gt;
&lt;br /&gt;
Just made the revelation that the Windows equivalent to a mainframe is referred to as &#039;&#039;&#039;clustering&#039;&#039;&#039;, which should help with finding information.&lt;br /&gt;
Here&#039;s the wiki article on the technology for an overview http://en.wikipedia.org/wiki/Microsoft_Cluster_Server ~ Andrew (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, I agree with Andrew&#039;s idea. We should break the essay into several sections and work on it together. From my point of view, I think we should focus on how Windows provides the mainframe functionality, and VMware and EMC&#039;s storage should be our examples. As listed on the main page, there are many advantages and disadvantages of the mainframe. But where is Windows? I&#039;m confused... &lt;br /&gt;
In my opinion, the first paragraph can introduce the mainframe (such as the history, features, applications, etc.) and what mainframe-equivalent functionality Windows supports. Then we can use some paragraphs to discuss the functionalities in detail. VMware and EMC&#039;s storage solution can also be involved in this part. At last we make a conclusion of the whole essay. Do you think it&#039;s feasible? &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 02:12, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Ah but the question isn&#039;t the pros and cons of each. It is how to get mainframe functionality from a Windows Operating System. How I split up the essay has each paragraph focusing on one aspect of mainframes and how it can be duplicated in windows either with windows tools or 3rd party software. You don&#039;t need to go into the history or applications of mainframes since that is not required by the phrasing of the question.&lt;br /&gt;
&lt;br /&gt;
~ Andrew Bown, 11:28 AM, October 11th 2010&lt;br /&gt;
&lt;br /&gt;
Okay, I think I catch your meaning. So what we should do now is edit the content of each paragraph as soon as possible. Time is limited.&lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 19:57, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
If you guys are looking for an authoritative source on how Windows works, I *highly* recommend checking out &amp;quot;Windows Internals 4th Edition&amp;quot; or &amp;quot;Windows Internals 5th Edition&amp;quot; by Mark Russinovich and David Solomon.&lt;br /&gt;
&lt;br /&gt;
--[[User:3maisons|3maisons]] 18:59, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey guys, nice work; sorry I didn&#039;t have time to add more to the essay today. I combined the essay into a FrankenEssay, which is on the front page, and added a conclusion. I&#039;ve read through it, but if anyone notices a mistake I missed, go ahead and correct it.&lt;br /&gt;
--[[User:Abown|Andrew Bown]] 1:16, 15 October 2010&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
OLD VERSION - Here for the time being while optimizing some sections --[[User:Dkrutsko|Dkrutsko]] 00:20, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Main Aspects of mainframes:&lt;br /&gt;
* redundancy which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* support massive throughput&lt;br /&gt;
* Systems run constantly so they can be hot upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &amp;lt;br&amp;gt;&lt;br /&gt;
Thanks Abown, just tweaked a couple of the sentences to improve flow [[User:Achamney|Achamney]] 01:13, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Also, i removed this statement &amp;quot;Unfortunately, computers are only able to process data as fast as they can receive it&amp;quot;. I couldn&#039;t find a good place to plug it in.&lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths allow mainframes to serve this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable and guards against data loss due to downtime. Mainframes can be upgraded without taking the system down for repairs, which further increases reliability. After upgrading a mainframe, however, the software does not change, so they can offer backwards compatibility through virtualization; software never needs to be replaced. Mainframes support high input/output so that the mainframe is always being utilized. To make sure mainframes are utilized to their fullest, they support powerful schedulers which ensure the fastest throughput for processing transactions. [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how are Windows based systems supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and software solutions, which can duplicate these capabilities in a Windows environment - be it redundancy, real-time upgrading, virtualization, high input/output or resource utilization.&lt;br /&gt;
&lt;br /&gt;
Using this paragraph and my solution on the assignment I was able to expand on this topic. It is in the main page at the moment, see if you like it, add anything you think I missed --[[User:Dkrutsko|Dkrutsko]] 05:17, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At that point in history there were no personal computers, and only massive businesses could afford a computer. The main functionality of these mainframes was to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t seem to actually be pertinent to the question at hand. The question gives no indication that a history is needed. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
I have to agree this doesn&#039;t seem relevant to the question. --[[User:Dkrutsko|Dkrutsko]] 00:10, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
A major feature of mainframes is their redundancy. Mainframes provide redundancy through the provider&#039;s off-site redundancy feature, which lets the customer move all of their processes and applications onto the provider&#039;s mainframe while the provider makes repairs on the customer&#039;s system. Another way mainframes create redundancy is their use of multiple processors that share the same memory: if one processor dies, the remaining processors still hold all of the cached data. There are multiple ways Windows systems can recreate this redundancy. The first is to create a Windows cluster server, which mirrors the mainframe&#039;s multi-processor design. Another is to use virtual machines: VMware supports Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines set up two networks: a private network for communication between the virtual machines, and a public network for I/O services. The virtual machines also share storage so that if one fails, the others still have all of the data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(this is what I&#039;ve gotten out of some research so far; comments and any edits/suggestions on whether I&#039;m on the right track are greatly appreciated :) ) &lt;br /&gt;
*note: This is the second time I have written this, make sure to save whatever you edit in notepad or whatever first so that you don&#039;t lose everything*&lt;br /&gt;
&lt;br /&gt;
link to VMWare&#039;s cluster virtualization http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf&lt;br /&gt;
&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:I&#039;ll attempt to re-write this paragraph for clarity and accuracy:&lt;br /&gt;
&lt;br /&gt;
:A feature provided by mainframes is their ability to create redundancy in terms of data storage and parallel processing. Windows can mimic expandable storage and storage redundancy through out-sourced storage solutions.&lt;br /&gt;
&lt;br /&gt;
:Processing redundancy for Windows can be created through the Microsoft Cluster Service (MSCS). This service allows multiple Windows machines to be connected as nodes in a cluster, where each node has the same applications and only one node is online at any point in time. If a node in the cluster fails, another will take over, and the failing node can then be restarted or replaced without serious downtime. However, this service does not offer fault tolerance to the same extent as an actual mainframe.&lt;br /&gt;
&lt;br /&gt;
:Source: http://msdn.microsoft.com/en-us/library/ms952401.aspx&lt;br /&gt;
&lt;br /&gt;
:Virtual machine nodes can be used in place of physical machine nodes in a cluster, providing redundant application services to end-users. If a virtual machine fails, other virtual machines can take over; if the failure is on the Windows host machine, however, they will all fail. The virtual cluster can be maintained across multiple machines, giving multiple users the reliability of clusters on fewer physical machines.&lt;br /&gt;
&lt;br /&gt;
:Let me know what you think.&lt;br /&gt;
:[[User:Brobson|Brobson]] 18:25, 14 October 2010 (UTC)&lt;br /&gt;
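The active/passive failover behaviour described for MSCS above can be sketched as a toy simulation. This is illustrative Python only, under loose assumptions: the Cluster and Node names are made up here, and real MSCS uses heartbeats, a quorum, and shared storage rather than in-process objects.&lt;br /&gt;

```python
# Toy sketch of active/passive cluster failover (illustrative only:
# real MSCS uses heartbeats, a quorum, and shared cluster storage).

class Node:
    def __init__(self, name):
        self.name = name
        self.online = True

class Cluster:
    def __init__(self, names):
        self.nodes = [Node(n) for n in names]
        self.shared_state = {}       # stands in for shared cluster storage
        self.active = self.nodes[0]  # only one node serves clients at a time

    def failover(self):
        # promote the first surviving node to active
        for node in self.nodes:
            if node.online:
                self.active = node
                return
        raise RuntimeError("no surviving nodes")

    def handle_request(self, key, value):
        if not self.active.online:
            self.failover()          # another node takes over
        self.shared_state[key] = value
        return self.active.name

cluster = Cluster(["node-a", "node-b"])
first = cluster.handle_request("orders", 1)   # served by node-a
cluster.nodes[0].online = False               # node-a fails
second = cluster.handle_request("orders", 2)  # node-b takes over
```

Because the state lives in storage shared by all nodes, the data written before the failure survives the switch to the new active node.&lt;br /&gt;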
&lt;br /&gt;
== hot swapping ==&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
Another useful feature that mainframes have is the ability to hot-swap. Hot-swapping occurs when there is faulty hardware in one of the processors inside the mainframe and technicians swap out the component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as needed. Using VMware on a Windows system allows users to hot-add RAM and hot-plug a new virtual CPU into the virtualized system. With these hot-adding and hot-plugging techniques, the virtual computer can grow to accept loads of varying size. In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular.&lt;br /&gt;
&lt;br /&gt;
These are the concepts I&#039;ve been able to figure out so far about hot-swapping/hot-upgrading, feel free to add/edit and what-not!  &lt;br /&gt;
&lt;br /&gt;
Sources:&lt;br /&gt;
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631,00.html&lt;br /&gt;
http://www.jungo.com/st/hotswap_windows.html&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
:According to your searchvmware.techtarget.com source, a processor cannot be hot-plugged in the truest sense of the word, in that the machine needs to be rebooted to recognize the added hardware. Hot-swapping demands zero downtime.&lt;br /&gt;
:If you don&#039;t mind me suggesting, I don&#039;t think this section should refer to the hot-swapping/hot-adding/hot-plugging of virtual machines or client machines of the mainframe. For hot-swapping we should focus on hardware components. We can then point out that Windows does support mainframe-level hot-swapping with its Windows Server 2008 R2 Datacenter OS:&lt;br /&gt;
:&amp;lt;blockquote&amp;gt;&amp;quot;Hot Add/Replace Memory and Processors with supporting hardware&amp;quot;&amp;lt;/blockquote&amp;gt; http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&lt;br /&gt;
&lt;br /&gt;
:If we only consider the capabilities of the PC OS, then Windows only supports plug-and-play devices, such as external hard drives, and does not support RAM or CPU hot-swap.&lt;br /&gt;
&lt;br /&gt;
:I&#039;m also wondering if this should tie into the scalability of a mainframe or if scalability should have its own section.&lt;br /&gt;
:[[User:Brobson|Brobson]] 17:12, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The source you mentioned talks about a virtual machine and says that it can be hot-swapped with no downtime, depending on the guest OS. Some guest OSes need a reboot but some do not. A virtual Windows Server 2008 ENT x64 can hot-add memory with no downtime; it seems that no virtual OS can hot-add a CPU without rebooting. The second part of my paragraph talks about physical Windows systems coupled with a program that enables hot-swapping of SATA hard drives and other components with no downtime.&lt;br /&gt;
I do agree that hot-swapping in a virtual machine may be kind of useless though haha :S. I&#039;ll check out the Windows Server 2008 R2 Datacenter OS, thanks. [[User:Nshires|Nshires]] 00:33, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Revised:&lt;br /&gt;
A useful feature that mainframes have is the ability to hot-swap. Hot-swapping is the ability to swap out components of a computer/mainframe for new components with no downtime (i.e. the system continues to run through the process). Hot-swapping occurs when there is faulty hardware in one of the processors inside the mainframe; technicians are able to swap out the component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors, memory, and storage inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as needed by adding and removing components such as hard drives and processors. &lt;br /&gt;
&lt;br /&gt;
Using VMware on a Windows system allows users to hot-add RAM and hot-plug a new virtual CPU into the virtualized system. With these hot-adding and hot-plugging techniques, the virtual computer can grow to accept loads of varying size. Depending on the CPU and guest OS, however, the virtual machine may be unable to hot-add/hot-plug and may have to restart instead. For example, a virtual machine running Windows Server 2008 ENT x64 allows you to hot-add memory, but you must restart it to remove memory or to add/remove a CPU. &lt;br /&gt;
&lt;br /&gt;
In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular. Windows Server 2008 R2 Datacenter, released in 2009, uses dynamic hardware partitioning, meaning the hardware can be divided into separate partitions of processors and other components, allowing hot-swapping/hot-adding of these partitions where needed. &lt;br /&gt;
&lt;br /&gt;
Davis, David. &amp;quot;VMware vSphere hot-add RAM and hot-plug CPU.&amp;quot; TechTarget. N.p., 09.15.2009. Web. 14 Oct 2010. &amp;lt;http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631_mem1,00.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Windows Server 2008 R2 Datacenter.&amp;quot; Windows Server 2008 R2. N.p., n.d. Web. 14 Oct 2010. &amp;lt;http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Go-HotSwap: CompactPCI Hot Swap.&amp;quot; Jungo. Jungo Ltd, n.d. Web. 14 Oct 2010. &amp;lt;http://www.jungo.com/st/hotswap.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
feel free to edit [[User:Nshires|Nshires]] 03:49, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== backwards-compatibility ==&lt;br /&gt;
Backwards compatibility means that a newer software version can recognize what the old version wrote and how it worked; it is a relationship between the two versions. If a new component provides all the functionality of the old one, we say the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can run on the latest mainframes (the zSeries, the System/390 family, the System z9, etc.). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backwards compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe also requires customers to halt the computer and load the emulation program.&lt;br /&gt;
&lt;br /&gt;
In Windows, one method of implementing backwards compatibility is to add applications, such as the Microsoft Windows Application Compatibility Toolkit, which can make the platform compatible with most software from earlier versions. A second method relies on the various subsystems that Windows operating systems usually provide: software originally designed for older versions or other OSes can run inside a subsystem. Windows NT, for example, has MS-DOS and Win16 subsystems. Windows 7&#039;s backwards compatibility, however, is not very good: if the kernel is different, the OSes cannot be compatible with each other. That does not mean older programs will not run; virtualization is used to run them. The third method is to use shims to create backwards compatibility. Shims are small libraries that intercept API calls, change the parameters passed, and handle or redirect the operations. In Windows, shims can be used to simulate the behaviour of an older OS version for legacy software. &lt;br /&gt;
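As a rough sketch of the shim idea (hypothetical Python names only; real Windows shims are native libraries that hook Win32 API calls), a shim sits between legacy code and a changed API, rewriting the parameters in flight:&lt;br /&gt;

```python
# Sketch of a compatibility shim. Legacy code calls open_file(path, "readonly"),
# but the new API expects a short mode string like "r". The shim intercepts
# the call, translates the legacy parameter, and redirects to the new API.
# All names here are illustrative, not a real Windows interface.

def new_open_file(path, mode="r"):
    # the current API: understands only short mode strings
    return "opened {} with mode {}".format(path, mode)

LEGACY_MODES = {"readonly": "r", "readwrite": "w"}

def shim_open_file(path, mode):
    # the shim: accepts the old interface, forwards to the new one
    return new_open_file(path, mode=LEGACY_MODES.get(mode, mode))

# Unmodified legacy call, now running against the shim:
result = shim_open_file("payroll.dat", "readonly")
```

The legacy caller never changes; only the small intercepting library knows that the API underneath has moved, which is what lets old software keep running on a new system.&lt;br /&gt;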
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 08:34, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
ps. I didn&#039;t find perfect resources, just these. If you guys think any opinion is not correct, please edit it or give suggestions :)&lt;br /&gt;
&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
 &lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey, this sounds really good. I&#039;d add an example where you say &#039;one method to implement backwards compatibility is to add applications&#039;.&lt;br /&gt;
And I did a little research and I found another way to create backwards compatibility using shims: http://en.wikipedia.org/wiki/Shim_%28computing%29&lt;br /&gt;
It pretty much intercepts the calls and changes them so that the old program can run on a new system.&lt;br /&gt;
Good Work, [[User:Nshires|Nshires]] 16:56, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Thanks for your suggestions. I have added some information to the paragraph. :)&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 00:24, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
~Andrew Bown (October 13 2:08) I&#039;ll write this paragraph.&lt;br /&gt;
I don&#039;t have time to write this before work (12-5), but I can put out the information I have already gathered from research, so if someone could help me complete this that would be awesome, since I have to finish up my 3004 document tonight as well.&lt;br /&gt;
~[[User:Abown|Andrew Bown]] (October 14th, 11:12 am)&lt;br /&gt;
Mainframes are able to achieve high input/output rates with their specialized Message Passing Interfaces (MPIs), which allow for fast intercommunication by sharing memory between the different cores. https://www.mpitech.com/mpitech.nsf/pages/mainframe-&amp;amp;-AS400-printing_en.html&lt;br /&gt;
&lt;br /&gt;
The latest versions of Windows clusters support a Microsoft created MPI surprisingly called Microsoft MPI[http://msdn.microsoft.com/en-us/library/bb524831(VS.85).aspx]. &lt;br /&gt;
&lt;br /&gt;
Microsoft&#039;s MPI is based on MPICH2; explanation here: http://www.springerlink.com/content/hc4nyva6dvg6vdpp/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Looking at the details, Microsoft MPI only runs if a process is submitted through the Microsoft Job Scheduler, so we may want to combine input/output and throughput.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey guys. According to the resources above, the methods for Windows to provide high input/output and massive throughput are almost the same, but I have no idea how to combine the two sections. Do we need to write something about input/output, or just consider it under massive throughput? [[User:Zhangqi|Zhangqi]] 22:38, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Massive Throughput ==&lt;br /&gt;
[[User:Achamney|Achamney]] 01:09, 14 October 2010 (UTC) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[User:Achamney|Achamney]] 21:18, 14 October 2010 (UTC) Done for now; I will come back to this after I get back (after 10:00 pm tonight-ish) and fix up the flow and such&lt;br /&gt;
&lt;br /&gt;
Throughput, unlike input and output, is a measurement of the number of calculations per second that a machine can perform, usually measured in FLOPS (floating-point operations per second). It is impossible for a single Windows machine to compete with a mainframe&#039;s throughput: not only do mainframe processors run at extremely high frequencies, they also have a considerable number of cores. This all changes, however, when computer clustering is introduced. In recent years, IBM has constructed a clustered system called Roadrunner that ranks third on the TOP500 supercomputer list as of June 2010.[http://hubpages.com/hub/Most-Powerful-Computers-In-The-World] It has a total of 60 connected units, over a thousand processors, and the capability of computing at a rate of 1.7 petaflops. &lt;br /&gt;
&lt;br /&gt;
The question is, with such complex hardware, how is it possible for any software to use this clustered system? Luckily, Microsoft has introduced an OS called Windows Compute Cluster Server, which provides the software needed for the head computer to utilize the computing power of its cluster nodes. Windows mainly uses MS-MPI (Microsoft Message Passing Interface) to send messages to its other nodes via Ethernet.[http://webcache.googleusercontent.com/search?q=cache:EPlDExBxmDYJ:download.microsoft.com/download/9/e/d/9edcdeab-f1fb-4670-8914-c08c5c6f22a5/HPC_Overview.doc+Windows+Compute+Cluster+Server&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=ca&amp;amp;client=firefox-a] Developers can use this interface because it automatically connects a given process to each node. Windows can then use its scheduler to determine which node receives each job; it keeps track of each node and shuts the job down once the output is received. &lt;br /&gt;
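The split-compute-combine pattern described above can be sketched in Python, with a thread pool standing in for MS-MPI and the cluster scheduler (a deliberate simplification: the real system passes messages between separate machines over Ethernet):&lt;br /&gt;

```python
# Sketch of scheduler-style job distribution: split a problem into pieces,
# hand each piece to a worker ("compute node"), then combine the partial
# results. A thread pool stands in for MS-MPI plus the cluster scheduler.

from concurrent.futures import ThreadPoolExecutor

def compute_piece(chunk):
    # each node computes its share of the problem
    return sum(x * x for x in chunk)

def run_job(data, nodes=4, chunk_size=25):
    # the problem must be divisible into pieces for clustering to pay off
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        partials = pool.map(compute_piece, chunks)  # scatter to the nodes
    return sum(partials)                            # gather and combine

total = run_job(list(range(100)))  # sum of squares 0..99, computed in pieces
```

The answer is the same as on a single machine; the win is that each piece can be computed on a different node at the same time, which is exactly the divisibility requirement noted for grid computing.&lt;br /&gt;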
&lt;br /&gt;
Today, clustering computers with the intent of optimizing throughput is accomplished using grid computing. Grid computing shares the same basic ideas as cluster computing; however, grids have the sole job of computing massive-scale problems.[http://searchdatacenter.techtarget.com/definition/grid-computing] Each subsection of a problem is passed out to a compute node in the grid to be calculated. The one clear limitation of this computational model is that the problem must be divisible into several pieces for the compute nodes to work on. This style of high-throughput computing can be used for problems such as high-energy physics or biology models.&lt;br /&gt;
&lt;br /&gt;
In general, however, the most popular solution for problems that require large throughput is to construct a cluster model. Most businesses require the reliability of clusters even though it sacrifices some performance; there is no competition for the high availability of a cluster server as compared to the grid model.[http://www.dba-oracle.com/real_application_clusters_rac_grid/grid_vs_clusters.htm] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=/com.ibm.ztpf-ztpfdf.doc_put.cur/gtpc3/c3thru.html]&lt;br /&gt;
[http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci213140,00.html]&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=4536</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=4536"/>
		<updated>2010-10-15T05:14:58Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
Throughout the years, Windows has undergone some rather radical changes, modernizing existing technologies and innovating on existing features; the result is functionality comparable to that of a mainframe computer. However, although these changes have been extensive, Windows has not been particularly dominant when it comes to replacing modern mainframe systems.&lt;br /&gt;
&lt;br /&gt;
== Mainframes ==&lt;br /&gt;
&lt;br /&gt;
Mainframe systems have long had a reputation for being used by large organizations to process thousands of small transactions. Whether these systems are used by a bank or by a police department, they possess several key features which make them exceedingly powerful compared to other systems. One of these features is extensive and prolonged stability, a result of tremendous redundancy and exception handling which prevent the entire system from shutting down even if some components become inactive due to unforeseen circumstances. Because of this, mainframe computers are incredibly reliable when it comes to data storage and interoperability.&lt;br /&gt;
&lt;br /&gt;
With this in mind, another notable feature that a mainframe possesses is the ability to hot-swap components without taking the system offline. Consequently, components that are malfunctioning or require an upgrade can safely be replaced without endangering system stability. As a result, mainframes gain a long service life, since components can be upgraded individually without replacing the entire system. Additionally, software written for these machines is extremely backwards compatible, because mainframe computers are fully virtualized. This is what allows a mainframe to run software written decades ago alongside modern software and hardware. It is also part of the reason mainframe computers are so secure: they can combine newer and older software and hardware, taking years of innovation and combining it into one secure platform.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, all these features would mean nothing if the mainframe could not keep up with the data being sent and received. Computers of this calibre must therefore have good I/O resource management and protect against bottlenecks. They do this by supporting powerful schedulers which ensure the fastest possible throughput for transaction processing [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html]. Without this, one could continuously upgrade components yet suffer diminishing returns.&lt;br /&gt;
&lt;br /&gt;
With so many features, how is Windows expected to keep up? The reality is that Windows already supports most of these features, and when coupled with add-on software such as VMware virtualization and EMC storage solutions, its capabilities are even more astounding.&lt;br /&gt;
&lt;br /&gt;
===Redundancy ===&lt;br /&gt;
A major feature of mainframes is their redundancy. Mainframes provide redundancy through the provider&#039;s off-site redundancy feature, which lets the customer move all of their processes and applications onto the provider&#039;s mainframe while the provider makes repairs on the customer&#039;s system. Another way mainframes create redundancy is their use of multiple processors that share the same memory: if one processor dies, the remaining processors still hold all of the cached data. There are multiple ways Windows systems can recreate this redundancy. The first is to create a Windows cluster server, which mirrors the mainframe&#039;s multi-processor design. Another is to use virtual machines: VMware supports Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines set up two networks: a private network for communication between the virtual machines, and a public network for I/O services. The virtual machines also share storage so that if one fails, the others still have all of the data.&lt;br /&gt;
=== no downtime upgrades ===&lt;br /&gt;
&lt;br /&gt;
A useful feature that mainframes have is the ability to hot-swap. Hot-swapping is the ability to swap out components of a computer/mainframe for new components with no downtime (i.e. the system continues to run through the process). Hot-swapping occurs when there is faulty hardware in one of the processors inside the mainframe; technicians are able to swap out the component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors, memory, and storage inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as needed by adding and removing components such as hard drives and processors. &lt;br /&gt;
&lt;br /&gt;
Using VMware on a Windows system allows users to hot-add RAM and hot-plug a new virtual CPU into the virtualized system. With these hot-adding and hot-plugging techniques, the virtual computer can grow to accept loads of varying size. Depending on the CPU and guest OS, however, the virtual machine may be unable to hot-add/hot-plug and may have to restart instead. For example, a virtual machine running Windows Server 2008 ENT x64 allows you to hot-add memory, but you must restart it to remove memory or to add/remove a CPU. &lt;br /&gt;
&lt;br /&gt;
In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular. Windows Server 2008 R2 Datacenter, released in 2009, uses dynamic hardware partitioning, meaning the hardware can be divided into separate partitions of processors and other components, allowing hot-swapping/hot-adding of these partitions where needed. &lt;br /&gt;
&lt;br /&gt;
=== Backwards-Compatibility ===&lt;br /&gt;
Backwards compatibility means that a newer software version can recognize what the old version wrote and how it worked; it is a relationship between the two versions. If a new component provides all the functionality of the old one, we say the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can run on the latest mainframes (the zSeries, the System/390 family, the System z9, etc.). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backwards compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe also requires customers to halt the computer and load the emulation program.&lt;br /&gt;
In Windows, one method of implementing backwards compatibility is to add applications, such as the Microsoft Windows Application Compatibility Toolkit, which can make the platform compatible with most software from earlier versions. A second method relies on the various subsystems that Windows operating systems usually provide: software originally designed for older versions or other OSes can run inside a subsystem. Windows NT, for example, has MS-DOS and Win16 subsystems. Windows 7&#039;s backwards compatibility, however, is not very good: if the kernel is different, the OSes cannot be compatible with each other. That does not mean older programs will not run; virtualization is used to run them. The third method is to use shims to create backwards compatibility. Shims are small libraries that intercept API calls, change the parameters passed, and handle or redirect the operations. In Windows, shims can be used to simulate the behaviour of an older OS version for legacy software.&lt;br /&gt;
&lt;br /&gt;
=== I/O and Resource Management ===&lt;br /&gt;
Throughput, unlike input and output, is a measurement of the number of calculations per second that a machine can perform, usually measured in FLOPS (floating-point operations per second). It is impossible for a single Windows machine to compete with a mainframe&#039;s throughput: not only do mainframe processors run at extremely high frequencies, they also have a considerable number of cores. This all changes, however, when computer clustering is introduced. In recent years, IBM has constructed a clustered system called Roadrunner that ranks third on the TOP500 supercomputer list as of June 2010.[http://hubpages.com/hub/Most-Powerful-Computers-In-The-World] It has a total of 60 connected units, over a thousand processors, and the capability of computing at a rate of 1.7 petaflops. &lt;br /&gt;
&lt;br /&gt;
The question is, with such complex hardware, how is it possible for any software to use this clustered system? Luckily, Microsoft has introduced an OS called Windows Compute Cluster Server, which provides the software needed for the head computer to utilize the computing power of its cluster nodes. Windows mainly uses MS-MPI (Microsoft Message Passing Interface) to send messages to its other nodes via Ethernet.[http://webcache.googleusercontent.com/search?q=cache:EPlDExBxmDYJ:download.microsoft.com/download/9/e/d/9edcdeab-f1fb-4670-8914-c08c5c6f22a5/HPC_Overview.doc+Windows+Compute+Cluster+Server&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=ca&amp;amp;client=firefox-a] Developers can use this interface because it automatically connects a given process to each node. Windows can then use its scheduler to determine which node receives each job; it keeps track of each node and shuts the job down once the output is received. &lt;br /&gt;
&lt;br /&gt;
Today, clustering computers with the intent of optimizing throughput is accomplished using grid computing. Grid computing shares the same basic ideas as cluster computing; however, grids have the sole job of computing massive-scale problems.[http://searchdatacenter.techtarget.com/definition/grid-computing] Each subsection of a problem is passed out to a compute node in the grid to be calculated. The one clear limitation of this computational model is that the problem must be divisible into several pieces for the compute nodes to work on. This style of high-throughput computing can be used for problems such as high-energy physics or biology models.&lt;br /&gt;
&lt;br /&gt;
In general, however, the most popular solution for problems that require large throughput is to construct a cluster model. Most businesses require the reliability of clusters even though it sacrifices some performance; there is no competition for the high availability of a cluster server as compared to the grid model.[http://www.dba-oracle.com/real_application_clusters_rac_grid/grid_vs_clusters.htm] &lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
Windows has gone from an operating system specialized for personal computers to a platform for building a mainframe replacement, quickly stripping away the advantages that a traditional mainframe holds. But the largest threat Windows poses to mainframes is cost: it gives anyone the ability to create a mainframe-equivalent system from stock parts at a comparatively cheap price. Does this mean the mainframe&#039;s time is running out? Only time will tell.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
=== no downtime upgrades ===&lt;br /&gt;
Davis, David. &amp;quot;VMware vSphere hot-add RAM and hot-plug CPU.&amp;quot; TechTarget. N.p., 09.15.2009. Web. 14 Oct 2010. &amp;lt;http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631_mem1,00.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Windows Server 2008 R2 Datacenter.&amp;quot; Windows Server 2008 R2. N.p., n.d. Web. 14 Oct 2010. &amp;lt;http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Go-HotSwap: CompactPCI Hot Swap.&amp;quot; Jungo. Jungo Ltd, n.d. Web. 14 Oct 2010. &amp;lt;http://www.jungo.com/st/hotswap.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Backwards-Compatibility ===&lt;br /&gt;
&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=4533</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=4533"/>
		<updated>2010-10-15T05:06:03Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* no downtime upgrades */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
Over the years, Windows has undergone some radical changes, modernizing existing technologies and innovating on existing features; the result is functionality comparable to that of a mainframe computer. However, although these changes have been extensive, Windows has not been particularly successful at replacing modern mainframe systems.&lt;br /&gt;
&lt;br /&gt;
== Mainframes ==&lt;br /&gt;
&lt;br /&gt;
Mainframe systems have long had a reputation for being used by large organizations to process thousands of small transactions. Whether these systems are used by a bank or a police department, they possess several key features which make them considerably more powerful than other systems. One of these features is long-term stability. This is a result of tremendous redundancy and exception handling, which prevent the entire system from shutting down even if some components fail due to unforeseen circumstances. Because of this, mainframe computers are incredibly reliable when it comes to data storage and interoperability.&lt;br /&gt;
&lt;br /&gt;
With this in mind, another notable feature that a mainframe possesses is the ability to hot swap components without taking the system offline. Consequently, components that are malfunctioning or require an upgrade can safely be replaced without endangering system stability. As a result, mainframes gain a long service life, as components can be upgraded individually without replacing the entire system. Additionally, software written for these machines is extremely backwards compatible, because mainframe computers are fully virtualized. This is what allows a mainframe to run software written decades ago alongside modern software and hardware. It is also part of the reason mainframe computers are so secure: they can combine newer and older software and hardware, taking years of innovation and folding it into one secure platform.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, all these features would mean nothing if the mainframe could not keep up with the data being sent and received. As a result, computers of this calibre must have good I/O and resource management and must protect against bottlenecks. They do this with powerful schedulers that ensure the fastest possible throughput for transaction processing [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html]. Without this, you could continuously upgrade components yet suffer diminishing returns.&lt;br /&gt;
&lt;br /&gt;
With so many features, how is Windows expected to keep up? The reality is that Windows already supports most of these features, and when coupled with add-on software such as VMware&#039;s virtualization and EMC&#039;s storage solutions, its capabilities are even more impressive.&lt;br /&gt;
&lt;br /&gt;
===Redundancy ===&lt;br /&gt;
A key feature of mainframes is their redundancy. Mainframes provide redundancy through the provider&#039;s off-site redundancy feature, which lets the customer move all of their processes and applications onto the provider&#039;s mainframe while the provider repairs the customer&#039;s system. Mainframes also create redundancy through multiprocessors that share the same memory: if one processor dies, the remaining processors still hold all of the cached data. Windows systems can replicate this redundancy in several ways. The first is to create a Windows cluster server, which uses the same approach as the mainframe&#039;s multiprocessor design. Another way is to use virtual machines. VMware supports Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines set up two different networks: a private network for communication between the virtual machines, and a public network for I/O services. The virtual machines also share storage, so that if one fails, the other still has all of the data.&lt;br /&gt;
=== no downtime upgrades ===&lt;br /&gt;
&lt;br /&gt;
A useful feature of mainframes is the ability to hot swap. Hot-swapping means exchanging components of a computer for new components with no downtime (i.e. the system continues to run throughout the process). When there is faulty hardware in one of the processors inside the mainframe, technicians are able to swap out the component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors, memory, and storage inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as needed by adding and removing components such as hard drives and processors. &lt;br /&gt;
&lt;br /&gt;
Using VMware on a Windows system allows users to hot-add RAM and hot-plug virtual CPUs into the virtualized system. Using these hot-add and hot-plug techniques, the virtual machine can grow to accept loads of varying size. Under some circumstances, depending on the CPU and guest OS, the virtual machine cannot hot-add/hot-plug and must restart instead. For example, a virtual machine running Windows Server 2008 Enterprise x64 allows you to hot-add memory, but you must restart it to remove memory or to add/remove CPUs. &lt;br /&gt;
&lt;br /&gt;
In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular. Windows Server 2008 R2 Datacenter, released in 2009, uses dynamic hardware partitioning, meaning that its hardware can be partitioned into separate units of processors and other components, allowing these partitions to be hot-swapped or hot-added where needed. &lt;br /&gt;
&lt;br /&gt;
=== Backwards-Compatibility ===&lt;br /&gt;
Backwards compatibility means that a newer software version can recognize what the old version wrote and how it worked; it is a relationship between the two versions. If the new component provides all the functionality of the old one, we say that the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can run on the latest mainframes (such as the System/390 family, zSeries, and System z9). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backward compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe also requires customers to halt the computer and load the emulation program.&lt;br /&gt;
In Windows, one method of achieving backwards compatibility is through added applications such as the Microsoft Windows Application Compatibility Toolkit, which can make the platform compatible with most software from earlier versions. A second method relies on the various subsystems Windows operating systems usually include: software originally designed for older versions or for other OSs can run in these subsystems. Windows NT, for example, has MS-DOS and Win16 subsystems. Windows 7&#039;s backwards compatibility, however, is not very good; when the kernels differ, the OSs cannot be compatible with each other. This does not mean that older programs won&#039;t run: virtualization can be used to run them. A third method is to use shims. Shims are small libraries that intercept API calls, change the parameters passed, and handle or redirect the operations. In Windows, shims can simulate the behaviour of an older OS version for legacy software.&lt;br /&gt;
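The shim idea above can be sketched in a few lines. This is a hypothetical illustration only: a small wrapper intercepts a call, rewrites the parameters a legacy program passes, and forwards them to the real implementation. All names here are illustrative, not real Windows shim APIs.

```python
# Hypothetical sketch of the shim technique: intercept a call,
# fix up its arguments, and forward to the real implementation.
def make_shim(real_api):
    def shim(milliseconds):
        # The legacy caller passes milliseconds; the modern API
        # expects seconds, so the shim rescales before forwarding.
        return real_api(milliseconds / 1000.0)
    return shim

def modern_sleep(seconds):
    return "slept %.3f s" % seconds

# Installing the shim: the legacy name now points at the wrapper,
# so old calling code keeps working unchanged.
legacy_sleep = make_shim(modern_sleep)
```

The legacy program still calls `legacy_sleep(500)` as it always did; the shim quietly translates that into `modern_sleep(0.5)`.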
&lt;br /&gt;
=== I/O and Resource Management ===&lt;br /&gt;
Throughput, unlike input and output, is a measure of the number of calculations per second that a machine can perform, usually expressed in FLOPS (floating-point operations per second). It is impossible for a single Windows machine to compete with a mainframe&#039;s throughput: not only do mainframe processors run at extremely high frequencies, they also have a considerable number of cores. This all changes, however, when computer clustering is introduced. In recent years, IBM has constructed a clustered system called Roadrunner, which ranks third on the TOP500 supercomputer list as of June 2010.[http://hubpages.com/hub/Most-Powerful-Computers-In-The-World] It has a total of 60 connected units, over a thousand processors, and the ability to compute at a rate of 1.7 petaflops. &lt;br /&gt;
&lt;br /&gt;
The question is, with such complex hardware, how can any software make use of this clustered system? Microsoft has introduced an OS called Windows Compute Cluster Server, which provides the software needed for the head computer to utilize the computing power of its cluster nodes. Windows mainly uses MS-MPI (Microsoft Message Passing Interface) to send messages via Ethernet to the other nodes.[http://webcache.googleusercontent.com/search?q=cache:EPlDExBxmDYJ:download.microsoft.com/download/9/e/d/9edcdeab-f1fb-4670-8914-c08c5c6f22a5/HPC_Overview.doc+Windows+Compute+Cluster+Server&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=ca&amp;amp;client=firefox-a] Developers can rely on this interface because it automatically connects a given process to each node. Windows can then use its scheduler to decide which node receives each job; it keeps track of each node and shuts the job down once the output is received. &lt;br /&gt;
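The scheduling pattern described above (a head node hands jobs to worker nodes, tracks them, and collects each job's output) can be sketched in miniature. This is a hypothetical stand-in using Python's multiprocessing, not the MS-MPI API; all names are illustrative.

```python
# Minimal sketch of scatter/gather job scheduling: a head node
# distributes jobs to worker "nodes" and gathers the results.
from multiprocessing import Pool

def run_job(n):
    # Each node computes its piece of the larger problem.
    return sum(i * i for i in range(n))

def schedule(jobs, node_count=2):
    # The scheduler assigns each job to an available node and
    # retires the job once its output is received.
    with Pool(node_count) as pool:
        return pool.map(run_job, jobs)

if __name__ == "__main__":
    print(schedule([10, 100, 1000]))
```

The key constraint from the text holds here too: the overall problem must be divisible into independent jobs before it can be scattered across nodes.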
&lt;br /&gt;
Today, clustering computers together with the intent of optimizing throughput is accomplished using grid computing. Grid computing shares the same basic ideals as cluster computing; however, grids are dedicated solely to computing massive-scale problems.[http://searchdatacenter.techtarget.com/definition/grid-computing] Each subsection of a problem is distributed to a compute node in the grid to be calculated. The one clear limitation of this computational model is that the problem must be divisible into pieces for each compute node to work on. This style of high-throughput computing is useful for problems such as high-energy physics or biology models.&lt;br /&gt;
&lt;br /&gt;
In general, however, the most popular way to solve problems that require large throughput is to construct a cluster. Most businesses require the reliability of clusters even though it sacrifices performance; the high availability of a cluster server is unmatched by the grid model.[http://www.dba-oracle.com/real_application_clusters_rac_grid/grid_vs_clusters.htm] &lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
Again I don&#039;t think a conclusion is necessary unless its like one sentence. --[[User:Dkrutsko|Dkrutsko]] 23:43, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
=== no downtime upgrades ===&lt;br /&gt;
Davis, David. &amp;quot;VMware vSphere hot-add RAM and hot-plug CPU.&amp;quot; TechTarget. N.p., 09.15.2009. Web. 14 Oct 2010. &amp;lt;http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631_mem1,00.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Windows Server 2008 R2 Datacenter.&amp;quot; Windows Server 2008 R2. N.p., n.d. Web. 14 Oct 2010. &amp;lt;http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Go-HotSwap: CompactPCI Hot Swap.&amp;quot; Jungo. Jungo Ltd, n.d. Web. 14 Oct 2010. &amp;lt;http://www.jungo.com/st/hotswap.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Backwards-Compatibility ===&lt;br /&gt;
&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=4531</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=4531"/>
		<updated>2010-10-15T05:05:40Z</updated>

		<summary type="html">&lt;p&gt;Abown: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
Over the years, Windows has undergone some radical changes, modernizing existing technologies and innovating on existing features; the result is functionality comparable to that of a mainframe computer. However, although these changes have been extensive, Windows has not been particularly successful at replacing modern mainframe systems.&lt;br /&gt;
&lt;br /&gt;
== Mainframes ==&lt;br /&gt;
&lt;br /&gt;
Mainframe systems have long had a reputation for being used by large organizations to process thousands of small transactions. Whether these systems are used by a bank or a police department, they possess several key features which make them considerably more powerful than other systems. One of these features is long-term stability. This is a result of tremendous redundancy and exception handling, which prevent the entire system from shutting down even if some components fail due to unforeseen circumstances. Because of this, mainframe computers are incredibly reliable when it comes to data storage and interoperability.&lt;br /&gt;
&lt;br /&gt;
With this in mind, another notable feature that a mainframe possesses is the ability to hot swap components without taking the system offline. Consequently, components that are malfunctioning or require an upgrade can safely be replaced without endangering system stability. As a result, mainframes gain a long service life, as components can be upgraded individually without replacing the entire system. Additionally, software written for these machines is extremely backwards compatible, because mainframe computers are fully virtualized. This is what allows a mainframe to run software written decades ago alongside modern software and hardware. It is also part of the reason mainframe computers are so secure: they can combine newer and older software and hardware, taking years of innovation and folding it into one secure platform.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, all these features would mean nothing if the mainframe could not keep up with the data being sent and received. As a result, computers of this calibre must have good I/O and resource management and must protect against bottlenecks. They do this with powerful schedulers that ensure the fastest possible throughput for transaction processing [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html]. Without this, you could continuously upgrade components yet suffer diminishing returns.&lt;br /&gt;
&lt;br /&gt;
With so many features, how is Windows expected to keep up? The reality is that Windows already supports most of these features, and when coupled with add-on software such as VMware&#039;s virtualization and EMC&#039;s storage solutions, its capabilities are even more impressive.&lt;br /&gt;
&lt;br /&gt;
===Redundancy ===&lt;br /&gt;
A key feature of mainframes is their redundancy. Mainframes provide redundancy through the provider&#039;s off-site redundancy feature, which lets the customer move all of their processes and applications onto the provider&#039;s mainframe while the provider repairs the customer&#039;s system. Mainframes also create redundancy through multiprocessors that share the same memory: if one processor dies, the remaining processors still hold all of the cached data. Windows systems can replicate this redundancy in several ways. The first is to create a Windows cluster server, which uses the same approach as the mainframe&#039;s multiprocessor design. Another way is to use virtual machines. VMware supports Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines set up two different networks: a private network for communication between the virtual machines, and a public network for I/O services. The virtual machines also share storage, so that if one fails, the other still has all of the data.&lt;br /&gt;
=== no downtime upgrades ===&lt;br /&gt;
&lt;br /&gt;
A useful feature of mainframes is the ability to hot swap. Hot-swapping means exchanging components of a computer for new components with no downtime (i.e. the system continues to run throughout the process). When there is faulty hardware in one of the processors inside the mainframe, technicians are able to swap out the component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors, memory, and storage inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as needed by adding and removing components such as hard drives and processors. &lt;br /&gt;
&lt;br /&gt;
Using VMware on a Windows system allows users to hot-add RAM and hot-plug virtual CPUs into the virtualized system. Using these hot-add and hot-plug techniques, the virtual machine can grow to accept loads of varying size. Under some circumstances, depending on the CPU and guest OS, the virtual machine cannot hot-add/hot-plug and must restart instead. For example, a virtual machine running Windows Server 2008 Enterprise x64 allows you to hot-add memory, but you must restart it to remove memory or to add/remove CPUs. &lt;br /&gt;
&lt;br /&gt;
In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular. Windows Server 2008 R2 Datacenter, released in 2009, uses dynamic hardware partitioning, meaning that its hardware can be partitioned into separate units of processors and other components, allowing these partitions to be hot-swapped or hot-added where needed. &lt;br /&gt;
&lt;br /&gt;
=== Backwards-Compatibility ===&lt;br /&gt;
Backwards compatibility means that a newer software version can recognize what the old version wrote and how it worked; it is a relationship between the two versions. If the new component provides all the functionality of the old one, we say that the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can run on the latest mainframes (such as the System/390 family, zSeries, and System z9). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backward compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe also requires customers to halt the computer and load the emulation program.&lt;br /&gt;
In Windows, one method of achieving backwards compatibility is through added applications such as the Microsoft Windows Application Compatibility Toolkit, which can make the platform compatible with most software from earlier versions. A second method relies on the various subsystems Windows operating systems usually include: software originally designed for older versions or for other OSs can run in these subsystems. Windows NT, for example, has MS-DOS and Win16 subsystems. Windows 7&#039;s backwards compatibility, however, is not very good; when the kernels differ, the OSs cannot be compatible with each other. This does not mean that older programs won&#039;t run: virtualization can be used to run them. A third method is to use shims. Shims are small libraries that intercept API calls, change the parameters passed, and handle or redirect the operations. In Windows, shims can simulate the behaviour of an older OS version for legacy software.&lt;br /&gt;
&lt;br /&gt;
=== I/O and Resource Management ===&lt;br /&gt;
Throughput, unlike input and output, is a measure of the number of calculations per second that a machine can perform, usually expressed in FLOPS (floating-point operations per second). It is impossible for a single Windows machine to compete with a mainframe&#039;s throughput: not only do mainframe processors run at extremely high frequencies, they also have a considerable number of cores. This all changes, however, when computer clustering is introduced. In recent years, IBM has constructed a clustered system called Roadrunner, which ranks third on the TOP500 supercomputer list as of June 2010.[http://hubpages.com/hub/Most-Powerful-Computers-In-The-World] It has a total of 60 connected units, over a thousand processors, and the ability to compute at a rate of 1.7 petaflops. &lt;br /&gt;
&lt;br /&gt;
The question is, with such complex hardware, how can any software make use of this clustered system? Microsoft has introduced an OS called Windows Compute Cluster Server, which provides the software needed for the head computer to utilize the computing power of its cluster nodes. Windows mainly uses MS-MPI (Microsoft Message Passing Interface) to send messages via Ethernet to the other nodes.[http://webcache.googleusercontent.com/search?q=cache:EPlDExBxmDYJ:download.microsoft.com/download/9/e/d/9edcdeab-f1fb-4670-8914-c08c5c6f22a5/HPC_Overview.doc+Windows+Compute+Cluster+Server&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=ca&amp;amp;client=firefox-a] Developers can rely on this interface because it automatically connects a given process to each node. Windows can then use its scheduler to decide which node receives each job; it keeps track of each node and shuts the job down once the output is received. &lt;br /&gt;
&lt;br /&gt;
Today, clustering computers together with the intent of optimizing throughput is accomplished using grid computing. Grid computing shares the same basic ideals as cluster computing; however, grids are dedicated solely to computing massive-scale problems.[http://searchdatacenter.techtarget.com/definition/grid-computing] Each subsection of a problem is distributed to a compute node in the grid to be calculated. The one clear limitation of this computational model is that the problem must be divisible into pieces for each compute node to work on. This style of high-throughput computing is useful for problems such as high-energy physics or biology models.&lt;br /&gt;
&lt;br /&gt;
In general, however, the most popular way to solve problems that require large throughput is to construct a cluster. Most businesses require the reliability of clusters even though it sacrifices performance; the high availability of a cluster server is unmatched by the grid model.[http://www.dba-oracle.com/real_application_clusters_rac_grid/grid_vs_clusters.htm] &lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
Again I don&#039;t think a conclusion is necessary unless its like one sentence. --[[User:Dkrutsko|Dkrutsko]] 23:43, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
=== no downtime upgrades ===&lt;br /&gt;
Davis, David. &amp;quot;VMware vSphere hot-add RAM and hot-plug CPU.&amp;quot; TechTarget. N.p., 09.15.2009. Web. 14 Oct 2010. &amp;lt;http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631_mem1,00.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Windows Server 2008 R2 Datacenter.&amp;quot; Windows Server 2008 R2. N.p., n.d. Web. 14 Oct 2010. &amp;lt;http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Go-HotSwap: CompactPCI Hot Swap.&amp;quot; Jungo. Jungo Ltd, n.d. Web. 14 Oct 2010. &amp;lt;http://www.jungo.com/st/hotswap.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Backwards-Compatibility ===&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=4529</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=4529"/>
		<updated>2010-10-15T05:02:25Z</updated>

		<summary type="html">&lt;p&gt;Abown: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
Over the years, Windows has undergone some radical changes, modernizing existing technologies and innovating on existing features; the result is functionality comparable to that of a mainframe computer. However, although these changes have been extensive, Windows has not been particularly successful at replacing modern mainframe systems.&lt;br /&gt;
&lt;br /&gt;
== Mainframes ==&lt;br /&gt;
&lt;br /&gt;
Mainframe systems have long had a reputation for being used by large organizations to process thousands of small transactions. Whether these systems are used by a bank or a police department, they possess several key features which make them considerably more powerful than other systems. One of these features is long-term stability. This is a result of tremendous redundancy and exception handling, which prevent the entire system from shutting down even if some components fail due to unforeseen circumstances. Because of this, mainframe computers are incredibly reliable when it comes to data storage and interoperability.&lt;br /&gt;
&lt;br /&gt;
With this in mind, another notable feature that a mainframe possesses is the ability to hot swap components without taking the system offline. Consequently, components that are malfunctioning or require an upgrade can safely be replaced without endangering system stability. As a result, mainframes gain a long service life, as components can be upgraded individually without replacing the entire system. Additionally, software written for these machines is extremely backwards compatible, because mainframe computers are fully virtualized. This is what allows a mainframe to run software written decades ago alongside modern software and hardware. It is also part of the reason mainframe computers are so secure: they can combine newer and older software and hardware, taking years of innovation and folding it into one secure platform.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, all these features would mean nothing if the mainframe could not keep up with the data being sent and received. As a result, computers of this calibre must have good I/O and resource management and must protect against bottlenecks. They do this with powerful schedulers that ensure the fastest possible throughput for transaction processing [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html]. Without this, you could continuously upgrade components yet suffer diminishing returns.&lt;br /&gt;
&lt;br /&gt;
With so many features, how is Windows expected to keep up? The reality is that Windows already supports most of these features, and when coupled with add-on software such as VMware&#039;s virtualization and EMC&#039;s storage solutions, its capabilities are even more impressive.&lt;br /&gt;
&lt;br /&gt;
===Redundancy ===&lt;br /&gt;
A key feature of mainframes is their redundancy. Mainframes provide redundancy through the provider&#039;s off-site redundancy feature, which lets the customer move all of their processes and applications onto the provider&#039;s mainframe while the provider repairs the customer&#039;s system. Mainframes also create redundancy through multiprocessors that share the same memory: if one processor dies, the remaining processors still hold all of the cached data. Windows systems can replicate this redundancy in several ways. The first is to create a Windows cluster server, which uses the same approach as the mainframe&#039;s multiprocessor design. Another way is to use virtual machines. VMware supports Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines set up two different networks: a private network for communication between the virtual machines, and a public network for I/O services. The virtual machines also share storage, so that if one fails, the other still has all of the data.&lt;br /&gt;
=== No-Downtime Upgrades ===&lt;br /&gt;
&lt;br /&gt;
A useful feature of mainframes is the ability to hot-swap: components of the machine can be exchanged for new ones with no downtime (i.e. the system continues to run throughout). Hot-swapping is used when there is faulty hardware in one of the mainframe&#039;s processors; technicians can replace the component without the mainframe being turned off or crashing. It is also used when upgrading processors, memory, and storage inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as needed by adding and removing components such as hard drives and processors. &lt;br /&gt;
&lt;br /&gt;
Using VMware on a Windows system allows users to hot-add RAM and hot-plug a new virtual CPU into the virtualized system. With these hot-adding and hot-plugging techniques, the virtual machine can grow to accept loads of varying size. Depending on the CPU and guest OS, however, the virtual machine may be unable to hot-add/hot-plug and must restart instead. For example, a Windows Server 2008 Enterprise x64 virtual machine allows you to hot-add memory, but you must restart it to remove memory or to add/remove a CPU. &lt;br /&gt;
&lt;br /&gt;
In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular. Windows Server 2008 R2 Datacenter, released in 2009, uses dynamic hardware partitioning: the machine&#039;s hardware can be divided into separate partitions, each with its own processors and other components, allowing hot-swapping/hot-adding of these partitions where needed. &lt;br /&gt;
&lt;br /&gt;
=== Backwards-Compatibility ===&lt;br /&gt;
Backwards-compatibility means that a newer software version can recognize what the old version wrote and how it worked; it is a relationship between the two versions. If a new component provides all the functionality of the old one, we say the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can run on the latest mainframes (such as the System/390 family, zSeries, and System z9). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backward compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe also requires customers to halt the computer and load the emulation program.&lt;br /&gt;
In Windows, one method of implementing backwards-compatibility is to add applications such as the Microsoft Windows Application Compatibility Toolkit, which can make the platform compatible with most software from earlier versions. A second method relies on the subsystems that Windows operating systems usually provide: software originally designed for older versions or other OSs can run inside a subsystem. Windows NT, for example, has MS-DOS and Win16 subsystems. Windows 7&#039;s backwards-compatibility, however, is not very good; if the kernel differs, the OSs cannot be compatible with each other. That does not mean older programs will not run: virtualization can be used to run them. A third method is to use shims. Shims are small libraries that intercept API calls, change the parameters passed, and handle or redirect the operations. In Windows, shims can simulate the behaviour of an older OS version for legacy software.&lt;br /&gt;
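The shim technique can be illustrated with a small sketch. Everything here is hypothetical and for illustration only: a wrapper intercepts calls to a newer API and fills in the argument that legacy callers never supplied.&lt;br /&gt;

```python
# Toy illustration of a compatibility shim. All names are hypothetical:
# modern_open_config is a "new API" that demands an encoding argument,
# and legacy_shim intercepts calls from old code that never passed one.

def modern_open_config(path, encoding):
    """Pretend new API: callers must now supply an encoding."""
    return "opened " + path + " with " + encoding

def legacy_shim(func):
    """Wrap a new-API function so old call sites still work unchanged."""
    def wrapper(path, encoding=None):
        if encoding is None:        # old callers never passed encoding
            encoding = "latin-1"    # default the legacy program expected
        return func(path, encoding)
    return wrapper

# Old code keeps calling the function exactly as it always did:
open_config = legacy_shim(modern_open_config)
print(open_config("app.ini"))   # the shim fills in the legacy default
```

Windows application-compatibility shims intercept Win32 API calls rather than Python functions, but the principle of rewriting parameters in flight is the same.&lt;br /&gt;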
=== Thin Client Terminals ===&lt;br /&gt;
&lt;br /&gt;
=== 64 Bit Support ===&lt;br /&gt;
&lt;br /&gt;
=== Better Multi-Core support ===&lt;br /&gt;
&lt;br /&gt;
=== Mass Storage Hot Swapping ===&lt;br /&gt;
&lt;br /&gt;
== Addon Software ==&lt;br /&gt;
&lt;br /&gt;
=== Virtualization ===&lt;br /&gt;
&lt;br /&gt;
- VMware&amp;lt;br&amp;gt;&lt;br /&gt;
- Virtual Box&lt;br /&gt;
&lt;br /&gt;
=== Backup Solutions ===&lt;br /&gt;
&lt;br /&gt;
- EMC storage solutions&amp;lt;br&amp;gt;&lt;br /&gt;
- Carbonite&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
Again I don&#039;t think a conclusion is necessary unless its like one sentence. --[[User:Dkrutsko|Dkrutsko]] 23:43, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
=== No-Downtime Upgrades ===&lt;br /&gt;
Davis, David. &amp;quot;VMware vSphere hot-add RAM and hot-plug CPU.&amp;quot; TechTarget. N.p., 09.15.2009. Web. 14 Oct 2010. &amp;lt;http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631_mem1,00.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Windows Server 2008 R2 Datacenter.&amp;quot; Windows Server 2008 R2. N.p., n.d. Web. 14 Oct 2010. &amp;lt;http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Go-HotSwap: CompactPCI Hot Swap.&amp;quot; Jungo. Jungo Ltd, n.d. Web. 14 Oct 2010. &amp;lt;http://www.jungo.com/st/hotswap.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Backwards-Compatibility ===&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=3800</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=3800"/>
		<updated>2010-10-14T15:14:02Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* High input/output */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email I&#039;ll add some of the stuff I find soon I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-First time sharing system CTSS (Compatible Time Sharing System) in the 1950s. Created at MIT&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact info (qzhang13@connect.carleton.ca)&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.mspx&lt;br /&gt;
&lt;br /&gt;
-[[User:Zhangqi|Zhangqi]] 15:02, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, Here&#039;s my contact info, nshires@connect.carleton.ca, I&#039;ll have some sources posted by the weekend hopefully&lt;br /&gt;
&lt;br /&gt;
Hey guys i&#039;m not in your group but I found some useful information that could help you &lt;br /&gt;
http://en.wikipedia.org/wiki/Mainframe_computer i know we are not suppose to use wiki references but its a good place to start&lt;br /&gt;
&lt;br /&gt;
Okay, found a paper titled &amp;quot;Mainframe Scalability in the Windows Environment&amp;quot;:&lt;br /&gt;
http://new.cmg.org/proceedings/2003/3023.pdf (requires registration to access, but is free) ~ Andrew (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Folks, remember to do your discussions here.  Use four tildes to sign your entries, that adds time and date.  Email discussions won&#039;t count towards your participation grade...&lt;br /&gt;
[[User:Soma|Anil]] 15:43, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Okay going to break the essay into points paragraphs on the main page which people can choose one paragraph to write. Then after all paragraphs are written we will communally edit it to have a cohesive voice. It is the only way I can viably think of to properly distribute the work. ~Andrew (abown2@connect.carleton.ca) 11:00 am, 10 October 2010.&lt;br /&gt;
&lt;br /&gt;
Link to IBMs info on their mainframes --[[User:Lmundt|Lmundt]] 19:58, 7 October 2010 (UTC)&lt;br /&gt;
http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zmainframe/zconc_valueofmf.htm&lt;br /&gt;
&lt;br /&gt;
Just made the revelation that the Windows equivalent to a mainframe is referred to as &#039;&#039;&#039;clustering&#039;&#039;&#039;, which should help in finding information.&lt;br /&gt;
Here&#039;s the wiki article on the technology for an overview: http://en.wikipedia.org/wiki/Microsoft_Cluster_Server ~ Andrew (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
hey,I agree with Andrew&#039;s idea. We should break the essay into several sections and work it together.From my point of view, I think we should focus on how Windows provide the mainframe functionality and the VMware and EMC&#039;s storage should be our examples. As listed on the main page, there are many advantages and disadvantages of the mainframe.But where is Windows? I&#039;m confused... &lt;br /&gt;
In my opinion, the first paragraph can introduct the mainframe (such as the history,features,application,etc) and what mainframe-equivalent functionality Windows support. Then we can use some paragraphs to discuss the functionalities in details. And VMware and EMC&#039;s storage solution also can be involved in this part. At last we make a conclusion of the whloe essay. Do you think it&#039;s feasible? &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 02:12, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Ah but the question isn&#039;t the pros and cons of each. It is how to get mainframe functionality from a Windows Operating System. How I split up the essay has each paragraph focusing on one aspect of mainframes and how it can be duplicated in windows either with windows tools or 3rd party software. You don&#039;t need to go into the history or applications of mainframes since that is not required by the phrasing of the question.&lt;br /&gt;
&lt;br /&gt;
~ Andrew Bown, 11:28 AM, October 11th 2010&lt;br /&gt;
&lt;br /&gt;
Okay, I think I catch your meaning. So now we should do is to edit the content of each paragragh as soon as possible. Time is limited.&lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 19:57, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
If you guys are looking for an authoritative source on how Windows works, I *highly* recommend checking out &amp;quot;Windows Internals 4th Edition&amp;quot; or &amp;quot;Windows Internals 5th Edition&amp;quot; by Mark Russinovich and David Solomon.&lt;br /&gt;
&lt;br /&gt;
--[[User:3maisons|3maisons]] 18:59, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
OLD VERSION - Here for the time being while optimizing some sections --[[User:Dkrutsko|Dkrutsko]] 00:20, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragram just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Main Aspects of mainframes:&lt;br /&gt;
* redundancy which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* support massive throughput&lt;br /&gt;
* Systems run constantly so they can be hot upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &amp;lt;br&amp;gt;&lt;br /&gt;
Thanks Abown, just tweaked a couple of the sentences to improve flow [[User:Achamney|Achamney]] 01:13, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Also, i removed this statement &amp;quot;Unfortunately, computers are only able to process data as fast as they can receive it&amp;quot;. I couldn&#039;t find a good place to plug it in.&lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes useful for this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable and guards against data loss due to downtime. Mainframes can be upgraded without taking the system down for repairs, which further increases reliability. After upgrading a mainframe, however, the software does not change: they offer backwards compatibility through virtualization, so software never needs to be replaced. Mainframes support high input/output so that the machine is always being utilized, and to make sure they are utilized to the fullest, they support powerful schedulers which ensure the fastest possible throughput for processing transactions. [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how are Windows-based systems supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and software solutions, which can duplicate these capabilities in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output, or resource utilization.&lt;br /&gt;
&lt;br /&gt;
Using this paragraph and my solution on the assignment I was able to expand on this topic. It is in the main page at the moment, see if you like it, add anything you think I missed --[[User:Dkrutsko|Dkrutsko]] 05:17, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At this point in history there were no personal computers, and the only organizations that could afford a computer were massive businesses. The main functionality of these mainframes was to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t seem to actually be pertinent to the question at hand. Question does not have any indication of the need to provide a history. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
I have to agree this doesn&#039;t seem relevant to the question. --[[User:Dkrutsko|Dkrutsko]] 00:10, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
A major feature of mainframes is their capacity for redundancy. One form is the provider&#039;s off-site redundancy feature: the customer moves all of their processes and applications onto the provider&#039;s mainframe while the provider repairs the customer&#039;s system. Mainframes also create redundancy through multi-processors that share the same memory, so if one processor dies, the remaining processors still hold the cache. Windows systems can reproduce this redundancy in several ways. The first is a Windows cluster server, which mirrors the mainframe&#039;s multi-processor approach. Another is virtualization: VMware supports Microsoft Cluster Service, which lets users build a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines set up two networks: a private network for communication between the virtual machines and a public network for I/O services. The virtual machines also share storage, so that if one fails, the other still has all of the data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(this is what I&#039;ve gotten out of some researching so far, comments and any edits/suggestions if I&#039;m on the right track or not are greatly apreciated :) ) &lt;br /&gt;
*note: This is the second time I have written this, make sure to save whatever you edit in notepad or whatever first so that you don&#039;t lose everything*&lt;br /&gt;
&lt;br /&gt;
link to VMWare&#039;s cluster virtualization http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf&lt;br /&gt;
&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== hot swapping ==&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
Another useful feature of mainframes is the ability to hot-swap. Hot-swapping occurs when there is faulty hardware in one of the processors inside the mainframe and technicians swap out the component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as its operators see fit. Using VMware on a Windows system allows users to hot-add RAM and hot-plug a new virtual CPU into the virtualized system. With these hot-adding and hot-plugging techniques, the virtual machine can grow to accept loads of varying size. In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular.&lt;br /&gt;
&lt;br /&gt;
These are the concepts I&#039;ve been able to figure out so far about hot-swapping/hot-upgrading, feel free to add/edit and what-not!  &lt;br /&gt;
&lt;br /&gt;
Sources:&lt;br /&gt;
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631,00.html&lt;br /&gt;
http://www.jungo.com/st/hotswap_windows.html&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== backwards-compatibility ==&lt;br /&gt;
Backwards-compatibility means that a newer software version can recognize what the old version wrote and how it worked; it is a relationship between the two versions. If a new component provides all the functionality of the old one, we say the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can run on the latest mainframes (such as the System/390 family, zSeries, and System z9). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backward compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe also requires customers to halt the computer and load the emulation program.&lt;br /&gt;
&lt;br /&gt;
In Windows, one method of implementing backwards-compatibility is to add applications such as the Microsoft Windows Application Compatibility Toolkit, which can make the platform compatible with most software from earlier versions. A second method relies on the subsystems that Windows operating systems usually provide: software originally designed for older versions or other OSs can run inside a subsystem. Windows NT, for example, has MS-DOS and Win16 subsystems. Windows 7&#039;s backwards-compatibility, however, is not very good; if the kernel differs, the OSs cannot be compatible with each other. That does not mean older programs will not run: virtualization can be used to run them. A third method is to use shims. Shims are small libraries that intercept API calls, change the parameters passed, and handle or redirect the operations. In Windows, shims can simulate the behaviour of an older OS version for legacy software. &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 08:34, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
ps. I didn&#039;t find perfect resources,just these.If you guys think any opinion is not correct,plz edit it or give suggestions :)&lt;br /&gt;
&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
 &lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey, this sounds really good, I&#039;d add an example where you say &#039;one method to implement backward-compatibility is to add applications&#039;.&lt;br /&gt;
And I did a little research and I found another way to create backwards compatibility using shims: http://en.wikipedia.org/wiki/Shim_%28computing%29&lt;br /&gt;
it pretty much intercepts the calls and changes them so that the old program can run on a new system.&lt;br /&gt;
Good Work, [[User:Nshires|Nshires]] 16:56, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Thanks for your suggetions.I have added some information to the paragraph.:)&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 00:24, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
~Andrew Bown (October 13 2:08) I&#039;ll write this paragraph.&lt;br /&gt;
I don&#039;t have time to write this before work(12-5) but I can put out the information i got already with research so if someone could help me complete this that it would be awesome since I have to finish up my 3004 document as well tonight.&lt;br /&gt;
~[User:Abown|Andrew Bown] (October 14th 11:12am)&lt;br /&gt;
Mainframes achieve high input/output rates with their specialized Message Passing Interfaces (MPIs), which allow fast intercommunication by sharing memory between the different cores. https://www.mpitech.com/mpitech.nsf/pages/mainframe-&amp;amp;-AS400-printing_en.html&lt;br /&gt;
&lt;br /&gt;
The latest versions of Windows clusters support a Microsoft created MPI surprisingly called Microsoft MPI[http://msdn.microsoft.com/en-us/library/bb524831(VS.85).aspx]. &lt;br /&gt;
&lt;br /&gt;
Microsoft&#039;s MPI is based off the MPICH2 explanation here:http://www.springerlink.com/content/hc4nyva6dvg6vdpp/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Looking at the details, the Microsoft MPI only runs if a process is put into the Microsoft Job Scheduler, so we may want to combine input/output and throughput.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Massive Throughput ==&lt;br /&gt;
[[User:Achamney|Achamney]] 01:09, 14 October 2010 (UTC) &amp;lt;br&amp;gt;&lt;br /&gt;
I can grab this section.&lt;br /&gt;
&lt;br /&gt;
Throughput, unlike input and output, is the measurement of the number of calculations per second that a machine can perform, usually measured in FLOPS (floating-point operations per second). It is impossible for one sole Windows machine to compete with a mainframe&#039;s throughput: not only do mainframe processors run at extremely high frequencies, they also have a considerable number of cores. This all changes, however, when computer clustering is introduced. In recent years, IBM constructed a clustered system called Roadrunner that ranked third on the TOP500 supercomputer list as of June 2010. It has a total of 60 connected units, over a thousand processors, and the capability of computing at a rate of 1.7 petaflops. The question is, with such complex hardware, how is it possible for any software to use this clustered system? Windows offers an OS called Windows Compute Cluster Server, which provides the software needed to join the cluster nodes to the main computer. &lt;br /&gt;
[http://webcache.googleusercontent.com/search?q=cache:EPlDExBxmDYJ:download.microsoft.com/download/9/e/d/9edcdeab-f1fb-4670-8914-c08c5c6f22a5/HPC_Overview.doc+Windows+Compute+Cluster+Server&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=ca&amp;amp;client=firefox-a]&lt;br /&gt;
[http://hubpages.com/hub/Most-Powerful-Computers-In-The-World]&lt;br /&gt;
[http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=/com.ibm.ztpf-ztpfdf.doc_put.cur/gtpc3/c3thru.html]&lt;br /&gt;
[http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci213140,00.html]&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=3789</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=3789"/>
		<updated>2010-10-14T15:12:29Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* High input/output */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email I&#039;ll add some of the stuff I find soon I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-First time sharing system CTSS (Compatible Time Sharing System) in the 1950s. Created at MIT&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact info (qzhang13@connect.carleton.ca)&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.mspx&lt;br /&gt;
&lt;br /&gt;
-[[User:Zhangqi|Zhangqi]] 15:02, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, Here&#039;s my contact info, nshires@connect.carleton.ca, I&#039;ll have some sources posted by the weekend hopefully&lt;br /&gt;
&lt;br /&gt;
Hey guys i&#039;m not in your group but I found some useful information that could help you &lt;br /&gt;
http://en.wikipedia.org/wiki/Mainframe_computer i know we are not suppose to use wiki references but its a good place to start&lt;br /&gt;
&lt;br /&gt;
Okay, found a paper titled &amp;quot;Mainframe Scalability in the Windows Environment&amp;quot;:&lt;br /&gt;
http://new.cmg.org/proceedings/2003/3023.pdf (requires registration to access, but is free) ~ Andrew (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Folks, remember to do your discussions here.  Use four tildes to sign your entries, that adds time and date.  Email discussions won&#039;t count towards your participation grade...&lt;br /&gt;
[[User:Soma|Anil]] 15:43, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Okay going to break the essay into points paragraphs on the main page which people can choose one paragraph to write. Then after all paragraphs are written we will communally edit it to have a cohesive voice. It is the only way I can viably think of to properly distribute the work. ~Andrew (abown2@connect.carleton.ca) 11:00 am, 10 October 2010.&lt;br /&gt;
&lt;br /&gt;
Link to IBMs info on their mainframes --[[User:Lmundt|Lmundt]] 19:58, 7 October 2010 (UTC)&lt;br /&gt;
http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zmainframe/zconc_valueofmf.htm&lt;br /&gt;
&lt;br /&gt;
Just made the revelation that the Windows equivalent to a mainframe is referred to as &#039;&#039;&#039;clustering&#039;&#039;&#039;, which should help in finding information.&lt;br /&gt;
Here&#039;s the wiki article on the technology for an overview: http://en.wikipedia.org/wiki/Microsoft_Cluster_Server ~ Andrew (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
hey,I agree with Andrew&#039;s idea. We should break the essay into several sections and work it together.From my point of view, I think we should focus on how Windows provide the mainframe functionality and the VMware and EMC&#039;s storage should be our examples. As listed on the main page, there are many advantages and disadvantages of the mainframe.But where is Windows? I&#039;m confused... &lt;br /&gt;
In my opinion, the first paragraph can introduct the mainframe (such as the history,features,application,etc) and what mainframe-equivalent functionality Windows support. Then we can use some paragraphs to discuss the functionalities in details. And VMware and EMC&#039;s storage solution also can be involved in this part. At last we make a conclusion of the whloe essay. Do you think it&#039;s feasible? &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 02:12, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Ah but the question isn&#039;t the pros and cons of each. It is how to get mainframe functionality from a Windows Operating System. How I split up the essay has each paragraph focusing on one aspect of mainframes and how it can be duplicated in windows either with windows tools or 3rd party software. You don&#039;t need to go into the history or applications of mainframes since that is not required by the phrasing of the question.&lt;br /&gt;
&lt;br /&gt;
~ Andrew Bown, 11:28 AM, October 11th 2010&lt;br /&gt;
&lt;br /&gt;
Okay, I think I catch your meaning. So now we should do is to edit the content of each paragragh as soon as possible. Time is limited.&lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 19:57, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
If you guys are looking for an authoritative source on how Windows works, I *highly* recommend checking out &amp;quot;Windows Internals 4th Edition&amp;quot; or &amp;quot;Windows Internals 5th Edition&amp;quot; by Mark Russinovich and David Solomon.&lt;br /&gt;
&lt;br /&gt;
--[[User:3maisons|3maisons]] 18:59, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
OLD VERSION - Here for the time being while optimizing some sections --[[User:Dkrutsko|Dkrutsko]] 00:20, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragram just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Main Aspects of mainframes:&lt;br /&gt;
* redundancy which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* support massive throughput&lt;br /&gt;
* Systems run constantly so they can be hot upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &amp;lt;br&amp;gt;&lt;br /&gt;
Thanks Abown, just tweaked a couple of the sentences to improve flow [[User:Achamney|Achamney]] 01:13, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Also, I removed this statement: &amp;quot;Unfortunately, computers are only able to process data as fast as they can receive it&amp;quot;. I couldn&#039;t find a good place to plug it in.&lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes so well suited to this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable; that redundancy also guards against data loss caused by outages. Mainframes can be upgraded without taking the system down for repairs, which further increases reliability. After a mainframe is upgraded, however, the software does not change: backwards compatibility is offered through virtualization, so software never needs to be replaced. Mainframes support high input/output rates so that the machine is always being utilized, and to make sure they are utilized to their fullest, they provide powerful schedulers that maximize transaction throughput. [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how is a Windows-based system supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, as well as third-party software solutions, that can duplicate these capabilities in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output, or full utilization of resources.&lt;br /&gt;
&lt;br /&gt;
Using this paragraph and my solution on the assignment I was able to expand on this topic. It is in the main page at the moment, see if you like it, add anything you think I missed --[[User:Dkrutsko|Dkrutsko]] 05:17, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At this point in history there were no personal computers, and the only organizations that could afford a computer were large businesses. The main functions of these mainframes were to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t seem to actually be pertinent to the question at hand. Question does not have any indication of the need to provide a history. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
I have to agree this doesn&#039;t seem relevant to the question. --[[User:Dkrutsko|Dkrutsko]] 00:10, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
A major feature of mainframes is their capacity for redundancy. One way mainframes provide redundancy is through the provider&#039;s off-site redundancy service: the customer can move all of their processes and applications onto the provider&#039;s mainframe while the provider makes repairs on the customer&#039;s system. Another way mainframes achieve redundancy is their use of multiple processors that share the same memory; if one processor dies, the remaining processors still have access to all of the cached data. There are several ways Windows systems can reproduce this redundancy. The first is to create a Windows cluster server, which mirrors the mainframe&#039;s multi-processor design. Another is to use virtual machines: VMware supports Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines are set up with two different networks, a private network for communication between the virtual machines and a public network for I/O services. The virtual machines also share storage, so that if one fails, the other still has all of the data.&lt;br /&gt;
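As a rough illustration of the shared-storage idea described above, here is a minimal Python sketch (the node names, the dictionary standing in for shared storage, and the failover logic are all invented for illustration; a real cluster would use dedicated cluster software, not application code):

```python
# Hypothetical sketch (names invented): two virtual machines backed by
# the same shared storage, so a failure of one node loses no data.
shared_storage = {"balance": 100}

class Node:
    def __init__(self, name, storage):
        self.name = name
        self.storage = storage
        self.alive = True

    def read(self, key):
        # Serve a request out of the shared store.
        return self.storage[key]

primary = Node("vm-a", shared_storage)
standby = Node("vm-b", shared_storage)

primary.alive = False                       # simulate a node failure
active = primary if primary.alive else standby
print(active.name, active.read("balance"))  # prints: vm-b 100
```

The point of the sketch is only that state lives in the shared store rather than in either node, which is what lets the survivor carry on.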
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(this is what I&#039;ve gotten out of some research so far; comments and any edits/suggestions on whether I&#039;m on the right track are greatly appreciated :) ) &lt;br /&gt;
*note: This is the second time I have written this, make sure to save whatever you edit in notepad or whatever first so that you don&#039;t lose everything*&lt;br /&gt;
&lt;br /&gt;
link to VMWare&#039;s cluster virtualization http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf&lt;br /&gt;
&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== hot swapping ==&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
Another useful feature of mainframes is the ability to hot-swap. Hot-swapping occurs when a faulty hardware component inside the mainframe is replaced by technicians without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors inside the mainframe. With the right software and setup (redundancy), operators are able to upgrade and/or repair a mainframe as they see fit. Using VMWare on a Windows system allows users to hot-add RAM and hot-plug new virtual CPUs into the virtualized system. Using these hot-adding and hot-plugging techniques, the virtual computer can grow to handle loads of varying size. In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular.&lt;br /&gt;
&lt;br /&gt;
These are the concepts I&#039;ve been able to figure out so far about hot-swapping/hot-upgrading, feel free to add/edit and what-not!  &lt;br /&gt;
&lt;br /&gt;
Sources:&lt;br /&gt;
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631,00.html&lt;br /&gt;
http://www.jungo.com/st/hotswap_windows.html&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== backwards-compatibility ==&lt;br /&gt;
Backwards-compatibility means that a newer software version can recognize what the old version writes and how it works; it is a relationship between the two versions. If a new component provides all the functionality of the old one, we say the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can still run on the latest mainframes (such as the zSeries, the System/390 family, and System z9). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backwards compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe vendor also requires customers to halt the computer and download the emulation program.&lt;br /&gt;
&lt;br /&gt;
In Windows, one method of implementing backwards-compatibility is to add applications such as the Microsoft Windows Application Compatibility Toolkit, which can make the platform compatible with most software from earlier versions. A second method relies on the subsystems that Windows operating systems usually include: software originally designed for older versions or for other OSs can be run in these subsystems. Windows NT, for example, has MS-DOS and Win16 subsystems. Windows 7&#039;s backwards-compatibility, however, is not very good; if the kernel is different, the OSs can&#039;t be compatible with each other. That doesn&#039;t mean older programs won&#039;t run: virtualization can be used to run them. A third method is to use shims to provide backwards-compatibility. Shims act like small libraries that intercept API calls, change the parameters passed, and handle or redirect the operations. In Windows, shims can be used to simulate the behaviour of an older OS version for legacy software. &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 08:34, 13 October 2010 (UTC)&lt;br /&gt;
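To make the shim idea above concrete, here is a minimal Python sketch of the intercept-and-redirect pattern (all function names and the legacy calling convention are invented for illustration; real Windows shims operate on DLL-level API calls, not Python functions):

```python
# Hypothetical sketch (all names invented): a compatibility "shim"
# intercepts calls that use a legacy signature and adapts them to a
# newer API, so old callers keep working unchanged.

def modern_open(path, mode="r", encoding="utf-8"):
    # The "new" API: explicit keyword-style parameters.
    return (path, mode, encoding)

def legacy_open(path_and_mode):
    # Shim for old callers that passed "path:mode" as a single string.
    path, _, mode = path_and_mode.partition(":")
    # Redirect to the new API, supplying defaults the old one lacked.
    return modern_open(path, mode=mode or "r")

print(legacy_open("data.txt:w"))   # prints: ('data.txt', 'w', 'utf-8')
```

The shim never changes the old program or the new API; it only translates between them, which is exactly the role described above.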
&lt;br /&gt;
ps. I didn&#039;t find perfect resources, just these. If you guys think any opinion is not correct, please edit it or give suggestions :)&lt;br /&gt;
&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
 &lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey, this sounds really good, I&#039;d add an example where you say &#039;one method to implement backward-compatibility is to add applications&#039;.&lt;br /&gt;
And I did a little research and I found another way to create backwards compatibility using shims: http://en.wikipedia.org/wiki/Shim_%28computing%29&lt;br /&gt;
it pretty much intercepts the calls and changes them so that the old program can run on a new system.&lt;br /&gt;
Good Work, [[User:Nshires|Nshires]] 16:56, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Thanks for your suggestions. I have added some information to the paragraph. :)&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 00:24, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
~Andrew Bown (October 13 2:08) I&#039;ll write this paragraph.&lt;br /&gt;
I don&#039;t have time to write this before work (12-5), but I can put out the information I&#039;ve already gathered, so if someone could help me complete it that would be awesome, since I have to finish up my 3004 document tonight as well.&lt;br /&gt;
~[[User:Abown|Andrew Bown]] (October 14th, 11:12 am)&lt;br /&gt;
Mainframes are able to achieve high input/output rates with their specialized Message Passing Interfaces (MPIs), which allow for fast intercommunication by sharing memory between the different cores. https://www.mpitech.com/mpitech.nsf/pages/mainframe-&amp;amp;-AS400-printing_en.html&lt;br /&gt;
&lt;br /&gt;
The latest versions of Windows clusters support a Microsoft-created MPI implementation called, unsurprisingly, Microsoft MPI [http://msdn.microsoft.com/en-us/library/bb524831(VS.85).aspx]. &lt;br /&gt;
&lt;br /&gt;
Microsoft&#039;s MPI is based on MPICH2; an explanation is available here: http://www.springerlink.com/content/hc4nyva6dvg6vdpp/&lt;br /&gt;
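As a rough sketch of what MPI-style message passing looks like, here is a small Python example using only the standard library (a real program would link against an actual MPI implementation such as Microsoft MPI or MPICH2; the worker logic here is invented purely for illustration):

```python
# Hypothetical sketch: message passing between two processes, the core
# idea behind MPI-style communication. Workers share no memory; they
# exchange explicit messages over a channel.
from multiprocessing import Process, Pipe

def worker(conn):
    msg = conn.recv()          # block until a message arrives
    conn.send(msg.upper())     # reply over the same channel
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send("transaction batch 1")
    print(parent.recv())       # prints: TRANSACTION BATCH 1
    p.join()
```

Each side only sees what the other explicitly sends, which is what makes the model scale across cluster nodes that have no shared memory at all.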
&lt;br /&gt;
== Massive Throughput ==&lt;br /&gt;
[[User:Achamney|Achamney]] 01:09, 14 October 2010 (UTC) &amp;lt;br&amp;gt;&lt;br /&gt;
I can grab this section.&lt;br /&gt;
&lt;br /&gt;
Throughput, unlike input and output, is a measurement of the number of calculations per second that a machine can perform. This is usually measured in FLOPS (floating-point operations per second). It is impossible for a single Windows machine to compete with a mainframe&#039;s throughput: not only do mainframe processors have extremely high frequencies, they also have a considerable number of cores. This all changes, however, when computer clustering is introduced. In recent years, IBM has constructed a clustered system called Roadrunner that ranks third in the TOP500 supercomputer list as of June 2010. It has a total of 60 connected units, over a thousand processors, and the capability of computing at a rate of 1.7 petaflops. The question is, with such complex hardware, how is it possible for any sort of software to use this clustered system? Luckily, Microsoft has introduced an OS called Windows Compute Cluster Server, which provides the software necessary to connect the cluster nodes to the main computer. &lt;br /&gt;
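For a sense of the arithmetic, aggregate cluster throughput can be estimated as per-node throughput times node count, ignoring interconnect overhead (the figures below are illustrative assumptions, not measurements of any real system):

```python
# Illustrative arithmetic only: estimate the aggregate throughput of a
# cluster from assumed per-node throughput.
per_node_gflops = 100.0      # assumed sustained GFLOPS per node
nodes = 1000                 # assumed cluster size
aggregate_pflops = per_node_gflops * nodes / 1e6
print(aggregate_pflops)      # prints: 0.1
```

In practice the interconnect and the scheduler determine how much of that ideal aggregate a workload actually sees, which is why the clustering software matters as much as the node count.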
[http://hubpages.com/hub/Most-Powerful-Computers-In-The-World]&lt;br /&gt;
[http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=/com.ibm.ztpf-ztpfdf.doc_put.cur/gtpc3/c3thru.html]&lt;br /&gt;
[http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci213140,00.html]&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3299</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3299"/>
		<updated>2010-10-13T18:08:22Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* High input/output */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
Added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Main Aspects of mainframes:&lt;br /&gt;
* redundancy which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* support massive throughput&lt;br /&gt;
* Systems run constantly so they can be hot upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes so well suited to this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved by having tremendous redundancy, which allows mainframes to be extremely reliable and guards against data loss due to downtime. Mainframes can also be upgraded without taking the system down for repairs, which further increases reliability. But after upgrading a mainframe the software does not change, so mainframes offer backwards compatibility through virtualization: software never needs to be replaced, it is just processed more quickly. Computers are only able to run as fast as the data they are receiving, so mainframes support high input/output to ensure the mainframe is always being utilized. To make sure mainframes are utilized to their fullest, they support powerful schedulers which ensure the fastest possible throughput for processing transactions. [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how is a Windows-based system supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and software solutions, which can duplicate these features in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output or utilizing resources.&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At this point in history there were no personal computers, and the only organizations that could afford a computer were large businesses. The main functions of these mainframes were to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t seem to actually be pertinent to the question at hand. Question does not have any indication of the need to provide a history. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
A major feature of mainframes is their capacity for redundancy. One way mainframes provide redundancy is through the provider&#039;s off-site redundancy service: the customer can move all of their processes and applications onto the provider&#039;s mainframe while the provider makes repairs on the customer&#039;s system. Another way mainframes achieve redundancy is their use of multiple processors that share the same memory; if one processor dies, the remaining processors still have access to all of the cached data. There are several ways Windows systems can reproduce this redundancy. The first is to create a Windows cluster server, which mirrors the mainframe&#039;s multi-processor design. Another is to use virtual machines: VMware supports Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines are set up with two different networks, a private network for communication between the virtual machines and a public network for I/O services. The virtual machines also share storage, so that if one fails, the other still has all of the data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(this is what I&#039;ve gotten out of some research so far; comments and any edits/suggestions on whether I&#039;m on the right track are greatly appreciated :) ) &lt;br /&gt;
*note: This is the second time I have written this, make sure to save whatever you edit in notepad or whatever first so that you don&#039;t lose everything*&lt;br /&gt;
&lt;br /&gt;
link to VMWare&#039;s cluster virtualization http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf&lt;br /&gt;
&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== hot swapping ==&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
Another useful feature of mainframes is the ability to hot-swap. Hot-swapping occurs when a faulty hardware component inside the mainframe is replaced by technicians without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors inside the mainframe. With the right software and setup (redundancy), operators are able to upgrade and/or repair a mainframe as they see fit. Using VMWare on a Windows system allows users to hot-add RAM and hot-plug new virtual CPUs into the virtualized system. Using these hot-adding and hot-plugging techniques, the virtual computer can grow to handle loads of varying size. In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular.&lt;br /&gt;
&lt;br /&gt;
These are the concepts I&#039;ve been able to figure out so far about hot-swapping/hot-upgrading, feel free to add/edit and what-not!  &lt;br /&gt;
&lt;br /&gt;
Sources:&lt;br /&gt;
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631,00.html&lt;br /&gt;
http://www.jungo.com/st/hotswap_windows.html&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== backwards-compatibility ==&lt;br /&gt;
Backwards-compatibility means that a newer software version can recognize what the old version writes and how it works; it is a relationship between the two versions. If a new component provides all the functionality of the old one, we say the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can still run on the latest mainframes (such as the zSeries, the System/390 family, and System z9). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backwards compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe vendor also requires customers to halt the computer and download the emulation program.&lt;br /&gt;
&lt;br /&gt;
In Windows, one method of implementing backwards-compatibility is to add applications; the platform can then be compatible with most software from earlier versions. The other method relies on the subsystems that Windows operating systems usually include: software originally designed for older versions or for other OSs can be run in these subsystems. Windows NT, for example, has MS-DOS and Win16 subsystems. But Windows 7&#039;s backwards-compatibility is not very good; if the kernel is different, the OSs can&#039;t be compatible with each other. That doesn&#039;t mean older programs won&#039;t run: virtualization can be used to run them. &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 08:34, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
ps. I didn&#039;t find perfect resources, just these. If you guys think any opinion is not correct, please edit it or give suggestions :)&lt;br /&gt;
&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
 &lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey, this sounds really good, I&#039;d add an example where you say &#039;one method to implement backward-compatibility is to add applications&#039;.&lt;br /&gt;
And I did a little research and I found another way to create backwards compatibility using shims: http://en.wikipedia.org/wiki/Shim_%28computing%29&lt;br /&gt;
it pretty much intercepts the calls and changes them so that the old program can run on a new system.&lt;br /&gt;
Good Work, [[User:Nshires|Nshires]] 16:56, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
~Andrew Bown (October 13 2:08) I&#039;ll write this paragraph.&lt;br /&gt;
&lt;br /&gt;
== massive throughput ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3186</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3186"/>
		<updated>2010-10-13T03:20:44Z</updated>

		<summary type="html">&lt;p&gt;Abown: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
Added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Main Aspects of mainframes:&lt;br /&gt;
* redundancy which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* support massive throughput&lt;br /&gt;
* Systems run constantly so they can be hot upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes so well suited to this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved by having tremendous redundancy, which allows mainframes to be extremely reliable and guards against data loss due to downtime. Mainframes can also be upgraded without taking the system down for repairs, which further increases reliability. But after upgrading a mainframe the software does not change, so mainframes offer backwards compatibility through virtualization: software never needs to be replaced, it is just processed more quickly. Computers are only able to run as fast as the data they are receiving, so mainframes support high input/output to ensure the mainframe is always being utilized. To make sure mainframes are utilized to their fullest, they support powerful schedulers which ensure the fastest possible throughput for processing transactions. [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how is a Windows-based system supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and software solutions, which can duplicate these features in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output or utilizing resources.&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At this point in history there were no personal computers, and the only organizations that could afford a computer were large businesses. The main functions of these mainframes were to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t seem to actually be pertinent to the question at hand. Question does not have any indication of the need to provide a history. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
&lt;br /&gt;
== hot upgrades ==&lt;br /&gt;
&lt;br /&gt;
== backwards-compatibility ==&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
&lt;br /&gt;
== massive throughput ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3185</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3185"/>
		<updated>2010-10-13T03:17:04Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
Added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Main Aspects of mainframes:&lt;br /&gt;
* redundancy which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* support massive throughput&lt;br /&gt;
* Systems run constantly so they can be hot upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes so well suited to this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved by having tremendous redundancy, which allows mainframes to be extremely reliable and guards against data loss due to downtime. Mainframes can also be upgraded without taking the system down for repairs, which further increases reliability. But after upgrading a mainframe the software does not change, so mainframes offer backwards compatibility through virtualization: software never needs to be replaced, it is just processed more quickly. Computers are only able to run as fast as the data they are receiving, so mainframes support high input/output to ensure the mainframe is always being utilized. To make sure mainframes are utilized to their fullest, they support powerful schedulers which ensure the fastest possible throughput for processing transactions. [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how is a Windows-based system supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and software solutions, which can duplicate these features in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output or utilizing resources.&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At this point in history there were no personal computers, and the only organizations that could afford a computer were large businesses. The main functions of these mainframes were to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t actually seem pertinent to the question at hand; the question gives no indication that a history needs to be provided. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
&lt;br /&gt;
== Backwards-compatibility ==&lt;br /&gt;
&lt;br /&gt;
== Massive throughput ==&lt;br /&gt;
&lt;br /&gt;
== Hot upgrades ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3184</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3184"/>
		<updated>2010-10-13T03:16:44Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
Added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Main aspects of mainframes:&lt;br /&gt;
* redundancy, which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* massive throughput&lt;br /&gt;
* constant operation, so systems can be hot upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how Windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes so well suited to this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable and guards against data loss due to downtime. Mainframes can also be upgraded without taking the system down, which allows for repairs and further increases reliability. After an upgrade the existing software does not need to change: mainframes offer backwards compatibility through virtualization, so software never needs to be replaced; it is simply processed more quickly. A computer can only run as fast as the data it receives, so mainframes support high input/output to keep the system fully utilized. To ensure mainframes are utilized to their fullest, they provide powerful schedulers that deliver the fastest possible transaction throughput.[http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how is a Windows-based system supposed to compete with a mainframe? The fact of the matter is that there are features in Windows and third-party software solutions which can duplicate these capabilities in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output, or resource utilization.&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At this point in history there were no personal computers, and the only organizations that could afford a computer were massive businesses. The main functionality of these mainframes was to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
This doesn&#039;t actually seem pertinent to the question at hand; the question gives no indication that a history needs to be provided. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
&lt;br /&gt;
== Backwards-compatibility ==&lt;br /&gt;
&lt;br /&gt;
== Massive throughput ==&lt;br /&gt;
&lt;br /&gt;
== Hot upgrades ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3182</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3182"/>
		<updated>2010-10-13T03:12:53Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
Added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Main aspects of mainframes:&lt;br /&gt;
* redundancy, which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* massive throughput&lt;br /&gt;
* constant operation, so systems can be hot upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how Windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes so well suited to this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable and guards against data loss due to downtime. Mainframes can also be upgraded without taking the system down, which allows for repairs and further increases reliability. After an upgrade the existing software does not need to change: mainframes offer backwards compatibility through virtualization, so software never needs to be replaced; it is simply processed more quickly. A computer can only run as fast as the data it receives, so mainframes support high input/output to keep the system fully utilized. To ensure mainframes are utilized to their fullest, they provide powerful schedulers that deliver the fastest possible transaction throughput.[http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how is a Windows-based system supposed to compete with a mainframe? The fact of the matter is that there are features in Windows and third-party software solutions which can duplicate these capabilities in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output, or resource utilization.&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At this point in history there were no personal computers, and the only organizations that could afford a computer were massive businesses. The main functionality of these mainframes was to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
&lt;br /&gt;
== Backwards-compatibility ==&lt;br /&gt;
&lt;br /&gt;
== Massive throughput ==&lt;br /&gt;
&lt;br /&gt;
== Hot upgrades ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3179</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3179"/>
		<updated>2010-10-13T03:11:25Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Introduction */  complete&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
Added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Main aspects of mainframes:&lt;br /&gt;
* redundancy, which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* massive throughput&lt;br /&gt;
* constant operation, so systems can be hot upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how Windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction although&lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes so well suited to this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable and guards against data loss due to downtime. Mainframes can also be upgraded without taking the system down, which allows for repairs and further increases reliability. After an upgrade the existing software does not need to change: mainframes offer backwards compatibility through virtualization, so software never needs to be replaced; it is simply processed more quickly. A computer can only run as fast as the data it receives, so mainframes support high input/output to keep the system fully utilized. To ensure mainframes are utilized to their fullest, they provide powerful schedulers that deliver the fastest possible transaction throughput.[http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how is a Windows-based system supposed to compete with a mainframe? The fact of the matter is that there are features in Windows and third-party software solutions which can duplicate these capabilities in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output, or resource utilization.&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At this point in history there were no personal computers, and the only organizations that could afford a computer were massive businesses. The main functionality of these mainframes was to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
&lt;br /&gt;
== Backwards-compatibility ==&lt;br /&gt;
&lt;br /&gt;
== Massive throughput ==&lt;br /&gt;
&lt;br /&gt;
== Hot upgrades ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=2907</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=2907"/>
		<updated>2010-10-11T15:29:24Z</updated>

		<summary type="html">&lt;p&gt;Abown: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email. I&#039;ll add some of the stuff I find soon; I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-First time-sharing system: CTSS (Compatible Time Sharing System), created at MIT in the early 1960s&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact info (qzhang13@connect.carleton.ca)&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.mspx&lt;br /&gt;
&lt;br /&gt;
-[[User:Zhangqi|Zhangqi]] 15:02, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, Here&#039;s my contact info, nshires@connect.carleton.ca, I&#039;ll have some sources posted by the weekend hopefully&lt;br /&gt;
&lt;br /&gt;
Hey guys, I&#039;m not in your group but I found some useful information that could help you: &lt;br /&gt;
http://en.wikipedia.org/wiki/Mainframe_computer I know we are not supposed to use wiki references but it&#039;s a good place to start&lt;br /&gt;
&lt;br /&gt;
Okay, found a paper titled &amp;quot;Mainframe Scalability in the Windows Environment&amp;quot;:&lt;br /&gt;
http://new.cmg.org/proceedings/2003/3023.pdf (requires registration to access, but is free) ~ Andrew (abown2@connect.carleton.ca), sometime Friday.&lt;br /&gt;
&lt;br /&gt;
Folks, remember to do your discussions here.  Use four tildes to sign your entries, that adds time and date.  Email discussions won&#039;t count towards your participation grade...&lt;br /&gt;
[[User:Soma|Anil]] 15:43, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Okay, going to break the essay into paragraphs on the main page so people can each choose one paragraph to write. Then, after all the paragraphs are written, we will communally edit them to have a cohesive voice. It is the only way I can think of to viably distribute the work. ~Andrew (abown2@connect.carleton.ca) 11:00 am, 10 October 2010.&lt;br /&gt;
&lt;br /&gt;
Link to IBMs info on their mainframes --[[User:Lmundt|Lmundt]] 19:58, 7 October 2010 (UTC)&lt;br /&gt;
http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zmainframe/zconc_valueofmf.htm&lt;br /&gt;
&lt;br /&gt;
Just realized that the Windows equivalent of a mainframe is referred to as &#039;&#039;&#039;clustering&#039;&#039;&#039;, which should help in finding information.&lt;br /&gt;
Here&#039;s the wiki article on the technology for an overview: http://en.wikipedia.org/wiki/Microsoft_Cluster_Server ~ Andrew (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, I agree with Andrew&#039;s idea. We should break the essay into several sections and work on it together. From my point of view, I think we should focus on how Windows provides the mainframe functionality, and VMware&#039;s virtualization and EMC&#039;s storage should be our examples. As listed on the main page, there are many advantages and disadvantages of the mainframe. But where is Windows? I&#039;m confused... &lt;br /&gt;
In my opinion, the first paragraph can introduce the mainframe (such as the history, features, applications, etc.) and what mainframe-equivalent functionality Windows supports. Then we can use some paragraphs to discuss the functionalities in detail, and VMware and EMC&#039;s storage solution can also be involved in this part. At last we make a conclusion of the whole essay. Do you think it&#039;s feasible? &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 02:12, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Ah, but the question isn&#039;t about the pros and cons of each. It is about how to get mainframe functionality from a Windows operating system. The way I split up the essay, each paragraph focuses on one aspect of mainframes and how it can be duplicated in Windows, either with Windows tools or with third-party software. You don&#039;t need to go into the history or applications of mainframes, since that is not required by the phrasing of the question.&lt;br /&gt;
&lt;br /&gt;
~ Andrew Bown, 11:28 AM, October 11th 2010&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=2828</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=2828"/>
		<updated>2010-10-10T18:35:27Z</updated>

		<summary type="html">&lt;p&gt;Abown: added some extra information&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email. I&#039;ll add some of the stuff I find soon; I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-First time-sharing system: CTSS (Compatible Time Sharing System), created at MIT in the early 1960s&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact info (qzhang13@connect.carleton.ca)&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.msp&lt;br /&gt;
&lt;br /&gt;
-[[User:Zhangqi|Zhangqi]] 15:02, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, Here&#039;s my contact info, nshires@connect.carleton.ca, I&#039;ll have some sources posted by the weekend hopefully&lt;br /&gt;
&lt;br /&gt;
Hey guys, I&#039;m not in your group but I found some useful information that could help you: &lt;br /&gt;
http://en.wikipedia.org/wiki/Mainframe_computer I know we are not supposed to use wiki references but it&#039;s a good place to start&lt;br /&gt;
&lt;br /&gt;
Okay, found a paper titled &amp;quot;Mainframe Scalability in the Windows Environment&amp;quot;:&lt;br /&gt;
http://new.cmg.org/proceedings/2003/3023.pdf (requires registration to access, but is free) ~ Andrew (abown2@connect.carleton.ca), sometime Friday.&lt;br /&gt;
&lt;br /&gt;
Folks, remember to do your discussions here.  Use four tildes to sign your entries, that adds time and date.  Email discussions won&#039;t count towards your participation grade...&lt;br /&gt;
[[User:Soma|Anil]] 15:43, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Okay, going to break the essay into paragraphs on the main page so people can each choose one paragraph to write. Then, after all the paragraphs are written, we will communally edit them to have a cohesive voice. It is the only way I can think of to viably distribute the work. ~Andrew (abown2@connect.carleton.ca) 11:00 am, 10 October 2010.&lt;br /&gt;
&lt;br /&gt;
Link to IBMs info on their mainframes --[[User:Lmundt|Lmundt]] 19:58, 7 October 2010 (UTC)&lt;br /&gt;
http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zmainframe/zconc_valueofmf.htm&lt;br /&gt;
&lt;br /&gt;
Just realized that the Windows equivalent of a mainframe is referred to as &#039;&#039;&#039;clustering&#039;&#039;&#039;, which should help in finding information.&lt;br /&gt;
Here&#039;s the wiki article on the technology for an overview: http://en.wikipedia.org/wiki/Microsoft_Cluster_Server ~ Andrew (abown2@connect.carleton.ca)&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=2812</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=2812"/>
		<updated>2010-10-10T17:41:59Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
Added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Main aspects of mainframes:&lt;br /&gt;
* redundancy, which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* massive throughput&lt;br /&gt;
* constant operation, so systems can be hot upgraded&lt;br /&gt;
http://publib.boulder.ibm.com/infocenter/zos/basics/topic/com.ibm.zos.zmainframe/zconc_mfhardware.htm &lt;br /&gt;
&lt;br /&gt;
Linking sentence about how Windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
&lt;br /&gt;
== Backwards-compatibility ==&lt;br /&gt;
&lt;br /&gt;
== Massive throughput ==&lt;br /&gt;
&lt;br /&gt;
== Hot upgrades ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=2811</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=2811"/>
		<updated>2010-10-10T17:41:45Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
Added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Main aspects of mainframes:&lt;br /&gt;
* redundancy, which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* massive throughput&lt;br /&gt;
* constant operation, so systems can be hot upgraded&lt;br /&gt;
http://publib.boulder.ibm.com/infocenter/zos/basics/topic/com.ibm.zos.zmainframe/zconc_mfhardware.htm &lt;br /&gt;
Linking sentence about how Windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
&lt;br /&gt;
== Backwards-compatibility ==&lt;br /&gt;
&lt;br /&gt;
== Massive throughput ==&lt;br /&gt;
&lt;br /&gt;
== Hot upgrades ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=2776</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=2776"/>
		<updated>2010-10-10T16:05:30Z</updated>

		<summary type="html">&lt;p&gt;Abown: added sections to answer&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
Added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Main aspects of mainframes:&lt;br /&gt;
* redundancy, which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* massive throughput&lt;br /&gt;
* constant operation, so systems can be hot upgraded&lt;br /&gt;
(unfortunately these aspects seem to be such common knowledge that I can&#039;t get a good reference for them) &lt;br /&gt;
Linking sentence about how Windows can duplicate mainframe functionality. &lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
&lt;br /&gt;
== Backwards-compatibility ==&lt;br /&gt;
&lt;br /&gt;
== Massive throughput ==&lt;br /&gt;
&lt;br /&gt;
== Hot upgrades ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=2769</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=2769"/>
		<updated>2010-10-10T15:03:29Z</updated>

		<summary type="html">&lt;p&gt;Abown: moved link from answer to discussion&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email. I&#039;ll add some of the stuff I find soon; I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-First time-sharing system: CTSS (Compatible Time Sharing System), created at MIT in the early 1960s&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact info (qzhang13@connect.carleton.ca)&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.msp&lt;br /&gt;
&lt;br /&gt;
-[[User:Zhangqi|Zhangqi]] 15:02, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, Here&#039;s my contact info, nshires@connect.carleton.ca, I&#039;ll have some sources posted by the weekend hopefully&lt;br /&gt;
&lt;br /&gt;
Hey guys, I&#039;m not in your group but I found some useful information that could help you: &lt;br /&gt;
http://en.wikipedia.org/wiki/Mainframe_computer I know we are not supposed to use wiki references but it&#039;s a good place to start&lt;br /&gt;
&lt;br /&gt;
Okay, found a paper titled &amp;quot;Mainframe Scalability in the Windows Environment&amp;quot;:&lt;br /&gt;
http://new.cmg.org/proceedings/2003/3023.pdf (requires registration to access, but is free) ~ Andrew (abown2@connect.carleton.ca), sometime Friday.&lt;br /&gt;
&lt;br /&gt;
Folks, remember to do your discussions here.  Use four tildes to sign your entries, that adds time and date.  Email discussions won&#039;t count towards your participation grade...&lt;br /&gt;
[[User:Soma|Anil]] 15:43, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Okay, going to break the essay into paragraphs on the main page so people can each choose one paragraph to write. Then, after all the paragraphs are written, we will communally edit them to have a cohesive voice. It is the only way I can think of to viably distribute the work. ~Andrew (abown2@connect.carleton.ca) 11:00 am, 10 October 2010.&lt;br /&gt;
&lt;br /&gt;
Link to IBMs info on their mainframes --[[User:Lmundt|Lmundt]] 19:58, 7 October 2010 (UTC)&lt;br /&gt;
http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zmainframe/zconc_valueofmf.htm&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=2768</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=2768"/>
		<updated>2010-10-10T15:00:57Z</updated>

		<summary type="html">&lt;p&gt;Abown: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email. I&#039;ll add some of the stuff I find soon; I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-The first time-sharing system, CTSS (Compatible Time Sharing System), was created at MIT in the early 1960s.&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact info (qzhang13@connect.carleton.ca)&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.mspx&lt;br /&gt;
&lt;br /&gt;
-[[User:Zhangqi|Zhangqi]] 15:02, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, here&#039;s my contact info: nshires@connect.carleton.ca. I&#039;ll have some sources posted by the weekend, hopefully.&lt;br /&gt;
&lt;br /&gt;
Hey guys, I&#039;m not in your group, but I found some useful information that could help you.&lt;br /&gt;
http://en.wikipedia.org/wiki/Mainframe_computer I know we&#039;re not supposed to use wiki references, but it&#039;s a good place to start.&lt;br /&gt;
&lt;br /&gt;
Okay, found a paper titled &amp;quot;Mainframe Scalability in the Windows Environment&amp;quot;:&lt;br /&gt;
http://new.cmg.org/proceedings/2003/3023.pdf (requires registration to access, but is free) ~ Andrew (abown2@connect.carleton.ca), sometime Friday.&lt;br /&gt;
&lt;br /&gt;
Folks, remember to do your discussions here.  Use four tildes to sign your entries; that adds the time and date.  Email discussions won&#039;t count towards your participation grade...&lt;br /&gt;
[[User:Soma|Anil]] 15:43, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Okay, I&#039;m going to break the essay into paragraphs on the main page so people can each choose one paragraph to write. Then, after all the paragraphs are written, we will communally edit them to have a cohesive voice. It&#039;s the only viable way I can think of to properly distribute the work. ~Andrew (abown2@connect.carleton.ca) 11:00 am, 10 October 2010.&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=2497</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=2497"/>
		<updated>2010-10-07T17:57:10Z</updated>

		<summary type="html">&lt;p&gt;Abown: /* Group 3 */  added paper&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email; I&#039;ll add some of the stuff I find soon. I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-The first time-sharing system, CTSS (Compatible Time Sharing System), was created at MIT in the early 1960s.&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.mspx&lt;br /&gt;
&lt;br /&gt;
-Qi Zhang (qzhang13@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, here&#039;s my contact info: nshires@connect.carleton.ca. I&#039;ll have some sources posted by the weekend, hopefully.&lt;br /&gt;
&lt;br /&gt;
Hey guys, I&#039;m not in your group, but I found some useful information that could help you.&lt;br /&gt;
http://en.wikipedia.org/wiki/Mainframe_computer I know we&#039;re not supposed to use wiki references, but it&#039;s a good place to start.&lt;br /&gt;
&lt;br /&gt;
Okay, found a paper titled &amp;quot;Mainframe Scalability in the Windows Environment&amp;quot;:&lt;br /&gt;
http://new.cmg.org/proceedings/2003/3023.pdf (requires registration to access, but is free)&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=2390</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=2390"/>
		<updated>2010-10-06T02:48:17Z</updated>

		<summary type="html">&lt;p&gt;Abown: added email address to discussion&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email; I&#039;ll add some of the stuff I find soon. I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;/div&gt;</summary>
		<author><name>Abown</name></author>
	</entry>
</feed>