<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Nshires</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Nshires"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Nshires"/>
	<updated>2026-04-11T12:26:29Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_6&amp;diff=6558</id>
		<title>COMP 3000 Essay 2 2010 Question 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_6&amp;diff=6558"/>
		<updated>2010-12-02T23:00:19Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* Data Collider: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Paper=&lt;br /&gt;
&#039;&#039;&#039;Effective Data-Race Detection  for the Kernel&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Paper: http://www.usenix.org/events/osdi10/tech/full_papers/Erickson.pdf&lt;br /&gt;
&lt;br /&gt;
Video: http://homeostasis.scs.carleton.ca/osdi/video/erickson.mp4&lt;br /&gt;
&lt;br /&gt;
Authors:  John Erickson, Madanlal Musuvathi, Sebastian Burckhardt, Kirk Olynyk from Microsoft Research&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A data race is a potentially catastrophic event that is alarmingly common in modern concurrent systems. When two threads access the same memory location at the same time, and at least one of those accesses is a write operation, there exists a potential data race condition. If the race is not handled properly, it can have a wide range of negative consequences. In the best case, the affected data is corrupted and rendered unreadable; this may not be a major problem if archived, non-corrupted versions of the data exist. In the worst case, a process (possibly even the kernel itself) may crash, unable to cope with the unexpected input it receives.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Traditional dynamic data-race detection programs operate by running an isolated runtime and comparing it with the currently active runtime, to find situations that would have resulted in a data race if the runtimes were not isolated. DataCollider instead temporarily sets breakpoints at randomly sampled memory accesses. When a sampled memory access hits a breakpoint, the access is postponed, and the instruction is stalled until DataCollider has finished its job. That job is like taking before-and-after photographs: DataCollider records the data stored at the address the instruction was attempting to access, allows the instruction to proceed, and then records the data again. If the before and after records do not match, another thread modified the data at the same time this instruction was trying to use it; this is precisely the definition of a data race.&lt;br /&gt;
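The sample-stall-compare idea can be sketched in a few lines of Python. This is illustrative only: the names `memory`, `sample_access`, and `racy_writer` are invented, and the real DataCollider stalls a kernel instruction with a hardware breakpoint rather than sleeping a thread.

```python
import threading
import time

# Invented stand-in for a machine's memory: one shared location.
memory = {"shared_counter": 0}

def sample_access(addr, pause=0.05):
    """Sketch of DataCollider's check on one sampled memory access."""
    before = memory[addr]   # "photograph" the value before the access
    time.sleep(pause)       # stall the sampled access for a short window
    after = memory[addr]    # re-read the same location afterwards
    # A changed value means another thread wrote concurrently: a data race.
    return after != before

def racy_writer():
    time.sleep(0.01)
    memory["shared_counter"] = 42   # unsynchronized write from another thread

t = threading.Thread(target=racy_writer)
t.start()
raced = sample_access("shared_counter")
t.join()
print("race detected:", raced)
```

Because the conflicting write lands inside the stall window, the before and after snapshots differ and the race is reported, without tracking any other memory access in the program.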
&lt;br /&gt;
&lt;br /&gt;
Many existing data race detectors use static detection techniques, which analyse program source code to determine where simultaneous accesses could occur. This method is typically seen as less effective because it produces a warning for every potentially simultaneous access; the user then has to sort all the false warnings from the legitimate error reports, and no heuristic can consistently eliminate the false warnings without also eliminating some legitimate reports. DataCollider uses a dynamic detection technique, which observes the program&#039;s actual behaviour at runtime and recognizes anomalous data accesses. Dynamic detectors also produce false warnings, but not nearly as often as static detectors.&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
The research problem being addressed by this paper is the detection of erroneous data races inside the kernel without creating much overhead. The problem arises because read/write instructions in processes are not always atomic (e.g. two read/write operations may happen simultaneously). There are so many ways a data race can occur that it is very hard to catch them all. &lt;br /&gt;
&lt;br /&gt;
The research team’s program, DataCollider, must detect races between the hardware and the kernel as well as thread-synchronization errors in the kernel, which has to synchronize user-mode processes, interrupts, and deferred procedure calls. As shown in the Background Concepts section, such errors can create unwanted problems in kernel modules. The research group created DataCollider, which places breakpoints on memory accesses to check whether two threads of execution are touching the same piece of memory. Past attempts at a solution ran in user mode, not kernel mode, and produced excessive overhead; there are many problems with trying to apply those techniques to a kernel.&lt;br /&gt;
&lt;br /&gt;
One technique that some past detectors have used is the “happens-before” method. This checks whether one access is ordered before the other; if neither access is ordered before the other, the two accesses were done concurrently. This method reports true data races but is very hard to implement. &lt;br /&gt;
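The happens-before ordering is commonly tracked with vector clocks. The following is a minimal sketch of that comparison, not any particular detector&#039;s implementation; the helper names are invented.

```python
def happened_before(a, b):
    """True if the event with vector clock a is ordered before the one with b."""
    return all(b[i] >= a[i] for i in range(len(a))) and a != b

def concurrent(a, b):
    # Neither access is ordered before the other, so they are concurrent;
    # if at least one of them is a write, this is a potential data race.
    return not happened_before(a, b) and not happened_before(b, a)

print(concurrent([2, 0], [0, 1]))   # unordered accesses from two threads
print(concurrent([1, 0], [2, 0]))   # ordered: the first happened before the second
```

A real detector must maintain such a clock per thread and per monitored location and update it at every synchronization operation, which is where the implementation difficulty and overhead come from.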
&lt;br /&gt;
Another method used is the “lock-set” approach. This checks the locks currently held by a thread at each access; if the accesses to a variable do not share at least one common lock, the method issues a warning. This method raises many false alarms, since many variables nowadays are shared through mechanisms other than locks, or are guarded by locking schemes too complex for lock-set analysis to understand. &lt;br /&gt;
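The core of the lock-set idea (in the style of Eraser) can be sketched as intersecting the held-lock sets across accesses to one variable; `lockset_check` is an invented name, and real implementations add states and refinements this omits.

```python
def lockset_check(accesses):
    """accesses: for one shared variable, the set of locks held at each access."""
    candidates = None
    for locks_held in accesses:
        if candidates is None:
            candidates = set(locks_held)          # first access seeds the set
        else:
            candidates = candidates.intersection(locks_held)
        if not candidates:
            # No single lock protects every access so far: issue a warning
            # (often a false alarm when other synchronization is in use).
            return "warning: no common lock protects this variable"
    return "ok: protected by " + ", ".join(sorted(candidates))

print(lockset_check([{"L1", "L2"}, {"L1"}]))   # every access holds L1
print(lockset_check([{"L1"}, set()]))          # one access holds no lock
```

The second call shows why the method is noisy: an access that legitimately needs no lock (for example, before the variable is shared) still empties the candidate set and triggers a warning.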
&lt;br /&gt;
Both these methods produce excessive overhead because they must check every single memory access at runtime. The next section discusses how DataCollider checks for data races in a new way that produces barely any overhead.&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Proving that there is a problem with classic race detectors:&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The main contribution that DataCollider provides is the unique idea of using hardware breakpoints in a data race detector. Why is a new idea necessary? Why does DataCollider have to &amp;quot;reinvent the wheel&amp;quot;? A plethora of race condition testers has been invented in the last two decades, and almost all of the dynamic data race detectors can be lumped into three categories: lock-set, happens-before, or a hybrid of the two types of detection. The research team for DataCollider looked at several of these implementations of race condition testers to find ways of improving their own program, and found that there are major problems in the classic ways of detecting race conditions. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some of the programs that were referenced were: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;br&amp;gt;&lt;br /&gt;
* RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;br&amp;gt;&lt;br /&gt;
* PACER: Proportional Detection of Data Races&amp;lt;br&amp;gt;&lt;br /&gt;
* LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;br&amp;gt;&lt;br /&gt;
* MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;/b&amp;gt;[http://delivery.acm.org/10.1145/270000/265927/p391-savage.pdf?key1=265927&amp;amp;key2=7323721921&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=116768888&amp;amp;CFTOKEN=55577437]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
lock-set based reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Eraser, a data race detector programmed in 1997, was one of the earliest data race detectors invented. It may have been a useful and revolutionary program in its time; however, it uses very primitive techniques compared to most data race detectors today. One of the reasons it is unsuccessful is that it only checks whether memory accesses use proper locking techniques. If a memory access is found that does not use a lock, Eraser reports a data race. In many cases, bypassing the locking discipline is a conscious decision by the programmer, so Eraser reports many false positives. Modern locking systems are also very complicated, with several different kinds of locks for different situations; it is difficult for one program to handle upwards of 12 types of locks, especially complicated ones. This also does not take into account benign cases such as variables that merely record a date of access. Lock-set systems are notorious for reporting false positives like these, and it is nearly impossible to change the architecture of the algorithm to ignore benign cases. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;PACER: Proportional Detection of Data Races&amp;lt;/b&amp;gt;[http://www.cs.ucla.edu/~dlmarino/pubs/pldi09.pdf]&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Pacer, a happens-before data race detector, uses the FastTrack algorithm to detect data races. FastTrack uses vector clocks to keep track of two potentially conflicting threads; if the two threads conflict, a data race is reported and the state of the program is saved. Pacer samples a percentage of memory accesses (from 1 to 3 percent) and runs the FastTrack algorithm on each thread that accesses that part of memory. Like Pacer, DataCollider samples a percentage of the program&#039;s memory accesses, but instead of using vector clocks to catch the second thread, it uses hardware breakpoints. Pacer runs with an overhead of approximately one to three times the running time of the original program because maintaining the vector clocks requires a fair amount of processing power. Hardware breakpoints are considerably cheaper than vector clocks, and as a consequence DataCollider runs with less overhead than Pacer.  &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;/b&amp;gt;[http://www.cs.ucla.edu/~dlmarino/pubs/pldi09.pdf]&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
LiteRace, similar to Pacer, samples a percentage of a program&#039;s memory accesses. Where it differs is in which parts of memory it samples most. The &amp;quot;hot spot&amp;quot; regions of memory are those accessed most often by the program; since they are exercised the most, chances are they have already been successfully debugged, or any data races there are benign. LiteRace identifies these hot spots and samples them at a much lower rate, which improves its chances of capturing a valid data race at a much lower overall sampling rate. Where DataCollider bests LiteRace is LiteRace&#039;s instrumentation mechanism: LiteRace must be recompiled into the software it is trying to debug, whereas DataCollider&#039;s breakpoints require no code changes to the program. This is a major advantage for DataCollider because third-party testers often do not have the source code for a program. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;/b&amp;gt;[http://delivery.acm.org/10.1145/1100000/1095832/p221-yu.pdf?key1=1095832&amp;amp;key2=8433721921&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=116768888&amp;amp;CFTOKEN=55577437]&amp;lt;br&amp;gt;&lt;br /&gt;
combination of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
RaceTrack uses a unique technique to detect data races. The program being debugged runs on top of RaceTrack as a virtual machine using the .NET framework, and RaceTrack examines all of the memory accesses that the program requests. As soon as suspicious behavior is exhibited, a warning is recorded to be evaluated after the program terminates. RaceTrack works this way because several processor-intensive inspections of the machine state are required, and performing them on the fly is expensive. There are many problems with RaceTrack. It is very successful at detecting a vast percentage of data races; however, it has high overhead and requires extreme amounts of memory. RaceTrack must save the state of the entire machine every time a warning is produced, and it also has to record each thread&#039;s memory accesses to check which access &amp;quot;happened before&amp;quot;. Since most warnings turn out to be benign, saving the state of the machine wastes computational power and memory. Long-running programs also prove to be a problem: the computer being debugged can run out of memory to store all of the warning states before the program terminates, and must then either increase overhead significantly by storing the warnings on disk, or delete some warnings to make room for new ones. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;/b&amp;gt;[http://docs.google.com/viewer?a=v&amp;amp;q=cache:C8gWk-H3GmEJ:citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.73.9551%26rep%3Drep1%26type%3Dpdf+MultiRace:+Efficient+on-the-fly+data+race+detection+in+multithreaded+C%2B%2B+programs&amp;amp;hl=en&amp;amp;gl=ca&amp;amp;pid=bl&amp;amp;srcid=ADGEESj1jYlzXMOwgbh7SVntUsHxVeI1TvmkU8Oslkm-L9gq-NIyglj5eD48rtkcziUQUynmjOmZojsyzw_tBRiLN6T0n6iiDZyUiFjBUfLijQbzNsRpDQCsMpn-xTiIqK2PUj4DXwoM&amp;amp;sig=AHIEtbRBHpMvb5fel3XOi5oASAogumY-rg]&amp;lt;br&amp;gt;&lt;br /&gt;
combination of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MultiRace is another hybrid-style race condition debugger that uses two algorithms. The first, Djit+, is the happens-before component, which timestamps memory accesses to detect conflicting accesses that are not ordered by synchronization; the second is an improved iteration of the lock-set algorithm. MultiRace is the program most similar to DataCollider in terms of its goals: both strive to decrease overhead to near the standard running time of the program itself, and to increase transparency for maximum user compatibility. MultiRace itself is several orders of magnitude more complicated than DataCollider, but since it hides its complexity from the user, it is still simple to use. It is arguable that MultiRace is superior for detecting races in C++ programs; however, MultiRace is not compatible with any other programming language. Since DataCollider uses hardware breakpoints, the coding language of the program is irrelevant. Also, since DataCollider avoids using both the lock-set and happens-before algorithms, it is versatile enough even to debug kernels. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DataCollider is a unique program. Most other dynamic race condition testers can be lumped into the three groups lock-set, happens-before, or hybrid. DataCollider, however, recognizes the shortcomings of these styles of detection and manages to avoid them completely. Even though it still has issues with false positives and benign races, DataCollider provides very simple, versatile, and lightweight functionality for debugging a program. Future tools may take this style of race detection and add their own functionality to improve upon it; DataCollider could well inspire a groundbreaking solution to race conditions and how to detect them.&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
===Style===&lt;br /&gt;
This paper is well put together.  It has a strong flow and nothing seems out of place.  The authors start with an introduction and then immediately identify key definitions that are used throughout the paper.  In the second section, which follows the introduction, the authors give the definition of a data race as it relates to their paper.  This is important since it is a key concept required to understand the entire paper, and the definition is necessary because, as the authors state, there is no standard for exactly how to define a data race.[1] In addition to important definitions, any background information relevant to the paper is presented at the beginning.  The key idea on which the paper is based, DataCollider and its implementation, is then explained, and an evaluation and conclusion follow its description. The order of the sections makes sense, and the authors do not jump around from one concept to another.  The organization of the sections and the information provided make the paper easy to follow and understand.&lt;br /&gt;
&lt;br /&gt;
===Content===&lt;br /&gt;
=====Data Collider:=====&lt;br /&gt;
DataCollider seems like a very innovative piece of software. Its use of breakpoints inside kernel space, instead of lock-set or happens-before methods in user mode, lets it check for data races in the kernel itself without producing as much overhead as its older contenders (it even finds data races at overheads below five percent). One thing to note about DataCollider is that ninety percent of its output to the user is false alarms. This means that after running DataCollider, the user has to sift through all of the gathered information to find the ten percent that describes real data races.[1] The creators were able to build pruning that sorts through the collected material to emphasize the valuable information, but some false alarms still appear in the output. They note, though, that some users like to see the benign reports so they can make design changes that make their programs more portable and scalable, and therefore decided not to filter them out entirely. Even though DataCollider returns 90% false alarms, the project&#039;s team has still been able to locate 25 errors in the Windows operating system, of which 12 have already been fixed.[1] This shows that DataCollider locates data races within the kernel effectively enough that they can be corrected.&lt;br /&gt;
&lt;br /&gt;
The overhead of any running application is very important to all users.  The developers of DataCollider ran various tests to determine its overhead based on the number of breakpoints, and these results were included in the final paper.  DataCollider has a low overall base overhead, and only beyond 1000 breakpoints per second does the runtime overhead increase drastically.[1]  This adds to the effectiveness of DataCollider, since a low overhead is very important to the usability of an application.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] Erickson, Musuvathi, Burckhardt, Olynyk,&amp;lt;i&amp;gt; Effective Data-Race Detection for the Kernel&amp;lt;/i&amp;gt;, Microsoft Research, 2010.[http://www.usenix.org/events/osdi10/tech/full_papers/Erickson.pdf PDF]&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=6485</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=6485"/>
		<updated>2010-12-02T19:11:47Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* Critique */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Alright so it&#039;s due tomorrow.  I was hoping to get an idea of when everyone will be posting their completed sections, thanks. --[[User:Azemanci|Azemanci]] 03:56, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Actual group members&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- Nicholas Shires nshires@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- Andrew Zemancik andy.zemancik@gmail.com&lt;br /&gt;
&lt;br /&gt;
- [[user:abondio2|Austin Bondio]] -&amp;gt; abondio2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- David Krutsko :: dkrutsko at connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- Andrew Bujaki ==&amp;gt; abujaki [at] connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
If everyone could just post their names and contact information.--[[User:Azemanci|Azemanci]] 02:57, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;IMPORTANT&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
THINGS WE NEED TO DEFINE:&amp;lt;br&amp;gt;&lt;br /&gt;
* Happens-before reasoning&lt;br /&gt;
* Lock-set based reasoning&lt;br /&gt;
* &amp;lt;b&amp;gt;Hardware Breakpoints&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The prof seemed to be very focused on hardware breakpoints, so it is very important to define them well and talk about them often. It looks like hardware breakpoints are the one thing that&#039;s setting DataCollider apart from other race detectors, so let&#039;s focus on them!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;IMPORTANT&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Who&#039;s Doing What&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=Research Problem=&lt;br /&gt;
I&#039;ll do &#039;Research Problem&#039; and help out with the &#039;Critique&#039; section, the professor said that part was pretty big [[User:Nshires|Nshires]] 20:45, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The research problem being addressed by this paper is the detection of erroneous data races inside the kernel without creating much overhead. The problem arises because read/write instructions in processes are not always atomic (e.g. two read/write operations may happen simultaneously). There are so many ways a data race can occur that it is very hard to catch them all. &lt;br /&gt;
&lt;br /&gt;
The research team’s program, DataCollider, must detect races between the hardware and the kernel as well as thread-synchronization errors in the kernel, which has to synchronize user-mode processes, interrupts, and deferred procedure calls. As shown in the Background Concepts section, such errors can create unwanted problems in kernel modules. The research group created DataCollider, which places breakpoints on memory accesses to check whether two threads of execution are touching the same piece of memory. Past attempts at a solution ran in user mode, not kernel mode, and produced excessive overhead; there are many problems with trying to apply those techniques to a kernel.&lt;br /&gt;
&lt;br /&gt;
One technique that some past detectors have used is the “happens-before” method. This checks whether one access is ordered before the other; if neither access is ordered before the other, the two accesses were done concurrently. This method reports true data races but is very hard to implement. &lt;br /&gt;
&lt;br /&gt;
Another method used is the “lock-set” approach. This checks the locks currently held by a thread at each access; if the accesses to a variable do not share at least one common lock, the method issues a warning. This method raises many false alarms, since many variables nowadays are shared through mechanisms other than locks, or are guarded by locking schemes too complex for lock-set analysis to understand. &lt;br /&gt;
&lt;br /&gt;
Both these methods produce excessive overhead because they must check every single memory access at runtime. The next section discusses how DataCollider checks for data races in a new way that produces barely any overhead.&lt;br /&gt;
http://www.hpcaconf.org/hpca13/papers/014-zhou.pdf&lt;br /&gt;
&lt;br /&gt;
Moved from main page: (p.s thanks for the info!)[[User:Nshires|Nshires]] 02:32, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Just a few rough notes:&lt;br /&gt;
Research problem / challenges for traditional detectors:&lt;br /&gt;
&lt;br /&gt;
- data-race detectors run in user mode, whereas operating systems run kernel mode (supervisor mode).&lt;br /&gt;
&lt;br /&gt;
- There are a lot of different synchronization methods, and a lot of ways to implement them. So it&#039;s nearly impossible to try and code a program that can catch all of them.&lt;br /&gt;
&lt;br /&gt;
- Some kernel modules can &amp;quot;speak privately&amp;quot; with hardware components, so you can&#039;t make a program that just logs all the kernel&#039;s interactions.&lt;br /&gt;
&lt;br /&gt;
- traditional data race detectors incur massive time overheads because they have to keep an eye on every single memory transaction that occurs at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--[[User:Abondio2|Austin Bondio]] 01:57, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Proving that there is a problem in classic solutions to race detectors:&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The main contribution that DataCollider provides is the unique idea of using hardware breakpoints in a data race detector. Why is a new idea necessary? Why does DataCollider have to &amp;quot;reinvent the wheel&amp;quot;? A plethora of race condition testers has been invented in the last two decades, and all of the dynamic data race detectors can be lumped into three categories: lock-set, happens-before, or a hybrid of the two types of detection. The research team for DataCollider looked at several of these implementations of race condition testers to find ways of improving their own program, and to see that there are major problems in the classic ways of detecting race conditions. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some of the programs that were referenced were: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;br&amp;gt;&lt;br /&gt;
* RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;br&amp;gt;&lt;br /&gt;
* PACER: Proportional Detection of Data Races&amp;lt;br&amp;gt;&lt;br /&gt;
* LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;br&amp;gt;&lt;br /&gt;
* MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;/b&amp;gt;[http://delivery.acm.org/10.1145/270000/265927/p391-savage.pdf?key1=265927&amp;amp;key2=7323721921&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=116768888&amp;amp;CFTOKEN=55577437]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
lock-set based reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Eraser, a data race detector programmed in 1997, was one of the earliest data race detectors on the market. It may have been a useful and revolutionary program in its time; however, it uses very primitive techniques compared to most data race detectors today. One of the reasons it is unsuccessful is that it only checks whether memory accesses use proper locking techniques. If a memory access is found that does not use a lock, Eraser reports a data race. In many cases, bypassing the locking discipline is a conscious decision by the programmer, so Eraser reports many false positives. This also does not take into account benign cases such as variables that merely record a date of access. The DataCollider team used this source as an example of a lock-set based program, and of why such programs are a poor choice for a race condition debugger. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;PACER: Proportional Detection of Data Races&amp;lt;/b&amp;gt;[http://www.cs.ucla.edu/~dlmarino/pubs/pldi09.pdf]&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Pacer, a happens-before data race detector, uses the FastTrack algorithm to detect data races. FastTrack uses vector clocks to keep track of two threads and to determine whether they conflict in any way. Pacer samples some percentage of memory accesses (from 1 to 3 percent) and runs the FastTrack happens-before algorithm on each thread that accesses that part of memory. DataCollider used this source as an example of the implementation of sampling. Like Pacer, DataCollider samples some memory accesses, but instead of using vector clocks to catch the second thread, it uses hardware breakpoints. Hardware breakpoints are considerably faster, and as a result DataCollider runs much faster than Pacer.  &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;/b&amp;gt;[http://www.cs.ucla.edu/~dlmarino/pubs/pldi09.pdf]&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
LiteRace, similar to Pacer, samples a percentage of a program&#039;s memory accesses. Where it differs is in which parts of memory it samples most. The &amp;quot;hot spot&amp;quot; regions of memory are those accessed most often by the program; since they are exercised the most, chances are they have already been successfully debugged, or any data races there are benign. LiteRace identifies these hot spots and samples them at a much lower rate, which improves its chances of capturing a valid data race at a much lower overall sampling rate. Where DataCollider bests LiteRace is LiteRace&#039;s instrumentation mechanism: LiteRace must be recompiled into the software it is trying to debug, whereas DataCollider&#039;s breakpoints require no code changes to the program. This is a major advantage for DataCollider because third-party testers often do not have the source code for a program. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;/b&amp;gt;[http://delivery.acm.org/10.1145/1100000/1095832/p221-yu.pdf?key1=1095832&amp;amp;key2=8433721921&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=116768888&amp;amp;CFTOKEN=55577437]&amp;lt;br&amp;gt;&lt;br /&gt;
combination of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
RaceTrack uses a unique technique to detect data races. The program being debugged runs on top of RaceTrack as a virtual machine using the .NET framework, which examines all of the memory accesses the program requests. As soon as suspicious behavior is exhibited, a warning is recorded to be evaluated later, when the program terminates. RaceTrack uses this technique because several processor-intensive inspections of the machine state must be performed, and doing so on the fly is expensive. RaceTrack has notable problems. It is very successful at detecting a large percentage of data races; however, it has high overhead and requires extreme amounts of memory. RaceTrack must save the state of the entire machine every time a warning is produced, and it also has to save each thread&#039;s memory accesses to check which access &amp;quot;happened before&amp;quot;. Since most warnings turn out to be benign, saving the machine state wastes computational power and memory. Long-running programs are also a problem: the computer being debugged may run out of memory to store all of the warning states before the program terminates, at which point it must either increase overhead significantly by storing warnings on disk, or delete old warnings to make room for new ones. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;/b&amp;gt;[http://docs.google.com/viewer?a=v&amp;amp;q=cache:C8gWk-H3GmEJ:citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.73.9551%26rep%3Drep1%26type%3Dpdf+MultiRace:+Efficient+on-the-fly+data+race+detection+in+multithreaded+C%2B%2B+programs&amp;amp;hl=en&amp;amp;gl=ca&amp;amp;pid=bl&amp;amp;srcid=ADGEESj1jYlzXMOwgbh7SVntUsHxVeI1TvmkU8Oslkm-L9gq-NIyglj5eD48rtkcziUQUynmjOmZojsyzw_tBRiLN6T0n6iiDZyUiFjBUfLijQbzNsRpDQCsMpn-xTiIqK2PUj4DXwoM&amp;amp;sig=AHIEtbRBHpMvb5fel3XOi5oASAogumY-rg]&amp;lt;br&amp;gt;&lt;br /&gt;
combination of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MultiRace is another hybrid-style race condition debugger, combining two algorithms: the first, Djit, is the happens-before component, and the second is an improved iteration of the lock-set algorithm. MultiRace is the program most similar to DataCollider in terms of goals. Both strive to decrease overhead to near the standard running time of the program itself, and to increase transparency for maximum user compatibility. MultiRace itself is several orders of magnitude more complicated than DataCollider, but since it hides that complexity from the user, it is still simple to use. It is arguable that MultiRace is superior for detecting races in C++ programs; however, MultiRace is not compatible with any other programming language. Since DataCollider uses hardware breakpoints, the language the program is written in is irrelevant. Also, since DataCollider avoids both the lock-set and happens-before algorithms, it is versatile enough to debug even kernels. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DataCollider is a unique program. Most other dynamic race condition testers can be lumped into three groups: lock-set, happens-before, or hybrid. DataCollider, however, recognizes the weaknesses of these styles of detection and manages to avoid them completely. Even though it still has issues with false positives and benign races, DataCollider provides very simple, versatile, and lightweight debugging functionality. Future programs may take this style of race detection and add their own functionality to improve upon it; DataCollider could well inspire a groundbreaking solution to detecting race conditions.&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
Hey guys, sorry I&#039;m late to the party. I&#039;ll get started with Background Concepts. - [[user:abondio2|Austin Bondio]] 15:33, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
I&#039;ll work on the critique, which will probably need more than one person, and I&#039;ll also fill out the paper information section.--[[User:Azemanci|Azemanci]] 18:42, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
DataCollider:&lt;br /&gt;
DataCollider seems like a very innovative piece of software. Its use of breakpoints inside kernel space, rather than lock-set or happens-before methods in user mode, lets it check for data race errors in the kernel itself without producing as much overhead as its older contenders (it even finds data races at overheads below five percent). One thing to note about DataCollider is that ninety percent of its output to the user is false alarms. This means that after running DataCollider, the user has to sift through all of the gathered data to find the ten percent that actually contains real data race errors. The creators were able to build a way to sort through the material DataCollider collects so that it reports mostly the valuable information, but some false alarms still appeared in the output. They have noted, though, that some users like to see the benign reports so that they can make design changes that make their programs more portable and scalable, and therefore decided not to filter these reports out entirely. Even though DataCollider returns 90% false alarms, the project&#039;s team was still able to locate 25 errors in the Windows operating system, 12 of which have already been fixed. This shows that DataCollider locates data race errors within the kernel effectively enough that they can be corrected.&lt;br /&gt;
&lt;br /&gt;
feel free to add/edit anything [[User:Nshires|Nshires]] 02:54, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Right on, thanks for that. I was just about to start writing a section on DataCollider. I&#039;m not really sure what else we can critique.--[[User:Azemanci|Azemanci]] 03:11, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a few things to what you wrote and I also moved it to the main page. --[[User:Azemanci|Azemanci]] 03:22, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Nice :P maybe something on how they tested their program, and whether their testing was sufficient? [[User:Nshires|Nshires]] 19:11, 2 December 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=6484</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=6484"/>
		<updated>2010-12-02T19:11:34Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* Critique */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Alright, so it&#039;s due tomorrow. I was hoping to get an idea of when everyone will be posting their completed sections, thanks. --[[User:Azemanci|Azemanci]] 03:56, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Actual group members&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- Nicholas Shires nshires@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- Andrew Zemancik andy.zemancik@gmail.com&lt;br /&gt;
&lt;br /&gt;
- [[user:abondio2|Austin Bondio]] -&amp;gt; abondio2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- David Krutsko :: dkrutsko at connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- Andrew Bujaki ==&amp;gt; abujaki [at] connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
If everyone could just post their names and contact information.--[[User:Azemanci|Azemanci]] 02:57, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;IMPORTANT&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
THINGS WE NEED TO DEFINE:&amp;lt;br&amp;gt;&lt;br /&gt;
* Happens-before reasoning&lt;br /&gt;
* Lock-set based reasoning&lt;br /&gt;
* &amp;lt;b&amp;gt;Hardware Breakpoints&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The prof seemed to be very focused on hardware breakpoints, so it is very important to define them well and talk about them often. It looks like hardware breakpoints are the one thing that sets DataCollider apart from other race detectors, so let&#039;s focus on them!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;IMPORTANT&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Who&#039;s Doing What&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=Research Problem=&lt;br /&gt;
I&#039;ll do &#039;Research Problem&#039; and help out with the &#039;Critique&#039; section, the professor said that part was pretty big [[User:Nshires|Nshires]] 20:45, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The research problem being addressed by this paper is the detection of erroneous data races inside the kernel without creating much overhead. This problem occurs because read/write instructions in processes are not always atomic (e.g. two read/write commands may happen simultaneously). There are so many ways a data race error may occur that it is very hard to catch them all. &lt;br /&gt;
&lt;br /&gt;
The research team&#039;s program, DataCollider, needs to detect errors between the hardware and the kernel, as well as thread synchronization errors in the kernel, which must synchronize among user-mode processes, interrupts, and deferred procedure calls. As shown in the Background Concepts section, such errors can create unwanted problems in kernel modules. The research group created DataCollider, which places breakpoints on memory accesses to check whether two threads are touching the same piece of memory. Past attempts at a solution ran in user mode, not kernel mode, and produced excessive overhead; there are many problems with trying to apply those techniques to a kernel.&lt;br /&gt;
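The detection step can be sketched roughly as follows: at a sampled access, DataCollider pauses briefly while watching for a conflicting access, and a re-read of the value catches writes the breakpoint cannot trap (such as DMA). This is a simplified Python illustration of that value re-check idea, with invented helper names, not the real kernel-mode implementation:

```python
# Rough sketch of the value re-check used at a sampled access
# (illustrative only; the real tool uses x86 debug registers in the kernel).

def watch_for_race(read_value, pause):
    """Read a sampled memory location, pause so a concurrent access can
    land, then re-read. A changed value means another thread (or device)
    wrote the location during the window, i.e. a data race."""
    before = read_value()
    pause()   # stands in for the short delay at the breakpoint
    after = read_value()
    return before != after

# Simulated run: the pause callback plays the role of a racing writer.
shared = {"x": 0}
def racing_writer():
    shared["x"] = 1

print(watch_for_race(lambda: shared["x"], racing_writer))   # True
print(watch_for_race(lambda: shared["x"], lambda: None))    # False
```

Because the check only compares values, it sees any writer at all, regardless of what language the code was written in or what synchronization it uses.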
&lt;br /&gt;
One technique that some past detectors have used is the “happens-before” method. This checks whether one access happened before another, or the other happened first; if neither is the case, the two accesses occurred simultaneously. This method reports true data race errors but is very hard to implement. &lt;br /&gt;
&lt;br /&gt;
Another method used is the “lock-set” approach. This method checks all of the locks currently held by a thread; if two accesses to the same data do not share at least one common lock, the method issues a warning. This method produces many false alarms, since many variables nowadays are shared in ways other than locks, or use locking schemes too complex for lock-set to understand. &lt;br /&gt;
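The lock-set idea can be sketched in a few lines of Python (a simplified version of the Eraser-style algorithm; the trace format is invented for illustration):

```python
# Simplified Eraser-style lock-set check (illustrative only).
# For each shared variable, keep the intersection of the lock sets held
# across all accesses; warn when that intersection becomes empty.

def lockset_warnings(accesses):
    """accesses: list of (thread_id, variable, locks_held) tuples,
    in program order. Returns the accesses that triggered a warning."""
    candidate_locks = {}
    warnings = []
    for thread_id, variable, locks_held in accesses:
        held = set(locks_held)
        if variable in candidate_locks:
            candidate_locks[variable] = candidate_locks[variable].intersection(held)
        else:
            candidate_locks[variable] = held
        if not candidate_locks[variable]:
            warnings.append((thread_id, variable))
    return warnings

trace = [
    (1, "x", ["L1"]),        # both threads guard x with L1: fine
    (2, "x", ["L1", "L2"]),
    (1, "y", ["L1"]),        # y is accessed under two different locks,
    (2, "y", ["L2"]),        # so no common lock remains: warning
]
print(lockset_warnings(trace))   # [(2, 'y')]
```

Note the weakness described above: code that intentionally shares data without locks (or synchronizes some other way) also empties the intersection, so every such access is flagged even when it is safe.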
&lt;br /&gt;
Both of these methods produce excessive overhead because they must check every single memory access at runtime. In the next section we will discuss how DataCollider uses a new way to check for data race errors that produces barely any overhead.&lt;br /&gt;
http://www.hpcaconf.org/hpca13/papers/014-zhou.pdf&lt;br /&gt;
&lt;br /&gt;
Moved from main page: (p.s thanks for the info!)[[User:Nshires|Nshires]] 02:32, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Just a few rough notes:&lt;br /&gt;
Research problem / challenges for traditional detectors:&lt;br /&gt;
&lt;br /&gt;
- data-race detectors run in user mode, whereas operating systems run in kernel mode (supervisor mode).&lt;br /&gt;
&lt;br /&gt;
- There are a lot of different synchronization methods, and a lot of ways to implement them. So it&#039;s nearly impossible to try and code a program that can catch all of them.&lt;br /&gt;
&lt;br /&gt;
- Some kernel modules can &amp;quot;speak privately&amp;quot; with hardware components, so you can&#039;t make a program that just logs all the kernel&#039;s interactions.&lt;br /&gt;
&lt;br /&gt;
- traditional data race detectors incur massive time overheads because they have to keep an eye on every single memory transaction that occurs at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--[[User:Abondio2|Austin Bondio]] 01:57, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Proving that there is a problem in classic solutions to race detectors:&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The main contribution DataCollider provides is the unique idea of using hardware breakpoints in a data race detector. Why is a unique idea necessary? Why does DataCollider have to &amp;quot;reinvent the wheel&amp;quot;? A plethora of race condition testers has been invented in the last two decades, and all of the dynamic data race detectors can be lumped into three categories: lock-set, happens-before, or a hybrid of the two. The research team for DataCollider looked at several of these implementations to find ways of improving their own program, and to show that there are major problems with the classic ways of detecting race conditions. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some of the programs that were referenced were: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;br&amp;gt;&lt;br /&gt;
* RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;br&amp;gt;&lt;br /&gt;
* PACER: Proportional Detection of Data Races&amp;lt;br&amp;gt;&lt;br /&gt;
* LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;br&amp;gt;&lt;br /&gt;
* MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;/b&amp;gt;[http://delivery.acm.org/10.1145/270000/265927/p391-savage.pdf?key1=265927&amp;amp;key2=7323721921&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=116768888&amp;amp;CFTOKEN=55577437]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
lock-set based reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Eraser, a data race detector programmed in 1997, was one of the earlier data race detectors on the market. It may have been a useful and revolutionary program in its time; however, it uses very crude techniques compared to most data race detectors today. One of the reasons it is unsuccessful is that it only checks whether memory accesses use proper locking techniques: if a memory access is found that does not hold a lock, Eraser reports a data race. In many cases, forgoing locks is a conscious decision by the programmer, so Eraser reports many false positives. It also fails to account for benign cases such as date-of-access variables. DataCollider used this source as an example of a lock-set based program, and of why such programs are a poor choice for a race condition debugger. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;PACER: Proportional Detection of Data Races&amp;lt;/b&amp;gt;[http://www.cs.ucla.edu/~dlmarino/pubs/pldi09.pdf]&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Pacer, a happens-before data race detector, uses the FastTrack algorithm to detect data races. FastTrack uses vector clocks to track each thread&#039;s accesses and determine whether two threads conflict. Pacer samples a small percentage of memory accesses (from 1 to 3 percent) and runs the FastTrack happens-before algorithm on each thread that accesses that part of memory. DataCollider used this source as an example of how sampling can be implemented. Like Pacer, DataCollider samples some memory accesses, but instead of using vector clocks to catch the second thread, it uses hardware breakpoints. Hardware breakpoints are considerably cheaper, so DataCollider runs much faster than Pacer. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;/b&amp;gt;[http://www.cs.ucla.edu/~dlmarino/pubs/pldi09.pdf]&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
LiteRace, like Pacer, samples a percentage of a program&#039;s memory accesses. Where it differs is in which parts of memory it samples most. The &amp;quot;hot spot&amp;quot; regions of memory are those accessed most often by the program; since they are exercised so frequently, chances are they have already been successfully debugged, or any data races there are benign. LiteRace identifies these areas as hot spots and samples them at a much lower rate. This improves LiteRace&#039;s chances of capturing a valid data race at a much lower overall sampling rate. Where DataCollider bests LiteRace is in the installation mechanism: LiteRace must be recompiled into the software it is trying to debug, whereas DataCollider&#039;s breakpoints do not require any code changes to the program. This is a major advantage for DataCollider because third-party testers often do not have the source code for a program. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;/b&amp;gt;[http://delivery.acm.org/10.1145/1100000/1095832/p221-yu.pdf?key1=1095832&amp;amp;key2=8433721921&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=116768888&amp;amp;CFTOKEN=55577437]&amp;lt;br&amp;gt;&lt;br /&gt;
combination of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
RaceTrack uses a unique technique to detect data races. The program being debugged runs on top of RaceTrack as a virtual machine using the .NET framework, which examines all of the memory accesses the program requests. As soon as suspicious behavior is exhibited, a warning is recorded to be evaluated later, when the program terminates. RaceTrack uses this technique because several processor-intensive inspections of the machine state must be performed, and doing so on the fly is expensive. RaceTrack has notable problems. It is very successful at detecting a large percentage of data races; however, it has high overhead and requires extreme amounts of memory. RaceTrack must save the state of the entire machine every time a warning is produced, and it also has to save each thread&#039;s memory accesses to check which access &amp;quot;happened before&amp;quot;. Since most warnings turn out to be benign, saving the machine state wastes computational power and memory. Long-running programs are also a problem: the computer being debugged may run out of memory to store all of the warning states before the program terminates, at which point it must either increase overhead significantly by storing warnings on disk, or delete old warnings to make room for new ones. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;/b&amp;gt;[http://docs.google.com/viewer?a=v&amp;amp;q=cache:C8gWk-H3GmEJ:citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.73.9551%26rep%3Drep1%26type%3Dpdf+MultiRace:+Efficient+on-the-fly+data+race+detection+in+multithreaded+C%2B%2B+programs&amp;amp;hl=en&amp;amp;gl=ca&amp;amp;pid=bl&amp;amp;srcid=ADGEESj1jYlzXMOwgbh7SVntUsHxVeI1TvmkU8Oslkm-L9gq-NIyglj5eD48rtkcziUQUynmjOmZojsyzw_tBRiLN6T0n6iiDZyUiFjBUfLijQbzNsRpDQCsMpn-xTiIqK2PUj4DXwoM&amp;amp;sig=AHIEtbRBHpMvb5fel3XOi5oASAogumY-rg]&amp;lt;br&amp;gt;&lt;br /&gt;
combination of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MultiRace is another hybrid-style race condition debugger, combining two algorithms: the first, Djit, is the happens-before component, and the second is an improved iteration of the lock-set algorithm. MultiRace is the program most similar to DataCollider in terms of goals. Both strive to decrease overhead to near the standard running time of the program itself, and to increase transparency for maximum user compatibility. MultiRace itself is several orders of magnitude more complicated than DataCollider, but since it hides that complexity from the user, it is still simple to use. It is arguable that MultiRace is superior for detecting races in C++ programs; however, MultiRace is not compatible with any other programming language. Since DataCollider uses hardware breakpoints, the language the program is written in is irrelevant. Also, since DataCollider avoids both the lock-set and happens-before algorithms, it is versatile enough to debug even kernels. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DataCollider is a unique program. Most other dynamic race condition testers can be lumped into three groups: lock-set, happens-before, or hybrid. DataCollider, however, recognizes the weaknesses of these styles of detection and manages to avoid them completely. Even though it still has issues with false positives and benign races, DataCollider provides very simple, versatile, and lightweight debugging functionality. Future programs may take this style of race detection and add their own functionality to improve upon it; DataCollider could well inspire a groundbreaking solution to detecting race conditions.&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
Hey guys, sorry I&#039;m late to the party. I&#039;ll get started with Background Concepts. - [[user:abondio2|Austin Bondio]] 15:33, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
I&#039;ll work on the critique, which will probably need more than one person, and I&#039;ll also fill out the paper information section.--[[User:Azemanci|Azemanci]] 18:42, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
DataCollider:&lt;br /&gt;
DataCollider seems like a very innovative piece of software. Its use of breakpoints inside kernel space, rather than lock-set or happens-before methods in user mode, lets it check for data race errors in the kernel itself without producing as much overhead as its older contenders (it even finds data races at overheads below five percent). One thing to note about DataCollider is that ninety percent of its output to the user is false alarms. This means that after running DataCollider, the user has to sift through all of the gathered data to find the ten percent that actually contains real data race errors. The creators were able to build a way to sort through the material DataCollider collects so that it reports mostly the valuable information, but some false alarms still appeared in the output. They have noted, though, that some users like to see the benign reports so that they can make design changes that make their programs more portable and scalable, and therefore decided not to filter these reports out entirely. Even though DataCollider returns 90% false alarms, the project&#039;s team was still able to locate 25 errors in the Windows operating system, 12 of which have already been fixed. This shows that DataCollider locates data race errors within the kernel effectively enough that they can be corrected.&lt;br /&gt;
&lt;br /&gt;
feel free to add/edit anything [[User:Nshires|Nshires]] 02:54, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Right on, thanks for that. I was just about to start writing a section on DataCollider. I&#039;m not really sure what else we can critique.--[[User:Azemanci|Azemanci]] 03:11, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a few things to what you wrote and I also moved it to the main page. --[[User:Azemanci|Azemanci]] 03:22, 2 December 2010 (UTC)&lt;br /&gt;
Nice :P maybe something on how they tested their program, and whether their testing was sufficient? [[User:Nshires|Nshires]] 19:11, 2 December 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=6100</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=6100"/>
		<updated>2010-12-02T02:59:28Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* Research Problem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Actual group members&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- Nicholas Shires nshires@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- Andrew Zemancik andy.zemancik@gmail.com&lt;br /&gt;
&lt;br /&gt;
- [[user:abondio2|Austin Bondio]] -&amp;gt; abondio2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- David Krutsko :: dkrutsko at connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
If everyone could just post their names and contact information.--[[User:Azemanci|Azemanci]] 02:57, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;IMPORTANT&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
THINGS WE NEED TO DEFINE:&amp;lt;br&amp;gt;&lt;br /&gt;
* Happens-before reasoning&lt;br /&gt;
* Lock-set based reasoning&lt;br /&gt;
* &amp;lt;b&amp;gt;Hardware Breakpoints&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The prof seemed to be very focused on hardware breakpoints, so it is very important to define them well and talk about them often. It looks like hardware breakpoints are the one thing that sets DataCollider apart from other race detectors, so let&#039;s focus on them!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;IMPORTANT&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Who&#039;s Doing What&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=Research Problem=&lt;br /&gt;
I&#039;ll do &#039;Research Problem&#039; and help out with the &#039;Critique&#039; section, the professor said that part was pretty big [[User:Nshires|Nshires]] 20:45, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The research problem being addressed by this paper is the detection of erroneous data races inside the kernel without creating much overhead. This problem occurs because read/write instructions in processes are not always atomic (e.g. two read/write commands may happen simultaneously). There are so many ways a data race error may occur that it is very hard to catch them all. &lt;br /&gt;
&lt;br /&gt;
The research team&#039;s program, DataCollider, needs to detect errors between the hardware and the kernel, as well as thread synchronization errors in the kernel, which must synchronize among user-mode processes, interrupts, and deferred procedure calls. As shown in the Background Concepts section, such errors can create unwanted problems in kernel modules. The research group created DataCollider, which places breakpoints on memory accesses to check whether two threads are touching the same piece of memory. Past attempts at a solution ran in user mode, not kernel mode, and produced excessive overhead; there are many problems with trying to apply those techniques to a kernel.&lt;br /&gt;
&lt;br /&gt;
One technique that some past detectors have used is the “happens-before” method. This checks whether one access happened before another, or the other happened first; if neither is the case, the two accesses occurred simultaneously. This method reports true data race errors but is very hard to implement. &lt;br /&gt;
&lt;br /&gt;
Another method used is the “lock-set” approach. This method checks all of the locks currently held by a thread; if two accesses to the same data do not share at least one common lock, the method issues a warning. This method produces many false alarms, since many variables nowadays are shared in ways other than locks, or use locking schemes too complex for lock-set to understand. &lt;br /&gt;
&lt;br /&gt;
Both of these methods produce excessive overhead because they must check every single memory access at runtime. In the next section we will discuss how DataCollider uses a new way to check for data race errors that produces barely any overhead.&lt;br /&gt;
http://www.hpcaconf.org/hpca13/papers/014-zhou.pdf&lt;br /&gt;
&lt;br /&gt;
Moved from main page: (p.s thanks for the info!)[[User:Nshires|Nshires]] 02:32, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Just a few rough notes:&lt;br /&gt;
Research problem / challenges for traditional detectors:&lt;br /&gt;
&lt;br /&gt;
- data-race detectors run in user mode, whereas operating systems run in kernel mode (supervisor mode).&lt;br /&gt;
&lt;br /&gt;
- There are a lot of different synchronization methods, and a lot of ways to implement them. So it&#039;s nearly impossible to try and code a program that can catch all of them.&lt;br /&gt;
&lt;br /&gt;
- Some kernel modules can &amp;quot;speak privately&amp;quot; with hardware components, so you can&#039;t make a program that just logs all the kernel&#039;s interactions.&lt;br /&gt;
&lt;br /&gt;
- traditional data race detectors incur massive time overheads because they have to keep an eye on every single memory transaction that occurs at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--[[User:Abondio2|Austin Bondio]] 01:57, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
I&#039;ll do Contribution: [[User:Achamney|Achamney]] 03:50, 22 November 2010 (UTC)&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Proving that DataCollider is better:&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
A key part of this paper&#039;s contribution is its treatment of the competition. The research team for DataCollider looked at several other implementations of race condition testers to find ways of improving their own program, or to look for different ways of solving the same problem. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some of the programs that were referenced were: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;br&amp;gt;&lt;br /&gt;
* RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;br&amp;gt;&lt;br /&gt;
* PACER: Proportional Detection of Data Races&amp;lt;br&amp;gt;&lt;br /&gt;
* LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;br&amp;gt;&lt;br /&gt;
* FastTrack: Efficient and Precise Dynamic Race Detection&amp;lt;br&amp;gt;&lt;br /&gt;
* MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;br&amp;gt;&lt;br /&gt;
* RacerX: Effective, Static Detection of Race Conditions and Deadlocks&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
lock-set based reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Eraser, a data race detector written in 1997, was one of the earlier data race detectors on the market. It may have been a useful and revolutionary program in its time, but it uses very low-level techniques compared to most data race detectors today. One reason it falls short is that it only checks whether memory accesses follow a proper locking discipline: if a memory access is found that does not hold a lock, Eraser reports a data race. In many cases, though, forgoing locks is a conscious decision by the programmer, so Eraser reports many false positives. It also fails to account for benign cases such as date-of-access variables. DataCollider cites Eraser as an example of a lock-set based program, and of why such programs are a poor choice for a race-condition debugger. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
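The lock-set idea Eraser is built on can be sketched in a few lines. This is a hedged illustration of the core refinement algorithm only, not Eraser's actual code; the class and variable names here are invented for the sketch.

```python
# Lock-set refinement sketch: for each shared variable, keep the set of locks
# that was held at *every* access so far. If that set ever becomes empty,
# no single lock consistently protects the variable -> warn of a possible race.

class LocksetChecker:
    def __init__(self):
        # variable name -> candidate set of locks that protected every access so far
        self.candidates = {}

    def access(self, var, locks_held):
        held = set(locks_held)
        if var not in self.candidates:
            self.candidates[var] = held          # first access: all held locks are candidates
        else:
            self.candidates[var] &= held          # refine: keep only locks held this time too
        return len(self.candidates[var]) > 0     # False means: no common lock, warn

checker = LocksetChecker()
assert checker.access("counter", {"L1", "L2"})   # candidates = {L1, L2}
assert checker.access("counter", {"L1"})         # still consistently protected by L1
assert not checker.access("counter", set())      # lock-free access: empty set, report a race
```

This also shows why Eraser's false positives happen: a deliberately lock-free but safe access still empties the candidate set and triggers a report.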
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;PACER: Proportional Detection of Data Races&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Pacer, a happens-before data race detector, uses the FastTrack algorithm to detect data races. FastTrack uses vector clocks to track the ordering between threads and determine whether two accesses conflict. Pacer samples a small percentage of memory accesses (from 1 to 3 percent) and runs the FastTrack happens-before check on each thread that accesses that part of memory. DataCollider cites this work as an example of the use of sampling: like Pacer, DataCollider samples some memory accesses, but instead of using vector clocks to catch the second thread, it uses hardware breakpoints. Hardware breakpoints are considerably cheaper, so DataCollider runs much faster than Pacer. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
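The vector-clock comparison at the heart of happens-before detectors like FastTrack and Pacer can be shown in a minimal sketch. Assumed simplification: a full vector clock per access (FastTrack's actual contribution is avoiding exactly this cost in the common case), and the function names are invented here.

```python
# Two accesses race if neither one's vector clock is ordered before the other's.

def happens_before(vc_a, vc_b):
    """True if access a is ordered before access b: every component of a's
    clock is <= b's, and the clocks are not identical."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

def concurrent(vc_a, vc_b):
    """Neither access is ordered before the other -> potential data race."""
    return not happens_before(vc_a, vc_b) and not happens_before(vc_b, vc_a)

# Thread 0 writes at clock (1, 0); thread 1 writes at clock (0, 1):
# neither is ordered before the other, so the accesses may race.
assert concurrent([1, 0], [0, 1])

# After a synchronization that passes thread 0's clock to thread 1,
# thread 1's next access at (1, 2) is ordered after the write at (1, 0).
assert happens_before([1, 0], [1, 2])
assert not concurrent([1, 0], [1, 2])
```

DataCollider skips this bookkeeping entirely: rather than proving two accesses unordered after the fact, it pauses one access and catches the second one in the act.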
&lt;br /&gt;
&amp;lt;b&amp;gt;LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
LiteRace, like Pacer, samples a percentage of a program&#039;s memory accesses. Where it differs is in which parts of memory it samples most. The &amp;quot;hot spot&amp;quot; regions of memory are those accessed most often by the program; since they are exercised the most, chances are they have already been successfully debugged, or any data races there are benign. LiteRace detects these hot spots and samples them at a much lower rate, which improves its chances of capturing a real data race at a much lower overall sampling cost. Where DataCollider bests LiteRace is installation: LiteRace must be recompiled into the software it is trying to debug, whereas DataCollider&#039;s breakpoints require no code changes to the program. This is a major win for DataCollider, because third-party testers often do not have a program&#039;s source code. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
combo of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;HIGH OVERHEAD&amp;lt;/b&amp;gt;[http://www.cs.ucla.edu/~dlmarino/pubs/pldi09.pdf]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
combo of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve noticed a couple of things for Controversy, even though it&#039;s not my topic.&lt;br /&gt;
The biggest thing I saw was that DataCollider reports non-erroneous operations 90% of the time, which forces the user to sift through all of the reports to separate the real problems from the benign races. [[User:Achamney|Achamney]] 17:18, 22 November 2010 (UTC)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
Hey guys, sorry I&#039;m late to the party. I&#039;ll get started with Background Concepts. - [[user:abondio2|Austin Bondio]] 15:33, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
I&#039;ll work on the critique, which will probably need more than one person, and I&#039;ll also fill out the paper information section.--[[User:Azemanci|Azemanci]] 18:42, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
DataCollider:&lt;br /&gt;
DataCollider seems like a very innovative piece of software. Its novel use of breakpoints inside kernel space, instead of user-mode lock-set or happens-before methods, lets it check for data race errors in the kernel itself without producing as much overhead as its older contenders (it even finds data races at overheads below five percent). One thing to note about DataCollider is that ninety percent of its output to the user is false alarms, so after running DataCollider the user has to sift through the gathered reports to find the ten percent that contain real data race errors. The creators were able to build pruning heuristics to sort through the collected material and emit mostly the valuable information, but some false alarms still appeared in the output. They also noted that some users like to see the benign reports so they can make design changes that leave their programs more portable and scalable, and therefore decided not to suppress benign races entirely.&lt;br /&gt;
&lt;br /&gt;
feel free to add/edit anything [[User:Nshires|Nshires]] 02:54, 2 December 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=6099</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=6099"/>
		<updated>2010-12-02T02:55:07Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* Critique */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Actual group members&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- Nicholas Shires nshires@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- Andrew Zemancik andy.zemancik@gmail.com&lt;br /&gt;
&lt;br /&gt;
- [[user:abondio2|Austin Bondio]] -&amp;gt; abondio2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- David Krutsko :: dkrutsko at connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
If everyone could just post their names and contact information.--[[User:Azemanci|Azemanci]] 02:57, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;IMPORTANT&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
THINGS WE NEED TO DEFINE:&amp;lt;br&amp;gt;&lt;br /&gt;
* Happens-before reasoning&lt;br /&gt;
* Lock-set based reasoning&lt;br /&gt;
* &amp;lt;b&amp;gt;Hardware Breakpoints&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The prof seemed very focused on hardware breakpoints, so it is very important to define them well and discuss them often. Hardware breakpoints look like the one thing setting DataCollider apart from other race detectors, so let&#039;s focus on them!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;IMPORTANT&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Who&#039;s Doing What&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=Research Problem=&lt;br /&gt;
I&#039;ll do &#039;Research Problem&#039; and help out with the &#039;Critique&#039; section; the professor said that part was pretty big. [[User:Nshires|Nshires]] 20:45, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The research problem being addressed by this paper is the detection of erroneous data races without creating much overhead. This problem occurs because read/write access instructions in processes are not always atomic and two read/write commands may happen simultaneously. &lt;br /&gt;
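The non-atomicity problem above can be shown with a deterministic simulation of one bad interleaving. This is plain sequential Python standing in for two threads; no real concurrency is used, the interleaving is simply written out by hand.

```python
# A lost-update race: two "threads" each increment a shared counter once,
# but both read the old value before either writes back.

counter = 0

a_local = counter          # thread A reads 0
b_local = counter          # thread B reads 0 (before A writes back)
counter = a_local + 1      # A writes 1
counter = b_local + 1      # B also writes 1 -- A's increment is lost

assert counter == 1        # a correct serial execution would give 2
```

A kernel suffers exactly this kind of interleaving when an interrupt handler and a system call both touch the same field without synchronization.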
&lt;br /&gt;
The research team’s program DataCollider needs to detect errors between the hardware and the kernel, as well as errors in thread synchronization within the kernel, which must coordinate user-mode processes, interrupts, and deferred procedure calls. As shown in the Background Concepts section, such errors can create unwanted problems in kernel modules. The research group created DataCollider, which places breakpoints on memory accesses to check whether two threads of execution are touching the same piece of memory. There have been many solutions to this problem in the past, and there are many other ways of detecting these data race errors. &lt;br /&gt;
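DataCollider's conflict-detection step can be sketched roughly as follows. This is a hedged simulation: the real tool uses hardware code and data breakpoints, which Python cannot express, so the trap and the re-read are modeled as plain inputs, and the function name is invented for this sketch.

```python
# On a sampled access, DataCollider pauses the thread briefly and detects a
# conflict in one of two ways: another thread trips the breakpoint set on the
# same address, or the value at the address changed while the thread was paused
# (catching even DMA writes that no breakpoint can see).

def sampled_access_races(value_before, value_after, trapped_by_other_thread):
    """Return True if a data race was detected on this sampled access."""
    if trapped_by_other_thread:
        return True                       # a second thread touched the address
    return value_before != value_after    # silent overwrite during the pause

assert sampled_access_races(7, 7, trapped_by_other_thread=False) is False
assert sampled_access_races(7, 9, trapped_by_other_thread=False) is True
assert sampled_access_races(7, 7, trapped_by_other_thread=True) is True
```

Because only sampled accesses pay this cost, the overhead scales with the sampling rate rather than with every memory transaction, which is the key difference from the traditional detectors listed below.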
&lt;br /&gt;
One solution that some past detectors have used is the “happens-before” method. It checks whether one access happened before another, or the other happened first; if neither is the case, the two accesses were concurrent. This method finds true data race errors but is very hard to implement.&lt;br /&gt;
&lt;br /&gt;
Another method used is the “lock-set” approach. It checks all of the locks currently held by a thread, and if concurrent accesses to a variable do not share at least one common lock, the method issues a warning. This method produces many false alarms, since many variables nowadays are shared in ways other than locks, or are guarded by locking schemes too complex for lock-set analysis to understand. &lt;br /&gt;
&lt;br /&gt;
This is what I have so far, suggestions welcomed! [[User:Nshires|Nshires]] 22:38, 30 November 2010 (UTC)&lt;br /&gt;
http://www.hpcaconf.org/hpca13/papers/014-zhou.pdf&lt;br /&gt;
&lt;br /&gt;
Moved from main page: (p.s thanks for the info!)[[User:Nshires|Nshires]] 02:32, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Just a few rough notes:&lt;br /&gt;
Research problem / challenges for traditional detectors:&lt;br /&gt;
&lt;br /&gt;
- data-race detectors run in user mode, whereas operating systems run in kernel mode (supervisor mode).&lt;br /&gt;
&lt;br /&gt;
- There are a lot of different synchronization methods, and a lot of ways to implement them, so it&#039;s nearly impossible to write a program that can catch all of them.&lt;br /&gt;
&lt;br /&gt;
- Some kernel modules can &amp;quot;speak privately&amp;quot; with hardware components, so you can&#039;t make a program that just logs all the kernel&#039;s interactions.&lt;br /&gt;
&lt;br /&gt;
- traditional data race detectors incur massive time overheads because they have to keep an eye on every single memory transaction that occurs at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--[[User:Abondio2|Austin Bondio]] 01:57, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
I&#039;ll do Contribution: [[User:Achamney|Achamney]] 03:50, 22 November 2010 (UTC)&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Proving that DataCollider is better:&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
A key part of this paper&#039;s contribution is its comparison with the competition. The DataCollider team looked at several other implementations of race-condition testers to find ways of improving their own program and to survey different ways of solving the same problem. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some of the programs that were referenced were: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;br&amp;gt;&lt;br /&gt;
* RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;br&amp;gt;&lt;br /&gt;
* PACER: Proportional Detection of Data Races&amp;lt;br&amp;gt;&lt;br /&gt;
* LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;br&amp;gt;&lt;br /&gt;
* FastTrack: Efficient and Precise Dynamic Race Detection&amp;lt;br&amp;gt;&lt;br /&gt;
* MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;br&amp;gt;&lt;br /&gt;
* RacerX: Effective, Static Detection of Race Conditions and Deadlocks&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
lock-set based reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Eraser, a data race detector written in 1997, was one of the earlier data race detectors on the market. It may have been a useful and revolutionary program in its time, but it uses very low-level techniques compared to most data race detectors today. One reason it falls short is that it only checks whether memory accesses follow a proper locking discipline: if a memory access is found that does not hold a lock, Eraser reports a data race. In many cases, though, forgoing locks is a conscious decision by the programmer, so Eraser reports many false positives. It also fails to account for benign cases such as date-of-access variables. DataCollider cites Eraser as an example of a lock-set based program, and of why such programs are a poor choice for a race-condition debugger. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;PACER: Proportional Detection of Data Races&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Pacer, a happens-before data race detector, uses the FastTrack algorithm to detect data races. FastTrack uses vector clocks to track the ordering between threads and determine whether two accesses conflict. Pacer samples a small percentage of memory accesses (from 1 to 3 percent) and runs the FastTrack happens-before check on each thread that accesses that part of memory. DataCollider cites this work as an example of the use of sampling: like Pacer, DataCollider samples some memory accesses, but instead of using vector clocks to catch the second thread, it uses hardware breakpoints. Hardware breakpoints are considerably cheaper, so DataCollider runs much faster than Pacer. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
LiteRace, like Pacer, samples a percentage of a program&#039;s memory accesses. Where it differs is in which parts of memory it samples most. The &amp;quot;hot spot&amp;quot; regions of memory are those accessed most often by the program; since they are exercised the most, chances are they have already been successfully debugged, or any data races there are benign. LiteRace detects these hot spots and samples them at a much lower rate, which improves its chances of capturing a real data race at a much lower overall sampling cost. Where DataCollider bests LiteRace is installation: LiteRace must be recompiled into the software it is trying to debug, whereas DataCollider&#039;s breakpoints require no code changes to the program. This is a major win for DataCollider, because third-party testers often do not have a program&#039;s source code. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
combo of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;HIGH OVERHEAD&amp;lt;/b&amp;gt;[http://www.cs.ucla.edu/~dlmarino/pubs/pldi09.pdf]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
combo of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I&#039;ve noticed a couple of things for Controversy, even though it&#039;s not my topic.&lt;br /&gt;
The biggest thing I saw was that DataCollider reports non-erroneous operations 90% of the time, which forces the user to sift through all of the reports to separate the real problems from the benign races. [[User:Achamney|Achamney]] 17:18, 22 November 2010 (UTC)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
Hey guys, sorry I&#039;m late to the party. I&#039;ll get started with Background Concepts. - [[user:abondio2|Austin Bondio]] 15:33, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
I&#039;ll work on the critique, which will probably need more than one person, and I&#039;ll also fill out the paper information section.--[[User:Azemanci|Azemanci]] 18:42, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
DataCollider:&lt;br /&gt;
DataCollider seems like a very innovative piece of software. Its novel use of breakpoints inside kernel space, instead of user-mode lock-set or happens-before methods, lets it check for data race errors in the kernel itself without producing as much overhead as its older contenders (it even finds data races at overheads below five percent). One thing to note about DataCollider is that ninety percent of its output to the user is false alarms, so after running DataCollider the user has to sift through the gathered reports to find the ten percent that contain real data race errors. The creators were able to build pruning heuristics to sort through the collected material and emit mostly the valuable information, but some false alarms still appeared in the output. They also noted that some users like to see the benign reports so they can make design changes that leave their programs more portable and scalable, and therefore decided not to suppress benign races entirely.&lt;br /&gt;
&lt;br /&gt;
feel free to add/edit anything [[User:Nshires|Nshires]] 02:54, 2 December 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=6098</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=6098"/>
		<updated>2010-12-02T02:54:48Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* Critique */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Actual group members&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- Nicholas Shires nshires@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- Andrew Zemancik andy.zemancik@gmail.com&lt;br /&gt;
&lt;br /&gt;
- [[user:abondio2|Austin Bondio]] -&amp;gt; abondio2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- David Krutsko :: dkrutsko at connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
If everyone could just post their names and contact information.--[[User:Azemanci|Azemanci]] 02:57, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;IMPORTANT&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
THINGS WE NEED TO DEFINE:&amp;lt;br&amp;gt;&lt;br /&gt;
* Happens-before reasoning&lt;br /&gt;
* Lock-set based reasoning&lt;br /&gt;
* &amp;lt;b&amp;gt;Hardware Breakpoints&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The prof seemed very focused on hardware breakpoints, so it is very important to define them well and discuss them often. Hardware breakpoints look like the one thing setting DataCollider apart from other race detectors, so let&#039;s focus on them!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;IMPORTANT&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Who&#039;s Doing What&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=Research Problem=&lt;br /&gt;
I&#039;ll do &#039;Research Problem&#039; and help out with the &#039;Critique&#039; section; the professor said that part was pretty big. [[User:Nshires|Nshires]] 20:45, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The research problem being addressed by this paper is the detection of erroneous data races without creating much overhead. This problem occurs because read/write access instructions in processes are not always atomic and two read/write commands may happen simultaneously. &lt;br /&gt;
&lt;br /&gt;
The research team’s program DataCollider needs to detect errors between the hardware and the kernel, as well as errors in thread synchronization within the kernel, which must coordinate user-mode processes, interrupts, and deferred procedure calls. As shown in the Background Concepts section, such errors can create unwanted problems in kernel modules. The research group created DataCollider, which places breakpoints on memory accesses to check whether two threads of execution are touching the same piece of memory. There have been many solutions to this problem in the past, and there are many other ways of detecting these data race errors. &lt;br /&gt;
&lt;br /&gt;
One solution that some past detectors have used is the “happens-before” method. It checks whether one access happened before another, or the other happened first; if neither is the case, the two accesses were concurrent. This method finds true data race errors but is very hard to implement.&lt;br /&gt;
&lt;br /&gt;
Another method used is the “lock-set” approach. It checks all of the locks currently held by a thread, and if concurrent accesses to a variable do not share at least one common lock, the method issues a warning. This method produces many false alarms, since many variables nowadays are shared in ways other than locks, or are guarded by locking schemes too complex for lock-set analysis to understand. &lt;br /&gt;
&lt;br /&gt;
This is what I have so far, suggestions welcomed! [[User:Nshires|Nshires]] 22:38, 30 November 2010 (UTC)&lt;br /&gt;
http://www.hpcaconf.org/hpca13/papers/014-zhou.pdf&lt;br /&gt;
&lt;br /&gt;
Moved from main page: (p.s thanks for the info!)[[User:Nshires|Nshires]] 02:32, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Just a few rough notes:&lt;br /&gt;
Research problem / challenges for traditional detectors:&lt;br /&gt;
&lt;br /&gt;
- data-race detectors run in user mode, whereas operating systems run in kernel mode (supervisor mode).&lt;br /&gt;
&lt;br /&gt;
- There are a lot of different synchronization methods, and a lot of ways to implement them, so it&#039;s nearly impossible to write a program that can catch all of them.&lt;br /&gt;
&lt;br /&gt;
- Some kernel modules can &amp;quot;speak privately&amp;quot; with hardware components, so you can&#039;t make a program that just logs all the kernel&#039;s interactions.&lt;br /&gt;
&lt;br /&gt;
- traditional data race detectors incur massive time overheads because they have to keep an eye on every single memory transaction that occurs at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--[[User:Abondio2|Austin Bondio]] 01:57, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
I&#039;ll do Contribution: [[User:Achamney|Achamney]] 03:50, 22 November 2010 (UTC)&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Proving that DataCollider is better:&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
A key part of this paper&#039;s contribution is its comparison with the competition. The DataCollider team looked at several other implementations of race-condition testers to find ways of improving their own program and to survey different ways of solving the same problem. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some of the programs that were referenced were: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;br&amp;gt;&lt;br /&gt;
* RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;br&amp;gt;&lt;br /&gt;
* PACER: Proportional Detection of Data Races&amp;lt;br&amp;gt;&lt;br /&gt;
* LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;br&amp;gt;&lt;br /&gt;
* FastTrack: Efficient and Precise Dynamic Race Detection&amp;lt;br&amp;gt;&lt;br /&gt;
* MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;br&amp;gt;&lt;br /&gt;
* RacerX: Effective, Static Detection of Race Conditions and Deadlocks&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
lock-set based reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Eraser, a data race detector written in 1997, was one of the earlier data race detectors on the market. It may have been a useful and revolutionary program in its time, but it uses very low-level techniques compared to most data race detectors today. One reason it falls short is that it only checks whether memory accesses follow a proper locking discipline: if a memory access is found that does not hold a lock, Eraser reports a data race. In many cases, though, forgoing locks is a conscious decision by the programmer, so Eraser reports many false positives. It also fails to account for benign cases such as date-of-access variables. DataCollider cites Eraser as an example of a lock-set based program, and of why such programs are a poor choice for a race-condition debugger. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;PACER: Proportional Detection of Data Races&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Pacer, a happens-before data race detector, uses the FastTrack algorithm to detect data races. FastTrack uses vector clocks to track the ordering between threads and determine whether two accesses conflict. Pacer samples a small percentage of memory accesses (from 1 to 3 percent) and runs the FastTrack happens-before check on each thread that accesses that part of memory. DataCollider cites this work as an example of the use of sampling: like Pacer, DataCollider samples some memory accesses, but instead of using vector clocks to catch the second thread, it uses hardware breakpoints. Hardware breakpoints are considerably cheaper, so DataCollider runs much faster than Pacer. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
LiteRace, like Pacer, samples a percentage of a program&#039;s memory accesses. Where it differs is in which parts of memory it samples most. The &amp;quot;hot spot&amp;quot; regions of memory are those accessed most often by the program; since they are exercised the most, chances are they have already been successfully debugged, or any data races there are benign. LiteRace detects these hot spots and samples them at a much lower rate, which improves its chances of capturing a real data race at a much lower overall sampling cost. Where DataCollider bests LiteRace is installation: LiteRace must be recompiled into the software it is trying to debug, whereas DataCollider&#039;s breakpoints require no code changes to the program. This is a major win for DataCollider, because third-party testers often do not have a program&#039;s source code. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
combo of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;HIGH OVERHEAD&amp;lt;/b&amp;gt;[http://www.cs.ucla.edu/~dlmarino/pubs/pldi09.pdf]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
combo of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I&#039;ve noticed a couple of things for Controversy, even though it&#039;s not my topic.&lt;br /&gt;
The biggest thing I saw was that DataCollider reports non-erroneous operations 90% of the time, which forces the user to sift through all of the reports to separate the real problems from the benign races. [[User:Achamney|Achamney]] 17:18, 22 November 2010 (UTC)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
Hey guys, sorry I&#039;m late to the party. I&#039;ll get started with Background Concepts. - [[user:abondio2|Austin Bondio]] 15:33, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
I&#039;ll work on the critique, which will probably need more than one person, and I&#039;ll also fill out the paper information section.--[[User:Azemanci|Azemanci]] 18:42, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
DataCollider:&lt;br /&gt;
DataCollider seems like a very innovative piece of software. Its novel use of breakpoints inside kernel space, instead of user-mode lock-set or happens-before methods, lets it check for data race errors in the kernel itself without producing as much overhead as its older contenders (it even finds data races at overheads below five percent). One thing to note about DataCollider is that ninety percent of its output to the user is false alarms, so after running DataCollider the user has to sift through the gathered reports to find the ten percent that contain real data race errors. The creators were able to build pruning heuristics to sort through the collected material and emit mostly the valuable information, but some false alarms still appeared in the output. They also noted that some users like to see the benign reports so they can make design changes that leave their programs more portable and scalable, and therefore decided not to suppress benign races entirely.&lt;br /&gt;
feel free to add/edit anything [[User:Nshires|Nshires]] 02:54, 2 December 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=6094</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=6094"/>
		<updated>2010-12-02T02:32:54Z</updated>

		<summary type="html">&lt;p&gt;Nshires: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Actual group members&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- Nicholas Shires nshires@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- Andrew Zemancik andy.zemancik@gmail.com&lt;br /&gt;
&lt;br /&gt;
- [[user:abondio2|Austin Bondio]] -&amp;gt; abondio2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- David Krutsko :: dkrutsko at connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
If everyone could just post their names and contact information.--[[User:Azemanci|Azemanci]] 02:57, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;IMPORTANT&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
THINGS WE NEED TO DEFINE:&amp;lt;br&amp;gt;&lt;br /&gt;
* Happens-before reasoning&lt;br /&gt;
* Lock-set based reasoning&lt;br /&gt;
* &amp;lt;b&amp;gt;Hardware Breakpoints&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The prof seemed to be very focused on hardware breakpoints, so it is very important to define them well and talk about them often. It looks like hardware breakpoints are the one thing that&#039;s setting DataCollider apart from other race detectors, so let&#039;s focus on them!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;IMPORTANT&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Who&#039;s Doing What&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=Research Problem=&lt;br /&gt;
I&#039;ll do &#039;Research Problem&#039; and help out with the &#039;Critique&#039; section, the professor said that part was pretty big [[User:Nshires|Nshires]] 20:45, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The research problem being addressed by this paper is the detection of erroneous data races without creating much overhead. This problem occurs because read/write access instructions in processes are not always atomic and two read/write commands may happen simultaneously. &lt;br /&gt;
&lt;br /&gt;
The research team’s program DataCollider needs to detect errors between the hardware and the kernel, as well as errors in thread synchronization within the kernel, which must synchronize between user-mode processes, interrupts and deferred procedure calls. As shown in the Background Concepts section, these errors can create unwanted problems in kernel modules. The research group created DataCollider, which sets breakpoints on memory accesses to check whether two threads are accessing the same piece of memory. There have been many solutions to this problem in the past, and there are many other ways of detecting these data race errors. &lt;br /&gt;
&lt;br /&gt;
One solution that some detectors in the past have used is the “happens-before” method. This checks whether one access happened before another, or whether the other happened first; if neither is the case, the two accesses were made simultaneously. This method reports only true data race errors, but it is very hard to implement.&lt;br /&gt;
&lt;br /&gt;
Another method used is the “lock-set” approach. This method checks all of the locks currently held by a thread, and if the accesses to a shared variable do not all hold at least one common lock, the method issues a warning. This method raises many false alarms, since many variables today are shared by means other than locks, or use locking schemes too complex for lock-set analysis to understand. &lt;br /&gt;
&lt;br /&gt;
This is what I have so far, suggestions welcomed! [[User:Nshires|Nshires]] 22:38, 30 November 2010 (UTC)&lt;br /&gt;
http://www.hpcaconf.org/hpca13/papers/014-zhou.pdf&lt;br /&gt;
&lt;br /&gt;
Moved from main page: (p.s thanks for the info!)[[User:Nshires|Nshires]] 02:32, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Just a few rough notes:&lt;br /&gt;
Research problem / challenges for traditional detectors:&lt;br /&gt;
&lt;br /&gt;
- data-race detectors run in user mode, whereas operating systems run in kernel mode (supervisor mode).&lt;br /&gt;
&lt;br /&gt;
- There are a lot of different synchronization methods, and a lot of ways to implement them, so it&#039;s nearly impossible to write a program that can catch all of them.&lt;br /&gt;
&lt;br /&gt;
- Some kernel modules can &amp;quot;speak privately&amp;quot; with hardware components, so you can&#039;t make a program that just logs all of the kernel&#039;s interactions.&lt;br /&gt;
&lt;br /&gt;
- Traditional data race detectors incur massive time overheads because they have to keep an eye on every single memory transaction that occurs at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--[[User:Abondio2|Austin Bondio]] 01:57, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
I&#039;ll do Contribution. [[User:Achamney|Achamney]] 03:50, 22 November 2010 (UTC)&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Proving that DataCollider is better:&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
A key part of this paper&#039;s contribution is its comparison with the competition. The research team for DataCollider looked at several other implementations of race condition testers to find ways of improving their own program, or to look for different ways of solving the same problem. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some of the programs that were referenced were: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;br&amp;gt;&lt;br /&gt;
* RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;br&amp;gt;&lt;br /&gt;
* PACER: Proportional Detection of Data Races&amp;lt;br&amp;gt;&lt;br /&gt;
* LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;br&amp;gt;&lt;br /&gt;
* FastTrack: Efficient and Precise Dynamic Race Detection&amp;lt;br&amp;gt;&lt;br /&gt;
* MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;br&amp;gt;&lt;br /&gt;
* RacerX: Effective, Static Detection of Race Conditions and Deadlocks&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
lock-set based reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Eraser, a data race detector written in 1997, was one of the earlier data race detectors on the market. It may have been a useful and revolutionary program for its time; however, it uses very basic techniques compared to most data race detectors today. One of the reasons it is unsuccessful is that it only checks whether memory accesses use consistent locking. If a memory access is found that does not hold a lock, then Eraser will report a data race. In many cases, the omission of locking is a conscious decision by the programmer, so Eraser reports many false positives. It also does not account for benign races, such as updates to access-time variables. DataCollider used this source as an example of a lock-set based program, and of why such programs are a poor choice for a race condition debugger. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
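The lock-set idea Eraser uses can be sketched in a few lines. The following is an illustrative Python toy, not Eraser&#039;s actual implementation (the name report_access is invented here): for each shared variable it keeps the set of locks that have protected every access so far, and it warns when that candidate set becomes empty.&lt;br /&gt;

```python
# Toy sketch of Eraser-style lock-set reasoning (illustrative only).
# candidate[v] is the set of locks that have protected EVERY access to v so far.
candidate = {}

def report_access(var, locks_held):
    """Record one access to `var` made while holding `locks_held`.
    Returns True when a potential race should be reported."""
    if var not in candidate:
        candidate[var] = set(locks_held)   # first access: all held locks are candidates
    else:
        candidate[var] = candidate[var].intersection(locks_held)
    # No single lock consistently protects var -> Eraser would warn here.
    return len(candidate[var]) == 0

# Thread 1 always uses lock A; thread 2 uses lock B: no common lock, so a warning.
assert report_access("counter", {"A"}) is False
assert report_access("counter", {"B"}) is True
```

Note how the warning fires even if the programmer deliberately shared the variable without locks, which is exactly the source of the false positives described above.&lt;br /&gt;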
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;PACER: Proportional Detection of Data Races&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Pacer, a happens-before data race detector, uses the FastTrack algorithm to detect data races. FastTrack uses vector clocks to keep track of two threads and to find whether they conflict in any way. Pacer samples a percentage of memory accesses (from 1 to 3 percent) and runs the FastTrack happens-before algorithm on each thread that accesses that part of memory. DataCollider used this source as an example of the implementation of sampling. Like Pacer, DataCollider samples some memory accesses, but instead of using vector clocks to catch the second thread, it uses hardware breakpoints. Hardware breakpoints are considerably faster and cause DataCollider to run much faster than Pacer.  &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
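The vector-clock comparison at the heart of happens-before detectors like FastTrack can be sketched as follows. This is an illustrative Python toy under simplifying assumptions (a fixed thread count, plain lists as clocks), not FastTrack&#039;s optimized representation.&lt;br /&gt;

```python
# Toy vector-clock happens-before check (illustrative only).
def happens_before(vc1, vc2):
    """True when the event with clock vc1 happened before the event with vc2."""
    return all(b >= a for a, b in zip(vc1, vc2)) and vc1 != vc2

def is_race(vc1, vc2):
    # Two accesses race when neither happens before the other: they are concurrent.
    return not happens_before(vc1, vc2) and not happens_before(vc2, vc1)

# Clocks are [thread0 time, thread1 time].
assert happens_before([1, 0], [2, 1]) is True   # ordered accesses: no race
assert is_race([1, 0], [0, 1]) is True          # concurrent accesses: race
assert is_race([1, 0], [2, 1]) is False
```

Maintaining and comparing these clocks on every monitored access is what makes happens-before detectors precise but expensive, which is the overhead DataCollider avoids.&lt;br /&gt;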
&lt;br /&gt;
&amp;lt;b&amp;gt;LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
LiteRace, similar to Pacer, samples a percentage of memory accesses from a program. Where it differs is in which parts of memory LiteRace samples the most. The &amp;quot;hot spot&amp;quot; regions are the ones accessed most often by the program. Since they are accessed the most, chances are that they have already been successfully debugged, or that any data races there are benign. LiteRace detects these hot spots and samples them at a much lower rate. This improves LiteRace&#039;s chances of capturing a valid data race at a much lower overall sampling rate. Where DataCollider bests LiteRace is in installation: LiteRace needs to be recompiled into the software it is trying to debug, whereas DataCollider&#039;s breakpoints do not require any code changes to the program. This is a major advantage for DataCollider because third-party testers often do not have the source code for a program. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
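The adaptive-sampling idea can be sketched roughly as follows. This Python toy is only an approximation of LiteRace&#039;s behaviour (the halving schedule and the 0.1 percent floor here are made-up numbers, not the ones LiteRace actually uses): cold regions start fully sampled, and each time a region fires its sampling rate decays toward a small floor, so hot spots end up rarely sampled.&lt;br /&gt;

```python
# Toy sketch of LiteRace-style adaptive sampling (illustrative numbers only).
rates = {}

def should_sample(region, draw):
    """Decide whether to instrument this execution of `region`.
    `draw` stands in for a random number in [0, 1)."""
    rate = rates.get(region, 1.0)           # cold regions start at 100%
    rates[region] = max(rate * 0.5, 0.001)  # decay toward a 0.1% floor
    return rate > draw

# First execution of a cold region is almost always sampled...
assert should_sample("f", 0.9) is True
# ...but as the region becomes hot, sampling quickly becomes rare.
assert should_sample("f", 0.9) is False
```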
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
combo of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;HIGH OVERHEAD&amp;lt;/B&amp;gt;[http://www.cs.ucla.edu/~dlmarino/pubs/pldi09.pdf]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
combo of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve noticed a couple of things for controversy, even though it&#039;s not my topic.&lt;br /&gt;
The biggest thing I saw was that DataCollider&#039;s reports are benign, non-erroneous operations 90% of the time. This forces the user to sift through all of the reports to separate the real problems from the benign races. [[User:Achamney|Achamney]] 17:18, 22 November 2010 (UTC)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
Hey guys, sorry I&#039;m late to the party. I&#039;ll get started with Background Concepts. - [[user:abondio2|Austin Bondio]] 15:33, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
I&#039;ll work on the critique, which will probably need more than one person, and I&#039;ll also fill out the paper information section.--[[User:Azemanci|Azemanci]] 18:42, 23 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_6&amp;diff=6092</id>
		<title>COMP 3000 Essay 2 2010 Question 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_6&amp;diff=6092"/>
		<updated>2010-12-02T02:31:48Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* Research problem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Paper=&lt;br /&gt;
&#039;&#039;&#039;Effective Data-Race Detection  for the Kernel&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Paper: http://www.usenix.org/events/osdi10/tech/full_papers/Erickson.pdf&lt;br /&gt;
&lt;br /&gt;
Video: http://homeostasis.scs.carleton.ca/osdi/video/erickson.mp4&lt;br /&gt;
&lt;br /&gt;
Authors:  John Erickson, Madanlal Musuvathi, Sebastian Burckhardt, Kirk Olynyk from Microsoft Research&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A data race is a potentially catastrophic event which can be alarmingly common in modern concurrent systems. When one thread attempts to read or write a memory location at the same time that another thread is writing to that location, there exists a potential data race condition. If the race is not handled properly, it could have a wide range of negative consequences. In the best case, there might be data corruption rendering the affected files unreadable and useless; this may not be a major problem if there exist archived, non-corrupted versions of the data. In the worst case, a process (possibly even the operating system itself) may crash, unable to handle the unexpected data it receives.&lt;br /&gt;
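The classic example is two threads both executing counter = counter + 1: each increment is really a read, an add, and a write, so an unlucky interleaving loses an update. The Python below simulates one such interleaving deterministically rather than relying on real thread timing.&lt;br /&gt;

```python
# Simulate a racy interleaving of counter = counter + 1 by two threads.
# Each increment is really three steps: read, add, write.
counter = 0

def racy_interleaving():
    global counter
    t1_local = counter      # thread 1 reads 0
    t2_local = counter      # thread 2 reads 0, before thread 1 writes back!
    counter = t1_local + 1  # thread 1 writes 1
    counter = t2_local + 1  # thread 2 also writes 1, clobbering thread 1's update

racy_interleaving()
assert counter == 1  # both threads incremented, yet one update was lost
```

Protecting the read-modify-write with a lock (so the two threads cannot interleave mid-increment) is the standard fix, and forgetting that lock is exactly the kind of bug race detectors hunt for.&lt;br /&gt;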
&lt;br /&gt;
Traditional data-race detection programs operate by running an isolated runtime and comparing it with the currently active runtime, to find situations that would have resulted in a data race if the runtimes were not isolated. DataCollider instead temporarily sets breakpoints at randomly sampled memory accesses. When a memory access hits a breakpoint, DataCollider springs into action. The breakpoint causes the memory access instruction to be postponed, so the instruction effectively sleeps until DataCollider has finished its job. That job is like taking before-and-after photographs: DataCollider records the data stored at the address the instruction was attempting to access, then allows the instruction to execute, then records the data again. If the before and after records do not match, another thread has tampered with the data at the same time that this instruction was trying to access it; this is precisely the definition of a data race.&lt;br /&gt;
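That before-and-after check can be sketched as follows. This is an illustrative Python toy (check_for_race and the dictionary standing in for memory are invented names; the real DataCollider works on kernel addresses, not Python objects).&lt;br /&gt;

```python
# Toy sketch of DataCollider's check at a sampled memory access.
memory = {"addr": 10}   # stands in for kernel memory

def check_for_race(addr, concurrent_writer=None):
    """Snapshot the value, let 'other threads' run, then compare."""
    before = memory[addr]        # the 'before' photograph
    if concurrent_writer:        # stands in for the pause window
        concurrent_writer()      # another thread runs while we wait
    after = memory[addr]         # the 'after' photograph
    return before != after       # a changed value means someone else wrote: a race

def other_thread():
    memory["addr"] = 99

assert check_for_race("addr") is False                          # nobody interfered
assert check_for_race("addr", concurrent_writer=other_thread) is True
```

In the real tool, a hardware data breakpoint also traps conflicting accesses during the pause, which lets it catch racing reads that the value comparison alone would miss.&lt;br /&gt;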
&lt;br /&gt;
[Don&#039;t worry guys; that&#039;s not all I&#039;ve got. I&#039;m still working on it.]&lt;br /&gt;
&lt;br /&gt;
--[[User:Abondio2|Austin Bondio]] 01:56, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
The research problem being addressed by this paper is the detection of erroneous data races inside the kernel without creating much overhead. This problem occurs because read/write instructions in processes are not always atomic (e.g., two read/write operations may happen simultaneously). There are so many ways a data race error may occur that it is very hard to catch them all. &lt;br /&gt;
&lt;br /&gt;
The research team’s program DataCollider needs to detect errors between the hardware and the kernel, as well as errors in thread synchronization within the kernel, which must synchronize between user-mode processes, interrupts and deferred procedure calls. As shown in the Background Concepts section, these errors can create unwanted problems in kernel modules. The research group created DataCollider, which sets breakpoints on memory accesses to check whether two threads are accessing the same piece of memory. There have been attempts at a solution to this problem in the past, but they ran in user mode, not in kernel mode, and they produced excessive overhead. There are many problems with trying to apply these techniques to a kernel.&lt;br /&gt;
&lt;br /&gt;
One technique that some detectors in the past have used is the “happens-before” method. This checks whether one access happened before another, or whether the other happened first; if neither is the case, the two accesses were made simultaneously. This method reports only true data race errors, but it is very hard to implement. &lt;br /&gt;
&lt;br /&gt;
Another method used is the “lock-set” approach. This method checks all of the locks currently held by a thread, and if the accesses to a shared variable do not all hold at least one common lock, the method issues a warning. This method raises many false alarms, since many variables today are shared by means other than locks, or use locking schemes too complex for lock-set analysis to understand. &lt;br /&gt;
&lt;br /&gt;
Both of these methods produce excessive overhead because they have to check every single memory call at runtime. In the next section we will discuss how DataCollider uses a new way to check for data race errors that produces barely any overhead.&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
===Style===&lt;br /&gt;
This paper is well put together. It has a strong flow and nothing seems out of place. The authors start with an introduction and then immediately identify key definitions that are used throughout the paper. In the second section, which follows the introduction, the authors give the definition of a data race as it relates to their paper. This is important since it is a key concept required to understand the entire paper. The definition is necessary because, as the authors state, there is no standard for exactly how to define a data race.[1] In addition to important definitions, any background information relevant to the paper is presented at the beginning. The key idea on which the paper is based, in this case DataCollider and its implementation, is then explained. An evaluation and conclusion follow its description. The order of the sections makes sense, and the authors do not jump around from one concept to another. The organization of the sections and the information provided make the paper easy to follow and understand.&lt;br /&gt;
&lt;br /&gt;
===Content===&lt;br /&gt;
=====Data Collider:=====&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] Erickson, Musuvathi, Burckhardt, Olynyk, &amp;lt;i&amp;gt;Effective Data-Race Detection for the Kernel&amp;lt;/i&amp;gt;, Microsoft Research, 2010. [http://www.usenix.org/events/osdi10/tech/full_papers/Erickson.pdf PDF]&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_6&amp;diff=6091</id>
		<title>COMP 3000 Essay 2 2010 Question 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_6&amp;diff=6091"/>
		<updated>2010-12-02T02:30:31Z</updated>

		<summary type="html">&lt;p&gt;Nshires: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Paper=&lt;br /&gt;
&#039;&#039;&#039;Effective Data-Race Detection  for the Kernel&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Paper: http://www.usenix.org/events/osdi10/tech/full_papers/Erickson.pdf&lt;br /&gt;
&lt;br /&gt;
Video: http://homeostasis.scs.carleton.ca/osdi/video/erickson.mp4&lt;br /&gt;
&lt;br /&gt;
Authors:  John Erickson, Madanlal Musuvathi, Sebastian Burckhardt, Kirk Olynyk from Microsoft Research&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A data race is a potentially catastrophic event which can be alarmingly common in modern concurrent systems. When one thread attempts to read or write a memory location at the same time that another thread is writing to that location, there exists a potential data race condition. If the race is not handled properly, it could have a wide range of negative consequences. In the best case, there might be data corruption rendering the affected files unreadable and useless; this may not be a major problem if there exist archived, non-corrupted versions of the data. In the worst case, a process (possibly even the operating system itself) may crash, unable to handle the unexpected data it receives.&lt;br /&gt;
&lt;br /&gt;
Traditional data-race detection programs operate by running an isolated runtime and comparing it with the currently active runtime, to find situations that would have resulted in a data race if the runtimes were not isolated. DataCollider instead temporarily sets breakpoints at randomly sampled memory accesses. When a memory access hits a breakpoint, DataCollider springs into action. The breakpoint causes the memory access instruction to be postponed, so the instruction effectively sleeps until DataCollider has finished its job. That job is like taking before-and-after photographs: DataCollider records the data stored at the address the instruction was attempting to access, then allows the instruction to execute, then records the data again. If the before and after records do not match, another thread has tampered with the data at the same time that this instruction was trying to access it; this is precisely the definition of a data race.&lt;br /&gt;
&lt;br /&gt;
[Don&#039;t worry guys; that&#039;s not all I&#039;ve got. I&#039;m still working on it.]&lt;br /&gt;
&lt;br /&gt;
--[[User:Abondio2|Austin Bondio]] 01:56, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
The research problem being addressed by this paper is the detection of erroneous data races inside the kernel without creating much overhead. This problem occurs because read/write instructions in processes are not always atomic (e.g., two read/write operations may happen simultaneously). There are so many ways a data race error may occur that it is very hard to catch them all. &lt;br /&gt;
&lt;br /&gt;
The research team’s program DataCollider needs to detect errors between the hardware and the kernel, as well as errors in thread synchronization within the kernel, which must synchronize between user-mode processes, interrupts and deferred procedure calls. As shown in the Background Concepts section, these errors can create unwanted problems in kernel modules. The research group created DataCollider, which sets breakpoints on memory accesses to check whether two threads are accessing the same piece of memory. There have been attempts at a solution to this problem in the past, but they ran in user mode, not in kernel mode, and they produced excessive overhead. There are many problems with trying to apply these techniques to a kernel.&lt;br /&gt;
&lt;br /&gt;
One technique that some detectors in the past have used is the “happens-before” method. This checks whether one access happened before another, or whether the other happened first; if neither is the case, the two accesses were made simultaneously. This method reports only true data race errors, but it is very hard to implement. &lt;br /&gt;
&lt;br /&gt;
Another method used is the “lock-set” approach. This method checks all of the locks currently held by a thread, and if the accesses to a shared variable do not all hold at least one common lock, the method issues a warning. This method raises many false alarms, since many variables today are shared by means other than locks, or use locking schemes too complex for lock-set analysis to understand. &lt;br /&gt;
&lt;br /&gt;
Both of these methods produce excessive overhead because they have to check every single memory call at runtime. In the next section we will discuss how DataCollider uses a new way to check for data race errors that produces barely any overhead.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Just a few rough notes:&lt;br /&gt;
Research problem / challenges for traditional detectors:&lt;br /&gt;
&lt;br /&gt;
- data-race detectors run in user mode, whereas operating systems run in kernel mode (supervisor mode).&lt;br /&gt;
&lt;br /&gt;
- There are a lot of different synchronization methods, and a lot of ways to implement them, so it&#039;s nearly impossible to write a program that can catch all of them.&lt;br /&gt;
&lt;br /&gt;
- Some kernel modules can &amp;quot;speak privately&amp;quot; with hardware components, so you can&#039;t make a program that just logs all of the kernel&#039;s interactions.&lt;br /&gt;
&lt;br /&gt;
- Traditional data race detectors incur massive time overheads because they have to keep an eye on every single memory transaction that occurs at runtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--[[User:Abondio2|Austin Bondio]] 01:57, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
===Style===&lt;br /&gt;
This paper is well put together. It has a strong flow and nothing seems out of place. The authors start with an introduction and then immediately identify key definitions that are used throughout the paper. In the second section, which follows the introduction, the authors give the definition of a data race as it relates to their paper. This is important since it is a key concept required to understand the entire paper. The definition is necessary because, as the authors state, there is no standard for exactly how to define a data race.[1] In addition to important definitions, any background information relevant to the paper is presented at the beginning. The key idea on which the paper is based, in this case DataCollider and its implementation, is then explained. An evaluation and conclusion follow its description. The order of the sections makes sense, and the authors do not jump around from one concept to another. The organization of the sections and the information provided make the paper easy to follow and understand.&lt;br /&gt;
&lt;br /&gt;
===Content===&lt;br /&gt;
=====Data Collider:=====&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] Erickson, Musuvathi, Burckhardt, Olynyk, &amp;lt;i&amp;gt;Effective Data-Race Detection for the Kernel&amp;lt;/i&amp;gt;, Microsoft Research, 2010. [http://www.usenix.org/events/osdi10/tech/full_papers/Erickson.pdf PDF]&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=5837</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=5837"/>
		<updated>2010-11-30T22:38:35Z</updated>

		<summary type="html">&lt;p&gt;Nshires: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Actual group members&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- Nicholas Shires nshires@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- Andrew Zemancik andy.zemancik@gmail.com&lt;br /&gt;
&lt;br /&gt;
- [[user:abondio2|Austin Bondio]] -&amp;gt; abondio2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- David Krutsko :: dkrutsko at connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
If everyone could just post their names and contact information.--[[User:Azemanci|Azemanci]] 02:57, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Who&#039;s Doing What&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=Research Problem=&lt;br /&gt;
I&#039;ll do &#039;Research Problem&#039; and help out with the &#039;Critique&#039; section, the professor said that part was pretty big [[User:Nshires|Nshires]] 20:45, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The research problem being addressed by this paper is the detection of erroneous data races without creating much overhead. This problem occurs because read/write access instructions in processes are not always atomic and two read/write commands may happen simultaneously. &lt;br /&gt;
&lt;br /&gt;
The research team’s program DataCollider needs to detect errors between the hardware and the kernel, as well as errors in thread synchronization within the kernel, which must synchronize between user-mode processes, interrupts and deferred procedure calls. As shown in the Background Concepts section, these errors can create unwanted problems in kernel modules. The research group created DataCollider, which sets breakpoints on memory accesses to check whether two threads are accessing the same piece of memory. There have been many solutions to this problem in the past, and there are many other ways of detecting these data race errors. &lt;br /&gt;
&lt;br /&gt;
One solution that some detectors in the past have used is the “happens-before” method. This checks whether one access happened before another, or whether the other happened first; if neither is the case, the two accesses were made simultaneously. This method reports only true data race errors, but it is very hard to implement.&lt;br /&gt;
&lt;br /&gt;
Another method used is the “lock-set” approach. This method checks all of the locks currently held by a thread, and if the accesses to a shared variable do not all hold at least one common lock, the method issues a warning. This method raises many false alarms, since many variables today are shared by means other than locks, or use locking schemes too complex for lock-set analysis to understand. &lt;br /&gt;
&lt;br /&gt;
This is what I have so far, suggestions welcomed! [[User:Nshires|Nshires]] 22:38, 30 November 2010 (UTC)&lt;br /&gt;
http://www.hpcaconf.org/hpca13/papers/014-zhou.pdf&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
I&#039;ll do Contribution. [[User:Achamney|Achamney]] 03:50, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;ve noticed a couple of things for controversy, even though it&#039;s not my topic.&lt;br /&gt;
The biggest thing I saw was that DataCollider&#039;s reports are benign, non-erroneous operations 90% of the time. This forces the user to sift through all of the reports to separate the real problems from the benign races. [[User:Achamney|Achamney]] 17:18, 22 November 2010 (UTC)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Proving that DataCollider is better:&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
A key part of this paper&#039;s contribution is its comparison with the competition. The research team for DataCollider looked at several other implementations of race condition testers to find ways of improving their own program, or to look for different ways of solving the same problem. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some of the programs that were referenced were: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;br&amp;gt;&lt;br /&gt;
RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;br&amp;gt;&lt;br /&gt;
PACER: Proportional Detection of Data Races&amp;lt;br&amp;gt;&lt;br /&gt;
LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;br&amp;gt;&lt;br /&gt;
FastTrack: Efficient and Precise Dynamic Race Detection&amp;lt;br&amp;gt;&lt;br /&gt;
MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;br&amp;gt;&lt;br /&gt;
RacerX: Effective, Static Detection of Race Conditions and Deadlocks&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Eraser: A Dynamic Data Race Detector for Multithreaded Programs&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
lock-set based reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Eraser, a data race detector from 1997, was one of the first data race detectors on the market. It used fairly low-level techniques to detect races. Much of the reason it is unsuccessful is that it only checks whether every access to a shared variable holds at least one common lock, so it raises false alarms for code synchronized by other means. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;PACER: Proportional Detection of Data Races&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;LiteRace: Effective Sampling for Lightweight Data-Race Detection&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;FastTrack: Efficient and Precise Dynamic Race Detection&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
combo of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;MultiRace: Efficient on-the-fly data race detection in multithreaded C++ programs&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
combo of lock-set and happens-before reasoning&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
Hey guys, sorry I&#039;m late to the party. I&#039;ll get started with Background Concepts. - [[user:abondio2|Austin Bondio]] 15:33, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
I&#039;ll work on the critique, which will probably need more than one person, and I&#039;ll also fill out the paper information section.--[[User:Azemanci|Azemanci]] 18:42, 23 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=5321</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=5321"/>
		<updated>2010-11-21T20:45:12Z</updated>

		<summary type="html">&lt;p&gt;Nshires: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Actual group members&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- Nicholas Shires nshires@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- Andrew Zemancik andy.zemancik@gmail.com&lt;br /&gt;
&lt;br /&gt;
- [[user:abondio2|Austin Bondio]] -&amp;gt; abondio2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- David Krutsko :: dkrutsko at connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
If everyone could just post their names and contact information.--[[User:Azemanci|Azemanci]] 02:57, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Who&#039;s Doing What&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
I&#039;ll do &#039;Research Problem&#039; and help out with the &#039;Critique&#039; section, the professor said that part was pretty big [[User:Nshires|Nshires]] 20:45, 21 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=4995</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=4995"/>
		<updated>2010-11-15T18:20:43Z</updated>

		<summary type="html">&lt;p&gt;Nshires: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Actual group members&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- Nicholas Shires nshires@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
- Andrew Zemancik andy.zemancik@gmail.com&lt;br /&gt;
&lt;br /&gt;
- Austin Bondio -&amp;gt; abondio2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
If everyone could just post their names and contact information.--[[User:Azemanci|Azemanci]] 02:57, 15 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=4949</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=4949"/>
		<updated>2010-11-14T19:08:24Z</updated>

		<summary type="html">&lt;p&gt;Nshires: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Actual group members&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- Nicholas Shires&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=4948</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_6&amp;diff=4948"/>
		<updated>2010-11-14T19:08:10Z</updated>

		<summary type="html">&lt;p&gt;Nshires: Created page with &amp;quot;&amp;#039;&amp;#039;&amp;#039;Actual group members&amp;#039;&amp;#039;&amp;#039; - Nicholas Shires&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Actual group members&#039;&#039;&#039;&lt;br /&gt;
- Nicholas Shires&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=4757</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=4757"/>
		<updated>2010-10-15T12:07:28Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* Group 3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email. I&#039;ll add some of the stuff I find soon; I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-First time-sharing system: CTSS (Compatible Time Sharing System), created at MIT in the early 1960s&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact info (qzhang13@connect.carleton.ca)&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.mspx&lt;br /&gt;
&lt;br /&gt;
-[[User:Zhangqi|Zhangqi]] 15:02, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, Here&#039;s my contact info, nshires@connect.carleton.ca, I&#039;ll have some sources posted by the weekend hopefully&lt;br /&gt;
&lt;br /&gt;
Hey guys, I&#039;m not in your group, but I found some useful information that could help you &lt;br /&gt;
http://en.wikipedia.org/wiki/Mainframe_computer I know we aren&#039;t supposed to use wiki references, but it&#039;s a good place to start&lt;br /&gt;
&lt;br /&gt;
Okay, found a paper titled &amp;quot;Mainframe Scalability in the Windows Environment&amp;quot;&lt;br /&gt;
http://new.cmg.org/proceedings/2003/3023.pdf (requires registration to access, but is free) ~ Andrew (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Folks, remember to do your discussions here.  Use four tildes to sign your entries, that adds time and date.  Email discussions won&#039;t count towards your participation grade...&lt;br /&gt;
[[User:Soma|Anil]] 15:43, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Okay, going to break the essay into paragraphs on the main page, so people can each choose one paragraph to write. Then, after all paragraphs are written, we will communally edit it to have a cohesive voice. It is the only viable way I can think of to properly distribute the work. ~Andrew (abown2@connect.carleton.ca) 11:00 am, 10 October 2010.&lt;br /&gt;
&lt;br /&gt;
Link to IBMs info on their mainframes --[[User:Lmundt|Lmundt]] 19:58, 7 October 2010 (UTC)&lt;br /&gt;
http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zmainframe/zconc_valueofmf.htm&lt;br /&gt;
&lt;br /&gt;
Just made the revelation that the Windows equivalent to a mainframe is referred to as &#039;&#039;&#039;clustering&#039;&#039;&#039;, which should help with finding information.&lt;br /&gt;
Here&#039;s the wiki article on the technology for an overview http://en.wikipedia.org/wiki/Microsoft_Cluster_Server ~ Andrew (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, I agree with Andrew&#039;s idea. We should break the essay into several sections and work on it together. From my point of view, I think we should focus on how Windows provides the mainframe functionality, with VMware and EMC&#039;s storage as our examples. As listed on the main page, there are many advantages and disadvantages of the mainframe. But where is Windows? I&#039;m confused... &lt;br /&gt;
In my opinion, the first paragraph can introduce the mainframe (such as its history, features, applications, etc.) and what mainframe-equivalent functionality Windows supports. Then we can use some paragraphs to discuss the functionalities in detail, and VMware and EMC&#039;s storage solution can also be involved in this part. At last we make a conclusion for the whole essay. Do you think it&#039;s feasible? &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 02:12, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Ah but the question isn&#039;t the pros and cons of each. It is how to get mainframe functionality from a Windows Operating System. How I split up the essay has each paragraph focusing on one aspect of mainframes and how it can be duplicated in windows either with windows tools or 3rd party software. You don&#039;t need to go into the history or applications of mainframes since that is not required by the phrasing of the question.&lt;br /&gt;
&lt;br /&gt;
~ Andrew Bown, 11:28 AM, October 11th 2010&lt;br /&gt;
&lt;br /&gt;
Okay, I think I catch your meaning. So what we should do now is edit the content of each paragraph as soon as possible. Time is limited.&lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 19:57, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
If you guys are looking for an authoritative source on how Windows works, I *highly* recommend checking out &amp;quot;Windows Internals, 4th Edition&amp;quot; or &amp;quot;Windows Internals, 5th Edition&amp;quot; by Mark Russinovich and David Solomon.&lt;br /&gt;
&lt;br /&gt;
--[[User:3maisons|3maisons]] 18:59, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey guys, nice work; sorry I didn&#039;t have time to add more to the essay today. I combined the essay into a FrankenEssay, which is on the front page, and added a conclusion. I&#039;ve read through it, but if anyone notices a mistake I missed, go ahead and correct it.&lt;br /&gt;
--[[User:Abown|Andrew Bown]] 1:16, 15 October 2010&lt;br /&gt;
&lt;br /&gt;
Yeah I think COMP 3008 and 3004 just wrecked us... Thank you for finding the time to combine it. Hope my introduction was good... I will read it over if we can ever finish the sequence diagrams... --[[User:Dkrutsko|Dkrutsko]] 07:02, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Okay, I think we finished the essay. I added some references to the main page, and all of them come from the discussion part. Everyone did a good job :) [[User:Zhangqi|Zhangqi]] 07:51, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah, we pulled it off! good job everyone :P [[User:Nshires|Nshires]] 12:07, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
OLD VERSION - Here for the time being while optimizing some sections --[[User:Dkrutsko|Dkrutsko]] 00:20, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Main aspects of mainframes:&lt;br /&gt;
* redundancy, which enables high reliability and security&lt;br /&gt;
* high input/output capacity&lt;br /&gt;
* backwards compatibility with legacy software&lt;br /&gt;
* support for massive throughput&lt;br /&gt;
* continuous operation, so systems can be hot-upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how Windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &amp;lt;br&amp;gt;&lt;br /&gt;
Thanks Abown, just tweaked a couple of the sentences to improve flow [[User:Achamney|Achamney]] 01:13, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Also, I removed this statement: &amp;quot;Unfortunately, computers are only able to process data as fast as they can receive it&amp;quot;. I couldn&#039;t find a good place to plug it in.&lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths allow mainframes to be useful for this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable and also guards against data loss due to downtime. Mainframes can be upgraded without taking the system down for repairs, which further increases reliability. After upgrading a mainframe, however, the software does not change, so mainframes can offer backwards compatibility through virtualization; software never needs to be replaced. Mainframes support high input/output so that the mainframe is always being utilized. To make sure mainframes are utilized to their fullest, they support powerful schedulers which ensure the fastest possible throughput for processing transactions. [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how are Windows-based systems supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and software solutions, which can duplicate each of these features in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output or resource utilization.&lt;br /&gt;
&lt;br /&gt;
Using this paragraph and my solution on the assignment I was able to expand on this topic. It is in the main page at the moment, see if you like it, add anything you think I missed --[[User:Dkrutsko|Dkrutsko]] 05:17, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At this point in history, there were no personal computers, and the only organizations that could afford a computer were massive businesses. The main functions of these mainframes were to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t seem to actually be pertinent to the question at hand. The question does not give any indication that a history is needed. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
I have to agree this doesn&#039;t seem relevant to the question. --[[User:Dkrutsko|Dkrutsko]] 00:10, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
A large feature of mainframes is their capacity for redundancy. Mainframes provide redundancy through the provider&#039;s off-site redundancy feature: this lets the customer move all of their processes and applications onto the provider&#039;s mainframe while the provider makes repairs on the customer&#039;s system. Another way that mainframes create redundancy is their use of multiple processors that share the same memory; if one processor dies, the remaining processors still have access to all of the cached data. There are multiple ways Windows systems can reproduce the redundancy features of mainframes. The first is to create a Windows cluster server, which uses the same idea as the mainframe&#039;s multi-processor system. Another way is to use virtual machines. VMware supports Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines set up two different networks: a private network for communication between the virtual machines, and a public network to handle I/O services. The virtual machines also share storage, so that if one fails, the other still has all of the data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This is what I&#039;ve gotten out of my research so far; comments and any edits/suggestions on whether I&#039;m on the right track or not are greatly appreciated :) ) &lt;br /&gt;
*note: This is the second time I have written this; make sure to save whatever you edit in Notepad or something first so that you don&#039;t lose everything*&lt;br /&gt;
&lt;br /&gt;
link to VMWare&#039;s cluster virtualization http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf&lt;br /&gt;
&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:I&#039;ll attempt to re-write this paragraph for clarity and accuracy:&lt;br /&gt;
&lt;br /&gt;
:A feature provided by mainframes is their ability to create redundancy in terms of data storage and parallel processing. Windows can mimic expandable storage and storage redundancy through out-sourced storage solutions.&lt;br /&gt;
&lt;br /&gt;
:Processing redundancy for Windows can be created through the Microsoft Cluster Service (MSCS).  This service allows multiple Windows machines to be connected as nodes in a cluster, where each node has the same applications and only one node is online at any point in time.  If a node in the cluster fails, another will take over; the failing node can then be restarted or replaced without serious downtime.  However, this service does not offer fault tolerance to the same extent as actual mainframes.&lt;br /&gt;
&lt;br /&gt;
:Source: http://msdn.microsoft.com/en-us/library/ms952401.aspx&lt;br /&gt;
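:The active-passive failover behaviour described above can be sketched in a few lines of Python (node names and the `Cluster` class are illustrative, not a real MSCS API):

```python
# Toy sketch of active-passive cluster failover: one node serves
# requests while the rest stand by; when the online node fails, the
# next healthy node takes over. Names are illustrative only.

class Cluster:
    def __init__(self, nodes):
        self.healthy = list(nodes)       # ordered standby list

    def online(self):
        # the first healthy node is the one currently serving
        return self.healthy[0] if self.healthy else None

    def fail(self, node):
        self.healthy.remove(node)        # failed node drops out
        return self.online()             # next node takes over

c = Cluster(["node-a", "node-b", "node-c"])
print(c.online())          # node-a serves requests
print(c.fail("node-a"))    # node-b takes over automatically
```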
&lt;br /&gt;
:Virtual machine nodes can be used in place of physical machine nodes in a cluster, providing redundant application services to end users.  If a virtual machine fails, other virtual machines can take over; if the failure is on the Windows host machine, then they will all fail.  The virtual cluster can be maintained across multiple machines, allowing multiple users to have the reliability of clusters on fewer machines.&lt;br /&gt;
&lt;br /&gt;
:Let me know what you think.&lt;br /&gt;
:[[User:Brobson|Brobson]] 18:25, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== hot swapping ==&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
Another useful feature that mainframes have is the ability to hot-swap. Hot-swapping occurs when there is faulty hardware in one of the components inside the mainframe and technicians are able to swap out this component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors inside the mainframe. With the right software and setup (redundancy), operators are able to upgrade and/or repair a mainframe as they see fit. Using VMware on a Windows system allows users to hot-add RAM and hot-plug a new virtual CPU into the virtualized system. Using these hot-adding and hot-plugging techniques, the virtual computer can grow in size to accept loads of varying size. In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular.&lt;br /&gt;
&lt;br /&gt;
These are the concepts I&#039;ve been able to figure out so far about hot-swapping/hot-upgrading; feel free to add/edit and what-not!  &lt;br /&gt;
&lt;br /&gt;
Sources:&lt;br /&gt;
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631,00.html&lt;br /&gt;
http://www.jungo.com/st/hotswap_windows.html&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
:According to your searchvmware.techtarget.com source, a processor cannot be hot-plugged in the truest sense of the word, in that the hardware needs to be rebooted to recognize the added hardware.  Hot-swapping demands zero downtime.  &lt;br /&gt;
:If you don&#039;t mind me suggesting, I don&#039;t think this section should refer to the hot-swapping/hot-adding/hot-plugging of virtual machines or client machines of the mainframe.  I think for hot-swapping we should focus on the hot-swapping of hardware components.  As such, we can point out that Windows does support mainframe-level hot-swapping with its Windows Server 2008 R2 Datacenter OS&lt;br /&gt;
:&amp;lt;blockquote&amp;gt;&amp;quot;Hot Add/Replace Memory and Processors with supporting hardware&amp;quot;&amp;lt;/blockquote&amp;gt; http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&lt;br /&gt;
&lt;br /&gt;
:If we only consider the capabilities of the PC OS, then Windows only supports plug-and-play devices, such as external hard drives, and does not support RAM or CPU hot-swap.&lt;br /&gt;
&lt;br /&gt;
:I&#039;m also wondering if this should tie into the scalability of a mainframe or if scalability should have its own section.&lt;br /&gt;
:[[User:Brobson|Brobson]] 17:12, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The source you mentioned talks about a virtual machine and says that it can be hot-swapped with no downtime, depending on the guest OS. Some guest OSs need a reboot, but some do not. The virtual Windows Server 2008 ENT x64 can hot-add memory with no downtime; it seems that no guest OS can hot-add a CPU without rebooting. And the second part of my paragraph talks about physical Windows systems coupled with a program that enables hot-swapping of SATA hard drives and other components with no downtime.&lt;br /&gt;
I do agree that hot-swapping in a virtual machine may be kind of useless though haha :S. And I&#039;ll check out the Windows Server 2008 R2 Datacenter OS, Thanks [[User:Nshires|Nshires]] 00:33, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Revised:&lt;br /&gt;
A useful feature that mainframes have is the ability to hot-swap. Hot-swapping is the ability to swap out components of a computer/mainframe for new components with no downtime (i.e. the system continues to run through this process). Hot-swapping occurs when there is faulty hardware in one of the components inside the mainframe: technicians are able to swap out this component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors, memory and storage inside the mainframe. With the right software and setup (redundancy), a mainframe is able to be upgraded and/or repaired as needed by adding and removing components such as hard drives and processors. &lt;br /&gt;
&lt;br /&gt;
Using VMware on a Windows system allows users to hot-add RAM and hot-plug a new virtual CPU into the virtualized system. Using these hot-adding and hot-plugging techniques, the virtual computer can grow in size to accept loads of varying size. Depending on the CPU and guest OS, the virtual machine may have to restart and is unable to hot-add/hot-plug. For example, a virtual machine running Windows Server 2008 ENT x64 allows you to hot-add memory, but you must restart it to remove memory or to add/remove a CPU. &lt;br /&gt;
&lt;br /&gt;
In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular. Windows Server 2008 R2 Datacenter, released in 2009, uses dynamic hardware partitioning: the machine&#039;s hardware can be partitioned into separate units, each with its own processors and other components, which allows for hot-swapping/hot-adding of these partitions where needed. &lt;br /&gt;
&lt;br /&gt;
Davis, David. &amp;quot;VMware vSphere hot-add RAM and hot-plug CPU.&amp;quot; TechTarget. N.p., 09.15.2009. Web. 14 Oct 2010. &amp;lt;http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631_mem1,00.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Windows Server 2008 R2 Datacenter.&amp;quot; Windows Server 2008 R2. N.p., n.d. Web. 14 Oct 2010. &amp;lt;http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Go-HotSwap: CompactPCI Hot Swap.&amp;quot; Jungo. Jungo Ltd, n.d. Web. 14 Oct 2010. &amp;lt;http://www.jungo.com/st/hotswap.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
feel free to edit [[User:Nshires|Nshires]] 03:49, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== backwards-compatibility ==&lt;br /&gt;
Backwards compatibility means that a newer software version can recognize what the old version writes and how it works; it is a relationship between the two versions. If the new component provides all the functionality of the old one, we say that the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can run on the latest mainframes (the zSeries, the System/390 family, System z9, etc.). This is because mainframe models provide a combination of special hardware, special microcode and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backwards compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe also requires customers to halt the computer and download the emulation program.&lt;br /&gt;
&lt;br /&gt;
In Windows, one method of implementing backwards compatibility is to add applications, like the Microsoft Windows Application Compatibility Toolkit, which can make the platform compatible with most software from earlier versions. The second method is that Windows operating systems usually have various subsystems, in which software originally designed for older versions or other OSs can run; Windows NT, for example, has MS-DOS and Win16 subsystems. But Windows 7&#039;s backwards compatibility is not very good: if the kernel is different, the OSs can&#039;t be compatible with each other. That doesn&#039;t mean older programs won&#039;t run, though; virtualization can be used to make them run. The third method is to use shims to create backwards compatibility. Shims are like small libraries that intercept API calls, change the parameters passed, and handle or redirect the operations. In Windows, we can use shims to simulate the behaviour of an old OS version for legacy software. &lt;br /&gt;
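The shim idea described above can be illustrated with a tiny interception sketch. This is a hypothetical Python stand-in for what a Windows shim does: the function names, the faked version number, and the legacy check are all invented for illustration, not a real Windows API.

```python
# Hypothetical sketch of a compatibility shim: intercept an API
# call, forward it to the real implementation, but rewrite the
# result so a legacy caller sees what it expects.

def real_get_os_version():
    return (10, 0)                 # the actual, newer OS version

def make_shim(api, faked_version):
    # wrap the API so legacy callers see the version they expect
    def shimmed():
        api()                      # still invoke the real call
        return faked_version       # but report the legacy version
    return shimmed

# A legacy program that refuses to run on anything but version 5.1:
def legacy_program(get_version):
    major, minor = get_version()
    return "runs" if (major, minor) == (5, 1) else "refuses to start"

print(legacy_program(real_get_os_version))                     # refuses to start
print(legacy_program(make_shim(real_get_os_version, (5, 1))))  # runs
```

The point is that neither the legacy program nor the real API changes; only the call between them is redirected, which is what makes shims attractive for compatibility fixes.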
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 08:34, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
ps. I didn&#039;t find perfect resources, just these. If you guys think any opinion is not correct, please edit it or give suggestions :)&lt;br /&gt;
&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
 &lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey, this sounds really good. I&#039;d add an example where you say &#039;one method to implement backwards compatibility is to add applications&#039;.&lt;br /&gt;
And I did a little research and found another way to create backwards compatibility, using shims: http://en.wikipedia.org/wiki/Shim_%28computing%29&lt;br /&gt;
It pretty much intercepts the calls and changes them so that the old program can run on a new system.&lt;br /&gt;
Good Work, [[User:Nshires|Nshires]] 16:56, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Thanks for your suggestions. I have added some information to the paragraph. :)&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 00:24, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
~Andrew Bown (October 13 2:08) I&#039;ll write this paragraph.&lt;br /&gt;
I don&#039;t have time to write this before work (12-5), but I can put out the information I already have from research, so if someone could help me complete this, that would be awesome, since I have to finish up my 3004 document as well tonight.&lt;br /&gt;
~[[User:Abown|Andrew Bown]] (October 14th 11:12am)&lt;br /&gt;
Mainframes are able to achieve high input/output rates with their specialized Message Passing Interfaces (MPIs), which allow for fast intercommunication by sharing memory between the different cores. https://www.mpitech.com/mpitech.nsf/pages/mainframe-&amp;amp;-AS400-printing_en.html&lt;br /&gt;
&lt;br /&gt;
The latest versions of Windows clusters support a Microsoft-created MPI, surprisingly called Microsoft MPI [http://msdn.microsoft.com/en-us/library/bb524831(VS.85).aspx]. &lt;br /&gt;
&lt;br /&gt;
Microsoft&#039;s MPI is based on MPICH2; explanation here: http://www.springerlink.com/content/hc4nyva6dvg6vdpp/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Looking at the details, Microsoft MPI only runs if a process is put into the Microsoft Job Scheduler. So we may want to combine input/output and throughput.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey guys. According to the resources above, the methods for Windows to provide high input/output and massive throughput are almost the same. But I have no idea how to combine the two sections. Do we need to write something about input/output, or just consider it under massive throughput?  [[User:Zhangqi|Zhangqi]] 22:38, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Massive Throughput ==&lt;br /&gt;
[[User:Achamney|Achamney]] 01:09, 14 October 2010 (UTC) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[User:Achamney|Achamney]] 21:18, 14 October 2010 (UTC) Done for now, I will come back to this after I get back (after 10:00pm tonight ish) and fix up the flow and such&lt;br /&gt;
&lt;br /&gt;
Throughput, unlike input and output, is the measurement of the number of calculations per second that a machine can perform. This is usually measured in FLOPS (floating-point operations per second). It is impossible for a single Windows machine to compete with a mainframe&#039;s throughput. Not only do mainframe processors have extremely high frequencies, but they also have a considerable number of cores. This all changes, however, when computer clustering is introduced. In recent years, IBM has constructed a clustered system called Roadrunner that ranks third on the TOP500 supercomputer list as of June 2010.[http://hubpages.com/hub/Most-Powerful-Computers-In-The-World] It has a total of 60 connected units, over a thousand processors, and the capability of computing at a rate of 1.7 petaflops. &lt;br /&gt;
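To make the FLOPS unit above concrete, here is a minimal, illustrative sketch in plain Python. It is not a real benchmark; the figure it prints depends entirely on the host machine and on interpreter overhead.&lt;br /&gt;

```python
# Rough sketch of what "throughput in FLOPS" means: time a known
# number of floating-point operations and divide by the elapsed time.
import time

def estimate_flops(n=1_000_000):
    x = 0.0
    start = time.perf_counter()
    for i in range(n):
        x = x + 1.5 * i       # one add and one multiply per pass
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed  # 2 floating-point ops per iteration

if __name__ == "__main__":
    print(f"roughly {estimate_flops():.3e} FLOPS on a single Python thread")
```

A mainframe or cluster raises this number not by making one loop faster but by running many such loops on many processors at once.&lt;br /&gt;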
&lt;br /&gt;
The question is, with such complex hardware, how is it possible for any sort of software to use this clustered system? Luckily, Microsoft has introduced an OS called Windows Compute Cluster Server, which provides the necessary software to allow the main computer to utilize the computing power of its cluster nodes. Windows mainly uses MS-MPI (Microsoft Message Passing Interface) to send messages via Ethernet to its other nodes.[http://webcache.googleusercontent.com/search?q=cache:EPlDExBxmDYJ:download.microsoft.com/download/9/e/d/9edcdeab-f1fb-4670-8914-c08c5c6f22a5/HPC_Overview.doc+Windows+Compute+Cluster+Server&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=ca&amp;amp;client=firefox-a] Developers can use this interface because it automatically connects a given process to each node. Windows can then use its scheduler to determine which node receives each job. It keeps track of each node, and shuts the job down once the output is received. &lt;br /&gt;
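The split-dispatch-gather pattern that the cluster scheduler applies can be sketched in miniature. This toy uses Python multiprocessing as a stand-in for MS-MPI and the job scheduler; the function names are invented for illustration.&lt;br /&gt;

```python
# Toy sketch of the scatter/gather pattern a cluster scheduler uses:
# a head node splits the work, sends one piece to each compute node,
# and gathers the results. multiprocessing stands in for MS-MPI here.
from multiprocessing import Pool

def run_job(piece):
    # Each "node" computes its share independently.
    return sum(x * x for x in piece)

def schedule(data, nodes=4):
    # Head node: split the data, dispatch to nodes, gather outputs.
    size = max(1, len(data) // nodes)
    pieces = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(nodes) as pool:
        return sum(pool.map(run_job, pieces))

if __name__ == "__main__":
    print(schedule(list(range(1000))))  # 332833500
```

The same constraint noted for grids applies here: the problem must decompose into independent pieces before any of this pays off.&lt;br /&gt;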
&lt;br /&gt;
Today, clustering computers together with the intent of optimizing throughput is accomplished using grid computing. Grid computing shares the same basic principles as cluster computing; however, grids have the sole job of computing massive-scale problems.[http://searchdatacenter.techtarget.com/definition/grid-computing] Each subsection of a problem is passed out to a compute node in the grid to be calculated. The one clear limitation of this computational model is that the problem must be divisible into several pieces for each compute node to work on. This style of high-throughput computing can be used for problems such as high-energy physics or biology models.&lt;br /&gt;
&lt;br /&gt;
In general, however, the most popular solution for problems that require large throughput is to construct a cluster model. Most businesses require the reliability of clusters, even though it sacrifices performance; there is no competing with the high availability of a cluster server as compared to the grid model.[http://www.dba-oracle.com/real_application_clusters_rac_grid/grid_vs_clusters.htm] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=/com.ibm.ztpf-ztpfdf.doc_put.cur/gtpc3/c3thru.html]&lt;br /&gt;
[http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci213140,00.html]&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=4466</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=4466"/>
		<updated>2010-10-15T04:09:13Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* hot swapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email. I&#039;ll add some of the stuff I find soon; I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-First time-sharing system: CTSS (Compatible Time-Sharing System), demonstrated at MIT in the early 1960s&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact info (qzhang13@connect.carleton.ca)&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.mspx&lt;br /&gt;
&lt;br /&gt;
-[[User:Zhangqi|Zhangqi]] 15:02, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, Here&#039;s my contact info, nshires@connect.carleton.ca, I&#039;ll have some sources posted by the weekend hopefully&lt;br /&gt;
&lt;br /&gt;
Hey guys, I&#039;m not in your group but I found some useful information that could help you &lt;br /&gt;
http://en.wikipedia.org/wiki/Mainframe_computer I know we are not supposed to use wiki references but it&#039;s a good place to start&lt;br /&gt;
&lt;br /&gt;
Okay, found a paper titled &amp;quot;Mainframe Scalability in the Windows Environment&amp;quot;&lt;br /&gt;
http://new.cmg.org/proceedings/2003/3023.pdf (requires registration to access, but is free) ~ Andrew (abown2@connect.carleton.ca), sometime Friday.&lt;br /&gt;
&lt;br /&gt;
Folks, remember to do your discussions here.  Use four tildes to sign your entries, that adds time and date.  Email discussions won&#039;t count towards your participation grade...&lt;br /&gt;
[[User:Soma|Anil]] 15:43, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Okay, going to break the essay into paragraphs on the main page so people can each choose one paragraph to write. Then, after all paragraphs are written, we will communally edit it to have a cohesive voice. It is the only way I can think of to properly distribute the work. ~Andrew (abown2@connect.carleton.ca) 11:00 am, 10 October 2010.&lt;br /&gt;
&lt;br /&gt;
Link to IBM&#039;s info on their mainframes --[[User:Lmundt|Lmundt]] 19:58, 7 October 2010 (UTC)&lt;br /&gt;
http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zmainframe/zconc_valueofmf.htm&lt;br /&gt;
&lt;br /&gt;
Just made the discovery that the Windows equivalent to a mainframe is referred to as &#039;&#039;&#039;clustering&#039;&#039;&#039;, which should help in finding information.&lt;br /&gt;
Here&#039;s the wiki article on the technology for an overview http://en.wikipedia.org/wiki/Microsoft_Cluster_Server ~ Andrew (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, I agree with Andrew&#039;s idea. We should break the essay into several sections and work on it together. From my point of view, I think we should focus on how Windows provides the mainframe functionality, and VMware and EMC&#039;s storage should be our examples. As listed on the main page, there are many advantages and disadvantages of the mainframe. But where is Windows? I&#039;m confused... &lt;br /&gt;
In my opinion, the first paragraph can introduce the mainframe (such as the history, features, applications, etc.) and what mainframe-equivalent functionality Windows supports. Then we can use some paragraphs to discuss the functionalities in detail. VMware and EMC&#039;s storage solution can also be involved in this part. At last we make a conclusion for the whole essay. Do you think it&#039;s feasible? &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 02:12, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Ah but the question isn&#039;t the pros and cons of each. It is how to get mainframe functionality from a Windows Operating System. How I split up the essay has each paragraph focusing on one aspect of mainframes and how it can be duplicated in windows either with windows tools or 3rd party software. You don&#039;t need to go into the history or applications of mainframes since that is not required by the phrasing of the question.&lt;br /&gt;
&lt;br /&gt;
~ Andrew Bown, 11:28 AM, October 11th 2010&lt;br /&gt;
&lt;br /&gt;
Okay, I think I catch your meaning. So what we should do now is edit the content of each paragraph as soon as possible. Time is limited.&lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 19:57, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
If you guys are looking for an authoritative source on how Windows works, I *highly* recommend checking out &amp;quot;Windows Internals, 4th Edition&amp;quot; or &amp;quot;Windows Internals, 5th Edition&amp;quot; by Mark Russinovich and David Solomon.&lt;br /&gt;
&lt;br /&gt;
--[[User:3maisons|3maisons]] 18:59, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
OLD VERSION - Here for the time being while optimizing some sections --[[User:Dkrutsko|Dkrutsko]] 00:20, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Main Aspects of mainframes:&lt;br /&gt;
* redundancy, which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards compatibility with legacy software&lt;br /&gt;
* massive throughput&lt;br /&gt;
* constant uptime, which allows systems to be hot-upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &amp;lt;br&amp;gt;&lt;br /&gt;
Thanks Abown, just tweaked a couple of the sentences to improve flow [[User:Achamney|Achamney]] 01:13, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Also, I removed this statement: &amp;quot;Unfortunately, computers are only able to process data as fast as they can receive it&amp;quot;. I couldn&#039;t find a good place to plug it in.&lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths allow mainframes to serve this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable. This also provides security against data loss due to downtime. Mainframes can be upgraded without taking the system down for repairs, which further increases reliability. After upgrading a mainframe, however, the software does not change, so mainframes can offer backwards compatibility through virtualization; software never needs to be replaced. Mainframes support high input/output so that the mainframe is always being utilized. To make sure mainframes are utilized to their fullest, they support powerful schedulers which ensure the fastest throughput, processing transactions as fast as possible. [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how are Windows-based systems supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and software solutions, which can duplicate these features in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output or resource utilization.&lt;br /&gt;
&lt;br /&gt;
Using this paragraph and my solution on the assignment I was able to expand on this topic. It is in the main page at the moment, see if you like it, add anything you think I missed --[[User:Dkrutsko|Dkrutsko]] 05:17, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At this point in history, there were no personal computers, and the only organizations that could afford a computer were massive businesses. The main functionality of these mainframes was to calculate company payrolls, record sales, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t seem to actually be pertinent to the question at hand. Question does not have any indication of the need to provide a history. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
I have to agree this doesn&#039;t seem relevant to the question. --[[User:Dkrutsko|Dkrutsko]] 00:10, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
A large feature of mainframes is their capacity for redundancy. Mainframes provide redundancy through the provider&#039;s off-site redundancy feature. This feature lets the customer move all of their processes and applications onto the provider&#039;s mainframe while the provider makes repairs on the customer&#039;s system. Another way that mainframes create redundancy is their use of multiple processors that share the same memory. If one processor dies, the rest of the processors still retain all of the cache. There are multiple ways Windows systems can recreate this redundancy feature of mainframes. The first is by creating a Windows cluster server. The cluster uses the same approach as the mainframe&#039;s multi-processor system. Another way Windows systems can create redundancy is by using virtual machines. VMware supports a Microsoft feature called Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or multiple physical machines). The virtual machines set up two different networks: a private network for communication between the virtual machines, and a public network to handle I/O services. The virtual machines also share storage for consistency, so that if one fails, the other still has all of the data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(this is what I&#039;ve gotten out of some researching so far; comments and any edits/suggestions on whether I&#039;m on the right track or not are greatly appreciated :) ) &lt;br /&gt;
*note: This is the second time I have written this, make sure to save whatever you edit in notepad or whatever first so that you don&#039;t lose everything*&lt;br /&gt;
&lt;br /&gt;
link to VMWare&#039;s cluster virtualization http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf&lt;br /&gt;
&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:I&#039;ll attempt to re-write this paragraph for clarity and accuracy:&lt;br /&gt;
&lt;br /&gt;
:A feature provided by mainframes is their ability to create redundancy in terms of data storage and parallel processing. Windows can mimic expandable storage and storage redundancy through out-sourced storage solutions.&lt;br /&gt;
&lt;br /&gt;
:Processing redundancy for Windows can be created through the Microsoft Cluster Service (MSCS).  This service allows multiple Windows machines to be connected as nodes in a cluster; where each node has the same applications and only one node is online at any point in time.  If a node in the cluster fails, another will take over. The failing node can then be restarted or replaced without serious downtime.  However this service does not offer fault tolerance to the same extent as actual mainframes.&lt;br /&gt;
&lt;br /&gt;
:Source: http://msdn.microsoft.com/en-us/library/ms952401.aspx&lt;br /&gt;
&lt;br /&gt;
:Virtual machine nodes can be used in place of physical machine nodes in a cluster, providing redundant application services to end-users.  If a virtual machine fails, other virtual machines can take over; if the failure is on the Windows host machine, however, they will all fail.  The virtual cluster can be maintained across multiple machines, allowing multiple users to have the reliability of clusters on fewer machines.&lt;br /&gt;
&lt;br /&gt;
:Let me know what you think.&lt;br /&gt;
:[[User:Brobson|Brobson]] 18:25, 14 October 2010 (UTC)&lt;br /&gt;
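The failover behaviour Brobson describes above (one active node, standbys promoted when the active node stops responding) can be sketched roughly as follows. This is a toy model of the idea, not the MSCS API; all names are invented.&lt;br /&gt;

```python
# Minimal sketch of cluster failover: several nodes host the same
# service, only one is active, and a monitor promotes the first
# healthy standby when the active node misses its heartbeat.
class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def heartbeat(self):
        return self.healthy

class Cluster:
    def __init__(self, names):
        self.nodes = [Node(n) for n in names]
        self.active = self.nodes[0]

    def tick(self):
        # Monitor loop: fail over if the active node is unhealthy.
        if not self.active.heartbeat():
            for node in self.nodes:
                if node.heartbeat():
                    self.active = node
                    break
        return self.active.name

if __name__ == "__main__":
    cluster = Cluster(["node-a", "node-b", "node-c"])
    cluster.nodes[0].healthy = False   # simulate a hardware fault
    print(cluster.tick())              # fails over to node-b
```

The failed node can then be repaired and re-registered without the service ever going offline, which is the property the paragraph above attributes to MSCS.&lt;br /&gt;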
&lt;br /&gt;
== hot swapping ==&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
Another useful feature that mainframes have is the ability to hot-swap. Hot-swapping occurs when there is faulty hardware in one of the processors inside the mainframe and technicians are able to swap out the component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as its operators see fit. Using VMware on a Windows system allows users to hot-add RAM and hot-plug new virtual CPUs into the virtualized system. Using these hot-adding and hot-plugging techniques, the virtual computer can grow to accept loads of varying size. In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular.&lt;br /&gt;
&lt;br /&gt;
These are the concepts I&#039;ve been able to figure out so far about hot-swapping/hot-upgrading, feel free to add/edit and what-not!  &lt;br /&gt;
&lt;br /&gt;
Sources:&lt;br /&gt;
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631,00.html&lt;br /&gt;
http://www.jungo.com/st/hotswap_windows.html&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
:According to your searchvmware.techtarget.com source, a processor cannot be hot-plugged in the truest sense of the word in that the hardware needs to be rebooted to recognize the added hardware.  Hot-swapping demands zero downtime.  &lt;br /&gt;
:If you don&#039;t mind me suggesting, I don&#039;t think this section should be referring to the hot-swapping/hot-adding/hot-plugging of virtual machines or client machines of the mainframe.  I think for hot-swapping we should focus on the hot-swapping of hardware components.  As such, we can point out that Windows does support mainframe-level hot-swapping with its Windows Server 2008 R2 Datacenter OS&lt;br /&gt;
:&amp;lt;blockquote&amp;gt;&amp;quot;Hot Add/Replace Memory and Processors with supporting hardware&amp;quot;&amp;lt;/blockquote&amp;gt; http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&lt;br /&gt;
&lt;br /&gt;
:If we only consider the capabilities of the PC OS, then Windows only supports plug-and-play devices, such as external hard drives, and does not support RAM or CPU hot-swap.&lt;br /&gt;
&lt;br /&gt;
:I&#039;m also wondering if this should tie into the scalability of a mainframe or if scalability should have its own section.&lt;br /&gt;
:[[User:Brobson|Brobson]] 17:12, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The source you mentioned talks about a virtual machine and says that it can be hot-swapped with no downtime, depending on the guest OS. Some guest OSs need a reboot but some do not. A virtual Windows Server 2008 ENT x64 can hot-add memory with no downtime; it seems that no virtual OS can hot-add a CPU without rebooting. And the second part of my paragraph talks about physical Windows systems coupled with a program that enables hot-swapping of SATA hard drives and other components with no downtime.&lt;br /&gt;
I do agree that hot-swapping in a virtual machine may be kind of useless though haha :S. And I&#039;ll check out the Windows Server 2008 R2 Datacenter OS, Thanks [[User:Nshires|Nshires]] 00:33, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Revised:&lt;br /&gt;
A useful feature that mainframes have is the ability to hot-swap. Hot-swapping is the ability to swap out components of a computer/mainframe for new components with no downtime (i.e. the system continues to run through the process). Hot-swapping occurs when there is faulty hardware in one of the processors inside the mainframe; technicians are able to swap out the component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors, memory and storage inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as its operators see fit by adding and removing components such as hard drives and processors. &lt;br /&gt;
&lt;br /&gt;
Using VMware on a Windows system allows users to hot-add RAM and hot-plug new virtual CPUs into the virtualized system. Using these hot-adding and hot-plugging techniques, the virtual computer can grow to accept loads of varying size. Depending on the CPU and the guest OS, the virtual machine may have to restart and be unable to hot-add/hot-plug. For example, a virtual machine running Windows Server 2008 ENT x64 allows you to hot-add memory, but you must restart it to remove memory or to add/remove a CPU. &lt;br /&gt;
&lt;br /&gt;
In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular. Windows Server 2008 R2 Datacenter, released in 2009, uses dynamic hardware partitioning. Dynamic hardware partitioning means the machine&#039;s hardware can be divided into separate partitions, each with its own processors and other components, which allows for hot-swapping/hot-adding of these partitions where needed. &lt;br /&gt;
&lt;br /&gt;
Davis, David. &amp;quot;VMware vSphere hot-add RAM and hot-plug CPU.&amp;quot; TechTarget. N.p., 09.15.2009. Web. 14 Oct 2010. &amp;lt;http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631_mem1,00.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Windows Server 2008 R2 Datacenter.&amp;quot; Windows Server 2008 R2. N.p., n.d. Web. 14 Oct 2010. &amp;lt;http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Go-HotSwap: CompactPCI Hot Swap.&amp;quot; Jungo. Jungo Ltd, n.d. Web. 14 Oct 2010. &amp;lt;http://www.jungo.com/st/hotswap.html&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
feel free to edit [[User:Nshires|Nshires]] 03:49, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== backwards-compatibility ==&lt;br /&gt;
Backwards compatibility means that a newer software version can recognize what the old version writes and how it works; it is a relationship between the two versions. If a new component provides all the functionality of the old one, we say the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can be run on the latest mainframes (like the System/390 family, zSeries, System z9, etc.). This is because mainframe models provide a combination of special hardware, special microcode and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backward compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe also needs customers to halt the computer and load the emulation program.&lt;br /&gt;
&lt;br /&gt;
In Windows, one method of implementing backwards compatibility is to add applications, like the Microsoft Windows Application Compatibility Toolkit. This application can make the platform compatible with most software from earlier versions. A second method is that Windows operating systems usually have various subsystems; software originally designed for older versions or other OSs can be run in these subsystems. Windows NT, for example, has MS-DOS and Win16 subsystems. Windows 7&#039;s backwards compatibility is not very good, however: if the kernel is different, the OSs can&#039;t be compatible with each other. That doesn&#039;t mean older programs won&#039;t run; virtualization is used to make them run. A third method is to use shims to create backwards compatibility. Shims are small libraries that intercept API calls, change the parameters passed, and handle or redirect the operations. In Windows, we can use shims to simulate the behaviour of an older OS version for legacy software. &lt;br /&gt;
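The shim idea in the paragraph above (intercept a call, rewrite its parameters, and forward it to the new implementation) can be sketched like this. Every function name here is invented for illustration and is not a real Windows API.&lt;br /&gt;

```python
# Toy shim: a legacy caller uses an old calling convention; the shim
# intercepts the call and translates it for the modern API.

def new_open_file(path, mode="r", encoding="utf-8"):
    # Stand-in for the modern API the legacy program knows nothing about.
    return (path, mode, encoding)

def legacy_open_shim(path, textmode):
    # Old callers passed a flag (1 = text, 0 = binary); the shim
    # translates that convention into the new API parameters.
    mode = "r" if textmode else "rb"
    return new_open_file(path, mode=mode)

print(legacy_open_shim("report.txt", 1))  # ('report.txt', 'r', 'utf-8')
```

The legacy code is untouched; only the small translation layer between it and the new API changes, which is what makes shims attractive for compatibility.&lt;br /&gt;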
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 08:34, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
ps. I didn&#039;t find perfect resources, just these. If you guys think any opinion is not correct, please edit it or give suggestions :)&lt;br /&gt;
&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
 &lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey, this sounds really good, I&#039;d add an example where you say &#039;one method to implement backward-compatibility is to add applications&#039;.&lt;br /&gt;
And I did a little research and I found another way to create backwards compatibility using shims: http://en.wikipedia.org/wiki/Shim_%28computing%29&lt;br /&gt;
it pretty much intercepts the calls and changes them so that the old program can run on a new system.&lt;br /&gt;
Good Work, [[User:Nshires|Nshires]] 16:56, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Thanks for your suggestions. I have added some information to the paragraph. :)&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 00:24, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
~Andrew Bown (October 13 2:08) I&#039;ll write this paragraph.&lt;br /&gt;
I don&#039;t have time to write this before work (12-5), but I can put out the information I&#039;ve already gathered through research, so if someone could help me complete this that would be awesome, since I have to finish up my 3004 document as well tonight.&lt;br /&gt;
~[[User:Abown|Andrew Bown]] (October 14th 11:12am)&lt;br /&gt;
Mainframes are able to achieve high input/output rates with their specialized Message Passing Interfaces (MPIs), which allow for fast intercommunication by sharing memory between the different cores. https://www.mpitech.com/mpitech.nsf/pages/mainframe-&amp;amp;-AS400-printing_en.html&lt;br /&gt;
&lt;br /&gt;
The latest versions of Windows clusters support a Microsoft-created MPI called, unsurprisingly, Microsoft MPI[http://msdn.microsoft.com/en-us/library/bb524831(VS.85).aspx]. &lt;br /&gt;
&lt;br /&gt;
Microsoft&#039;s MPI is based on MPICH2; an explanation is available here: http://www.springerlink.com/content/hc4nyva6dvg6vdpp/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Looking at the details, the Microsoft MPI only runs if a process is put into the Microsoft Job Scheduler. So we may want to combine input/output and throughput.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey guys. According to the resources above, the methods Windows uses to provide high input/output and massive throughput are almost the same. But I have no idea how to combine the two sections. Do we need to write something about input/output, or just consider it under massive throughput?  [[User:Zhangqi|Zhangqi]] 22:38, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Massive Throughput ==&lt;br /&gt;
[[User:Achamney|Achamney]] 01:09, 14 October 2010 (UTC) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[User:Achamney|Achamney]] 21:18, 14 October 2010 (UTC) Done for now, I will come back to this after I get back (after 10:00pm tonight ish) and fix up the flow and such&lt;br /&gt;
&lt;br /&gt;
Throughput, unlike input and output, is the measurement of the number of calculations per second that a machine can perform. This is usually measured in FLOPS (floating-point operations per second). It is impossible for a single Windows machine to compete with a mainframe&#039;s throughput. Not only do mainframe processors have extremely high frequencies, but they also have a considerable number of cores. This all changes, however, when computer clustering is introduced. In recent years, IBM has constructed a clustered system called Roadrunner that ranks third on the TOP500 supercomputer list as of June 2010.[http://hubpages.com/hub/Most-Powerful-Computers-In-The-World] It has a total of 60 connected units, over a thousand processors, and the capability of computing at a rate of 1.7 petaflops. &lt;br /&gt;
&lt;br /&gt;
The question is, with such complex hardware, how is it possible for any sort of software to use this clustered system? Luckily, Microsoft has introduced an OS called Windows Compute Cluster Server, which provides the necessary software to allow the main computer to utilize the computing power of its cluster nodes. Windows mainly uses MS-MPI (Microsoft Message Passing Interface) to send messages via Ethernet to its other nodes.[http://webcache.googleusercontent.com/search?q=cache:EPlDExBxmDYJ:download.microsoft.com/download/9/e/d/9edcdeab-f1fb-4670-8914-c08c5c6f22a5/HPC_Overview.doc+Windows+Compute+Cluster+Server&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=ca&amp;amp;client=firefox-a] Developers can use this interface because it automatically connects a given process to each node. Windows can then use its scheduler to determine which node receives each job. It keeps track of each node, and shuts the job down once the output is received. &lt;br /&gt;
&lt;br /&gt;
Today, clustering computers together with the intent of optimizing throughput is accomplished using grid computing. Grid computing shares the same basic principles as cluster computing; however, grids have the sole job of computing massive-scale problems.[http://searchdatacenter.techtarget.com/definition/grid-computing] Each subsection of a problem is passed out to a compute node in the grid to be calculated. The one clear limitation of this computational model is that the problem must be divisible into several pieces for each compute node to work on. This style of high-throughput computing can be used for problems such as high-energy physics or biology models.&lt;br /&gt;
&lt;br /&gt;
In general, however, the most popular solution for problems that require large throughput is to construct a cluster model. Most businesses require the reliability of clusters, even though it sacrifices performance; there is no competing with the high availability of a cluster server as compared to the grid model.[http://www.dba-oracle.com/real_application_clusters_rac_grid/grid_vs_clusters.htm] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=/com.ibm.ztpf-ztpfdf.doc_put.cur/gtpc3/c3thru.html]&lt;br /&gt;
[http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci213140,00.html]&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=4464</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=4464"/>
		<updated>2010-10-15T04:08:28Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* hot swapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email. I&#039;ll add some of the stuff I find soon; I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-First time-sharing system: CTSS (Compatible Time Sharing System), created at MIT in the early 1960s&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact info (qzhang13@connect.carleton.ca)&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.mspx&lt;br /&gt;
&lt;br /&gt;
-[[User:Zhangqi|Zhangqi]] 15:02, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, Here&#039;s my contact info, nshires@connect.carleton.ca, I&#039;ll have some sources posted by the weekend hopefully&lt;br /&gt;
&lt;br /&gt;
Hey guys i&#039;m not in your group but I found some useful information that could help you &lt;br /&gt;
http://en.wikipedia.org/wiki/Mainframe_computer I know we are not supposed to use wiki references but it&#039;s a good place to start&lt;br /&gt;
&lt;br /&gt;
Okay, found a paper titled &amp;quot;Mainframe Scalability in the Windows Environment&amp;quot;&lt;br /&gt;
http://new.cmg.org/proceedings/2003/3023.pdf (requires registration to access but is free) ~ Andrew (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Folks, remember to do your discussions here.  Use four tildes to sign your entries, that adds time and date.  Email discussions won&#039;t count towards your participation grade...&lt;br /&gt;
[[User:Soma|Anil]] 15:43, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Okay, I&#039;m going to break the essay into paragraphs on the main page, and people can each choose one paragraph to write. Then, after all paragraphs are written, we will communally edit it to have a cohesive voice. It is the only way I can think of to viably distribute the work. ~Andrew (abown2@connect.carleton.ca) 11:00 am, 10 October 2010.&lt;br /&gt;
&lt;br /&gt;
Link to IBMs info on their mainframes --[[User:Lmundt|Lmundt]] 19:58, 7 October 2010 (UTC)&lt;br /&gt;
http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zmainframe/zconc_valueofmf.htm&lt;br /&gt;
&lt;br /&gt;
Just made the revelation that the Windows equivalent to a mainframe is referred to as &#039;&#039;&#039;clustering&#039;&#039;&#039;, which should help in finding information.&lt;br /&gt;
Here&#039;s the wiki article on the technology for an overview: http://en.wikipedia.org/wiki/Microsoft_Cluster_Server ~ Andrew (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, I agree with Andrew&#039;s idea. We should break the essay into several sections and work on it together. From my point of view, I think we should focus on how Windows provides the mainframe functionality, and VMware and EMC&#039;s storage should be our examples. As listed on the main page, there are many advantages and disadvantages of the mainframe. But where is Windows? I&#039;m confused... &lt;br /&gt;
In my opinion, the first paragraph can introduce the mainframe (such as its history, features, applications, etc.) and what mainframe-equivalent functionality Windows supports. Then we can use some paragraphs to discuss the functionalities in detail, and VMware and EMC&#039;s storage solution can also be involved in this part. At last we make a conclusion for the whole essay. Do you think it&#039;s feasible? &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 02:12, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Ah, but the question isn&#039;t the pros and cons of each; it is how to get mainframe functionality from a Windows operating system. The way I split up the essay, each paragraph focuses on one aspect of mainframes and how it can be duplicated in Windows, either with Windows tools or 3rd-party software. You don&#039;t need to go into the history or applications of mainframes since that is not required by the phrasing of the question.&lt;br /&gt;
&lt;br /&gt;
~ Andrew Bown, 11:28 AM, October 11th 2010&lt;br /&gt;
&lt;br /&gt;
Okay, I think I catch your meaning. So what we should do now is edit the content of each paragraph as soon as possible. Time is limited.&lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 19:57, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
If you guys are looking for an authoritative source on how Windows works, I *highly* recommend checking out &amp;quot;Windows Internals, 4th Edition&amp;quot; or &amp;quot;Windows Internals, 5th Edition&amp;quot; by Mark Russinovich and David Solomon.&lt;br /&gt;
&lt;br /&gt;
--[[User:3maisons|3maisons]] 18:59, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
OLD VERSION - Here for the time being while optimizing some sections --[[User:Dkrutsko|Dkrutsko]] 00:20, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Main Aspects of mainframes:&lt;br /&gt;
* redundancy which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* support massive throughput&lt;br /&gt;
* Systems run constantly so they can be hot upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &amp;lt;br&amp;gt;&lt;br /&gt;
Thanks Abown, just tweaked a couple of the sentences to improve flow [[User:Achamney|Achamney]] 01:13, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Also, i removed this statement &amp;quot;Unfortunately, computers are only able to process data as fast as they can receive it&amp;quot;. I couldn&#039;t find a good place to plug it in.&lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes so useful for this purpose? Mainframes are extremely valuable in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable; it also provides security against data loss due to downtime. Mainframes can be upgraded without taking the system down for repairs, which further increases reliability. After upgrading a mainframe, however, the software does not change, so mainframes can offer backwards compatibility through virtualization; software never needs to be replaced. Mainframes support high input/output so that the machine is always being utilized. To make sure mainframes are utilized to their fullest, they provide powerful schedulers which ensure the fastest possible throughput for processing transactions. [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how are Windows-based systems supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and software solutions, which can duplicate these capabilities in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output or resource utilization.&lt;br /&gt;
&lt;br /&gt;
Using this paragraph and my solution on the assignment I was able to expand on this topic. It is in the main page at the moment, see if you like it, add anything you think I missed --[[User:Dkrutsko|Dkrutsko]] 05:17, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At this point in history there were no personal computers, and the only organizations that could afford a computer were massive businesses. The main functionality of these mainframes was to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t seem to actually be pertinent to the question at hand. Question does not have any indication of the need to provide a history. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
I have to agree this doesn&#039;t seem relevant to the question. --[[User:Dkrutsko|Dkrutsko]] 00:10, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
A major feature of mainframes is their capacity for redundancy. Mainframes provide redundancy by using the provider&#039;s off-site redundancy feature. This feature lets the customer move all of their processes and applications onto the provider&#039;s mainframe while the provider makes repairs on the customer&#039;s system. Another way that mainframes create redundancy is their use of multiple processors that share the same memory: if one processor dies, the remaining processors still hold all of the cached data. There are multiple ways Windows systems can recreate this redundancy feature of mainframes. The first is to create a Windows cluster server; the cluster uses the same approach as the mainframe&#039;s multi-processor system. Another way Windows systems can create redundancy is by using virtual machines. VMware supports Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or multiple physical machines). The virtual machines set up two different networks: a private network for communication between the virtual machines, and a public network to handle I/O services. The virtual machines also share storage, so that if one fails the other still has all of the data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(this is what I&#039;ve gotten out of my research so far; comments and any edits/suggestions on whether I&#039;m on the right track or not are greatly appreciated :) ) &lt;br /&gt;
*note: This is the second time I have written this, make sure to save whatever you edit in notepad or whatever first so that you don&#039;t lose everything*&lt;br /&gt;
&lt;br /&gt;
link to VMWare&#039;s cluster virtualization http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf&lt;br /&gt;
&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
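The failover behaviour described above (a standby taking over when the active node fails, with shared storage preventing data loss) can be sketched in miniature. This is a toy model for illustration only; the class and function names are invented, and real cluster failover involves heartbeats, quorum, and much more:&lt;br /&gt;

```python
# Toy failover model: nodes share the same storage; requests go to the
# first healthy node, and if it fails the next node takes over without
# data loss because state is shared. Names are invented for this sketch.

class Node:
    def __init__(self, name, shared_state):
        self.name = name
        self.healthy = True
        self.state = shared_state          # all nodes see one storage

    def handle(self, key, value):
        if not self.healthy:
            raise RuntimeError(f"{self.name} is down")
        self.state[key] = value
        return self.name

def submit(cluster, key, value):
    # Try nodes in order; fail over to the next node on error.
    for node in cluster:
        try:
            return node.handle(key, value)
        except RuntimeError:
            continue
    raise RuntimeError("all nodes down")

storage = {}
cluster = [Node("primary", storage), Node("standby", storage)]
first = submit(cluster, "a", 1)            # served by "primary"
cluster[0].healthy = False                 # the primary node fails...
second = submit(cluster, "b", 2)           # ..."standby" takes over
```

Because both nodes reference the same storage, the standby picks up exactly where the primary left off, which is the same reason clustered virtual machines share storage.&lt;br /&gt;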
&lt;br /&gt;
&lt;br /&gt;
:I&#039;ll attempt to re-write this paragraph for clarity and accuracy:&lt;br /&gt;
&lt;br /&gt;
:A feature provided by mainframes is their ability to create redundancy in terms of data storage and parallel processing. Windows can mimic expandable storage and storage redundancy through out-sourced storage solutions.&lt;br /&gt;
&lt;br /&gt;
:Processing redundancy for Windows can be created through the Microsoft Cluster Service (MSCS).  This service allows multiple Windows machines to be connected as nodes in a cluster, where each node has the same applications and only one node is online at any point in time.  If a node in the cluster fails, another will take over; the failing node can then be restarted or replaced without serious downtime.  However, this service does not offer fault tolerance to the same extent as actual mainframes.&lt;br /&gt;
&lt;br /&gt;
:Source: http://msdn.microsoft.com/en-us/library/ms952401.aspx&lt;br /&gt;
&lt;br /&gt;
:Virtual machine nodes can be used in place of physical machine nodes in a cluster, providing redundant application services to end-users.  If a virtual machine fails, other virtual machines can take over; if the failure is on the Windows host machine, then they will all fail.  The virtual cluster can be maintained across multiple machines, allowing multiple users to have the reliability of clusters on fewer machines.&lt;br /&gt;
&lt;br /&gt;
:Let me know what you think.&lt;br /&gt;
:[[User:Brobson|Brobson]] 18:25, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== hot swapping ==&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
Another useful feature of mainframes is the ability to hot-swap. Hot-swapping occurs when there is faulty hardware in one of the processors inside the mainframe and technicians are able to swap out the component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors inside the mainframe. With the right software and setup (redundancy), operators are able to upgrade and/or repair the mainframe as they see fit. Using VMware on a Windows system allows users to hot-add RAM and hot-plug a new virtual CPU into the virtualized system. Using these hot-adding and hot-plugging techniques, the virtual computer can grow to accept loads of varying size. In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular.&lt;br /&gt;
&lt;br /&gt;
These are the concepts I&#039;ve been able to figure out so far about hot-swapping/hot-upgrading, feel free to add/edit and what-not!  &lt;br /&gt;
&lt;br /&gt;
Sources:&lt;br /&gt;
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631,00.html&lt;br /&gt;
http://www.jungo.com/st/hotswap_windows.html&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
:According to your searchvmware.techtarget.com source, a processor cannot be hot-plugged in the truest sense of the word in that the hardware needs to be rebooted to recognize the added hardware.  Hot-swapping demands zero downtime.  &lt;br /&gt;
:If you don&#039;t mind me suggesting, I don&#039;t think this section should be referring to the hot-swapping/hot-adding/hot-plugging of virtual machines or client machines of the mainframe. I think for hot-swapping we should focus on the hot-swapping of hardware components. As such, we can point out that Windows does support mainframe-level hot-swapping with its Windows Server 2008 R2 Datacenter OS&lt;br /&gt;
:&amp;lt;blockquote&amp;gt;&amp;quot;Hot Add/Replace Memory and Processors with supporting hardware&amp;quot;&amp;lt;/blockquote&amp;gt; http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&lt;br /&gt;
&lt;br /&gt;
:If we only consider the capabilities of the PC OS, then Windows only supports plug-and-play devices, such as external hard drives, and does not support RAM or CPU hot-swap.&lt;br /&gt;
&lt;br /&gt;
:I&#039;m also wondering if this should tie into the scalability of a mainframe or if scalability should have its own section.&lt;br /&gt;
:[[User:Brobson|Brobson]] 17:12, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The source you mentioned talks about a virtual machine and says that it can be hot-swapped with no downtime, depending on the guest OS. Some guest OSs need a reboot but some do not. A virtual Windows Server 2008 ENT x64 can hot-add memory with no downtime; it seems that no guest OS can hot-add a CPU without rebooting. And the second part of my paragraph talks about physical Windows systems coupled with a program that enables hot-swapping of SATA hard drives and other components with no downtime.&lt;br /&gt;
I do agree that hot-swapping in a virtual machine may be kind of useless though haha :S. And I&#039;ll check out the Windows Server 2008 R2 Datacenter OS, thanks. [[User:Nshires|Nshires]] 00:33, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Revised:&lt;br /&gt;
A useful feature that mainframes have is the ability to hot-swap. Hot-swapping is the ability to swap out components of a computer/mainframe for new components with no downtime (i.e. the system continues to run through this process). Hot-swapping occurs when there is faulty hardware in one of the processors inside the mainframe; technicians are able to swap out the component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors, memory and storage inside the mainframe. With the right software and setup (redundancy), a mainframe is able to be upgraded and/or repaired as its operators see fit by adding and removing components such as hard drives and processors. &lt;br /&gt;
&lt;br /&gt;
Using VMware on a Windows system allows users to hot-add RAM and hot-plug a new virtual CPU into the virtualized system. Using these hot-adding and hot-plugging techniques, the virtual computer can grow to accept loads of varying size. Depending on the combination of CPU and guest OS, the virtual machine may have to restart and be unable to hot-add/hot-plug. For example, a virtual machine running Windows Server 2008 ENT x64 allows you to hot-add memory, but you must restart it to remove memory or to add/remove a CPU. &lt;br /&gt;
&lt;br /&gt;
In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular. Windows Server 2008 R2 Datacenter, released in 2009, uses dynamic hardware partitioning, meaning that its hardware can be divided into separate partitions, each with its own processors and other components, which allows for hot-swapping/hot-adding of these partitions where needed. &lt;br /&gt;
&lt;br /&gt;
Davis, David. &amp;quot;VMware vSphere hot-add RAM and hot-plug CPU.&amp;quot; TechTarget. N.p., 09.15.2009. Web. 14 Oct 2010. &amp;lt;http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631_mem1,00.html&amp;gt;.&lt;br /&gt;
&amp;quot;Windows Server 2008 R2 Datacenter.&amp;quot; Windows Server 2008 R2. N.p., n.d. Web. 14 Oct 2010. &amp;lt;http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&amp;gt;.&lt;br /&gt;
&amp;quot;Go-HotSwap: CompactPCI Hot Swap.&amp;quot; Jungo. Jungo Ltd, n.d. Web. 14 Oct 2010. &amp;lt;http://www.jungo.com/st/hotswap.html&amp;gt;.&lt;br /&gt;
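As a software analogy for the hot-adding described above, the sketch below grows a running worker pool without ever stopping it; the helper names are invented for illustration, and this is not how hardware hot-swap is actually implemented:&lt;br /&gt;

```python
# Software analogy for hot-adding capacity: worker threads are added
# to a live pool while it keeps serving jobs, with no restart needed.
import threading
import queue

jobs, done = queue.Queue(), queue.Queue()

def worker():
    while True:
        item = jobs.get()
        if item is None:            # sentinel for a clean shutdown
            break
        done.put(item * 2)

pool = []

def hot_add_worker():
    # Add capacity while the system keeps running -- no downtime.
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    pool.append(t)

hot_add_worker()                    # start with one worker
for i in range(5):
    jobs.put(i)
hot_add_worker()                    # "hot-add" a second worker mid-stream
for i in range(5, 10):
    jobs.put(i)

results = sorted(done.get() for _ in range(10))
```

All ten jobs are served even though capacity was added mid-stream, which is the property hot-swappable hardware provides at the physical level.&lt;br /&gt;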
&lt;br /&gt;
feel free to edit [[User:Nshires|Nshires]] 03:49, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== backwards-compatibility ==&lt;br /&gt;
Backwards compatibility means that a newer software version can recognize what the old version writes and how it works; it is a relationship between the two versions. If the new component provides all the functionality of the old one, we say that the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can run on the latest mainframes (such as the zSeries, System/390 family, System z9, etc.). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backwards compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe also requires customers to halt the computer and download the emulation program.&lt;br /&gt;
&lt;br /&gt;
In Windows, one method of implementing backwards compatibility is to add applications, like the Microsoft Windows Application Compatibility Toolkit. This application can make the platform compatible with most software from earlier versions. The second method is that Windows operating systems usually have various subsystems; software originally designed for older versions or other OSs can run in these subsystems. For example, Windows NT has MS-DOS and Win16 subsystems. Windows 7&#039;s backwards compatibility is not very good: if the kernel is different, the OSs can&#039;t be compatible with each other. But that doesn&#039;t mean older programs won&#039;t run; virtualization can be used to run them. The third method is to use shims to create backwards compatibility. Shims act like small libraries that intercept API calls, change the parameters passed, handle the operation themselves, or redirect it elsewhere. In Windows, we can use shims to simulate the behaviour of an older OS version for legacy software. &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 08:34, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
ps. I didn&#039;t find perfect resources, just these. If you guys think any opinion is not correct, please edit it or give suggestions :)&lt;br /&gt;
&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
 &lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey, this sounds really good, I&#039;d add an example where you say &#039;one method to implement backward-compatibility is to add applications&#039;.&lt;br /&gt;
And I did a little research and I found another way to create backwards compatibility using shims: http://en.wikipedia.org/wiki/Shim_%28computing%29&lt;br /&gt;
it pretty much intercepts the calls and changes them so that the old program can run on a new system.&lt;br /&gt;
Good Work, [[User:Nshires|Nshires]] 16:56, 13 October 2010 (UTC)&lt;br /&gt;
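A minimal sketch of the shim idea discussed above (intercept the call, adapt the parameters, translate the result so an old call site runs against a new implementation); the function names here are invented for illustration and are not real Windows APIs:&lt;br /&gt;

```python
# Sketch of an API "shim": the old caller expects save(name, data) to
# return 0 on success, but the new API takes a dict and returns True.
# The shim intercepts the call, adapts the parameters, and translates
# the result, so legacy code runs unchanged. (Names are illustrative.)

def new_save(record):
    # New-style API: takes a dict, returns True on success.
    return isinstance(record, dict) and "name" in record

def save_shim(name, data):
    # Old-style signature; adapt arguments for the new API and map
    # True/False back to the 0/-1 codes legacy callers expect.
    ok = new_save({"name": name, "data": data})
    return 0 if ok else -1

# A legacy call site keeps working against the new implementation:
result = save_shim("report.txt", b"contents")
```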
&lt;br /&gt;
Thanks for your suggestions. I have added some information to the paragraph. :)&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 00:24, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
~Andrew Bown (October 13 2:08) I&#039;ll write this paragraph.&lt;br /&gt;
I don&#039;t have time to write this before work (12-5), but I can put out the information I&#039;ve already gotten from research, so if someone could help me complete this, that would be awesome, since I have to finish up my 3004 document as well tonight.&lt;br /&gt;
~[User:Abown|Andrew Bown] (October 14th 11:12am)&lt;br /&gt;
Mainframes are able to achieve high input/output rates with their specialized Message Passing Interfaces (MPIs), which allow for fast intercommunication by sharing memory between the different cores. https://www.mpitech.com/mpitech.nsf/pages/mainframe-&amp;amp;-AS400-printing_en.html&lt;br /&gt;
&lt;br /&gt;
The latest versions of Windows clusters support a Microsoft-created MPI called, unsurprisingly, Microsoft MPI[http://msdn.microsoft.com/en-us/library/bb524831(VS.85).aspx]. &lt;br /&gt;
&lt;br /&gt;
Microsoft&#039;s MPI is based on MPICH2; explanation here: http://www.springerlink.com/content/hc4nyva6dvg6vdpp/&lt;br /&gt;
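As a rough illustration of the send/receive model an MPI provides, the sketch below uses Python threads and queues standing in for ranks and the wire; real MS-MPI programs use its C API, so this is an analogy only, with invented helper names:&lt;br /&gt;

```python
# Minimal message-passing sketch: two "ranks" exchange a message via
# queues, mimicking MPI-style send/receive semantics. Real MPI runs
# over shared memory or Ethernet; this is only an analogy.
import threading
import queue

inbox = {0: queue.Queue(), 1: queue.Queue()}   # one mailbox per rank

def send(dest, msg):
    inbox[dest].put(msg)

def recv(rank):
    return inbox[rank].get()    # blocks until a message arrives

def rank1():
    msg = recv(1)               # wait for a message from rank 0
    send(0, msg.upper())        # reply to rank 0

t = threading.Thread(target=rank1)
t.start()
send(1, "ping")                 # rank 0 sends to rank 1
reply = recv(0)                 # rank 0 blocks for the reply
t.join()
```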
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Looking at the details, Microsoft MPI only runs if a process is put into the Microsoft Job Scheduler, so we may want to combine input/output and throughput.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey guys. According to the resources above, the method for Windows to provide high input/output and massive throughput is almost the same, but I have no idea how to combine the two sections. Do we need to write something about input/output, or just consider it under massive throughput?  [[User:Zhangqi|Zhangqi]] 22:38, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Massive Throughput ==&lt;br /&gt;
[[User:Achamney|Achamney]] 01:09, 14 October 2010 (UTC) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[User:Achamney|Achamney]] 21:18, 14 October 2010 (UTC) Done for now, I will come back to this after i get back (after 10:00pm tonight ish) and fix up the flow and such&lt;br /&gt;
&lt;br /&gt;
Throughput, unlike input and output, is the measurement of the number of calculations per second that a machine can perform. This is usually measured in FLOPS (floating-point operations per second). It is impossible for one sole Windows machine to compete with a mainframe&#039;s throughput: not only do mainframe processors have extremely high frequencies, they also have a considerable number of cores. This all changes, however, when computer clustering is introduced. In recent years, IBM has constructed a clustered system called Roadrunner that ranks third in the TOP500 supercomputer list as of June 2010.[http://hubpages.com/hub/Most-Powerful-Computers-In-The-World] It has a total of 60 connected units, over a thousand processors, and the capability of computing at a rate of 1.7 petaflops. &lt;br /&gt;
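For a sense of what FLOPS means, the figure can be estimated crudely by timing a loop of floating-point operations; real rankings such as the TOP500 use carefully tuned LINPACK benchmarks, so this sketch is only a ballpark and the function name is invented:&lt;br /&gt;

```python
# Crude FLOPS estimate: time a loop of floating-point operations and
# divide the operation count by the elapsed time. Interpreter overhead
# dominates in Python, so this vastly understates the hardware's peak.
import time

def estimate_flops(n=200_000):
    start = time.perf_counter()
    acc = 0.0
    for _ in range(n):
        acc = (acc + 1.5) * 0.5     # one add and one multiply: 2 FLOPs
    elapsed = time.perf_counter() - start
    return (2.0 * n) / elapsed      # operations per second

rate = estimate_flops()
```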
&lt;br /&gt;
The question is: with such complex hardware, how can any software make use of this clustered system? Luckily, Microsoft has introduced an OS called Windows Compute Cluster Server, which provides the software needed to allow the main computer to utilize the computing power of its cluster nodes. Windows mainly uses MS-MPI (Microsoft Message Passing Interface) to send messages via Ethernet to the other nodes.[http://webcache.googleusercontent.com/search?q=cache:EPlDExBxmDYJ:download.microsoft.com/download/9/e/d/9edcdeab-f1fb-4670-8914-c08c5c6f22a5/HPC_Overview.doc+Windows+Compute+Cluster+Server&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=ca&amp;amp;client=firefox-a] Developers can use this interface because it automatically connects a given process to each node. Windows can then use its scheduler to determine which node receives each job; it keeps track of each node and shuts the job down once the output is received. &lt;br /&gt;
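The job-scheduling behaviour described above (hand each job to a node, gather the outputs, shut the job down) follows the classic scatter/gather pattern, sketched here with a thread pool standing in for cluster nodes; this is an analogy, not the Compute Cluster Server API:&lt;br /&gt;

```python
# Toy "cluster scheduler": split the work into jobs, hand each job to
# a worker (standing in for a compute node), and gather the outputs.
from concurrent.futures import ThreadPoolExecutor

def run_job(chunk):
    # Each "node" computes a partial result for its chunk of the data.
    return sum(x * x for x in chunk)

data = list(range(100))
jobs = [data[i:i + 25] for i in range(0, 100, 25)]   # four jobs
with ThreadPoolExecutor(max_workers=4) as pool:      # four "nodes"
    partials = list(pool.map(run_job, jobs))         # farm jobs out
total = sum(partials)                                # combine outputs
```

The limitation noted below for grid computing applies here too: the pattern only works because the problem divides cleanly into independent chunks.&lt;br /&gt;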
&lt;br /&gt;
Today, clustering computers together with the intent of optimizing throughput is accomplished using grid computing. Grid computing shares the same basic ideals as cluster computing; however, grids are dedicated to computing massive-scale problems.[http://searchdatacenter.techtarget.com/definition/grid-computing] Each subsection of a problem is handed to a compute node in the grid to be calculated. The one clear limitation of this computational model is that the problem must be divisible into pieces for each compute node to work on. This style of high-throughput computing can be used for problems such as high-energy physics or biology models.&lt;br /&gt;
&lt;br /&gt;
In general, however, the most popular solution for problems that require large throughput is to construct a cluster model. Most businesses require the reliability of clusters, even though this sacrifices some performance; the high availability of a cluster server is unmatched by the grid model.[http://www.dba-oracle.com/real_application_clusters_rac_grid/grid_vs_clusters.htm] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=/com.ibm.ztpf-ztpfdf.doc_put.cur/gtpc3/c3thru.html]&lt;br /&gt;
[http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci213140,00.html]&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=4438</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=4438"/>
		<updated>2010-10-15T03:49:38Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* hot swapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email. I&#039;ll add some of the stuff I find soon; I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-First time-sharing system: CTSS (Compatible Time Sharing System), created at MIT in the early 1960s&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact info (qzhang13@connect.carleton.ca)&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.mspx&lt;br /&gt;
&lt;br /&gt;
-[[User:Zhangqi|Zhangqi]] 15:02, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, Here&#039;s my contact info, nshires@connect.carleton.ca, I&#039;ll have some sources posted by the weekend hopefully&lt;br /&gt;
&lt;br /&gt;
Hey guys i&#039;m not in your group but I found some useful information that could help you &lt;br /&gt;
http://en.wikipedia.org/wiki/Mainframe_computer I know we are not supposed to use wiki references but it&#039;s a good place to start&lt;br /&gt;
&lt;br /&gt;
Okay, found a paper titled &amp;quot;Mainframe Scalability in the Windows Environment&amp;quot;&lt;br /&gt;
http://new.cmg.org/proceedings/2003/3023.pdf (requires registration to access but is free) ~ Andrew (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Folks, remember to do your discussions here.  Use four tildes to sign your entries, that adds time and date.  Email discussions won&#039;t count towards your participation grade...&lt;br /&gt;
[[User:Soma|Anil]] 15:43, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Okay, I&#039;m going to break the essay into paragraphs on the main page, and people can each choose one paragraph to write. Then, after all paragraphs are written, we will communally edit it to have a cohesive voice. It is the only way I can think of to viably distribute the work. ~Andrew (abown2@connect.carleton.ca) 11:00 am, 10 October 2010.&lt;br /&gt;
&lt;br /&gt;
Link to IBMs info on their mainframes --[[User:Lmundt|Lmundt]] 19:58, 7 October 2010 (UTC)&lt;br /&gt;
http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zmainframe/zconc_valueofmf.htm&lt;br /&gt;
&lt;br /&gt;
Just made the revelation that the Windows equivalent to a mainframe is referred to as &#039;&#039;&#039;clustering&#039;&#039;&#039;, which should help in finding information.&lt;br /&gt;
Here&#039;s the wiki article on the technology for an overview: http://en.wikipedia.org/wiki/Microsoft_Cluster_Server ~ Andrew (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, I agree with Andrew&#039;s idea. We should break the essay into several sections and work on it together. From my point of view, I think we should focus on how Windows provides the mainframe functionality, and VMware and EMC&#039;s storage should be our examples. As listed on the main page, there are many advantages and disadvantages of the mainframe. But where is Windows? I&#039;m confused... &lt;br /&gt;
In my opinion, the first paragraph can introduce the mainframe (such as its history, features, applications, etc.) and what mainframe-equivalent functionality Windows supports. Then we can use some paragraphs to discuss the functionalities in detail, and VMware and EMC&#039;s storage solution can also be involved in this part. At last we make a conclusion for the whole essay. Do you think it&#039;s feasible? &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 02:12, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Ah, but the question isn&#039;t the pros and cons of each; it is how to get mainframe functionality from a Windows operating system. The way I split up the essay, each paragraph focuses on one aspect of mainframes and how it can be duplicated in Windows, either with Windows tools or third-party software. You don&#039;t need to go into the history or applications of mainframes, since that is not required by the phrasing of the question.&lt;br /&gt;
&lt;br /&gt;
~ Andrew Bown, 11:28 AM, October 11th 2010&lt;br /&gt;
&lt;br /&gt;
Okay, I think I catch your meaning. So what we should do now is edit the content of each paragraph as soon as possible. Time is limited.&lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 19:57, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
If you guys are looking for an authoritative source on how Windows works, I *highly* recommend checking out &amp;quot;Windows Internals 4th Edition&amp;quot; or &amp;quot;Windows Internals 5th Edition&amp;quot; by Mark Russinovich and David Solomon.&lt;br /&gt;
&lt;br /&gt;
--[[User:3maisons|3maisons]] 18:59, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
OLD VERSION - Here for the time being while optimizing some sections --[[User:Dkrutsko|Dkrutsko]] 00:20, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
Added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Main Aspects of mainframes:&lt;br /&gt;
* redundancy, which enables high reliability and security&lt;br /&gt;
* high input/output capacity&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* massive throughput&lt;br /&gt;
* continuous operation, which allows hot upgrades&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &amp;lt;br&amp;gt;&lt;br /&gt;
Thanks Abown, just tweaked a couple of the sentences to improve flow [[User:Achamney|Achamney]] 01:13, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Also, I removed this statement: &amp;quot;Unfortunately, computers are only able to process data as fast as they can receive it&amp;quot;. I couldn&#039;t find a good place to plug it in.&lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes so well suited to this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable and guards against data loss due to downtime. Mainframes can also be upgraded without taking the system down for repairs, which further increases reliability. Upgrading a mainframe does not force the software to change, however: through virtualization, mainframes offer backwards compatibility, so software never needs to be replaced. Mainframes support high input/output so that the machine is always being utilized, and to make sure they are utilized to their fullest, they provide powerful schedulers that process transactions with the fastest possible throughput. [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how are Windows-based systems supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and third-party software solutions, which can duplicate each of these features in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output, or resource utilization.&lt;br /&gt;
&lt;br /&gt;
Using this paragraph and my solution from the assignment, I was able to expand on this topic. It is on the main page at the moment; see if you like it, and add anything you think I missed --[[User:Dkrutsko|Dkrutsko]] 05:17, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At that point in history there were no personal computers, and the only organizations that could afford a computer were massive businesses. The main functions of these mainframes were to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t seem to actually be pertinent to the question at hand. Question does not have any indication of the need to provide a history. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
I have to agree this doesn&#039;t seem relevant to the question. --[[User:Dkrutsko|Dkrutsko]] 00:10, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
A major feature of mainframes is their capacity for redundancy. One way mainframes provide redundancy is through the provider&#039;s off-site redundancy feature: the customer can move all of their processes and applications onto the provider&#039;s mainframe while the provider makes repairs on the customer&#039;s system. Another way mainframes create redundancy is through multi-processors that share the same memory; if one processor dies, the remaining processors still hold all of the cached data. There are multiple ways Windows systems can recreate this redundancy. The first is to build a Windows cluster server, which mirrors the mainframe&#039;s multi-processor design. Another is to use virtual machines: VMware supports Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines set up two different networks: a private network for communication between the virtual machines, and a public network to handle I/O services. The virtual machines also share storage, so that if one fails, the others still have all of the data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This is what I&#039;ve gotten out of my research so far; comments and any edits/suggestions on whether I&#039;m on the right track or not are greatly appreciated :) ) &lt;br /&gt;
*note: This is the second time I have written this, make sure to save whatever you edit in notepad or whatever first so that you don&#039;t lose everything*&lt;br /&gt;
&lt;br /&gt;
link to VMWare&#039;s cluster virtualization http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf&lt;br /&gt;
&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:I&#039;ll attempt to re-write this paragraph for clarity and accuracy:&lt;br /&gt;
&lt;br /&gt;
:A feature provided by mainframes is their ability to create redundancy in terms of data storage and parallel processing. Windows can mimic expandable storage and storage redundancy through outsourced storage solutions.&lt;br /&gt;
&lt;br /&gt;
:Processing redundancy for Windows can be created through the Microsoft Cluster Service (MSCS).  This service allows multiple Windows machines to be connected as nodes in a cluster, where each node has the same applications and only one node is online at any point in time.  If a node in the cluster fails, another will take over; the failing node can then be restarted or replaced without serious downtime.  However, this service does not offer fault tolerance to the same extent as actual mainframes.&lt;br /&gt;
&lt;br /&gt;
:Source: http://msdn.microsoft.com/en-us/library/ms952401.aspx&lt;br /&gt;
&lt;br /&gt;
:Virtual machine nodes can be used in place of physical machine nodes in a cluster, providing redundant application services to end users.  If a virtual machine fails, other virtual machines can take over; if the failure is on the Windows host machine, then they will all fail.  The virtual cluster can be spread across multiple physical machines, giving users the reliability of clusters on fewer machines.&lt;br /&gt;
&lt;br /&gt;
:Let me know what you think.&lt;br /&gt;
:[[User:Brobson|Brobson]] 18:25, 14 October 2010 (UTC)&lt;br /&gt;
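&lt;br /&gt;
The active/passive failover described in this section can be sketched as a toy model. This is only an illustrative sketch: the node names, the health flags, and the first-healthy-node-wins rule are invented for illustration, while a real MSCS cluster uses heartbeats and shared quorum storage to pick the active node.&lt;br /&gt;

```python
# Toy model of active/passive failover: one node serves at a time,
# and the next healthy standby takes over when it fails.
# Names and the health-check rule are invented for illustration.

class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True

class Cluster:
    def __init__(self, nodes):
        # nodes are ordered by failover priority
        self.nodes = nodes

    def active(self):
        # the first healthy node is the one that serves requests
        for node in self.nodes:
            if node.healthy:
                return node
        return None  # total outage: every node has failed

cluster = Cluster([Node("node-a"), Node("node-b"), Node("node-c")])
print(cluster.active().name)      # node-a serves requests
cluster.nodes[0].healthy = False  # node-a fails
print(cluster.active().name)      # node-b takes over
```

The point of the sketch is only the takeover rule; restarting or replacing the failed node, as described above, would correspond to marking it healthy again.&lt;br /&gt;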
&lt;br /&gt;
== hot swapping ==&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
Another useful feature of mainframes is the ability to hot-swap. Hot-swapping occurs when there is faulty hardware in one of the processors inside the mainframe and technicians are able to swap out the component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as needed. Using VMware on a Windows system allows users to hot-add RAM and hot-plug new virtual CPUs into the virtualized system. With these hot-adding and hot-plugging techniques, the virtual computer can grow to accept loads of varying size. In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular.&lt;br /&gt;
&lt;br /&gt;
These are the concepts I&#039;ve been able to figure out so far about hot-swapping/hot-upgrading, feel free to add/edit and what-not!  &lt;br /&gt;
&lt;br /&gt;
Sources:&lt;br /&gt;
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631,00.html&lt;br /&gt;
http://www.jungo.com/st/hotswap_windows.html&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
:According to your searchvmware.techtarget.com source, a processor cannot be hot-plugged in the truest sense of the word, in that the hardware needs to be rebooted to recognize the added hardware.  Hot-swapping demands zero downtime.  &lt;br /&gt;
:If you don&#039;t mind me suggesting, I don&#039;t think this section should refer to the hot-swapping/hot-adding/hot-plugging of virtual machines or client machines of the mainframe.  I think for hot-swapping we should focus on the hot-swapping of hardware components.  As such, we can point out that Windows does support mainframe-level hot-swapping with its Windows Server 2008 R2 Datacenter OS&lt;br /&gt;
:&amp;lt;blockquote&amp;gt;&amp;quot;Hot Add/Replace Memory and Processors with supporting hardware&amp;quot;&amp;lt;/blockquote&amp;gt; http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&lt;br /&gt;
&lt;br /&gt;
:If we are only considering the capabilities of the PC OS, then Windows only supports plug-and-play devices, such as external hard drives, and does not support RAM or CPU hot-swap.&lt;br /&gt;
&lt;br /&gt;
:I&#039;m also wondering if this should tie into the scalability of a mainframe or if scalability should have its own section.&lt;br /&gt;
:[[User:Brobson|Brobson]] 17:12, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The source you mentioned talks about a virtual machine and says that it can be hot-swapped with no downtime, depending on the guest OS. Some guest OSes need a reboot, but some do not. A virtual Windows Server 2008 ENT x64 can hot-add memory with no downtime; it seems that no virtual OS can hot-add a CPU without rebooting. And the second part of my paragraph talks about physical Windows systems coupled with a program that enables hot-swapping of SATA hard drives and other components with no downtime.&lt;br /&gt;
I do agree that hot-swapping in a virtual machine may be kind of useless though haha :S. And I&#039;ll check out the Windows Server 2008 R2 Datacenter OS, Thanks [[User:Nshires|Nshires]] 00:33, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Revised:&lt;br /&gt;
A useful feature that mainframes have is the ability to hot-swap. Hot-swapping is the ability to swap out components of a computer/mainframe for new components with no downtime (i.e. the system continues to run through the process). Hot-swapping occurs when there is faulty hardware in one of the processors inside the mainframe: technicians are able to swap out the component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors, memory, and storage inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as needed by adding and removing components such as hard drives and processors. &lt;br /&gt;
&lt;br /&gt;
Using VMware on a Windows system allows users to hot-add RAM and hot-plug new virtual CPUs into the virtualized system. With these hot-adding and hot-plugging techniques, the virtual computer can grow to accept loads of varying size. Under some circumstances, with certain CPUs and guest OSes, the virtual machine may have to restart and is unable to hot-add/hot-plug. For example, a virtual machine running Windows Server 2008 ENT x64 allows you to hot-add memory, but you must restart it to remove memory or to add/remove a CPU. &lt;br /&gt;
&lt;br /&gt;
In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular. The Windows Server 2008 R2 Datacenter edition, released in 2009, uses dynamic hardware partitioning, meaning its hardware can be divided into separate partitions with their own processors and other components, which allows for hot-swapping/hot-adding of these partitions where needed. Feel free to edit. [[User:Nshires|Nshires]] 03:49, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== backwards-compatibility ==&lt;br /&gt;
Backwards-compatibility means that a newer software version can recognize what the old version wrote and how it worked; it is a relationship between the two versions. If the new component provides all the functionality of the old one, we say that the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can run on the latest mainframes (such as the zSeries, System/390 family, System z9, etc.). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backwards compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe also needs customers to halt the computer and load the emulation program.&lt;br /&gt;
&lt;br /&gt;
In Windows, one method of implementing backwards-compatibility is to add applications, like the Microsoft Windows Application Compatibility Toolkit, which can make the platform compatible with most software from earlier versions. A second method is the various subsystems Windows operating systems usually have: software originally designed for older versions or other OSes can run in these subsystems. Windows NT, for example, has MS-DOS and Win16 subsystems. Windows 7&#039;s backwards-compatibility, however, is not very good; if the kernel is different, the OSes can&#039;t be compatible with each other. That doesn&#039;t mean older programs won&#039;t run, though: virtualization can be used to run them. A third method is to use shims to create backwards-compatibility. Shims are small libraries that intercept API calls, change the parameters passed, and handle or redirect the operations. In Windows, we can use shims to simulate the behaviour of an old OS version for legacy software. &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 08:34, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
ps. I didn&#039;t find perfect resources,just these.If you guys think any opinion is not correct,plz edit it or give suggestions :)&lt;br /&gt;
&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
 &lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey, this sounds really good. I&#039;d add an example where you say &#039;one method to implement backwards-compatibility is to add applications&#039;.&lt;br /&gt;
And I did a little research and I found another way to create backwards compatibility using shims: http://en.wikipedia.org/wiki/Shim_%28computing%29&lt;br /&gt;
It pretty much intercepts the calls and changes them so that the old program can run on a new system.&lt;br /&gt;
Good Work, [[User:Nshires|Nshires]] 16:56, 13 October 2010 (UTC)&lt;br /&gt;
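&lt;br /&gt;
The interception idea behind shims can be sketched in miniature. This is a hedged analogy, not the actual Windows shim infrastructure: new_api and legacy_shim are invented names, and the API change here is just a swapped argument order standing in for whatever really changed between versions.&lt;br /&gt;

```python
# Miniature shim: intercept a call written against an old calling
# convention and translate it for the new API.
# new_api and legacy_shim are invented names for illustration.

def new_api(path, mode):
    # stand-in for a modern function whose argument order changed
    return f"opened {path} with mode {mode}"

def legacy_shim(mode, path):
    # old callers passed (mode, path); rewrite the call so the
    # legacy program keeps working against the new implementation
    return new_api(path, mode)

print(legacy_shim("r", "data.txt"))  # prints: opened data.txt with mode r
```

Windows compatibility shims do this same kind of interception at the API-call level, rewriting parameters and redirecting operations as described above.&lt;br /&gt;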
&lt;br /&gt;
Thanks for your suggestions. I have added some information to the paragraph. :)&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 00:24, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
~Andrew Bown (October 13 2:08) I&#039;ll write this paragraph.&lt;br /&gt;
I don&#039;t have time to write this before work (12-5), but I can put out the information I&#039;ve already gotten through research. If someone could help me complete this, that would be awesome, since I have to finish up my 3004 document tonight as well.&lt;br /&gt;
~[[User:Abown|Andrew Bown]] (October 14th 11:12am)&lt;br /&gt;
Mainframes are able to achieve high input/output rates with their specialized Message Passing Interfaces (MPIs), which allow for fast intercommunication by sharing memory between the different cores. https://www.mpitech.com/mpitech.nsf/pages/mainframe-&amp;amp;-AS400-printing_en.html&lt;br /&gt;
&lt;br /&gt;
The latest versions of Windows clusters support a Microsoft created MPI surprisingly called Microsoft MPI[http://msdn.microsoft.com/en-us/library/bb524831(VS.85).aspx]. &lt;br /&gt;
&lt;br /&gt;
Microsoft&#039;s MPI is based on MPICH2; explanation here: http://www.springerlink.com/content/hc4nyva6dvg6vdpp/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Looking at the details, Microsoft MPI only runs if a process is put into the Microsoft Job Scheduler. So we may want to combine input/output and throughput.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey guys. According to the resources above, the methods for Windows to provide high input/output and massive throughput are almost the same, but I have no idea how to combine the two sections. Do we need to write something about input/output, or just consider it under massive throughput?  [[User:Zhangqi|Zhangqi]] 22:38, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Massive Throughput ==&lt;br /&gt;
[[User:Achamney|Achamney]] 01:09, 14 October 2010 (UTC) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[User:Achamney|Achamney]] 21:18, 14 October 2010 (UTC) Done for now. I will come back to this after I get back (after 10:00 pm tonight-ish) and fix up the flow and such.&lt;br /&gt;
&lt;br /&gt;
Throughput, unlike input and output, is a measurement of the number of calculations per second that a machine can perform. This is usually measured in FLOPS (floating-point operations per second). It is impossible for a single Windows machine to compete with a mainframe&#039;s throughput: mainframe processors not only run at extremely high frequencies, they also have a considerable number of cores. This all changes, however, when computer clustering is introduced. In recent years, IBM has constructed a clustered system called Roadrunner that ranks third in the TOP500 supercomputer list as of June 2010.[http://hubpages.com/hub/Most-Powerful-Computers-In-The-World] It has a total of 60 connected units, over a thousand processors, and the capability of computing at a rate of 1.7 petaflops. &lt;br /&gt;
&lt;br /&gt;
The question is: with such complex hardware, how is it possible for any software to use this clustered system? Luckily, Microsoft has introduced an OS called Windows Compute Cluster Server, which provides the necessary software to allow the main computer to utilize the computing power of its cluster nodes. Windows mainly uses MS-MPI (Microsoft Message Passing Interface) to send messages to its other nodes via Ethernet.[http://webcache.googleusercontent.com/search?q=cache:EPlDExBxmDYJ:download.microsoft.com/download/9/e/d/9edcdeab-f1fb-4670-8914-c08c5c6f22a5/HPC_Overview.doc+Windows+Compute+Cluster+Server&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=ca&amp;amp;client=firefox-a] Developers can use this interface because it automatically connects a given process to each node. Windows can then use its scheduler to determine which node receives each job; it keeps track of each node, and shuts the job down once the output is received. &lt;br /&gt;
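&lt;br /&gt;
The send/receive pattern behind MPI-style message passing can be sketched with threads and queues standing in for ranks and network messages. This is only an analogy, not MS-MPI itself: real MS-MPI programs are written against calls like MPI_Send and MPI_Recv, and the squaring workload here is invented for illustration.&lt;br /&gt;

```python
# Analogy for MPI-style message passing: a worker "rank" receives
# work messages, computes, and sends results back. Queues stand in
# for the send/recv channels a real MPI runtime would provide.

import queue
import threading

def worker(inbox, outbox):
    while True:
        msg = inbox.get()      # blocking receive
        if msg is None:        # shutdown message
            return
        outbox.put(msg * msg)  # compute and send the result back

inbox = queue.Queue()
outbox = queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for n in (2, 3, 4):
    inbox.put(n)               # "send" work to the other rank
results = [outbox.get() for _ in (2, 3, 4)]

inbox.put(None)                # tell the worker to stop
t.join()
print(results)                 # squares computed by the worker rank
```

FIFO queues preserve message order here, so the results come back in the order the work was sent.&lt;br /&gt;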
&lt;br /&gt;
Today, clustering computers together with the intent of optimizing throughput is accomplished using grid computing. Grid computing shares the same basic ideas as cluster computing; however, grids have the sole job of computing massive-scale problems.[http://searchdatacenter.techtarget.com/definition/grid-computing] Each subsection of a problem is passed out to a compute node in the grid to be calculated. The one clear limitation of this computational model is that the problem must be able to be broken down into several pieces for each compute node to work on. This style of high-throughput computing can be used for problems such as high-energy physics or biology models.&lt;br /&gt;
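&lt;br /&gt;
The break-it-into-pieces requirement can be sketched as a scatter/gather loop. The node count and the toy workload (summing squares) are invented for illustration, and a real grid would ship each chunk to a separate machine rather than loop over the chunks locally.&lt;br /&gt;

```python
# Sketch of grid-style decomposition: one large problem is split
# into independent chunks, each handled by a separate "node".
# The node count and toy workload are invented for illustration.

def node_compute(chunk):
    # the work a single grid node would perform on its slice
    return sum(x * x for x in chunk)

data = list(range(1, 101))                            # the full problem
chunks = [data[i:i + 25] for i in range(0, 100, 25)]  # split across 4 nodes
partials = [node_compute(c) for c in chunks]          # scatter phase
total = sum(partials)                                 # gather phase
print(total)  # 338350, the sum of squares 1..100
```

The model only works because the partial sums are independent; a problem whose steps depend on each other cannot be scattered this way, which is exactly the limitation noted above.&lt;br /&gt;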
&lt;br /&gt;
In general, however, the most popular solution for problems that require large throughput is to construct a cluster model. Most businesses require the reliability of clusters, even though it sacrifices performance; there is no competition to the high availability of a cluster server as compared to the grid model.[http://www.dba-oracle.com/real_application_clusters_rac_grid/grid_vs_clusters.htm] &lt;br /&gt;
&lt;br /&gt;
[http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=/com.ibm.ztpf-ztpfdf.doc_put.cur/gtpc3/c3thru.html]&lt;br /&gt;
[http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci213140,00.html]&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=4284</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=4284"/>
		<updated>2010-10-15T00:35:15Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* hot swapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email. I&#039;ll add some of the stuff I find soon; I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-First time-sharing system: CTSS (Compatible Time-Sharing System), created at MIT in the early 1960s&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact info (qzhang13@connect.carleton.ca)&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.mspx&lt;br /&gt;
&lt;br /&gt;
-[[User:Zhangqi|Zhangqi]] 15:02, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, here&#039;s my contact info: nshires@connect.carleton.ca. I&#039;ll have some sources posted by the weekend, hopefully.&lt;br /&gt;
&lt;br /&gt;
Hey guys, I&#039;m not in your group, but I found some useful information that could help you: &lt;br /&gt;
http://en.wikipedia.org/wiki/Mainframe_computer (I know we are not supposed to use wiki references, but it&#039;s a good place to start)&lt;br /&gt;
&lt;br /&gt;
Okay, found a paper titled &amp;quot;Mainframe Scalability in the Windows Environment&amp;quot;:&lt;br /&gt;
http://new.cmg.org/proceedings/2003/3023.pdf (requires registration to access, but is free) ~ Andrew (abown2@connect.carleton.ca), sometime Friday&lt;br /&gt;
&lt;br /&gt;
Folks, remember to do your discussions here.  Use four tildes to sign your entries, that adds time and date.  Email discussions won&#039;t count towards your participation grade...&lt;br /&gt;
[[User:Soma|Anil]] 15:43, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Okay going to break the essay into points paragraphs on the main page which people can choose one paragraph to write. Then after all paragraphs are written we will communally edit it to have a cohesive voice. It is the only way I can viably think of to properly distribute the work. ~Andrew (abown2@connect.carleton.ca) 11:00 am, 10 October 2010.&lt;br /&gt;
&lt;br /&gt;
Link to IBMs info on their mainframes --[[User:Lmundt|Lmundt]] 19:58, 7 October 2010 (UTC)&lt;br /&gt;
http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zmainframe/zconc_valueofmf.htm&lt;br /&gt;
&lt;br /&gt;
Just made the revelation that when trying to find information on the Windows equivalent to mainframe is refered to as &#039;&#039;&#039;clustering&#039;&#039;&#039; which should help finding information.&lt;br /&gt;
Here&#039;s the wiki article on the technology for an overview http://en.wikipedia.org/wiki/Microsoft_Cluster_Server ~ Andrew (abown2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
hey,I agree with Andrew&#039;s idea. We should break the essay into several sections and work it together.From my point of view, I think we should focus on how Windows provide the mainframe functionality and the VMware and EMC&#039;s storage should be our examples. As listed on the main page, there are many advantages and disadvantages of the mainframe.But where is Windows? I&#039;m confused... &lt;br /&gt;
In my opinion, the first paragraph can introduct the mainframe (such as the history,features,application,etc) and what mainframe-equivalent functionality Windows support. Then we can use some paragraphs to discuss the functionalities in details. And VMware and EMC&#039;s storage solution also can be involved in this part. At last we make a conclusion of the whloe essay. Do you think it&#039;s feasible? &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 02:12, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Ah but the question isn&#039;t the pros and cons of each. It is how to get mainframe functionality from a Windows Operating System. How I split up the essay has each paragraph focusing on one aspect of mainframes and how it can be duplicated in windows either with windows tools or 3rd party software. You don&#039;t need to go into the history or applications of mainframes since that is not required by the phrasing of the question.&lt;br /&gt;
&lt;br /&gt;
~ Andrew Bown, 11:28 AM, October 11th 2010&lt;br /&gt;
&lt;br /&gt;
Okay, I think I catch your meaning. So now we should do is to edit the content of each paragragh as soon as possible. Time is limited.&lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 19:57, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
If you guys are looking for an authoritative source on how Windows works, I *highly* recommend checking out &amp;quot;Window Internals 4th Edition&amp;quot; or &amp;quot;Windows Internals 5th Edition&amp;quot; by Mark Russinovich and David Solomon.&lt;br /&gt;
&lt;br /&gt;
--[[User:3maisons|3maisons]] 18:59, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
OLD VERSION - Here for the time being while optimizing some sections --[[User:Dkrutsko|Dkrutsko]] 00:20, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragram just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Main Aspects of mainframes:&lt;br /&gt;
* redundancy which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* support massive throughput&lt;br /&gt;
* Systems run constantly so they can be hot upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &amp;lt;br&amp;gt;&lt;br /&gt;
Thanks Abown, just tweaked a couple of the sentences to improve flow [[User:Achamney|Achamney]] 01:13, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Also, i removed this statement &amp;quot;Unfortunately, computers are only able to process data as fast as they can receive it&amp;quot;. I couldn&#039;t find a good place to plug it in.&lt;br /&gt;
&lt;br /&gt;
Mainframes have been always used for large corporations to process thousands of small transactions, but what strengths allow for mainframes to be useful in their purpose. Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved by having tremendous redundancy which allows for mainframes to be extremely reliable. This also gives security when concerning data loss due to downtime. Mainframes can be upgraded without taking the system down to allow for repairs, which further increases reliability. After upgrading a mainframe, however, the software does not change, so they can offer the features of backwards compatibility through virtualization; software never needs to be replaced. Mainframes support high input/output so that the mainframe is always being utilized. To make sure mainframes are utilized to their fullest, they support powerful schedulers which ensure the fastest throughput for processing transactions as fast as possible. [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how are Windows based systems supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and software solutions which can duplicate these features in a Windows environment. Be it redundancy, real-time upgrading, virtualization, high input/output or utilizing resources.&lt;br /&gt;
&lt;br /&gt;
Using this paragraph and my solution on the assignment I was able to expand on this topic. It is in the main page at the moment, see if you like it, add anything you think I missed --[[User:Dkrutsko|Dkrutsko]] 05:17, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At that point in history there were no personal computers, and the only organizations who could afford a computer were massive businesses. The main functions of these mainframes were to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t seem to actually be pertinent to the question at hand. The question does not give any indication of the need to provide a history. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
I have to agree this doesn&#039;t seem relevant to the question. --[[User:Dkrutsko|Dkrutsko]] 00:10, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
A major feature of mainframes is their support for redundancy. One way mainframes provide redundancy is through the provider&#039;s off-site redundancy feature: the customer can move all of their processes and applications onto the provider&#039;s mainframe while the provider makes repairs on the customer&#039;s system. Another way mainframes create redundancy is their use of multiple processors that share the same memory; if one processor dies, the remaining processors still have access to all of the cached data. There are several ways Windows systems can reproduce this redundancy. The first is to build a Windows cluster server, which mirrors the mainframe&#039;s multi-processor approach. Another is to use virtual machines: VMWare supports the Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines are set up on two different networks: a private network for communication between the virtual machines, and a public network for I/O services. The virtual machines also share storage, so that if one fails, the others still have all of the data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(this is what I&#039;ve gotten out of some researching so far; comments and any edits/suggestions on whether I&#039;m on the right track or not are greatly appreciated :) ) &lt;br /&gt;
*note: This is the second time I have written this, make sure to save whatever you edit in notepad or whatever first so that you don&#039;t lose everything*&lt;br /&gt;
&lt;br /&gt;
link to VMWare&#039;s cluster virtualization http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf&lt;br /&gt;
&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:I&#039;ll attempt to re-write this paragraph for clarity and accuracy:&lt;br /&gt;
&lt;br /&gt;
:A feature provided by mainframes is their ability to create redundancy in terms of data storage and parallel processing. Windows can mimic expandable storage and storage redundancy through outsourced storage solutions.&lt;br /&gt;
&lt;br /&gt;
:Processing redundancy for Windows can be created through the Microsoft Cluster Service (MSCS).  This service allows multiple Windows machines to be connected as nodes in a cluster, where each node has the same applications and only one node is online at any point in time.  If a node in the cluster fails, another will take over. The failing node can then be restarted or replaced without serious downtime.  However, this service does not offer fault tolerance to the same extent as actual mainframes.&lt;br /&gt;
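:The active/passive failover behaviour described above can be sketched in a few lines. This is only a toy model of the idea, not the MSCS API; the node names and health-check logic are invented for illustration.&lt;br /&gt;

```python
# Toy sketch of active/passive failover as described above.
# All names here are invented for illustration; this is not MSCS.

class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True

class Cluster:
    """Every node hosts the same application; only one is online."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.active = nodes[0]  # one node online at any point in time

    def heartbeat(self):
        # If the active node has failed, a healthy standby takes over.
        if not self.active.healthy:
            standby = [n for n in self.nodes if n.healthy]
            if standby:
                self.active = standby[0]
        return self.active.name

cluster = Cluster([Node("node-a"), Node("node-b"), Node("node-c")])
print(cluster.heartbeat())      # node-a is serving
cluster.active.healthy = False  # node-a fails
print(cluster.heartbeat())      # node-b has taken over
```

:The point of the sketch is the design choice MSCS makes: the standby nodes already have the application installed, so failover is just a change of which node is online, which is why downtime stays small.&lt;br /&gt;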
&lt;br /&gt;
:Source: http://msdn.microsoft.com/en-us/library/ms952401.aspx&lt;br /&gt;
&lt;br /&gt;
:Virtual machine nodes can be used in place of physical machine nodes in a cluster, providing redundant application services to end-users.  If a virtual machine fails, other virtual machines can take over; if the failure is on the Windows host machine itself, however, they will all fail.  The virtual cluster can be maintained across multiple machines, allowing multiple users to have the reliability of clusters on fewer machines.&lt;br /&gt;
&lt;br /&gt;
:Let me know what you think.&lt;br /&gt;
:[[User:Brobson|Brobson]] 18:25, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== hot swapping ==&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
Another useful feature of mainframes is the ability to hot-swap. Hot-swapping means that when a hardware component inside the mainframe is faulty, technicians are able to swap out that component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as needed while it keeps running. Using VMWare on a Windows system, users can hot-add RAM and hot-plug a new virtual CPU into the virtualized system. With these hot-adding and hot-plugging techniques, the virtual machine can grow to accept loads of varying size. On non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular.&lt;br /&gt;
&lt;br /&gt;
These are the concepts I&#039;ve been able to figure out so far about hot-swapping/hot-upgrading, feel free to add/edit and what-not!  &lt;br /&gt;
&lt;br /&gt;
Sources:&lt;br /&gt;
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631,00.html&lt;br /&gt;
http://www.jungo.com/st/hotswap_windows.html&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
:According to your searchvmware.techtarget.com source, a processor cannot be hot-plugged in the truest sense of the word, in that the system needs to be rebooted to recognize the added hardware.  Hot-swapping demands zero downtime.  &lt;br /&gt;
:If you don&#039;t mind me suggesting, I don&#039;t think this section should refer to the hot-swapping/hot-adding/hot-plugging of virtual machines or client machines of the mainframe.  I think for hot-swapping we should focus on the hot-swapping of hardware components.  As such, we can point out that Windows does support mainframe-level hot-swapping with its Windows Server 2008 R2 Datacenter OS:&lt;br /&gt;
:&amp;lt;blockquote&amp;gt;&amp;quot;Hot Add/Replace Memory and Processors with supporting hardware&amp;quot;&amp;lt;/blockquote&amp;gt; http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&lt;br /&gt;
&lt;br /&gt;
:If we only consider the capabilities of the PC OS, then Windows only supports plug-and-play devices, such as external hard drives, and does not support RAM or CPU hot-swap.&lt;br /&gt;
&lt;br /&gt;
:I&#039;m also wondering if this should tie into the scalability of a mainframe or if scalability should have its own section.&lt;br /&gt;
:[[User:Brobson|Brobson]] 17:12, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The source you mentioned says that a virtual machine can be hot-swapped with no downtime, depending on the guest OS. Some guest OSs need a reboot, but some do not. The virtual Windows Server 2008 ENT x64 can hot-add memory with no downtime; it seems that no virtual OS can hot-add a CPU without rebooting. The second part of my paragraph talks about physical Windows systems coupled with a program that enables hot-swapping of SATA hard drives and other components with no downtime.&lt;br /&gt;
I do agree that hot-swapping in a virtual machine may be kind of useless though haha :S. And I&#039;ll check out the Windows Server 2008 R2 Datacenter OS, Thanks [[User:Nshires|Nshires]] 00:33, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== backwards-compatibility ==&lt;br /&gt;
Backwards compatibility means that a newer software version can recognize what the old version wrote and how it worked; it is a relationship between the two versions. If the new component provides all the functionality of the old one, we say the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can run on the latest mainframes (such as the zSeries, the System/390 family, System z9, etc.). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backwards compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe also requires customers to halt the computer and load the emulation program.&lt;br /&gt;
&lt;br /&gt;
In Windows, one method of implementing backwards compatibility is to add applications such as the Microsoft Windows Application Compatibility Toolkit, which can make the platform compatible with most software from earlier versions. A second method is that Windows operating systems usually have various subsystems in which software originally designed for older versions or other OSs can run; Windows NT, for example, has MS-DOS and Win16 subsystems. Windows 7&#039;s backwards compatibility, however, is not very good: if the kernel is different, the OSs can&#039;t be compatible with each other. That doesn&#039;t mean older programs won&#039;t run, since virtualization can be used to run them. A third method is to use shims to create backwards compatibility. Shims are small libraries that intercept API calls, change the parameters passed, handle the operation themselves, or redirect it elsewhere. In Windows, shims can simulate the behaviour of an older OS version for legacy software. &lt;br /&gt;
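The shim idea above can be illustrated with a short sketch. Real Windows shims are DLLs that hook native API calls; the function names below (modern_save, shim_legacy_save) are invented purely for illustration of the intercept-and-adapt pattern.&lt;br /&gt;

```python
# Toy illustration of a compatibility shim: a small wrapper that
# intercepts a call, adapts the parameters, and forwards it.
# modern_save / shim_legacy_save are invented names for illustration.

def modern_save(data, encoding="utf-8"):
    """The new API: requires an explicit encoding parameter."""
    return f"saved {len(data)} bytes as {encoding}"

def shim_legacy_save(data):
    """Shim: old programs called save(data) with no encoding.
    The shim intercepts that call and fills in the parameter the
    new API expects, so legacy code keeps working unchanged."""
    return modern_save(data, encoding="latin-1")

# A legacy caller, unaware the API changed, goes through the shim:
print(shim_legacy_save("old record"))  # saved 10 bytes as latin-1
```

The legacy program never changes; only the thin layer between it and the new API does, which is exactly what makes shims cheap compared to porting the application.&lt;br /&gt;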
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 08:34, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
ps. I didn&#039;t find perfect resources, just these. If you guys think any opinion is not correct, please edit it or give suggestions :)&lt;br /&gt;
&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
 &lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey, this sounds really good. I&#039;d add an example where you say &#039;one method to implement backwards compatibility is to add applications&#039;.&lt;br /&gt;
I also did a little research and found another way to create backwards compatibility, using shims: http://en.wikipedia.org/wiki/Shim_%28computing%29&lt;br /&gt;
A shim pretty much intercepts the calls and changes them so that the old program can run on a new system.&lt;br /&gt;
Good work, [[User:Nshires|Nshires]] 16:56, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Thanks for your suggestions. I have added some information to the paragraph. :)&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 00:24, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
~Andrew Bown (October 13 2:08) I&#039;ll write this paragraph.&lt;br /&gt;
I don&#039;t have time to write this before work (12-5), but I can put out the information I&#039;ve already gathered through research, so if someone could help me complete this that would be awesome, since I have to finish up my 3004 document tonight as well.&lt;br /&gt;
~[[User:Abown|Andrew Bown]] (October 14th 11:12 am)&lt;br /&gt;
Mainframes are able to achieve high input/output rates with their specialized Message Passing Interfaces (MPIs), which allow for fast intercommunication by sharing memory between the different cores. https://www.mpitech.com/mpitech.nsf/pages/mainframe-&amp;amp;-AS400-printing_en.html&lt;br /&gt;
&lt;br /&gt;
The latest versions of Windows clusters support a Microsoft-created MPI, unsurprisingly called Microsoft MPI[http://msdn.microsoft.com/en-us/library/bb524831(VS.85).aspx]. &lt;br /&gt;
&lt;br /&gt;
Microsoft&#039;s MPI is based on MPICH2; an explanation is here: http://www.springerlink.com/content/hc4nyva6dvg6vdpp/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Looking at the details, the Microsoft MPI only runs if a process is put into the Microsoft Job Scheduler. So we may want to combine input/output and throughput.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Massive Throughput ==&lt;br /&gt;
[[User:Achamney|Achamney]] 01:09, 14 October 2010 (UTC) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[User:Achamney|Achamney]] 21:18, 14 October 2010 (UTC) Done for now; I will come back to this after I get back (after 10:00 pm tonight, ish) and fix up the flow and such&lt;br /&gt;
&lt;br /&gt;
Throughput, unlike input and output, is a measurement of the number of calculations per second that a machine can perform, usually measured in FLOPS (floating-point operations per second). It is impossible for a single Windows machine to compete with a mainframe&#039;s throughput: not only do mainframe processors have extremely high frequencies, but they also have a considerable number of cores. This all changes, however, when computer clustering is introduced. In recent years, IBM has constructed a clustered system called Roadrunner, which ranks third on the TOP500 supercomputer list as of June 2010.[http://hubpages.com/hub/Most-Powerful-Computers-In-The-World] It has a total of 60 connected units, over a thousand processors, and the capability of computing at a rate of 1.7 petaflops. &lt;br /&gt;
&lt;br /&gt;
The question is: with such complex hardware, how is it possible for any sort of software to use this clustered system? Luckily, Microsoft has introduced an OS called Windows Compute Cluster Server, which provides the software necessary to allow the head computer to utilize the computing power of its cluster nodes. Windows mainly uses MS-MPI (Microsoft Message Passing Interface) to send messages to its other nodes via Ethernet.[http://webcache.googleusercontent.com/search?q=cache:EPlDExBxmDYJ:download.microsoft.com/download/9/e/d/9edcdeab-f1fb-4670-8914-c08c5c6f22a5/HPC_Overview.doc+Windows+Compute+Cluster+Server&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=ca&amp;amp;client=firefox-a] Developers can use this interface because it automatically connects a given process to each node. Windows can then use its scheduler to determine which node receives each job; it keeps track of each node, and shuts the job down once the output is received. &lt;br /&gt;
&lt;br /&gt;
Today, clustering computers together with the intent of optimizing throughput is accomplished using grid computing. Grid computing shares the same basic ideals as cluster computing; however, grids have the sole job of computing massive-scale problems.[http://searchdatacenter.techtarget.com/definition/grid-computing] Each subsection of a problem is passed out to a compute node in the grid to be calculated. The one clear limitation of this computational model is that the problem must be able to be broken down into several pieces for each compute node to work on. This style of high-throughput computing can be used for problems such as high-energy physics or biology models.&lt;br /&gt;
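The split-compute-gather pattern just described can be sketched with nothing but the standard library. This is only a toy model: a thread pool stands in for the grid&#039;s compute nodes, and the function names are invented for illustration.&lt;br /&gt;

```python
# Sketch of the grid-computing pattern described above: break a large
# problem into pieces, hand each piece to a worker (standing in for a
# compute node), then gather and combine the partial results.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Work done by one "compute node" on its piece of the problem.
    return sum(x * x for x in chunk)

def grid_sum_of_squares(data, nodes=4):
    # Scatter: split the problem into one chunk per node.
    size = max(1, len(data) // nodes)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Compute the pieces in parallel, then gather the partial results.
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        partials = list(pool.map(partial_sum, chunks))
    return sum(partials)

print(grid_sum_of_squares(list(range(1000))))  # equals the serial sum of squares
```

Note how the sketch also exposes the limitation mentioned above: this only works because summing squares decomposes cleanly into independent chunks; a problem that cannot be split this way gains nothing from the grid.&lt;br /&gt;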
&lt;br /&gt;
In general, however, the most popular solution for problems that require large throughput is to construct a cluster model. Most businesses require the reliability of clusters, even though this sacrifices performance; there is no competition to the high availability of a cluster server as compared to the grid model.[http://www.dba-oracle.com/real_application_clusters_rac_grid/grid_vs_clusters.htm] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=/com.ibm.ztpf-ztpfdf.doc_put.cur/gtpc3/c3thru.html]&lt;br /&gt;
[http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci213140,00.html]&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=4281</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=4281"/>
		<updated>2010-10-15T00:33:44Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* hot swapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email; I&#039;ll add some of the stuff I find soon. I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-First time-sharing system: CTSS (Compatible Time Sharing System), created at MIT in the early 1960s&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact info (qzhang13@connect.carleton.ca)&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.mspx&lt;br /&gt;
&lt;br /&gt;
-[[User:Zhangqi|Zhangqi]] 15:02, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, Here&#039;s my contact info, nshires@connect.carleton.ca, I&#039;ll have some sources posted by the weekend hopefully&lt;br /&gt;
&lt;br /&gt;
Hey guys, I&#039;m not in your group but I found some useful information that could help you: &lt;br /&gt;
http://en.wikipedia.org/wiki/Mainframe_computer (I know we are not supposed to use wiki references, but it&#039;s a good place to start)&lt;br /&gt;
&lt;br /&gt;
Okay, found a paper titled &amp;quot;Mainframe Scalability in the Windows Environment&amp;quot;:&lt;br /&gt;
http://new.cmg.org/proceedings/2003/3023.pdf (registration required, but free) ~ Andrew (abown2@connect.carleton.ca), sometime Friday&lt;br /&gt;
&lt;br /&gt;
Folks, remember to do your discussions here.  Use four tildes to sign your entries, that adds time and date.  Email discussions won&#039;t count towards your participation grade...&lt;br /&gt;
[[User:Soma|Anil]] 15:43, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Okay, I&#039;m going to break the essay into paragraphs on the main page so that people can each choose one paragraph to write. Then, after all the paragraphs are written, we will communally edit them to have a cohesive voice. It&#039;s the only way I can think of to properly distribute the work. ~Andrew (abown2@connect.carleton.ca) 11:00 am, 10 October 2010.&lt;br /&gt;
&lt;br /&gt;
Link to IBMs info on their mainframes --[[User:Lmundt|Lmundt]] 19:58, 7 October 2010 (UTC)&lt;br /&gt;
http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zmainframe/zconc_valueofmf.htm&lt;br /&gt;
&lt;br /&gt;
Just made the revelation that the Windows equivalent to a mainframe is referred to as &#039;&#039;&#039;clustering&#039;&#039;&#039;, which should help in finding information.&lt;br /&gt;
Here&#039;s the wiki article on the technology for an overview: http://en.wikipedia.org/wiki/Microsoft_Cluster_Server ~ Andrew (abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, I agree with Andrew&#039;s idea. We should break the essay into several sections and work on it together. From my point of view, I think we should focus on how Windows provides the mainframe functionality, and VMware and EMC&#039;s storage should be our examples. As listed on the main page, there are many advantages and disadvantages of the mainframe. But where is Windows? I&#039;m confused... &lt;br /&gt;
In my opinion, the first paragraph can introduce the mainframe (such as its history, features, applications, etc.) and what mainframe-equivalent functionality Windows supports. Then we can use some paragraphs to discuss the functionality in detail; VMware and EMC&#039;s storage solution can also be covered in this part. At last we make a conclusion for the whole essay. Do you think it&#039;s feasible? &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 02:12, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Ah but the question isn&#039;t the pros and cons of each. It is how to get mainframe functionality from a Windows Operating System. How I split up the essay has each paragraph focusing on one aspect of mainframes and how it can be duplicated in windows either with windows tools or 3rd party software. You don&#039;t need to go into the history or applications of mainframes since that is not required by the phrasing of the question.&lt;br /&gt;
&lt;br /&gt;
~ Andrew Bown, 11:28 AM, October 11th 2010&lt;br /&gt;
&lt;br /&gt;
Okay, I think I catch your meaning. So what we should do now is edit the content of each paragraph as soon as possible. Time is limited.&lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 19:57, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
If you guys are looking for an authoritative source on how Windows works, I *highly* recommend checking out &amp;quot;Windows Internals, 4th Edition&amp;quot; or &amp;quot;Windows Internals, 5th Edition&amp;quot; by Mark Russinovich and David Solomon.&lt;br /&gt;
&lt;br /&gt;
--[[User:3maisons|3maisons]] 18:59, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
OLD VERSION - Here for the time being while optimizing some sections --[[User:Dkrutsko|Dkrutsko]] 00:20, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
Added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00, 10th of October 2010&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Main Aspects of mainframes:&lt;br /&gt;
* redundancy which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* massive throughput&lt;br /&gt;
* continuous operation, so systems can be hot-upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &amp;lt;br&amp;gt;&lt;br /&gt;
Thanks Abown, just tweaked a couple of the sentences to improve flow [[User:Achamney|Achamney]] 01:13, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Also, I removed this statement: &amp;quot;Unfortunately, computers are only able to process data as fast as they can receive it&amp;quot;. I couldn&#039;t find a good place to plug it in.&lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes so well suited to this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable and also guards against data loss caused by outages. Mainframes can be upgraded without taking the system down for repairs, which further increases reliability. Upgrading a mainframe, however, does not change its software, so mainframes can offer backwards compatibility through virtualization; software never needs to be replaced. Mainframes support high input/output so that the machine is always being utilized, and to make sure they are utilized to their fullest, they provide powerful schedulers which ensure the highest possible throughput when processing transactions. [http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how are Windows-based systems supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and third-party software solutions, which can duplicate each of these features in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output, or resource utilization.&lt;br /&gt;
&lt;br /&gt;
Using this paragraph and my solution on the assignment I was able to expand on this topic. It is on the main page at the moment; see if you like it, and add anything you think I missed --[[User:Dkrutsko|Dkrutsko]] 05:17, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At that point in history there were no personal computers, and the only organizations who could afford a computer were massive businesses. The main functions of these mainframes were to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t seem to actually be pertinent to the question at hand. The question does not give any indication of the need to provide a history. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
I have to agree this doesn&#039;t seem relevant to the question. --[[User:Dkrutsko|Dkrutsko]] 00:10, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
A major feature of mainframes is their support for redundancy. One way mainframes provide redundancy is through the provider&#039;s off-site redundancy feature: the customer can move all of their processes and applications onto the provider&#039;s mainframe while the provider makes repairs on the customer&#039;s system. Another way mainframes create redundancy is their use of multiple processors that share the same memory; if one processor dies, the remaining processors still have access to all of the cached data. There are several ways Windows systems can reproduce this redundancy. The first is to build a Windows cluster server, which mirrors the mainframe&#039;s multi-processor approach. Another is to use virtual machines: VMWare supports the Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines are set up on two different networks: a private network for communication between the virtual machines, and a public network for I/O services. The virtual machines also share storage, so that if one fails, the others still have all of the data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(this is what I&#039;ve gotten out of some researching so far; comments and any edits/suggestions on whether I&#039;m on the right track or not are greatly appreciated :) ) &lt;br /&gt;
*note: This is the second time I have written this, make sure to save whatever you edit in notepad or whatever first so that you don&#039;t lose everything*&lt;br /&gt;
&lt;br /&gt;
link to VMWare&#039;s cluster virtualization http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf&lt;br /&gt;
&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:I&#039;ll attempt to re-write this paragraph for clarity and accuracy:&lt;br /&gt;
&lt;br /&gt;
:A feature provided by mainframes is their ability to create redundancy in terms of data storage and parallel processing. Windows can mimic expandable storage and storage redundancy through outsourced storage solutions.&lt;br /&gt;
&lt;br /&gt;
:Processing redundancy for Windows can be created through the Microsoft Cluster Service (MSCS).  This service allows multiple Windows machines to be connected as nodes in a cluster, where each node has the same applications and only one node is online at any point in time.  If a node in the cluster fails, another will take over. The failing node can then be restarted or replaced without serious downtime.  However, this service does not offer fault tolerance to the same extent as actual mainframes.&lt;br /&gt;
&lt;br /&gt;
:Source: http://msdn.microsoft.com/en-us/library/ms952401.aspx&lt;br /&gt;
&lt;br /&gt;
:Virtual machine nodes can be used in place of physical machine nodes in a cluster, providing redundant application services to end-users.  If a virtual machine fails, other virtual machines can take over; if the failure is on the Windows host machine itself, however, they will all fail.  The virtual cluster can be maintained across multiple machines, allowing multiple users to have the reliability of clusters on fewer machines.&lt;br /&gt;
&lt;br /&gt;
:Let me know what you think.&lt;br /&gt;
:[[User:Brobson|Brobson]] 18:25, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== hot swapping ==&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
Another useful feature of mainframes is the ability to hot-swap. Hot-swapping means that when a hardware component inside the mainframe is faulty, technicians are able to swap out that component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as needed while it keeps running. Using VMWare on a Windows system, users can hot-add RAM and hot-plug a new virtual CPU into the virtualized system. With these hot-adding and hot-plugging techniques, the virtual machine can grow to accept loads of varying size. On non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular.&lt;br /&gt;
&lt;br /&gt;
These are the concepts I&#039;ve been able to figure out so far about hot-swapping/hot-upgrading, feel free to add/edit and what-not!  &lt;br /&gt;
&lt;br /&gt;
Sources:&lt;br /&gt;
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631,00.html&lt;br /&gt;
http://www.jungo.com/st/hotswap_windows.html&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
:According to your searchvmware.techtarget.com source, a processor cannot be hot-plugged in the truest sense of the word, in that the system needs to be rebooted to recognize the added hardware.  Hot-swapping demands zero downtime.  &lt;br /&gt;
:If you don&#039;t mind me suggesting, I don&#039;t think this section should refer to the hot-swapping/hot-adding/hot-plugging of virtual machines or client machines of the mainframe.  I think for hot-swapping we should focus on the hot-swapping of hardware components.  As such, we can point out that Windows does support mainframe-level hot-swapping with its Windows Server 2008 R2 Datacenter OS:&lt;br /&gt;
:&amp;lt;blockquote&amp;gt;&amp;quot;Hot Add/Replace Memory and Processors with supporting hardware&amp;quot;&amp;lt;/blockquote&amp;gt; http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx&lt;br /&gt;
&lt;br /&gt;
:If we only consider the capabilities of the PC OS, then Windows only supports plug-and-play devices, such as external hard drives, and does not support RAM or CPU hot-swap.&lt;br /&gt;
&lt;br /&gt;
:I&#039;m also wondering if this should tie into the scalability of a mainframe or if scalability should have its own section.&lt;br /&gt;
:[[User:Brobson|Brobson]] 17:12, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The source you mentioned talks about a virtual machine, and whether it can be hot-swapped with no downtime depends on the guest OS. Some guest OSs need a reboot, but some do not: the virtual Windows Server 2008 ENT x64 can hot-add memory and CPU with no downtime, it just cannot hot-remove them. And the second part of my paragraph talks about physical Windows systems coupled with a program that enables hot-swapping of SATA hard drives and other components with no downtime.&lt;br /&gt;
I do agree that hot-swapping in a virtual machine may be kind of useless though haha :S. And I&#039;ll check out the Windows Server 2008 R2 Datacenter OS, Thanks [[User:Nshires|Nshires]] 00:33, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== backwards-compatibility ==&lt;br /&gt;
Backwards-compatibility means that a newer software version can recognize what the old version writes and how it works; it is a relationship between the two versions. If the new component provides all the functionality of the old one, we say that the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can run on the latest mainframes (the zSeries, System/390 family, System z9, etc.). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backward compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe vendor also needs customers to halt the computer and download the emulation program.&lt;br /&gt;
&lt;br /&gt;
In Windows, one method of implementing backwards-compatibility is to add applications such as the Microsoft Windows Application Compatibility Toolkit, which can make the platform compatible with most software from earlier versions. A second method is that Windows operating systems usually have various subsystems; software originally designed for older versions or other OSs can run in these subsystems. Windows NT, for example, has MS-DOS and Win16 subsystems. Windows 7&#039;s backwards-compatibility, however, is not very good: if the kernel is different, the OSs can&#039;t be compatible with each other. This doesn&#039;t mean that older programs won&#039;t run, though; virtualization is used to make them run. The third method is to use shims to create backwards-compatibility. Shims are small libraries that intercept API calls, change the parameters passed, handle the operation themselves, or redirect it. In Windows, shims can be used to simulate the behaviour of an older OS version for legacy software. &lt;br /&gt;
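The shim mechanism mentioned above can be sketched in a few lines: a wrapper with the legacy signature intercepts the call, rewrites the parameters, and forwards it to the modern API. This is a minimal, hypothetical Python sketch of the idea only; every function name here is invented for illustration and is not a real Windows API.&lt;br /&gt;

```python
# Hypothetical sketch of the shim idea: a small wrapper intercepts a
# "legacy" call, rewrites its parameters, and forwards it to the modern
# API. All names here are invented for illustration.

def modern_open_file(path, mode="r", encoding="utf-8"):
    """The 'new' API: takes a mode string and an encoding."""
    return {"path": path, "mode": mode, "encoding": encoding}

def legacy_open_file(path, readonly=1):
    """Shim with the 'old' signature: translates the integer readonly
    flag into the mode string the new API expects."""
    mode = "r" if readonly else "r+"
    return modern_open_file(path, mode=mode)

# Old programs keep calling legacy_open_file() unchanged; the shim
# redirects the operation to the new implementation.
handle = legacy_open_file("report.txt")
print(handle["mode"])  # the shim translated readonly=1 into mode "r"
```

The same interception pattern is what compatibility shims do at the binary level: the legacy program is unmodified, and only the call path changes.&lt;br /&gt;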
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 08:34, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
ps. I didn&#039;t find perfect resources, just these. If you guys think any opinion is not correct, please edit it or give suggestions :)&lt;br /&gt;
&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
 &lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey, this sounds really good, I&#039;d add an example where you say &#039;one method to implement backward-compatibility is to add applications&#039;.&lt;br /&gt;
And I did a little research and I found another way to create backwards compatibility using shims: http://en.wikipedia.org/wiki/Shim_%28computing%29&lt;br /&gt;
it pretty much intercepts the calls and changes them so that the old program can run on a new system.&lt;br /&gt;
Good Work, [[User:Nshires|Nshires]] 16:56, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Thanks for your suggestions. I have added some information to the paragraph. :)&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 00:24, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
~Andrew Bown (October 13 2:08) I&#039;ll write this paragraph.&lt;br /&gt;
I don&#039;t have time to write this before work (12-5), but I can put out the information I have already gathered through research, so if someone could help me complete this, that would be awesome, since I also have to finish up my 3004 document tonight.&lt;br /&gt;
~[[User:Abown|Andrew Bown]] (October 14th 11:12am)&lt;br /&gt;
Mainframes are able to achieve high input/output rates with their specialized Message Passing Interfaces (MPIs), which allow for fast intercommunication by sharing memory between the different cores. https://www.mpitech.com/mpitech.nsf/pages/mainframe-&amp;amp;-AS400-printing_en.html&lt;br /&gt;
&lt;br /&gt;
The latest versions of Windows clusters support a Microsoft-created MPI implementation, aptly named Microsoft MPI[http://msdn.microsoft.com/en-us/library/bb524831(VS.85).aspx]. &lt;br /&gt;
&lt;br /&gt;
Microsoft&#039;s MPI is based on MPICH2; an explanation is available here: http://www.springerlink.com/content/hc4nyva6dvg6vdpp/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Looking at the details, Microsoft MPI only runs if a process is submitted to the Microsoft Job Scheduler, so we may want to combine the input/output and throughput sections.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Massive Throughput ==&lt;br /&gt;
[[User:Achamney|Achamney]] 01:09, 14 October 2010 (UTC) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[User:Achamney|Achamney]] 21:18, 14 October 2010 (UTC) Done for now; I will come back to this after I get back (after 10:00pm tonight, ish) and fix up the flow and such&lt;br /&gt;
&lt;br /&gt;
Throughput, unlike input and output, is the measurement of the number of calculations per second a machine can perform. This is usually measured in FLOPS (floating-point operations per second). It is impossible for a single Windows machine to compete with a mainframe&#039;s throughput: not only do mainframe processors have extremely high frequencies, they also have a considerable number of cores. This all changes, however, when computer clustering is introduced. In recent years, IBM has constructed a clustered system called Roadrunner that ranks third in the TOP500 supercomputer list as of June 2010.[http://hubpages.com/hub/Most-Powerful-Computers-In-The-World] It has a total of 60 connected units, over a thousand processors, and the capability of computing at a rate of 1.7 petaflops. &lt;br /&gt;
&lt;br /&gt;
The question is, with such complex hardware, how is it possible for any software to use this clustered system? Microsoft has introduced an OS called Windows Compute Cluster Server, which provides the software needed for the head computer to utilize the computing power of its cluster nodes. Windows mainly uses MS-MPI (Microsoft Message Passing Interface) to send messages via Ethernet to the other nodes.[http://webcache.googleusercontent.com/search?q=cache:EPlDExBxmDYJ:download.microsoft.com/download/9/e/d/9edcdeab-f1fb-4670-8914-c08c5c6f22a5/HPC_Overview.doc+Windows+Compute+Cluster+Server&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=ca&amp;amp;client=firefox-a] Developers can use this interface because it automatically connects a given process to each node. Windows can then use its scheduler to determine which node receives each job; it keeps track of each node and shuts the job down once the output is received. &lt;br /&gt;
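MS-MPI itself is a C API that needs a Windows cluster to run, but the message-passing pattern described above can be sketched with the Python standard library: a head process sends a work item to a node process over a channel, waits for the result, and then shuts the job down. This is an illustration of the model only, not the actual MS-MPI API.&lt;br /&gt;

```python
# Sketch of the message-passing model: a head process dispatches work to
# a compute node over a channel and collects the result. Python pipes
# stand in for MPI send/receive purely as an illustration.
from multiprocessing import Process, Pipe

def node(conn):
    """A compute node: receive a job, compute, send the answer back."""
    numbers = conn.recv()                    # blocking receive, like MPI_Recv
    conn.send(sum(n * n for n in numbers))   # send result back, like MPI_Send
    conn.close()

def run_job(numbers):
    """The head process: hand a job to one node and collect the output,
    the way the cluster scheduler dispatches and reaps jobs."""
    head_end, node_end = Pipe()
    worker = Process(target=node, args=(node_end,))
    worker.start()
    head_end.send(numbers)    # dispatch the work item
    result = head_end.recv()  # wait for the node to answer
    worker.join()             # shut the job down once output is received
    return result

if __name__ == "__main__":
    print(run_job([1, 2, 3, 4]))  # sum of squares computed on the "node": 30
```

In real MS-MPI the equivalent calls are MPI_Send and MPI_Recv between ranked processes, with the job scheduler playing the role of the head process here.&lt;br /&gt;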
&lt;br /&gt;
Today, clustering computers together with the intent of optimizing throughput is accomplished using grid computing. Grid computing shares the same basic ideas as cluster computing; however, grids have the sole job of computing massive-scale problems.[http://searchdatacenter.techtarget.com/definition/grid-computing] Each subsection of a problem is handed out to a compute node in the grid to be calculated. The one clear limitation of this computational model is that the problem must be divisible into pieces for each compute node to work on. This style of high-throughput computing can be used for problems such as high-energy physics or biology models.&lt;br /&gt;
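The divide-and-distribute idea behind grid computing can be made concrete in a few lines: split a large problem into independent chunks, compute each chunk on a separate worker, then combine the partial results. In this hedged sketch a local process pool stands in for the grid; a real grid would ship the chunks to other machines.&lt;br /&gt;

```python
# Sketch of the grid-computing idea: break a massive problem into
# independent pieces, compute each piece on a separate worker, then
# merge. A local process pool stands in for the grid's compute nodes.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done by one compute node: sum the squares in its piece."""
    return sum(n * n for n in chunk)

def grid_sum_of_squares(data, nodes=4):
    # Scatter: cut the problem into roughly one piece per node.
    size = max(1, len(data) // nodes)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Compute each piece in parallel, then gather and combine.
    with Pool(nodes) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(grid_sum_of_squares(list(range(1, 101))))  # 338350
```

The requirement noted above, that the problem must be divisible, shows up directly: partial_sum must not depend on any chunk other than its own.&lt;br /&gt;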
&lt;br /&gt;
In general, however, the most popular way to solve problems that require large throughput is to construct a cluster. Most businesses require the reliability of clusters even though it sacrifices performance; the high availability of a cluster server is unmatched by the grid model.[http://www.dba-oracle.com/real_application_clusters_rac_grid/grid_vs_clusters.htm] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=/com.ibm.ztpf-ztpfdf.doc_put.cur/gtpc3/c3thru.html]&lt;br /&gt;
[http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci213140,00.html]&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3282</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3282"/>
		<updated>2010-10-13T16:56:21Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* backwards-compatibility */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Main Aspects of mainframes:&lt;br /&gt;
* redundancy which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* support massive throughput&lt;br /&gt;
* Systems run constantly, so they can be hot-upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes so well suited to this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable and guards against data loss due to downtime. Mainframes can also be upgraded without taking the system down, which allows for repairs and further increases reliability. After upgrading a mainframe the software does not change; mainframes offer backwards compatibility through virtualization, so software never needs to be replaced, it is just processed quicker. Since computers can only run as fast as the data they receive, mainframes support high input/output so that the machine is always being utilized. To make sure mainframes are utilized to their fullest, they support powerful schedulers that ensure the fastest possible throughput for processing transactions.[http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how is a Windows-based system supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and software solutions, that can duplicate these features in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output, or resource utilization.&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At this point in history there were no personal computers, and the only organizations that could afford a computer were massive businesses. The main functions of these mainframes were to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t actually seem pertinent to the question at hand. The question does not give any indication that a history is needed. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
A major feature of mainframes is their capacity for redundancy. Mainframes provide redundancy through the provider&#039;s off-site redundancy feature, which lets the customer move all of their processes and applications onto the provider&#039;s mainframe while the provider makes repairs on the customer&#039;s system. Another way mainframes create redundancy is their use of multiple processors that share the same memory: if one processor dies, the remaining processors still keep all of the cache. There are multiple ways Windows systems can recreate this redundancy feature of mainframes. The first is to create a Windows cluster server, which uses the same approach as the mainframe&#039;s multi-processor system. Another way is to use virtual machines. VMWare supports Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines set up two different networks: a private network for communication between the virtual machines, and a public network for I/O services. The virtual machines also share storage, so that if one fails the other still has all of the data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(this is what I&#039;ve gotten out of some research so far; comments and any edits/suggestions on whether I&#039;m on the right track or not are greatly appreciated :) ) &lt;br /&gt;
*note: This is the second time I have written this, make sure to save whatever you edit in notepad or whatever first so that you don&#039;t lose everything*&lt;br /&gt;
&lt;br /&gt;
link to VMWare&#039;s cluster virtualization http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf&lt;br /&gt;
&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== hot swapping ==&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
Another useful feature of mainframes is the ability to hot-swap. Hot-swapping occurs when a hardware component inside the mainframe fails and technicians are able to swap out that component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors inside the mainframe. With the right software and setup (redundancy), operators can upgrade and/or repair a mainframe as they see fit. Using VMWare on a Windows system allows users to hot-add RAM and hot-plug a new virtual CPU into the virtualized system. With these hot-adding and hot-plugging techniques, the virtual computer can grow in size to accept loads of varying size. In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular.&lt;br /&gt;
&lt;br /&gt;
These are the concepts I&#039;ve been able to figure out so far about hot-swapping/hot-upgrading, feel free to add/edit and what-not!  &lt;br /&gt;
&lt;br /&gt;
Sources:&lt;br /&gt;
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631,00.html&lt;br /&gt;
http://www.jungo.com/st/hotswap_windows.html&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== backwards-compatibility ==&lt;br /&gt;
Backwards-compatibility means that a newer software version can recognize what the old version writes and how it works; it is a relationship between the two versions. If the new component provides all the functionality of the old one, we say that the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can run on the latest mainframes (the zSeries, System/390 family, System z9, etc.). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backward compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe vendor also needs customers to halt the computer and download the emulation program.&lt;br /&gt;
&lt;br /&gt;
In Windows, one method of implementing backwards-compatibility is to add applications; the platform can then be compatible with most software from earlier versions. The other method is that Windows operating systems usually have various subsystems; software originally designed for older versions or other OSs can run in these subsystems. Windows NT, for example, has MS-DOS and Win16 subsystems. Windows 7&#039;s backwards-compatibility, however, is not very good: if the kernel is different, the OSs can&#039;t be compatible with each other. This doesn&#039;t mean that older programs won&#039;t run, though; virtualization is used to make them run. &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 08:34, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
ps. I didn&#039;t find perfect resources, just these. If you guys think any opinion is not correct, please edit it or give suggestions :)&lt;br /&gt;
&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
 &lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey, this sounds really good, I&#039;d add an example where you say &#039;one method to implement backward-compatibility is to add applications&#039;.&lt;br /&gt;
And I did a little research and I found another way to create backwards compatibility using shims: http://en.wikipedia.org/wiki/Shim_%28computing%29&lt;br /&gt;
it pretty much intercepts the calls and changes them so that the old program can run on a new system.&lt;br /&gt;
Good Work, [[User:Nshires|Nshires]] 16:56, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
&lt;br /&gt;
== massive throughput ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3279</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3279"/>
		<updated>2010-10-13T16:47:05Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* hot swapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Main Aspects of mainframes:&lt;br /&gt;
* redundancy which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* support massive throughput&lt;br /&gt;
* Systems run constantly, so they can be hot-upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes so well suited to this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable and guards against data loss due to downtime. Mainframes can also be upgraded without taking the system down, which allows for repairs and further increases reliability. After upgrading a mainframe the software does not change; mainframes offer backwards compatibility through virtualization, so software never needs to be replaced, it is just processed quicker. Since computers can only run as fast as the data they receive, mainframes support high input/output so that the machine is always being utilized. To make sure mainframes are utilized to their fullest, they support powerful schedulers that ensure the fastest possible throughput for processing transactions.[http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how is a Windows-based system supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and software solutions, that can duplicate these features in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output, or resource utilization.&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At this point in history there were no personal computers, and the only organizations that could afford a computer were massive businesses. The main functions of these mainframes were to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t actually seem pertinent to the question at hand. The question does not give any indication that a history is needed. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
A major feature of mainframes is their capacity for redundancy. Mainframes provide redundancy through the provider&#039;s off-site redundancy feature, which lets the customer move all of their processes and applications onto the provider&#039;s mainframe while the provider makes repairs on the customer&#039;s system. Another way mainframes create redundancy is their use of multiple processors that share the same memory: if one processor dies, the remaining processors still keep all of the cache. There are multiple ways Windows systems can recreate this redundancy feature of mainframes. The first is to create a Windows cluster server, which uses the same approach as the mainframe&#039;s multi-processor system. Another way is to use virtual machines. VMWare supports Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines set up two different networks: a private network for communication between the virtual machines, and a public network for I/O services. The virtual machines also share storage, so that if one fails the other still has all of the data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(this is what I&#039;ve gotten out of some research so far; comments and any edits/suggestions on whether I&#039;m on the right track or not are greatly appreciated :) ) &lt;br /&gt;
*note: This is the second time I have written this, make sure to save whatever you edit in notepad or whatever first so that you don&#039;t lose everything*&lt;br /&gt;
&lt;br /&gt;
link to VMWare&#039;s cluster virtualization http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf&lt;br /&gt;
&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== hot swapping ==&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
Another useful feature of mainframes is the ability to hot-swap. Hot-swapping occurs when a hardware component inside the mainframe fails and technicians are able to swap out that component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors inside the mainframe. With the right software and setup (redundancy), operators can upgrade and/or repair a mainframe as they see fit. Using VMWare on a Windows system allows users to hot-add RAM and hot-plug a new virtual CPU into the virtualized system. With these hot-adding and hot-plugging techniques, the virtual computer can grow in size to accept loads of varying size. In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular.&lt;br /&gt;
&lt;br /&gt;
These are the concepts I&#039;ve been able to figure out so far about hot-swapping/hot-upgrading, feel free to add/edit and what-not!  &lt;br /&gt;
&lt;br /&gt;
Sources:&lt;br /&gt;
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631,00.html&lt;br /&gt;
http://www.jungo.com/st/hotswap_windows.html&lt;br /&gt;
[[User:Nshires|Nshires]] 16:47, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== backwards-compatibility ==&lt;br /&gt;
Backwards-compatibility means that a newer software version can recognize what the old version writes and how it works; it is a relationship between the two versions. If the new component provides all the functionality of the old one, we say that the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can run on the latest mainframes (the zSeries, System/390 family, System z9, etc.). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backward compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe vendor also needs customers to halt the computer and download the emulation program.&lt;br /&gt;
&lt;br /&gt;
In Windows, one method of implementing backwards-compatibility is to add applications; the platform can then be compatible with most software from earlier versions. The other method is that Windows operating systems usually have various subsystems; software originally designed for older versions or other OSs can run in these subsystems. Windows NT, for example, has MS-DOS and Win16 subsystems. Windows 7&#039;s backwards-compatibility, however, is not very good: if the kernel is different, the OSs can&#039;t be compatible with each other. This doesn&#039;t mean that older programs won&#039;t run, though; virtualization is used to make them run. &lt;br /&gt;
&lt;br /&gt;
--[[User:Zhangqi|Zhangqi]] 08:34, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
ps. I didn&#039;t find perfect resources, just these. If you guys think any opinion is not correct, please edit it or give suggestions :)&lt;br /&gt;
&lt;br /&gt;
http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/&lt;br /&gt;
 &lt;br /&gt;
http://computersight.com/computers/mainframe-computers/&lt;br /&gt;
&lt;br /&gt;
== High input/output ==&lt;br /&gt;
&lt;br /&gt;
== massive throughput ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3200</id>
		<title>COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_3&amp;diff=3200"/>
		<updated>2010-10-13T04:10:22Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* Redundancy */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
To what extent do modern Windows systems provide mainframe-equivalent functionality? What about Windows coupled with add-on commercial products such as VMWare&#039;s virtualization and EMC&#039;s storage solutions? Explain.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Main Aspects of mainframes:&lt;br /&gt;
* redundancy which enables high reliability and security&lt;br /&gt;
* high input/output&lt;br /&gt;
* backwards-compatibility with legacy software&lt;br /&gt;
* support massive throughput&lt;br /&gt;
* Systems run constantly, so they can be hot-upgraded&lt;br /&gt;
http://www.exforsys.com/tutorials/mainframe/mainframe-features.html&lt;br /&gt;
&lt;br /&gt;
Linking sentence about how windows can duplicate mainframe functionality.&lt;br /&gt;
&lt;br /&gt;
here&#039;s the introduction ~ Abown (11:12 pm, October 12th 2010) &lt;br /&gt;
&lt;br /&gt;
Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes so well suited to this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable and guards against data loss due to downtime. Mainframes can also be upgraded without taking the system down, which allows for repairs and further increases reliability. After upgrading a mainframe the software does not change; mainframes offer backwards compatibility through virtualization, so software never needs to be replaced, it is just processed quicker. Since computers can only run as fast as the data they receive, mainframes support high input/output so that the machine is always being utilized. To make sure mainframes are utilized to their fullest, they support powerful schedulers that ensure the fastest possible throughput for processing transactions.[http://www.exforsys.com/tutorials/mainframe/mainframe-features.html] With so many features, how is a Windows-based system supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and software solutions, that can duplicate these features in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output, or resource utilization.&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Before comparing Windows systems and mainframes, the history of where mainframes came from and what they were used for must be understood. The first official mainframe computer was the UNIVAC I. [http://www.vikingwaters.com/htmlpages/MFHistory.htm] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [http://www.thocp.net/hardware/univac.htm] At this point in history there were no personal computers, and the only organizations that could afford a computer were massive businesses. The main functions of these mainframes were to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.&lt;br /&gt;
[[User:Achamney|Achamney]] 01:30, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t seem to actually be pertinent to the question at hand. Question does not have any indication of the need to provide a history. [[User:Abown|Andrew Bown]] 11:16, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
A major feature of mainframes is their capacity for redundancy. One way mainframes provide redundancy is through the provider&#039;s off-site redundancy feature, which lets a customer move all of their processes and applications onto the provider&#039;s mainframe while the provider repairs the customer&#039;s system. Another is their use of multiple processors that share the same memory: if one processor dies, the remaining processors still hold all of the cached data. Windows systems can reproduce this redundancy in several ways. The first is to build a Windows cluster server, which works on the same principle as the mainframe&#039;s multi-processor design. Another is to use virtual machines: VMware supports Microsoft Cluster Service, which allows users to build a cluster of virtual machines on one physical Windows system (or across multiple physical machines). The virtual machines are set up with two networks: a private network for communication between the virtual machines and a public network for I/O services. They also share storage, so that if one machine fails the other still has all of the data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This is what I&#039;ve gotten out of my research so far; comments and any edits/suggestions on whether I&#039;m on the right track are greatly appreciated :) ) &lt;br /&gt;
*Note: this is the second time I have written this, so make sure to save whatever you edit in Notepad or similar first so that you don&#039;t lose everything*&lt;br /&gt;
&lt;br /&gt;
link to VMWare&#039;s cluster virtualization http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf&lt;br /&gt;
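The private-heartbeat / shared-storage arrangement described above can be sketched as a toy model. This is only an illustration of the failover idea, not VMware&#039;s or Microsoft Cluster Service&#039;s actual API; all names here (Node, Cluster, vm1, vm2) are hypothetical. The standby node promotes itself when the primary&#039;s heartbeats stop, and reads keep succeeding because the data lives in shared storage rather than on either node.

```python
import time

class Node:
    """A cluster node identified by name; it reports liveness by
    periodically calling heartbeat() over the private network."""
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

class Cluster:
    """Toy failover model: the standby watches the primary's
    heartbeats and takes over when they stop; data sits in
    shared storage so either node can serve it."""
    def __init__(self, primary, standby, timeout=0.05):
        self.primary = primary
        self.standby = standby
        self.timeout = timeout
        self.shared_storage = {"payroll": "records"}  # visible to both nodes

    def active_node(self):
        # The standby promotes itself once the primary has been
        # silent for longer than the heartbeat timeout.
        if time.monotonic() - self.primary.last_heartbeat > self.timeout:
            return self.standby
        return self.primary

    def read(self, key):
        # Served by whichever node is currently active; the value is
        # identical either way because storage is shared, not node-local.
        return self.active_node().name, self.shared_storage[key]

primary, standby = Node("vm1"), Node("vm2")
cluster = Cluster(primary, standby, timeout=0.05)

primary.heartbeat()
print(cluster.read("payroll"))   # ('vm1', 'records') while the primary beats

time.sleep(0.1)                  # primary crashes: heartbeats stop
print(cluster.read("payroll"))   # ('vm2', 'records') after failover
```

The point of the sketch is the same as in the paragraph above: clients never talk to a specific machine, only to whichever node is currently active, and no data is lost on failover because neither node owns the storage.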
&lt;br /&gt;
[[User:Nshires|Nshires]] 04:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Hot Upgrades ==&lt;br /&gt;
&lt;br /&gt;
== Backwards Compatibility ==&lt;br /&gt;
&lt;br /&gt;
== High Input/Output ==&lt;br /&gt;
&lt;br /&gt;
== Massive Throughput ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_9&amp;diff=2509</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_9&amp;diff=2509"/>
		<updated>2010-10-07T19:33:52Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* Sources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Essay Format ==&lt;br /&gt;
&lt;br /&gt;
I started working on the main page.  The bullets are to be expanded. Other groups are working in their respective discussion pages, but I think it&#039;s all right to put our work in progress on the front page.  Thoughts?--[[User:Lmundt|Lmundt]] 16:14, 6 October 2010 (UTC)&lt;br /&gt;
* [[User:Gbint|Gbint]] 02:03, 7 October 2010 (UTC) Lmundt;  what do you think of listing the capacities of the file system under major features?  I was thinking that we could overview the features in brief, then delve into each one individually.&lt;br /&gt;
* --[[User:Lmundt|Lmundt]] 14:31, 7 October 2010 (UTC) I was thinking about the major structure... I like what you&#039;re suggesting in one section. So here is the structure I am thinking of.&lt;br /&gt;
&lt;br /&gt;
* Intro &lt;br /&gt;
* Section One ZFS&lt;br /&gt;
** Major feature 1&lt;br /&gt;
** Major feature 2&lt;br /&gt;
** Major feature 3 &lt;br /&gt;
* Section Two Legacy File Systems&lt;br /&gt;
** Legacy File System1( FAT32 ) - what it does&lt;br /&gt;
** Legacy File System2( ext2 ) - what it does&lt;br /&gt;
** Contrast them with ZFS&lt;br /&gt;
* Section Three Current File Systems&lt;br /&gt;
** NTFS?&lt;br /&gt;
** ext4?&lt;br /&gt;
** Contrast them with ZFS&lt;br /&gt;
* Section Four Future File Systems&lt;br /&gt;
** BTRFS&lt;br /&gt;
** WinFS or ??&lt;br /&gt;
** Contrast them with ZFS&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
What does everyone think of this format?   While everyone should contribute to section one we could divvy up the rest.&lt;br /&gt;
&lt;br /&gt;
== Sources ==&lt;br /&gt;
&lt;br /&gt;
Not from your group. Found a file which goes to the heart of your problem&lt;br /&gt;
[http://www.oracle.com/technetwork/server-storage/solaris/overview/zfs-149902.pdf ZFSDatasheet]&lt;br /&gt;
[[User:Gautam|Gautam]] 22:50, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Thanks will take a look at that.--[[User:Lmundt|Lmundt]] 16:12, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[User:Gbint|Gbint]] 01:45, 7 October 2010 (UTC) paper from Sun engineers explaining why they came to build ZFS, the problems they wanted to solve:  &lt;br /&gt;
* PDF:  http://www.timwort.org/classp/200_HTML/docs/zfs_wp.pdf&lt;br /&gt;
* HTML: http://74.125.155.132/scholar?q=cache:6Ex3KbFo4lYJ:scholar.google.com/+zettabyte+file+system&amp;amp;hl=en&amp;amp;as_sdt=2000&lt;br /&gt;
&lt;br /&gt;
Excellent article.[[User:Lmundt|Lmundt]] 14:24, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Not too exciting but it looks like an easy read http://arstechnica.com/hardware/news/2008/03/past-present-future-file-systems.ars [[User:Lmundt|Lmundt]] 14:40, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
the [http://en.wikipedia.org/wiki/Comparison_of_file_systems wikipedia comparison] has some good tables, and if you click the various categories you can learn quite a bit about the various important features //not your group. [[User:Rift|Rift]] 18:56, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Hey, I&#039;m not from your group but I found this slideshow that was really handy in the assignment! http://www.slideshare.net/Clogeny/zfs-the-last-word-in-filesystems - nshires&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_9&amp;diff=2508</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_9&amp;diff=2508"/>
		<updated>2010-10-07T19:31:40Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* Sources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Essay Format ==&lt;br /&gt;
&lt;br /&gt;
I started working on the main page.  The bullets are to be expanded. Other groups are working in their respective discussion pages, but I think it&#039;s all right to put our work in progress on the front page.  Thoughts?--[[User:Lmundt|Lmundt]] 16:14, 6 October 2010 (UTC)&lt;br /&gt;
* [[User:Gbint|Gbint]] 02:03, 7 October 2010 (UTC) Lmundt;  what do you think of listing the capacities of the file system under major features?  I was thinking that we could overview the features in brief, then delve into each one individually.&lt;br /&gt;
* --[[User:Lmundt|Lmundt]] 14:31, 7 October 2010 (UTC) I was thinking about the major structure... I like what you&#039;re suggesting in one section. So here is the structure I am thinking of.&lt;br /&gt;
&lt;br /&gt;
* Intro &lt;br /&gt;
* Section One ZFS&lt;br /&gt;
** Major feature 1&lt;br /&gt;
** Major feature 2&lt;br /&gt;
** Major feature 3 &lt;br /&gt;
* Section Two Legacy File Systems&lt;br /&gt;
** Legacy File System1( FAT32 ) - what it does&lt;br /&gt;
** Legacy File System2( ext2 ) - what it does&lt;br /&gt;
** Contrast them with ZFS&lt;br /&gt;
* Section Three Current File Systems&lt;br /&gt;
** NTFS?&lt;br /&gt;
** ext4?&lt;br /&gt;
** Contrast them with ZFS&lt;br /&gt;
* Section Four Future File Systems&lt;br /&gt;
** BTRFS&lt;br /&gt;
** WinFS or ??&lt;br /&gt;
** Contrast them with ZFS&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
What does everyone think of this format?   While everyone should contribute to section one we could divvy up the rest.&lt;br /&gt;
&lt;br /&gt;
== Sources ==&lt;br /&gt;
&lt;br /&gt;
Not from your group. Found a file which goes to the heart of your problem&lt;br /&gt;
[http://www.oracle.com/technetwork/server-storage/solaris/overview/zfs-149902.pdf ZFSDatasheet]&lt;br /&gt;
[[User:Gautam|Gautam]] 22:50, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Thanks will take a look at that.--[[User:Lmundt|Lmundt]] 16:12, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[User:Gbint|Gbint]] 01:45, 7 October 2010 (UTC) paper from Sun engineers explaining why they came to build ZFS, the problems they wanted to solve:  &lt;br /&gt;
* PDF:  http://www.timwort.org/classp/200_HTML/docs/zfs_wp.pdf&lt;br /&gt;
* HTML: http://74.125.155.132/scholar?q=cache:6Ex3KbFo4lYJ:scholar.google.com/+zettabyte+file+system&amp;amp;hl=en&amp;amp;as_sdt=2000&lt;br /&gt;
&lt;br /&gt;
Excellent article.[[User:Lmundt|Lmundt]] 14:24, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Not too exciting but it looks like an easy read http://arstechnica.com/hardware/news/2008/03/past-present-future-file-systems.ars [[User:Lmundt|Lmundt]] 14:40, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
the [http://en.wikipedia.org/wiki/Comparison_of_file_systems wikipedia comparison] has some good tables, and if you click the various categories you can learn quite a bit about the various important features //not your group. [[User:Rift|Rift]] 18:56, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Hey, I&#039;m not from your group but I found this slideshow that was really handy in the assignment! http://www.slideshare.net/Clogeny/zfs-the-last-word-in-filesystems&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=2491</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_3&amp;diff=2491"/>
		<updated>2010-10-07T16:02:32Z</updated>

		<summary type="html">&lt;p&gt;Nshires: /* Group 3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Group 3 == &lt;br /&gt;
Here&#039;s my email; I&#039;ll add some of the stuff I find soon. I&#039;m just saving the question for last.&lt;br /&gt;
Andrew Bown(abown2@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure if this is totally relevant, oh well.&lt;br /&gt;
-First time-sharing system: CTSS (Compatible Time-Sharing System), created at MIT in the early 1960s&lt;br /&gt;
http://www.kernelthread.com/publications/virtualization/&lt;br /&gt;
&lt;br /&gt;
-achamney@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
An article about the mainframe.&lt;br /&gt;
-Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.mspx&lt;br /&gt;
&lt;br /&gt;
-Qi Zhang (qzhang13@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Here&#039;s my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)&lt;br /&gt;
&lt;br /&gt;
Hey, Here&#039;s my contact info, nshires@connect.carleton.ca, I&#039;ll have some sources posted by the weekend hopefully&lt;/div&gt;</summary>
		<author><name>Nshires</name></author>
	</entry>
</feed>