Talk:COMP 3000 Essay 1 2010 Question 3



Group 3

Here's my email; I'll add some of the stuff I find soon. I'm just saving the question for last. Andrew Bown (abown2@connect.carleton.ca)

I'm not sure if this is totally relevant, oh well. -The first time-sharing system was CTSS (the Compatible Time-Sharing System), created at MIT in the early 1960s. http://www.kernelthread.com/publications/virtualization/

-achamney@connect.carleton.ca

Here's my contact info (qzhang13@connect.carleton.ca). An article about the mainframe: Mainframe Migration http://www.microsoft.com/windowsserver/mainframe/migration.mspx

-Zhangqi 15:02, 7 October 2010 (UTC)

Here's my contact information, look forward to working with everyone. - Ben Robson (brobson@connect.carleton.ca)

Hey, Here's my contact info, nshires@connect.carleton.ca, I'll have some sources posted by the weekend hopefully

Hey guys, I'm not in your group but I found some useful information that could help you: http://en.wikipedia.org/wiki/Mainframe_computer I know we are not supposed to use wiki references, but it's a good place to start.

Okay, I found a paper titled "Mainframe Scalability in the Windows Environment" http://new.cmg.org/proceedings/2003/3023.pdf (requires registration to access, but it's free). ~ Andrew (abown2@connect.carleton.ca)

Folks, remember to do your discussions here. Use four tildes to sign your entries, that adds time and date. Email discussions won't count towards your participation grade... Anil 15:43, 8 October 2010 (UTC)

Okay, I'm going to break the essay into paragraphs on the main page, and people can each choose one paragraph to write. Then after all paragraphs are written we will communally edit it to have a cohesive voice. It is the only way I can think of to properly distribute the work. ~Andrew (abown2@connect.carleton.ca) 11:00 am, 10 October 2010.

Link to IBM's info on their mainframes --Lmundt 19:58, 7 October 2010 (UTC) http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zmainframe/zconc_valueofmf.htm

Just made the realization that what I was trying to find information on, the Windows equivalent to a mainframe, is referred to as clustering, which should help with finding information. Here's the wiki article on the technology for an overview: http://en.wikipedia.org/wiki/Microsoft_Cluster_Server ~ Andrew (abown2@connect.carleton.ca)

Hey, I agree with Andrew's idea. We should break the essay into several sections and work on it together. From my point of view, I think we should focus on how Windows provides the mainframe functionality, and VMware and EMC's storage should be our examples. As listed on the main page, there are many advantages and disadvantages of the mainframe. But where is Windows? I'm confused... In my opinion, the first paragraph can introduce the mainframe (such as the history, features, applications, etc.) and what mainframe-equivalent functionality Windows supports. Then we can use some paragraphs to discuss the functionalities in detail. VMware and EMC's storage solution can also be involved in this part. At last we make a conclusion for the whole essay. Do you think it's feasible?

--Zhangqi 02:12, 11 October 2010 (UTC)

Ah, but the question isn't the pros and cons of each. It is how to get mainframe functionality from a Windows operating system. The way I split up the essay, each paragraph focuses on one aspect of mainframes and how it can be duplicated in Windows, either with Windows tools or 3rd-party software. You don't need to go into the history or applications of mainframes since that is not required by the phrasing of the question.

~ Andrew Bown, 11:28 AM, October 11th 2010

Okay, I think I catch your meaning. So what we should do now is edit the content of each paragraph as soon as possible. Time is limited.

--Zhangqi 19:57, 11 October 2010 (UTC)

If you guys are looking for an authoritative source on how Windows works, I *highly* recommend checking out "Windows Internals, 4th Edition" or "Windows Internals, 5th Edition" by Mark Russinovich and David Solomon.

--3maisons 18:59, 12 October 2010 (UTC)


Hey guys, nice work; sorry I didn't have time to add more to the essay today. I combined the essay into a FrankenEssay, which is on the front page, and added a conclusion. I've read through it, but if anyone notices a mistake I missed go ahead and correct it. --Andrew Bown 1:16, 15 October 2010

Yeah I think COMP 3008 and 3004 just wrecked us... Thank you for finding the time to combine it. Hope my introduction was good... I will read it over if we can ever finish the sequence diagrams... --Dkrutsko 07:02, 15 October 2010 (UTC)

Okay, I think we finished the essay. I added some references to the main page, and all of them come from the discussion part. Everyone did a good job :) Zhangqi 07:51, 15 October 2010 (UTC)


Yeah, we pulled it off! good job everyone :P Nshires 12:07, 15 October 2010 (UTC)


OLD VERSION - Here for the time being while optimizing some sections --Dkrutsko 00:20, 14 October 2010 (UTC)

Answer

Added introduction points and sections for each paragraph so you guys can edit one paragraph at a time instead of the whole document. If you want to claim a certain paragraph, just put your name into the section first. ~ Andrew (abown2@connect.carleton.ca) 12:00 10th of October 2010

Introduction

Main Aspects of mainframes:

  • redundancy which enables high reliability and security
  • high input/output
  • backwards-compatibility with legacy software
  • support massive throughput
  • Systems run constantly so they can be hot upgraded

http://www.exforsys.com/tutorials/mainframe/mainframe-features.html

Linking sentence about how windows can duplicate mainframe functionality.

here's the introduction ~ Abown (11:12 pm, October 12th 2010)
Thanks Abown, just tweaked a couple of the sentences to improve flow Achamney 01:13, 14 October 2010 (UTC)

Also, I removed this statement: "Unfortunately, computers are only able to process data as fast as they can receive it". I couldn't find a good place to plug it in.

Mainframes have always been used by large corporations to process thousands of small transactions, but what strengths make mainframes so useful for this purpose? Mainframes are extremely useful in business because they are designed to run without downtime. This is achieved through tremendous redundancy, which makes mainframes extremely reliable. It also protects against data loss caused by downtime. Mainframes can be upgraded without taking the system down for repairs, which further increases reliability. After upgrading a mainframe, however, the software does not change, so mainframes can offer backwards compatibility through virtualization; software never needs to be replaced. Mainframes support high input/output so that the mainframe is always being utilized. To make sure mainframes are utilized to their fullest, they provide powerful schedulers which ensure the fastest throughput, processing transactions as fast as possible. [1] With so many features, how are Windows-based systems supposed to compete with a mainframe? The fact of the matter is that there are features in Windows, and software solutions, which can duplicate these capabilities in a Windows environment, be it redundancy, real-time upgrading, virtualization, high input/output, or resource utilization.

Using this paragraph and my solution from the assignment, I was able to expand on this topic. It is on the main page at the moment; see if you like it and add anything you think I missed --Dkrutsko 05:17, 14 October 2010 (UTC)

History

Before comparing Windows systems and mainframes, the history of what mainframes were used for and where they came from must be understood. The first official mainframe computer was the UNIVAC I. [2] It was designed for the U.S. Census Bureau by J. Presper Eckert and John Mauchly. [3] At this point in history there were no personal computers, and the only organizations that could afford a computer were large businesses. The main functions of these mainframes were to calculate company payrolls, keep sales records, analyze sales performance, and store all company information.
Achamney 01:30, 12 October 2010 (UTC)

This doesn't seem to actually be pertinent to the question at hand. The question does not give any indication of the need to provide a history. Andrew Bown 11:16, 12 October 2010

I have to agree this doesn't seem relevant to the question. --Dkrutsko 00:10, 14 October 2010 (UTC)

Redundancy

Nshires 04:10, 13 October 2010 (UTC) A large feature of mainframes is their capacity for redundancy. Mainframes provide redundancy through the provider's off-site redundancy feature. This feature lets the customer move all of their processes and applications onto the provider's mainframe while the provider makes repairs on the customer's system. Another way that mainframes create redundancy is their use of multiple processors that share the same memory. If one processor dies, the rest of the processors still keep all of the cached data. There are multiple ways Windows systems can recreate this redundancy feature that mainframes have. The first is by creating a Windows cluster server; the cluster uses the same approach as the mainframe's multi-processor system. Another way Windows systems can create redundancy is by using virtual machines. VMware supports Microsoft Cluster Service, which allows users to create a cluster of virtual machines on one physical Windows system (or on multiple physical machines). The virtual machines are set up with two different networks: a private network for communication between the virtual machines and a public network for I/O services. The virtual machines also share storage, so that if one fails, the other still has all of the data.


(This is what I've gotten out of some research so far; comments and any edits/suggestions on whether I'm on the right track or not are greatly appreciated :) )

  • note: This is the second time I have written this, make sure to save whatever you edit in notepad or whatever first so that you don't lose everything*

link to VMWare's cluster virtualization http://www.vmware.com/pdf/vsphere4/r40/vsp_40_mscs.pdf

Nshires 04:10, 13 October 2010 (UTC)


I'll attempt to re-write this paragraph for clarity and accuracy:
A feature provided by mainframes is their ability to create redundancy in terms of data storage and parallel processing. Windows can mimic expandable storage and storage redundancy through out-sourced storage solutions.
Processing redundancy for Windows can be created through the Microsoft Cluster Service (MSCS). This service allows multiple Windows machines to be connected as nodes in a cluster, where each node has the same applications and only one node is online at any point in time. If a node in the cluster fails, another will take over. The failing node can then be restarted or replaced without serious downtime. However, this service does not offer fault tolerance to the same extent as actual mainframes.
Source: http://msdn.microsoft.com/en-us/library/ms952401.aspx
Virtual machine nodes can be used in place of physical machine nodes in a cluster, providing redundant application services to end users. If a virtual machine fails, other virtual machines can take over; if the failure is on the Windows host machine, then they will all fail. The virtual cluster can be maintained across multiple machines, allowing multiple users to have the reliability of clusters on fewer machines.
Let me know what you think.
Brobson 18:25, 14 October 2010 (UTC)
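
To make the active/passive failover idea above concrete, here is a minimal sketch in C. This is not the MSCS API; the node names and the health check are invented for illustration. It just picks the first healthy node to serve requests, so a standby automatically takes over when the active node fails:

 #include <stdio.h>
 #include <string.h>
 
 #define NUM_NODES 3
 
 /* Hypothetical cluster nodes; in a real cluster these would be machines. */
 static const char *nodes[NUM_NODES] = { "node-a", "node-b", "node-c" };
 
 /* Stand-in for a real health check (heartbeat, ping, service probe). */
 static int node_is_healthy(const char *node)
 {
     /* Simulate a failure of node-a so the standby takes over. */
     return strcmp(node, "node-a") != 0;
 }
 
 /* Pick the first healthy node: only one node is "online" at a time. */
 static const char *select_active_node(void)
 {
     for (int i = 0; i < NUM_NODES; i++)
         if (node_is_healthy(nodes[i]))
             return nodes[i];
     return NULL; /* whole cluster down */
 }
 
 int main(void)
 {
     const char *active = select_active_node();
     if (active)
         printf("requests are served by %s\n", active);
     else
         printf("no healthy node available\n");
     return 0;
 }

In a real cluster the health check and the handover of shared storage are where all the complexity lives; the point of the sketch is only the failover pattern itself.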

hot swapping

Nshires 16:47, 13 October 2010 (UTC) Another useful feature that mainframes have is the ability to hot-swap. Hot-swapping occurs when there is faulty hardware in one of the processors inside the mainframe and technicians are able to swap out this component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as its operators see fit. Using VMware on a Windows system allows users to hot-add RAM and hot-plug a new virtual CPU into the virtualized system. Using these hot-adding and hot-plugging techniques, the virtual computer can grow to accept loads of varying size. In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular.

These are the concepts I've been able to figure out so far about hot-swapping/hot-upgrading, feel free to add/edit and what-not!

Sources: http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631,00.html http://www.jungo.com/st/hotswap_windows.html Nshires 16:47, 13 October 2010 (UTC)

According to your searchvmware.techtarget.com source, a processor cannot be hot-plugged in the truest sense of the word in that the hardware needs to be rebooted to recognize the added hardware. Hot-swapping demands zero downtime.
If you don't mind me suggesting, I don't think this section should refer to the hot-swapping/hot-adding/hot-plugging of virtual machines or client machines of the mainframe. I think for hot-swapping we should focus on the hot-swapping of hardware components. As such, we can point out that Windows does support mainframe-level hot-swapping with its Windows Server 2008 R2 Datacenter OS.

"Hot Add/Replace Memory and Processors with supporting hardware"

http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx
If we only consider the capabilities of the PC OS, then Windows only supports plug-and-play devices, such as external hard drives, and does not support RAM or CPU hot-swap.
I'm also wondering if this should tie into scalability of a mainframe or if scalability should have its own section.
Brobson 17:12, 14 October 2010 (UTC)

The source you mentioned talks about a virtual machine and says that it can be hot-swapped with no downtime, depending on the guest OS. Some guest OSes need a reboot but some do not. The virtual Windows Server 2008 ENT x64 can hot-add memory with no downtime; it seems that no virtual OS can hot-add a CPU without rebooting. And the second part of my paragraph talks about physical Windows systems coupled with a program that enables hot-swapping of SATA hard drives and other components with no downtime. I do agree that hot-swapping in a virtual machine may be kind of useless though haha :S. And I'll check out the Windows Server 2008 R2 Datacenter OS, thanks Nshires 00:33, 15 October 2010 (UTC)

Revised: A useful feature that mainframes have is the ability to hot-swap. Hot-swapping is the ability to swap out components of a computer/mainframe for new components with no downtime (i.e. the system continues to run through this process). Hot-swapping occurs when there is faulty hardware in one of the processors inside the mainframe. Technicians are able to swap out this component without the mainframe being turned off or crashing. Hot-swapping is also used when upgrading processors, memory, and storage inside the mainframe. With the right software and setup (redundancy), a mainframe can be upgraded and/or repaired as needed by adding and removing components such as hard drives and processors.

Using VMware on a Windows system allows users to hot-add RAM and hot-plug a new virtual CPU into the virtualized system. Using these hot-adding and hot-plugging techniques, the virtual computer can grow in size to accept loads of varying size. Under some circumstances, with different CPUs and guest OSes, the virtual machine may have to restart and is unable to hot-add/hot-plug. For example, a virtual machine running Windows Server 2008 ENT x64 allows you to hot-add memory, but you must restart it to remove memory or to add/remove a CPU.
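
One way to observe a memory hot-add from inside a Windows guest is simply to poll how much physical memory the OS reports. A small C sketch using the Win32 GlobalMemoryStatusEx call, assuming it runs inside the guest while memory is being hot-added:

 #include <windows.h>
 #include <stdio.h>
 
 int main(void)
 {
     MEMORYSTATUSEX ms;
 
     /* Poll every 5 seconds; the reported total should grow after a hot-add. */
     for (int i = 0; i < 12; i++) {
         ms.dwLength = sizeof(ms);
         if (GlobalMemoryStatusEx(&ms))
             printf("total physical memory: %llu MB\n",
                    (unsigned long long)(ms.ullTotalPhys / (1024 * 1024)));
         Sleep(5000);
     }
     return 0;
 }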

In non-virtual systems, Windows coupled with the program Go-HotSwap can hot-plug CompactPCI components. CompactPCI slots accept many different devices (e.g. multiple SATA hard drives), which makes a Windows system with these technologies very modular. Windows Server 2008 R2 Datacenter, released in 2009, uses dynamic hardware partitioning. Dynamic hardware partitioning means the hardware can be divided into separate partitions, each with its own processors and other components, which allows these partitions to be hot-swapped or hot-added where needed.

Davis, David. "VMware vSphere hot-add RAM and hot-plug CPU." TechTarget. N.p., 09.15.2009. Web. 14 Oct 2010. <http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1367631_mem1,00.html>.

"Windows Server 2008 R2 Datacenter." Windows Server 2008 R2. N.p., n.d. Web. 14 Oct 2010. <http://www.microsoft.com/windowsserver2008/en/us/2008-dc.aspx>.

"Go-HotSwap: CompactPCI Hot Swap." Jungo. Jungo Ltd, n.d. Web. 14 Oct 2010. <http://www.jungo.com/st/hotswap.html>.

feel free to edit Nshires 03:49, 15 October 2010 (UTC)

backwards-compatibility

Backwards compatibility means that a newer software version can recognize what the old version writes and how it works; it is a relationship between the two versions. If the new component provides all the functionality of the old one, we say that the new component is backwards compatible. In the mainframe era, many applications were backwards compatible. For example, code written 20 years ago for the IBM System/360 can be run on the latest mainframes (such as the zSeries, the System/390 family, System z9, etc.). This is because mainframe models provide a combination of special hardware, special microcode, and an emulation program to simulate the target system. (The IBM 7080 transistorized computer was backward compatible with all models of the IBM 705 vacuum tube computer.) Sometimes the mainframe also requires customers to halt the computer and download the emulation program.

In Windows, one method of implementing backwards compatibility is to add applications, like the Microsoft Windows Application Compatibility Toolkit. This application can make the platform compatible with most software from earlier versions. The second method is that Windows operating systems usually have various subsystems; software originally designed for older versions or other OSs can be run in these subsystems. For example, Windows NT has MS-DOS and Win16 subsystems. But Windows 7's backwards compatibility is not very good: if the kernel is different, the OSs can't be compatible with each other. That doesn't mean older programs won't run; virtualization will be used to make them run. The third method is to use shims to create backwards compatibility. Shims are like small libraries that can intercept API calls, change the parameters passed, handle the operation themselves, or redirect it elsewhere. In Windows, we can use shims to simulate the behaviour of an old OS version for legacy software.

--Zhangqi 08:34, 13 October 2010 (UTC)

P.S. I didn't find perfect resources, just these. If you guys think any opinion is not correct, please edit it or give suggestions :)

http://www.windows7news.com/2008/05/23/windows-7-to-break-backwards-compatibility/

http://computersight.com/computers/mainframe-computers/


Hey, this sounds really good. I'd add an example where you say 'one method of implementing backwards compatibility is to add applications'. And I did a little research and found another way to create backwards compatibility, using shims: http://en.wikipedia.org/wiki/Shim_%28computing%29 A shim pretty much intercepts the calls and changes them so that the old program can run on a new system. Good work, Nshires 16:56, 13 October 2010 (UTC)
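
As a rough illustration of that interception idea (purely hypothetical function names, not the Application Compatibility Toolkit's actual API), a shim is essentially a small wrapper that keeps presenting the old call signature to legacy code and translates it into the new one:

 #include <stdio.h>
 
 /* The "new" API that the current system actually provides. */
 struct window_params { int width; int height; int dpi_aware; };
 
 static int new_create_window(const struct window_params *p)
 {
     printf("creating %dx%d window (dpi_aware=%d)\n",
            p->width, p->height, p->dpi_aware);
     return 0;
 }
 
 /* Shim: keeps the old signature the legacy program expects, but
  * internally fills in defaults and redirects to the new API. */
 static int old_create_window(int width, int height)
 {
     struct window_params p = { width, height, /* dpi_aware */ 0 };
     return new_create_window(&p);
 }
 
 int main(void)
 {
     /* Legacy code keeps calling the old entry point unchanged. */
     return old_create_window(640, 480);
 }

The real Windows shim engine does the redirection at load time instead of at compile time, but the effect on the legacy program is the same: it still sees the interface it was written against.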

Thanks for your suggestions. I have added some information to the paragraph. :) --Zhangqi 00:24, 14 October 2010 (UTC)

High input/output

~Andrew Bown (October 13 2:08) I'll write this paragraph. I don't have time to write it before work (12-5), but I can put out the information I've already got from research, so if someone could help me complete it that would be awesome since I have to finish up my 3004 document tonight as well. ~Andrew Bown (October 14th 11:12am) Mainframes are able to achieve high input/output rates with their specialized Message Passing Interfaces (MPIs), which allow for fast intercommunication by sharing memory between the different cores. https://www.mpitech.com/mpitech.nsf/pages/mainframe-&-AS400-printing_en.html

The latest versions of Windows clusters support a Microsoft-created MPI called, unsurprisingly, Microsoft MPI [4].

Microsoft's MPI is based on MPICH2; explanation here: http://www.springerlink.com/content/hc4nyva6dvg6vdpp/
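
A minimal MPI program in C shows the message-passing model that both MPICH2 and MS-MPI implement: each process gets a rank, and data moves between processes only through explicit sends and receives. Assuming an MPI implementation is installed, it compiles with mpicc and runs with, e.g., mpiexec -n 2:

 #include <mpi.h>
 #include <stdio.h>
 
 int main(int argc, char **argv)
 {
     int rank, size, value;
 
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
     MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
 
     if (rank == 0 && size > 1) {
         value = 42;
         /* send one int to rank 1 with tag 0 */
         MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
         printf("rank 0 sent %d\n", value);
     } else if (rank == 1) {
         MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                  MPI_STATUS_IGNORE);
         printf("rank 1 received %d\n", value);
     }
 
     MPI_Finalize();
     return 0;
 }

On a Windows Compute Cluster the job scheduler would launch the same binary as one process per node, which is where the I/O and scheduling discussion below comes in.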


Looking at the details, Microsoft MPI only runs if a process is put into the Microsoft Job Scheduler. So we may want to combine input/output and throughput.


Hey guys. According to the resources above, the method for Windows to provide high input/output and massive throughput is almost the same. But I have no idea how to combine the two sections. Do we need to write something about input/output, or just consider it under massive throughput? Zhangqi 22:38, 14 October 2010 (UTC)

Massive Throughput

Achamney 01:09, 14 October 2010 (UTC)

Achamney 21:18, 14 October 2010 (UTC) Done for now; I will come back to this after I get back (after 10:00 pm tonight ish) and fix up the flow and such.

Throughput, unlike input and output, is the measurement of the number of calculations per second that a machine can perform. This is usually measured in FLOPS (floating-point operations per second). It is impossible for one sole Windows machine to compete with a mainframe's throughput. Not only do mainframe processors have extremely high frequencies, but they also have a considerable number of cores. This all changes, however, when computer clustering is introduced. In recent years, IBM has constructed a clustered system called Roadrunner that ranks third on the TOP500 supercomputer list as of June 2010.[http://hubpages.com/hub/Most-Powerful-Computers-In-The-World] It has a total of 60 connected units, over a thousand processors, and the capability of computing at a rate of 1.7 petaflops.
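
For a sense of what FLOPS means in practice, here is a crude single-core estimate in C: time a loop of floating-point multiply-adds and divide the operation count by the elapsed time. This is only a rough illustration of the unit, not a proper benchmark like LINPACK:

 #include <stdio.h>
 #include <time.h>
 
 int main(void)
 {
     const long n = 100000000L;   /* 100 million iterations */
     volatile double acc = 0.0;   /* volatile keeps the loop from being optimized away */
     double x = 1.000000001;
 
     clock_t start = clock();
     for (long i = 0; i < n; i++)
         acc += x * 1.0000001;    /* one multiply + one add per iteration */
     double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
 
     /* 2 floating-point operations per iteration */
     printf("~%.2f MFLOPS (acc=%f)\n", (2.0 * n / seconds) / 1e6, acc);
     return 0;
 }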

The question is, with such complex hardware, how is it possible for any sort of software to use this clustered system? Luckily, Microsoft has introduced an OS called Windows Compute Cluster Server, which provides the necessary software to allow the main computer to utilize the computing power of its cluster nodes. Windows mainly uses MS-MPI (Microsoft Message Passing Interface) to send messages via Ethernet to its other nodes.[http://webcache.googleusercontent.com/search?q=cache:EPlDExBxmDYJ:download.microsoft.com/download/9/e/d/9edcdeab-f1fb-4670-8914-c08c5c6f22a5/HPC_Overview.doc+Windows+Compute+Cluster+Server&cd=1&hl=en&ct=clnk&gl=ca&client=firefox-a] Developers can use this interface because it automatically connects a given process to each node. Windows can then use its scheduler to determine which node receives each job. It keeps track of each node and shuts the job down once the output is received.

Today, clustering computers together with the intent of optimizing throughput is accomplished using grid computing. Grid computing shares the same basic ideals as cluster computing; however, grids have the sole job of computing massive-scale problems.[http://searchdatacenter.techtarget.com/definition/grid-computing] Each subsection of a problem is passed out to a compute node in the grid to be calculated. The one clear problem with this computational model is that the problem must be able to be broken down into several pieces for each compute node to work on. This style of high-throughput computing can be used for problems such as high-energy physics or biology models.
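
The "break the problem into pieces" pattern described above maps directly onto MPI collectives. A sketch in plain C (runnable on MS-MPI or any other MPI implementation) that scatters an array across the nodes, has each node compute a partial sum, and reduces the partial results back to the root:

 #include <mpi.h>
 #include <stdio.h>
 #include <stdlib.h>
 
 #define CHUNK 4   /* elements handled by each compute node */
 
 int main(int argc, char **argv)
 {
     int rank, size;
     double *data = NULL, part[CHUNK], local = 0.0, total = 0.0;
 
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &size);
 
     if (rank == 0) {
         /* the root builds the full problem: size*CHUNK numbers to sum */
         data = malloc(sizeof(double) * size * CHUNK);
         for (int i = 0; i < size * CHUNK; i++)
             data[i] = i + 1.0;
     }
 
     /* hand each node its own piece of the problem */
     MPI_Scatter(data, CHUNK, MPI_DOUBLE, part, CHUNK, MPI_DOUBLE,
                 0, MPI_COMM_WORLD);
 
     for (int i = 0; i < CHUNK; i++)
         local += part[i];
 
     /* gather the partial results back into one answer on the root */
     MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
 
     if (rank == 0) {
         printf("sum of %d numbers = %.1f\n", size * CHUNK, total);
         free(data);
     }
 
     MPI_Finalize();
     return 0;
 }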

In general, however, the most popular solution for problems that require large throughput is to construct a cluster model. Most businesses require the reliability of clusters, even though it sacrifices performance; there is no competition to the high availability of a cluster server as compared to the grid model.[http://www.dba-oracle.com/real_application_clusters_rac_grid/grid_vs_clusters.htm]



[http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=/com.ibm.ztpf-ztpfdf.doc_put.cur/gtpc3/c3thru.html] [http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci213140,00.html]