Talk:COMP 3000 Essay 1 2010 Question 7
Latest revision as of 13:35, 15 October 2010
To Do
- Grab your references for the Essay proper, set your info to refer to the references, leave out any references we didn't use.
- Remove signatures from the Essay Proper by 10:00 (this is an arbitrary time)
Log
Suggestion: Let us maintain our edits here instead of littering the main page with our names. Also, please do not edit without writing to the log, so that we know who has done what and when.
Please maintain a log of your activities in the Log section so that we can keep track of the evolution of the essay. --Gautam
Moved around some info for clarity. Everyone should post their interpretation of the question in the simplest possible English so we're on the same page (as someone, maybe me, seems to have the wrong idea about what we're trying to talk about).
More moving for clarity. Added an essay outline at the bottom (feel free to change it).
Filled in the outline somewhat and added questions to the outline for everyone to think on. --Rannath
First Draft for essay. Please modify and add on. --Gautam 02:46, 13 October 2010 (UTC)
Edited Scheduling Priorities and rewrote some areas to provide a better paragraph structure. --Shane 15:25, 13 October 2010 (UTC)
Added to the memory management section. --Hirving 21:42, 13 October 2010 (UTC)
Edited Scalable Threads Problems. Also did a little re-arrangement. --Gautam 01:03, 14 October 2010 (UTC)
Answered Essay Questions in Discussion. --Shane 01:25, 13 October 2010 (UTC)
I posted Main Point 2. It is nearing completion. --Praubic 17:43, 14 October 2010 (UTC)
Added introduction and edited design and models vG
Minor edits in Scheduler part. --Gautam 19:09, 14 October 2010 (UTC)
Added a paragraph about locks to memory section. --Hirving 19:36, 14 October 2010 (UTC)
Proofread and edited the article for clarity and grammar (commas are nice). --Hirving 19:57, 14 October 2010 (UTC)
Proofread once more. Seems good to go. And yes, commas are nice :) --Gautam 07:46, 15 October 2010 (UTC)
<Add your future activities here>
The Question
Original:
How is it possible for systems to support millions of threads or more within a single process? What are the key design choices that make such systems work - and how do those choices affect the utility of such massively scalable thread implementations?
Rannath:
The question seems to be about the number and scalability of threads, not the gross mechanics.
To be clearer: we can narrow our scope from thread implementations in general to thread scalability... ignore the stuff that is required for all threads unless it is specifically required for many threads. (I didn't find any implementations that required special hardware.)
I would also argue that since OSes have to run on many kinds of hardware, one cannot guarantee that unique/rare hardware features will be present. While we can talk about hardware, we should limit it to a mention at most. Or we could mention prospective hardware that could help but is not yet standard; it depends on whether we want to describe things "as they are" or "as they might be".
"Utility of such massively scalable thread implementations": I took this as asking what functionality (of individual threads) one has to give up to make threads scalable.
Gautam:
I think the hardware is as relevant as the software. Not everything can be done in software, and hardware support is an important factor in the solutions to many problems that OSes face. My take.
Henry:
Since the question is about the system as a whole, I think the answer should include both software and hardware support for large numbers of threads. The question revolves around how a system can handle millions of threads and what the major factors are that allow the system to do it. Also, the last part of the question seems to ask what this number of threads allows a process to do.
Shane:
In response to the above's idea on the last part of the question, I would argue that it would enable fast execution, because the work of any thread that takes a cache miss would be picked up by the other threads so long as there were enough resources. Also, the use of more threads would help keep the cache in sync (through sharing) so that it would miss less often. Of course, this only holds if the threads are assigned to the same task; you cannot sync threads running different applications, it just wouldn't make sense. The only issue with this idea is that the software must support this number of threads.
vG:
We should talk about the types of threading models (1:1, N:M, N:N, and so on), and also about application vs. hardware multi-threading within a single processor.
Paul:
I discussed Main Point 2 and how UMS threading is stretched across multiple cores. A design that involves multiple processors differs from single-processor machines, so hardware definitely plays a significant role here.
Group 7
Let us start out by listing our names and email addresses (preferred).
Gautam Akiwate <gautam.akiwate@gmail.com>
Patrick Young(rannath) <rannath@gmail.com>
vG Vivek <support.tamiltreasure@gmail.com>
Shane Panke <shanepanke@msn.com>
Henry Irving <sens.henry@gmail.com>
Paul Raubic <paul_raubic@hotmail.com>
Guidelines
Raw info should have some indication of where you got it for citation.
Claim your info so we don't need to dig for who got what when we need clarification.
Feel free to provide info for or edit someone else's info; just keep their signature so we can discuss changes.
Sign changes (once), preferably without timestamps. Ex: --Rannath
Please maintain a log of your activities in the Log section so that we can keep track of the evolution of the essay. --Gautam
Facts We Have
Start by placing the info here so we can sort through it. I'm going to go into full research/essay-writing mode on Sunday if there isn't enough here.
So far we have three design choices I've seen:
- Smallest possible footprint per thread (being extremely lightweight) - from everywhere
- Fewest possible context switches per thread (none, if at all possible) - 5
- Use of a "thread pool" - 3
The idea is to reduce the processor time and storage needed per thread so you can have more threads in the same amount of space (see the thread-pool sketch below). --Rannath
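As a rough illustration of the "thread pool" idea (a minimal sketch with made-up names, not taken from any of our sources): a fixed set of worker threads pulls tasks off a shared queue instead of creating one kernel thread per task, so the number of kernel threads stays constant no matter how many tasks are submitted.

```c
/* Minimal fixed-size thread pool sketch (POSIX threads).
 * Hypothetical example for illustration; error handling trimmed. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_WORKERS 4
#define NUM_TASKS   16

typedef struct task {
    void (*fn)(int);          /* work to run */
    int arg;
    struct task *next;
} task_t;

static task_t *queue_head = NULL;
static int done = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

static void submit(void (*fn)(int), int arg) {
    task_t *t = malloc(sizeof *t);
    t->fn = fn;
    t->arg = arg;
    pthread_mutex_lock(&lock);
    t->next = queue_head;               /* push onto the shared queue */
    queue_head = t;
    pthread_cond_signal(&cond);         /* wake one sleeping worker */
    pthread_mutex_unlock(&lock);
}

static void *worker(void *unused) {
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (queue_head == NULL && !done)
            pthread_cond_wait(&cond, &lock);
        if (queue_head == NULL && done) {   /* queue drained and shut down */
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        task_t *t = queue_head;
        queue_head = t->next;
        pthread_mutex_unlock(&lock);
        t->fn(t->arg);                      /* run the task outside the lock */
        free(t);
    }
}

static void print_task(int n) { printf("task %d\n", n); }

int main(void) {
    pthread_t workers[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_TASKS; i++)
        submit(print_task, i);              /* tasks reuse existing threads */
    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_broadcast(&cond);          /* let idle workers exit */
    pthread_mutex_unlock(&lock);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(workers[i], NULL);
    return 0;
}
```

The scalability argument is that thread creation/destruction and per-thread stacks are paid for NUM_WORKERS threads only, while the number of outstanding tasks can grow much larger; the trade-off (as noted in the outline answers below) is that no more than NUM_WORKERS tasks ever run truly in parallel.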
Multi-threading is a term used to describe:
- A facility provided by the operating system that enables an application to create threads of execution within a process
- Applications whose architecture takes advantage of the multi-threading provided by the operating system
These are all related ideas.
OK, since we are discussing design choices, maybe we could also elaborate on the two major types of threads. I already wrote a few lines; the source can be found in the citation section:
Fibers (user-mode threads) provide very quick and efficient switching because there is no need for a system call and the kernel is oblivious to the switch - this allows for millions of user-mode threads. ISSUES: a blocking system call stalls all other fibers on the same kernel thread. On the other hand, managing threads through the kernel requires a context switch (between user and kernel mode) on the creation and removal of each thread; therefore programs with a prodigious number of threads would suffer huge performance hits. --Praubic 18:05, 10 October 2010 (UTC)
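Here is a minimal sketch of the fiber idea using POSIX ucontext, just to make the "kernel is oblivious to the switch" point concrete (this is an illustration, not how any of the cited systems are implemented; note that glibc's swapcontext does save/restore the signal mask, which costs a system call, so real fiber libraries usually hand-roll a cheaper switch):

```c
/* Two cooperative user-mode contexts; the kernel scheduler never decides
 * which of them runs next. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, fiber_ctx;
static char fiber_stack[64 * 1024];   /* small per-fiber stack: the low footprint is the point */

static void fiber_fn(void) {
    printf("fiber: first run\n");
    swapcontext(&fiber_ctx, &main_ctx);   /* yield back to main */
    printf("fiber: resumed\n");
}

int main(void) {
    getcontext(&fiber_ctx);
    fiber_ctx.uc_stack.ss_sp = fiber_stack;
    fiber_ctx.uc_stack.ss_size = sizeof fiber_stack;
    fiber_ctx.uc_link = &main_ctx;        /* where to go when fiber_fn returns */
    makecontext(&fiber_ctx, fiber_fn, 0);

    printf("main: switching to fiber\n");
    swapcontext(&main_ctx, &fiber_ctx);
    printf("main: back, switching again\n");
    swapcontext(&main_ctx, &fiber_ctx);
    printf("main: done\n");
    return 0;
}
```

If fiber_fn made a blocking read() here, the whole kernel thread would block and no other fiber could run, which is exactly the ISSUE noted above.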
User-mode scheduling (UMS) is a light-weight mechanism that applications can use to schedule their own threads. The ability to switch between threads in user mode makes UMS more efficient than thread pools for short-duration work items that require few system calls. Paul
One implementation of UMS is a combination of N:N and N:M, where the N:N relationship exposes N "false" processors to user space so the user can handle scheduling on their own. 5 -Rannath
I would scrap the first two below, at most mention them...
- time-division multiplexing
- threads vs processes
- I/O Scheduling -vG
Splitting this off because I don't think it's technically part of the answer
Multithreading generally occurs by time-division multiplexing. This makes it possible for the processor to switch between different threads, but the switching happens so fast that the user perceives the threads as running at the same time. vG
Things that we need to cover in the essay:--Gautam 19:35, 7 October 2010 (UTC)
This is a need section; 4 below is not needed.
(A) Design Decisions
1. Type of threading (1:1, 1:N, M:N)
2. Signal handling - we might be able to leave this out, as it seems some "lightweight" threads use no signals
3. Synchronisation
4. Memory Handling
5. Scheduling Priorities (context switching and how it affects the CPU threading process) Paul
Things we might want also to cover in the essay (non-essentials here): --Rannath 04:43, 10 October 2010 (UTC)
(A) Design Decisions
1. Brief history of threading
2. Examples of attempts at getting absurd numbers of threads (failures)
3. Other types of threading, including heavyweight threads and processes
4. Examples of systems that require many threads, such as mainframe servers or banking client processing. --Praubic 17:34, 11 October 2010 (UTC)
Here is an example of a design (the topic asks for key design choices; here is one):
Capriccio is a specific design for scalable user-level threads. It is distinct from most designs in being independent of event-based mechanisms as well as kernel thread models. It is a very good choice for Internet servers, and the implementation could easily support 100,000 threads. It is characterized by high scalability, efficient stack management, and scheduling based on resource usage; however, the performance is not comparable to event-based systems. --Praubic 13:32, 12 October 2010 (UTC)
(B) Kernel
1. Program thread manipulation through system calls --Hirving 20:05, 7 October 2010 (UTC)
(C) Hardware --Hirving 19:55, 7 October 2010 (UTC)
1. Simultaneous Multithreading
2. Multi-core processors
Essay Outline
- The thesis is an answer to the question, so... that's the first step, or the last step; we can always present our info and make our thesis match the info.
- List all questions and points we have about the topic
Questions:
- What makes threads non-scalable? List the problems
- What utility do some scalable implementations lack? Why?
- Just how scalable does a full utility implementation get?
Answers:
- Memory Usage, Context Switching. Consider using a thread pool.
- Signals and portability (maybe); both add overhead, which would slow down threads
- If using thread pools, the scalability is then limited to the number of threads in the pool
Intro (fill in info)
- Thesis
- main topics
Body (made of many main points)
Main Point 1 -Rannath
- efficient thread creation/destruction is more scalable
-- NPTL's improvements over LinuxThreads - primarily due to lower overhead of creation/destruction 1
Main Point 2 -Rannath
- UMS & user-space threads are more scalable - maybe
-- context switches are costly From class
-- blocking locks have lower latency when twinned with a user space scheduler 8
OK for point 2 -> I posted a draft on the essay page, but I'm not certain whether I should talk about fibers, since they also run in user space but they're not UMS. --Praubic 00:18, 14 October 2010 (UTC)
Main Point 3
- Certain bottlenecks appear in scaled implementations; removing these improves scalability (see the false-sharing sketch after this outline).
-- "False cache-line sharing" 14
-- xtime lock to a lockless lock 14
Main Point 3.5
Fine-grain over coarse-grain
-- "Big Kernel Lock" 14
-- dcache_lock 14
Link the Main points to the thesis
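To make the "false cache-line sharing" bullet concrete, here is a hedged sketch (not the kernel's actual fix from 14): two threads update two independent counters, and performance depends on whether the counters sit on the same cache line. The 64-byte line size is an assumption about typical x86 hardware; compile with gcc -O2 -pthread and compare the two timings.

```c
/* False-sharing demo: same work, different data layout. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000UL

struct two_counters {              /* both counters on one cache line */
    volatile unsigned long a;
    volatile unsigned long b;
} same_line __attribute__((aligned(64)));

struct padded_counters {           /* b pushed onto a different line */
    volatile unsigned long a;
    char pad[64];
    volatile unsigned long b;
} split_lines __attribute__((aligned(64)));

static void *bump_same_a(void *p)  { for (unsigned long i = 0; i < ITERS; i++) same_line.a++;   return p; }
static void *bump_same_b(void *p)  { for (unsigned long i = 0; i < ITERS; i++) same_line.b++;   return p; }
static void *bump_split_a(void *p) { for (unsigned long i = 0; i < ITERS; i++) split_lines.a++; return p; }
static void *bump_split_b(void *p) { for (unsigned long i = 0; i < ITERS; i++) split_lines.b++; return p; }

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void run(void *(*fa)(void *), void *(*fb)(void *), const char *label) {
    pthread_t t1, t2;
    double start = now();
    pthread_create(&t1, NULL, fa, NULL);
    pthread_create(&t2, NULL, fb, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%-28s %.2f s\n", label, now() - start);
}

int main(void) {
    run(bump_same_a,  bump_same_b,  "shared line (false sharing):");
    run(bump_split_a, bump_split_b, "padded (separate lines):");
    return 0;
}
```

The coarse-grain vs fine-grain locking point (Main Point 3.5) is the same kind of issue one level up: one big lock serializes everything that touches it, while splitting it lets unrelated threads proceed in parallel.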
Conclusion
- restate info
- affirmation of thesis
Here is the first paragraph that I attempted. Please feel free to change or even delete it from here.
A thread is an independent task that executes in the same address space as other threads within a single process while sharing data synchronously. Threads require fewer system resources than concurrent cooperating processes and are much easier to start; therefore millions of them may exist in a single process. The two major types of threads are kernel and user-mode. Kernel threads are usually considered heavier, and designs that involve them are not very scalable. User threads, on the other hand, are mapped to kernel threads by a threads library such as libpthreads; a few designs incorporate them, mainly fibers and UMS (User-Mode Scheduling), which allow for very high scalability. UMS threads have their own context and resources; however, the ability to switch in user mode makes them more efficient (depending on the application) than thread pools, which are yet another mechanism that allows for high scalability. --Praubic 19:04, 12 October 2010 (UTC)
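As a back-of-the-envelope illustration of why the per-thread footprint matters for "millions of them in a single process" (a sketch, not from our sources; the 8 MB default and the 16 KB choice are assumptions, and the real minimum is platform-dependent via PTHREAD_STACK_MIN): with a typical 8 MB default stack, a million threads would need roughly 8 TB of address space, whereas small stacks bring that down to a few GB.

```c
/* Sketch: create a kernel thread with a deliberately small stack so that
 * many more threads fit in the same address space. Numbers are illustrative. */
#include <limits.h>
#include <pthread.h>
#include <stdio.h>

static void *tiny_task(void *arg) {
    /* keep the stack frame small: deep recursion or big locals would overflow */
    return arg;
}

int main(void) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);

    size_t small = 16 * 1024;                 /* 16 KB, assumed acceptable for the workload */
    if (small < PTHREAD_STACK_MIN)
        small = PTHREAD_STACK_MIN;            /* respect the platform minimum */
    pthread_attr_setstacksize(&attr, small);

    printf("1,000,000 threads x 8 MB default stacks ~ %.1f TB of address space\n",
           1000000.0 * 8 * 1024 * 1024 / (1024.0 * 1024 * 1024 * 1024));
    printf("1,000,000 threads x %zu KB stacks      ~ %.1f GB\n",
           small / 1024, 1000000.0 * small / (1024.0 * 1024 * 1024));

    pthread_t t;
    pthread_create(&t, &attr, tiny_task, NULL);   /* one thread with the small stack */
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```

This is one concrete reason why "smallest possible footprint per thread" shows up as design choice number one in the Facts section.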
we can add this for intro paragraph:
How is it possible for systems to support millions of threads or more within a single process?
It is possible for systems to support millions of threads or more within a single process because the system can switch execution resources between threads, producing concurrent execution. Concurrency is when multiple threads stay on the queues waiting to be switched in; they are incapable of running at the same time, but the speed at which they switch makes it look like they are. vG
You stated that it is possible, but you did not state how, or rather did not make it clear. The below should be a better interpretation. --Shane
Systems can support millions of threads within a single process by switching execution resources between threads, creating concurrent execution. Concurrency is the result of multiple threads staying on the queues while the system is incapable of running them all at the same time; it gives the impression that they are executing simultaneously because of the speed at which they switch.
Added more - vG
A process is an instance of a program running on a computer that has its own resources, such as an address space, files, I/O devices, and threads. A thread, on the other hand, is similar to a process, but it performs a single operation within the process. Systems can support millions of threads within a single process by switching execution resources between threads, creating concurrent execution. Concurrency is the result of multiple threads staying on the queues while the system is incapable of running them all at the same time; it gives the impression that they are executing simultaneously because of the speed at which they switch. vG
I suggest that we start filling out the main points of the essay. We can discuss the intricacies as we go along. --Gautam 02:46, 13 October 2010 (UTC)
Sources
- Short history of threads in Linux and new implementation of them. NPTL: The New Implementation of Threads for Linux Gautam 22:18, 5 October 2010 (UTC)
- This paper discusses the design choices Native POSIX Threads Gautam 22:11, 5 October 2010 (UTC)
- lightweight threads vs kernel threads PicoThreads: Lightweight Threads in Java --Rannath 00:23, 6 October 2010 (UTC)
- Eigenclass Comparing lightweight threads --Rannath 00:23, 6 October 2010 (UTC)
- A lightweight thread implementation for Unix: Implementing light weight threads --Rannath 00:49, 6 October 2010 (UTC)
- Not in this group, but I thought that this paper was excellent: Qthreads: An API for Programming with Millions of Lightweight Threads Gbint 19:50, 5 October 2010 (UTC)
- Difference between single and multi threading [1] vG
- Implementation of Scalable Blocking Locks using an Adaptive Thread Scheduler --Gautam 19:35, 7 October 2010 (UTC)
- Research Group working on Simultaneous Multithreading Simultaneous Multithreading --Hirving 19:58, 7 October 2010 (UTC)
- This site provides in-depth info about threads, threads-pooling, scheduling: http://msdn.microsoft.com/en-us/library/ms684841(VS.85).aspx Paul
- Here is another site that outlines THREAD designs and techniques: http://people.csail.mit.edu/rinard/osnotes/h2.html Paul
- Interesting presentation: really worth checking out Paul
- KERNEL vs USER MODE: http://www.wordiq.com/definition/Thread_(computer_science) --Praubic 18:06, 10 October 2010 (UTC)
- Scalability in Linux
- This has something to do with our question...
- Scheduling Priorities (Windows), Microsoft (23 September 2010) --Shane
- Linux Scheduling Priorities Explained, Novell (11 October 2005) --Shane
- Inside the Linux 2.6 Completely Fair Scheduler, IBM (15 December 2009) --Shane
- http://www.megaupload.com/?d=R4VMK3A1 (PDF Document on Multithreading) vG
- what is multithreading? vG
- type of threadings and multithreading in general vG
- On the design of Chant: a talking threads package http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=344298