Basic Synchronization Principles
This article discusses approaches and issues involved with managing concurrency. The objective is to be able to do multiple things at the same time (i.e. using multiple threads), while avoiding coordination problems.
Three Concurrency Problems to Address
Synchronization
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks. An example of this is that, when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner. This more complex case is related to the Producer-Consumer pattern discussed below.
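A minimal sketch of this kind of task dependency, written in C with POSIX threads (the thread functions and names here are illustrative assumptions, not part of the original notes): the ground-floor task simply is not started until the basement task has finished, because the main thread joins on (waits for) the first thread before creating the second.

#include <pthread.h>
#include <stdio.h>

void *build_basement(void *arg) {
    (void)arg;
    printf("basement done\n");
    return NULL;
}

void *build_ground_floor(void *arg) {
    (void)arg;
    printf("ground floor done\n");
    return NULL;
}

int main(void) {
    pthread_t basement, ground_floor;
    pthread_create(&basement, NULL, build_basement, NULL);
    pthread_join(basement, NULL);     // wait: the ground floor depends on the basement
    pthread_create(&ground_floor, NULL, build_ground_floor, NULL);
    pthread_join(ground_floor, NULL);
    return 0;
}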
Mutual Exclusion
Mutual exclusion is controlling access to shared resources. Continuing with the construction example, only one person can use the same drill at the same time, or things get ugly. Likewise, resources like a disk or a data structure often must be limited to one thread accessing them at a time to ensure the desired effect.
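A minimal sketch of mutual exclusion using a POSIX mutex (the shared structure and function names are placeholders for illustration, not from the original notes): only the thread currently holding the lock may touch the shared data, so the "drill" is used by one worker at a time.

#include <pthread.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_list[100];
static int count = 0;

void add_item(int item) {
    pthread_mutex_lock(&list_lock);    // enter the critical region; other threads must wait
    shared_list[count++] = item;       // only one thread can be here at a time
    pthread_mutex_unlock(&list_lock);  // leave the critical region
}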
Data Sharing
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices. These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending a notification of action to the server and receiving the result of that action). If all world data processing were done on the server, a player wouldn't know whether or not an opponent had been defeated until the network responded with the result. With data sharing, the client program can immediately determine, based on its copy of the local environment, that the opponent has likely been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its "official version" of events.
Message Passing is a way of managing this duplication of state via coordinated communication between the copies. Shared Memory is a way to simulate duplication of state that, in some circumstances, allows concurrent access, and generally offers access with less overhead than message passing.
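As a rough illustration of the message-passing idea (the code and names below are an assumption for illustration, not from the original notes): each side keeps its own copy of the state and sends updates over a channel, here a POSIX pipe, instead of both sides touching the same variable directly.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                           // a simple one-way message channel

    if (fork() == 0) {                  // child: works on its own copy of the state
        int local_copy = 41;
        local_copy++;                                   // update the local copy first...
        write(fd[1], &local_copy, sizeof local_copy);   // ...then send the update as a message
        _exit(0);
    }

    int official_version;               // parent: holds the authoritative copy
    read(fd[0], &official_version, sizeof official_version);
    printf("official version is now %d\n", official_version);
    return 0;
}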
Tackling Mutual Exclusion
A Broken Attempt
linked_list list;
int using_list = 0;

void use_list() {
    while (using_list) {
        // Just waiting here, so nothing goes in the loop
        // Could sleep, but this is a broken attempt anyway
    }
    using_list = 1;
    // use the list
    // this part of the code is known as a critical region
    using_list = 0;
}
What's the problem?
TOCTOU: Time Of Check to Time Of Use
If the thread went to sleep immediately before running the “using_list = 1;” line, another thread might also get access to the list, since using_list is still zero.
Fixing the Broken Attempt
Atomic operation: an operation that cannot be “split apart”
A single machine instruction can easily be made atomic, but a sequence of instructions is not guaranteed to be atomic by the hardware. As such, special machine instructions are needed to handle this.
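As one illustration of what such an instruction provides (this sketch uses C11's <stdatomic.h>, which is an assumption and not part of the original notes), atomic_flag_test_and_set reads the old value and sets the flag in a single indivisible step, so the check and the set can no longer be split apart the way they were in the broken attempt; on most hardware it compiles down to a single test-and-set or compare-and-swap style instruction.

#include <stdatomic.h>

// list_in_use plays the role of using_list from the broken attempt
static atomic_flag list_in_use = ATOMIC_FLAG_INIT;

void use_list(void) {
    // test-and-set atomically returns the previous value and sets the flag,
    // closing the gap between the check and the set
    while (atomic_flag_test_and_set(&list_in_use)) {
        // spin (or yield) until the flag was previously clear
    }
    // use the list -- critical region
    atomic_flag_clear(&list_in_use);   // release
}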
Single-Processor System
- One can simply disable interrupts while acquiring access, since then no threads can jump in between the checking of using_list and the setting of using_list.
- Interrupts should only be disabled briefly, because devices depend on the system reacting to their interrupts promptly.
linked_list list;
int using_list = 0;

void use_list() {
    while (1) {
        asm { // disable interrupts
            cli
        }
        if (!using_list) {
            using_list = 1;   // claim the list; no interrupt can sneak in between the check and the set
            asm { // enable interrupts
                sti
            }
            break;            // we now hold the list
        }
        asm { // enable interrupts
            sti
        }
        sleep(0);             // give up the CPU, then check again
    }
    // use the list
    using_list = 0;           // release the list
}
NOTE: I'll add the rest of the notes later today.