<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ndickson</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ndickson"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Ndickson"/>
	<updated>2026-05-12T21:32:44Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:Google_OS&amp;diff=1972</id>
		<title>Talk:Google OS</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:Google_OS&amp;diff=1972"/>
		<updated>2008-11-05T19:15:08Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: Added questions given to Chubby group&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Group 2: Chubby==&lt;br /&gt;
Questions from Group 1, GFS, and to-the-point answers&lt;br /&gt;
# What were some unexpected uses for Chubby?  as a DNS server&lt;br /&gt;
# How does Chubby recover if a client fails while holding a lock?  client won&#039;t send keep-alive, so Chubby releases lock&lt;br /&gt;
# How does Chubby&#039;s access control differ from UNIX files?  permissions don&#039;t depend on permissions of super-directories (uses Access Control List files in an ACL directory)&lt;br /&gt;
# How did they use Chubby&#039;s C++ code within Java projects?  special native RPC conversion layer, because Java doesn&#039;t support their RPC protocols&lt;br /&gt;
&lt;br /&gt;
Questions from Group 3, Bigtable&lt;br /&gt;
# What was Chubby designed for and did it meet expectations?&lt;br /&gt;
# Was coarse-grained locking a good idea and how did it impact applications?&lt;br /&gt;
# How is the master elected and is it elected consistently?&lt;br /&gt;
# How did Chubby integrate into other systems? Was it lightweight for applications?&lt;br /&gt;
&lt;br /&gt;
==Group 3: Bigtable==&lt;br /&gt;
&lt;br /&gt;
Questions from Group 1, GFS&lt;br /&gt;
&lt;br /&gt;
   1. How is Bigtable like/unlike a relational database?&lt;br /&gt;
&lt;br /&gt;
   ans. Bigtable is unlike a relational database because:&lt;br /&gt;
              - It stores data in SSTables, which are not in proper relational form.&lt;br /&gt;
              - A tablet can store multiple versions of the same data based on timestamps.&lt;br /&gt;
              - The query language used with Bigtable does not support full relational database functionality.&lt;br /&gt;
        Bigtable is like a relational database because:&lt;br /&gt;
              - Server-side scripts can be used to filter results, similar to some SQL queries.&lt;br /&gt;
&lt;br /&gt;
   2. What is the role of SSTable in Bigtable?&lt;br /&gt;
   &lt;br /&gt;
   ans. It is the file format used for storing table data: an SSTable is an immutable, sorted map from keys to values.&lt;br /&gt;
&lt;br /&gt;
   3. For the webTable, why are domain names stored in reverse order?&lt;br /&gt;
&lt;br /&gt;
   ans. The domain names are stored in reverse order so that pages from the same domain end up in adjacent rows, which makes queries over a single domain more efficient.&lt;br /&gt;
&lt;br /&gt;
   4. How did Bigtable use Chubby?&lt;br /&gt;
&lt;br /&gt;
   ans. The master server uses Chubby to track tablet servers. &lt;br /&gt;
&lt;br /&gt;
Questions from Group 2, Chubby&lt;br /&gt;
&lt;br /&gt;
   1. Why did they not implement the full relational model?&lt;br /&gt;
&lt;br /&gt;
   ans. They did not need a full relational model. They only implemented what they thought they needed at the time.&lt;br /&gt;
&lt;br /&gt;
   2. When could major compaction occur? &lt;br /&gt;
&lt;br /&gt;
   ans. Major compaction could occur when several SSTables store overlapping versions of the same data, or when there is a lot of deleted data to reclaim.&lt;br /&gt;
&lt;br /&gt;
   3. How does Bigtable handle fine-grained locking?&lt;br /&gt;
&lt;br /&gt;
   ans.&lt;br /&gt;
&lt;br /&gt;
   4. What are the similarities in the architecture between GFS and Bigtable?&lt;br /&gt;
&lt;br /&gt;
   ans. They are similar in master selection and the use of Chubby.&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=NFS,_AFS,_Sprite_FS&amp;diff=1957</id>
		<title>NFS, AFS, Sprite FS</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=NFS,_AFS,_Sprite_FS&amp;diff=1957"/>
		<updated>2008-10-23T20:29:18Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: Finished posting notes on NFS/AFS/SpriteFS/DSM&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Readings==&lt;br /&gt;
&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/sandberg-nfs.pdf Russel Sandberg et al., &amp;quot;Design and Implementation of the Sun Network Filesystem&amp;quot; (1985)]&amp;lt;br /&amp;gt;This is the original NFS paper.&lt;br /&gt;
&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/howard-afs.pdf John H. Howard et al., &amp;quot;Scale and Performance in a Distributed File System&amp;quot; (1988)]&amp;lt;br /&amp;gt;This paper describes AFS and compares it to NFS.&lt;br /&gt;
&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/sprite-caching.pdf Nelson, Welch, &amp;amp; Ousterhout, &amp;quot;Caching in the Sprite Network File System&amp;quot; (1988)]&amp;lt;br /&amp;gt;This paper compares Sprite to AFS and NFS.&lt;br /&gt;
&lt;br /&gt;
==Questions==&lt;br /&gt;
&lt;br /&gt;
# What were the key design goals of NFS, AFS, and Sprite&#039;s FS?&lt;br /&gt;
# How well did they achieve their goals?&lt;br /&gt;
# What are their limitations?&lt;br /&gt;
# How suitable are these filesystems in modern small networks?  Enterprise networks?  Internet-scale applications?  Why?&lt;br /&gt;
&lt;br /&gt;
==Notes from this and the previous week==&lt;br /&gt;
Two main things to consider about new ideas for systems:&lt;br /&gt;
* mental models&lt;br /&gt;
* controls&lt;br /&gt;
&lt;br /&gt;
Are these solving some problem better than existing things, and are they better enough to overcome the disadvantages?&lt;br /&gt;
&lt;br /&gt;
===Bell Labs===&lt;br /&gt;
* tried to turn everything into a file&lt;br /&gt;
* wanted to make OS easier to use for programmers of their systems&lt;br /&gt;
* &amp;quot;simple&amp;quot; common API/protocol for I/O &amp;amp; IPC resources; &amp;quot;radical network transparency&amp;quot;&lt;br /&gt;
* focus on process communication&lt;br /&gt;
* portability&lt;br /&gt;
* code reuse (of their own code), but ignored legacy code problem&lt;br /&gt;
* &amp;quot;laziness&amp;quot; &amp;amp; &amp;quot;generality&amp;quot;&lt;br /&gt;
* move from centralized to distributed computing to make use of the individual machines&lt;br /&gt;
* resource utilization / efficiency&lt;br /&gt;
* &#039;&#039;&#039;with Plan 9, you don&#039;t know when you&#039;re using a network (network transparency), because everything is just a file&#039;&#039;&#039;&lt;br /&gt;
** is the overhead consistent? not always: this means that one can&#039;t predict how &amp;quot;file&amp;quot; accesses will perform in the field&lt;br /&gt;
** reliability of &amp;quot;file&amp;quot; access relies on reliability of network and remote machines&lt;br /&gt;
&lt;br /&gt;
===DSM===&lt;br /&gt;
* tried to turn everything into RAM&lt;br /&gt;
* wanted to make programming easier by not having to deal with sending data&lt;br /&gt;
* resource utilization / efficiency (while still being DSM, as much as those conflict)&lt;br /&gt;
* focus on a class of problems (not as general)&lt;br /&gt;
* an abstraction of system resources (&amp;quot;is it the right abstraction?&amp;quot;)&lt;br /&gt;
* &#039;&#039;&#039;with DSM, you don&#039;t know when you&#039;re using a network (network transparency), because everything is just memory&#039;&#039;&#039;&lt;br /&gt;
** is the overhead consistent? no: this means that one can&#039;t predict how programs will perform in the field&lt;br /&gt;
** reliability of programs relies on reliability of network and remote machines&lt;br /&gt;
&lt;br /&gt;
With hardware-based DSM, the processor doesn’t know which pages do and don’t need to be secure.  It&#039;s also not a portable system design.&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=NFS,_AFS,_Sprite_FS&amp;diff=1956</id>
		<title>NFS, AFS, Sprite FS</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=NFS,_AFS,_Sprite_FS&amp;diff=1956"/>
		<updated>2008-10-23T20:17:26Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: Started posting notes on NFS/AFS/SpriteFS&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Readings==&lt;br /&gt;
&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/sandberg-nfs.pdf Russel Sandberg et al., &amp;quot;Design and Implementation of the Sun Network Filesystem&amp;quot; (1985)]&amp;lt;br /&amp;gt;This is the original NFS paper.&lt;br /&gt;
&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/howard-afs.pdf John H. Howard et al., &amp;quot;Scale and Performance in a Distributed File System&amp;quot; (1988)]&amp;lt;br /&amp;gt;This paper describes AFS and compares it to NFS.&lt;br /&gt;
&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/sprite-caching.pdf Nelson, Welch, &amp;amp; Ousterhout, &amp;quot;Caching in the Sprite Network File System&amp;quot; (1988)]&amp;lt;br /&amp;gt;This paper compares Sprite to AFS and NFS.&lt;br /&gt;
&lt;br /&gt;
==Questions==&lt;br /&gt;
&lt;br /&gt;
# What were the key design goals of NFS, AFS, and Sprite&#039;s FS?&lt;br /&gt;
# How well did they achieve their goals?&lt;br /&gt;
# What are their limitations?&lt;br /&gt;
# How suitable are these filesystems in modern small networks?  Enterprise networks?  Internet-scale applications?  Why?&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&lt;br /&gt;
===Bell Labs===&lt;br /&gt;
* tried to turn everything into a file&lt;br /&gt;
* wanted to make OS easier to use for programmers of their systems&lt;br /&gt;
* &amp;quot;simple&amp;quot; common API/protocol for I/O &amp;amp; IPC resources; &amp;quot;radical network transparency&amp;quot;&lt;br /&gt;
* focus on process communication&lt;br /&gt;
* portability&lt;br /&gt;
* code reuse (of their own code), but ignored legacy code problem&lt;br /&gt;
* &amp;quot;laziness&amp;quot; &amp;amp; &amp;quot;generality&amp;quot;&lt;br /&gt;
* move from centralized to distributed computing to make use of the individual machines&lt;br /&gt;
* resource utilization / efficiency&lt;br /&gt;
* &#039;&#039;&#039;with Plan 9, you don’t know when you’re using a network (network transparency), because everything is just a file&#039;&#039;&#039;&lt;br /&gt;
** is the overhead consistent? not always: this means that one can’t predict how &amp;quot;file&amp;quot; accesses will perform in the field&lt;br /&gt;
** reliability of &amp;quot;file&amp;quot; access relies on reliability of network and remote machines&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Proposal_Meeting_Schedule&amp;diff=1538</id>
		<title>Proposal Meeting Schedule</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Proposal_Meeting_Schedule&amp;diff=1538"/>
		<updated>2007-10-26T03:06:11Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: added Neil Dickson to the list&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Tuesday, Oct. 30th==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| 10:00 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:10 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:20 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:30 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:40 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:50 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:30 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:40 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:50 PM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 3:00 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 3:10 PM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 3:20 PM&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Wednesday, Oct. 31st==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| 10:30 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:40 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:50 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:00 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:10 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:20 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:30 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:40 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:50 AM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 12:00 PM&lt;br /&gt;
| Neil Dickson&lt;br /&gt;
|-&lt;br /&gt;
| 12:10 PM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 12:20 PM&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Thursday, Nov. 1st==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| 1:30 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 1:40 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 1:50 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:00 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:10 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:20 PM&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Tuesday, Nov. 6th==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| 10:00 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:10 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:20 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:30 AM&lt;br /&gt;
| David Tremayne&lt;br /&gt;
|-&lt;br /&gt;
| 10:40 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:50 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:30 PM&lt;br /&gt;
| Maria Krol&lt;br /&gt;
|-&lt;br /&gt;
| 2:40 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:50 PM&lt;br /&gt;
| Jeff Snell&lt;br /&gt;
|-&lt;br /&gt;
| 3:00 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 3:10 PM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 3:20 PM&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Wednesday, Nov. 7th==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| 10:30 AM&lt;br /&gt;
| Kenneth Chan&lt;br /&gt;
|-&lt;br /&gt;
| 10:40 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:50 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:00 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:10 AM&lt;br /&gt;
| Adam McNamara&lt;br /&gt;
|-&lt;br /&gt;
| 11:20 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:30 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:40 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:50 AM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 12:00 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 12:10 PM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 12:20 PM&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Thursday, Nov. 8th==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| 1:30 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 1:40 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 1:50 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:00 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:10 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:20 PM&lt;br /&gt;
| &lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1521</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1521"/>
		<updated>2007-10-21T16:45:43Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: added note about cli and sti only being accessible from kernel&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches and issues involved with managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (i.e. using multiple threads), while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  An example of this is that when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use the same drill at the same time, or things get ugly.  Likewise, resources like a disk or a data structure often must be limited to one thread accessing them at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending a notification of action to the server and receiving the result of that action).  If all world data processing were done on the server, a player wouldn&#039;t know whether or not an opponent had been defeated until the server responded with the result.  With data sharing, the client program can immediately determine, based on its copy of the local environment, that it is likely that the opponent has been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is this way of managing duplication of state via coordinated communication between copies.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplication of state that, in some circumstances, can allow concurrent access, and generally offers access with less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
===A Broken Attempt===&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        // Just waiting here, so nothing goes in the loop&lt;br /&gt;
        // Could sleep, but this is a broken attempt anyway&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
    // this part of the code is known as a critical region&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What&#039;s the problem?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;TOCTOU&#039;&#039;: Time Of Check to Time Of Use&lt;br /&gt;
&lt;br /&gt;
If the thread were preempted (e.g. put to sleep) immediately before running the “using_list = 1;” line, another thread could also get access to the list, since using_list is still zero.&lt;br /&gt;
&lt;br /&gt;
===Fixing the Broken Attempt===&lt;br /&gt;
&#039;&#039;Atomic operation&#039;&#039;: an operation that cannot be “split apart”&lt;br /&gt;
&lt;br /&gt;
A single machine instruction can easily be made atomic, but sequences of more than one instruction are not guaranteed to be atomic in hardware.  As such, special machine instructions are needed to handle this.&lt;br /&gt;
&lt;br /&gt;
====Single-Processor System====&lt;br /&gt;
* One can simply disable interrupts while acquiring access, since then no threads can jump in between the checking of using_list and the setting of using_list.&lt;br /&gt;
* Interrupts should only be disabled briefly, because devices depend on the system reacting to interrupts promptly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {        // disable interrupts&lt;br /&gt;
        cli      // note: this instruction can only be executed from kernel code&lt;br /&gt;
    }&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        asm {    // enable interrupts, because access wasn&#039;t acquired this time&lt;br /&gt;
            sti  // note: this instruction can only be executed from kernel code&lt;br /&gt;
        }&lt;br /&gt;
        sleep(0);// go to sleep so that other threads will run&lt;br /&gt;
        asm {    // disable interrupts to test again&lt;br /&gt;
            cli&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
    asm {        // enable interrupts, because this thread now has mutually exclusive access&lt;br /&gt;
        sti&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Multi-Processor System====&lt;br /&gt;
* For this, a special test-and-set instruction or a special swap instruction is needed.&lt;br /&gt;
* The operations need to be &#039;&#039;bus-locked&#039;&#039;, which the processor does using its &#039;&#039;bus snooping&#039;&#039; mechanism, to make sure that &#039;&#039;cache coherence&#039;&#039; is maintained (e.g. all caches get the updated value of using_list before the operation is considered complete).&lt;br /&gt;
* An alternative approach to the one presented below (which can still be implemented using test-and-set or swap) is known as &#039;&#039;spinlock&#039;&#039;: keep checking repeatedly until the thread gets access.  This works for cases of “low contention”, i.e. when you won’t have to wait long, because the other processor that has the access will only hold it for a short period of time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;test-and-set&#039;&#039;: sets (to 1) a bit and indicates whether it was set (1) or clear (0) before&lt;br /&gt;
&#039;&#039;swap&#039;&#039;: one can swap a variable with the value 1, get back its previous value, and check whether it was 0 or 1 before&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using test-and-set:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        lock bts    using_list,0    ;bus-locked bit test-and-set on bit 0 of using_list&lt;br /&gt;
        jnc         GotAccess       ;got access if the bit was 0 at the beginning of the test-and-set&lt;br /&gt;
                                    ;previous value of bit is in carry flag, so no carry (nc) means bit was 0 before&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using swap:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        mov         eax,1&lt;br /&gt;
        lock xchg   using_list,eax  ;bus-locked exchange (swap)&lt;br /&gt;
        test        eax,eax         ;check if eax is 0 (i.e. check if using_list was 0)&lt;br /&gt;
        jz          GotAccess       ;got access if the using_list was 0 at the beginning of the exchange&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
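&lt;br /&gt;
The same bus-locked test-and-set idea can be written portably with C11 atomics instead of inline assembly.  This is a sketch, not the notes&#039; code: the spin_* names are illustrative, and atomic_flag_test_and_set plays the role of the &amp;quot;lock bts&amp;quot; instruction above (it atomically sets the flag and returns its previous value).&lt;br /&gt;
```c
#include <stdatomic.h>
#include <stdbool.h>

// The flag plays the role of using_list: set means "in use".
atomic_flag using_list_flag = ATOMIC_FLAG_INIT;

bool spin_trylock(atomic_flag *f) {
    // atomic_flag_test_and_set sets the flag and returns its previous
    // value, so "previous value was clear" means we just acquired access.
    return !atomic_flag_test_and_set(f);
}

void spin_lock(atomic_flag *f) {
    while (atomic_flag_test_and_set(f)) {
        // spin until the previous value was clear
        // (could sleep(0) here to let other threads run, as in the notes)
    }
}

void spin_unlock(atomic_flag *f) {
    atomic_flag_clear(f);   // like "using_list = 0;"
}
```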
&lt;br /&gt;
Making a massively parallel system with separate memories isn’t hard, but making it work with shared memory is very difficult.&lt;br /&gt;
&lt;br /&gt;
==Abstractions and High-Level Problems==&lt;br /&gt;
Test-and-set and swap are quite low-level and painful to use in complex cases, so we want to wrap them in an abstraction.&lt;br /&gt;
&lt;br /&gt;
===Semaphore===&lt;br /&gt;
A semaphore is the simplest abstraction.  It is a counter variable with two associated functions, &#039;&#039;up&#039;&#039; and &#039;&#039;down&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The initial value of the counter variable indicates how many threads are allowed to gain access concurrently.  This initial value is 1 for a &#039;&#039;binary semaphore&#039;&#039;, a.k.a. a &#039;&#039;mutex&#039;&#039;, a.k.a. a simple &#039;&#039;lock&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;down(sem)&#039;&#039;: This atomically tries to decrement the counter, and if it can&#039;t (i.e. the counter was already 0), it waits until it can (i.e. until the counter is above 0 again).  down is called to gain access.&lt;br /&gt;
&#039;&#039;up(sem)&#039;&#039;: This atomically increments the counter.  up is called to release access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         list_semaphore = 1;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    down(list_semaphore);   // Gain access to the list&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    up(list_semaphore);     // Release access to the list&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
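&lt;br /&gt;
A sketch of how down and up themselves could be built on an atomic counter, assuming C11 atomics.  The csem_* names are invented for illustration (chosen to avoid the POSIX sem_init family), and a real semaphore would put the waiting thread to sleep in down rather than spin.&lt;br /&gt;
```c
#include <stdatomic.h>
#include <stdbool.h>

// A counting semaphore: the counter holds how many more threads may enter.
typedef struct { atomic_int count; } counter_sem;

void csem_init(counter_sem *s, int initial) {
    atomic_init(&s->count, initial);   // initial = 1 gives a binary semaphore
}

bool csem_try_down(counter_sem *s) {
    int c = atomic_load(&s->count);
    while (c > 0) {
        // Atomically decrement only if the counter still holds c (> 0);
        // on failure, c is reloaded with the current value and we retry.
        if (atomic_compare_exchange_weak(&s->count, &c, c - 1))
            return true;               // acquired
    }
    return false;                      // counter was 0: caller would wait
}

void csem_down(counter_sem *s) {
    while (!csem_try_down(s)) {
        // spin; a real implementation would block the thread here
    }
}

void csem_up(counter_sem *s) {
    atomic_fetch_add(&s->count, 1);    // release one unit of access
}
```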
&lt;br /&gt;
===Producer-Consumer===&lt;br /&gt;
The (Bounded-Buffer) Producer-Consumer Problem itself is more an abstraction of problems than an abstraction of solutions, but its common solutions are abstractions as well.  Suppose that there is a thread (or multiple threads) producing items into a buffer, and a thread (or multiple threads) consuming items from the buffer.  An example of this is an assembly line (referring to manufacturing, not assembly language).&lt;br /&gt;
&lt;br /&gt;
What needs to occur?&lt;br /&gt;
* The consumer needs to sleep when there is no work to be done (empty buffer), and wake up when there is once again work to be done (non-empty buffer).&lt;br /&gt;
* The producer needs to sleep when the buffer is full (it can&#039;t put any more in), and wake up when the buffer is no longer full (it can put more in again).&lt;br /&gt;
&lt;br /&gt;
A classic bug is for either the producer or the consumer to never wake up after some point.&lt;br /&gt;
&lt;br /&gt;
Modern OSs provide a simple solution to this problem, namely &#039;&#039;pipes&#039;&#039;.  All of the sleeping and waking up is handled by the OS when reading/writing from/to a pipe.  All that the producer must do is write to the pipe, and all that the consumer must do is read from the pipe.&lt;br /&gt;
&lt;br /&gt;
The following redirects the output of ls to the file foo.txt:&lt;br /&gt;
&amp;lt;code&amp;gt;ls &amp;amp;gt; foo.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following redirects foo.txt to the input of more:&lt;br /&gt;
&amp;lt;code&amp;gt;more &amp;amp;lt; foo.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following pipes the output of ls to sort, whose output is piped to tr, whose output is piped to more:&lt;br /&gt;
&amp;lt;code&amp;gt;ls | sort | tr -d &#039;_&#039; | more&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Why not let ls run to completion before calling sort?  What if ls never finishes?  Also, sort can be operating on a second processor while ls is still producing output.&lt;br /&gt;
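&lt;br /&gt;
Under the hood, the shell implements | with the OS pipe primitive.  A minimal POSIX sketch (demo_pipe is an invented helper, and the producer and consumer are the same process here for simplicity) showing that the kernel owns the buffer between the two ends:&lt;br /&gt;
```c
#include <unistd.h>
#include <string.h>

// The kernel buffers the bytes and handles all of the sleeping and
// waking of producer and consumer; user code just writes and reads.
int demo_pipe(char *out, size_t outlen) {
    int fds[2];
    if (pipe(fds) != 0)
        return -1;                      // fds[0]: read end, fds[1]: write end
    const char *msg = "hello";
    ssize_t wn = write(fds[1], msg, strlen(msg));  // producer writes
    (void)wn;
    int n = (int)read(fds[0], out, outlen);        // consumer reads
    close(fds[0]);
    close(fds[1]);
    return n;                           // bytes transferred through the pipe
}
```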
&lt;br /&gt;
===Readers-Writers===&lt;br /&gt;
The Readers-Writers Problem is a case in which:&lt;br /&gt;
* Any number of readers are allowed access concurrently, and&lt;br /&gt;
* Only one writer is allowed access concurrently (i.e. mutual exclusion for writers)&lt;br /&gt;
&lt;br /&gt;
An example of this is an airline reservation system.  There is a database of seats sold, and one wants to allow as many people as possible to look at what seats are available, but only one person can buy a seat at a time.  Then, as soon as someone buys a seat, the updated info needs to be propagated to everyone as soon as possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A solution to the problem:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When a writer wants to write:&lt;br /&gt;
* block new readers and other writers&lt;br /&gt;
* wait for all readers to finish&lt;br /&gt;
* the writer writes&lt;br /&gt;
* give back access to readers and writers&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A faster solution:&#039;&#039;&#039; (for cases where the data structure allows reading while writing)&lt;br /&gt;
&lt;br /&gt;
When a writer wants to write:&lt;br /&gt;
* block other writers&lt;br /&gt;
* the writer writes&lt;br /&gt;
* give back access to writers&lt;br /&gt;
&lt;br /&gt;
===Message Passing===&lt;br /&gt;
Producer-Consumer gets complicated with networking, because there are many separate Producer-Consumer relationships along the way from one process on one computer to another process on another computer.  This is a case where message passing is useful.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Network three-way handshake to open TCP connection:&#039;&#039;&#039;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Computer A&lt;br /&gt;
! Messages&lt;br /&gt;
! Computer B&lt;br /&gt;
|-&lt;br /&gt;
| closed&lt;br /&gt;
| &lt;br /&gt;
| closed&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| send SYN (synchronize) message &amp;amp;rarr;&lt;br /&gt;
| closed&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| &lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| &amp;amp;larr; send SYN ACK (acknowledge) message&lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| &lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| send ACK (acknowledge) message &amp;amp;rarr;&lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| &lt;br /&gt;
| open&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
This is robust (scalable to the size of the Internet) but slow, whereas shared memory isn’t scalable but is fast.&lt;br /&gt;
There are still a lot of error cases to be handled: this is code you don’t want to mess with unless you really know what you’re doing.&lt;br /&gt;
&lt;br /&gt;
==Advanced Issues==&lt;br /&gt;
&lt;br /&gt;
===Deadlocks===&lt;br /&gt;
A deadlock occurs when a set of threads gets into a state where each needs access that another thread in the set holds.  The simplest case is shown below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
int semaphoreA = 1;&lt;br /&gt;
int semaphoreB = 1;&lt;br /&gt;
&lt;br /&gt;
void thread1() {&lt;br /&gt;
    down(semaphoreA);   // Gain access to resource A&lt;br /&gt;
    down(semaphoreB);   // Gain access to resource B&lt;br /&gt;
&lt;br /&gt;
    // use A and B&lt;br /&gt;
&lt;br /&gt;
    up(semaphoreB);     // Release access to resource B&lt;br /&gt;
    up(semaphoreA);     // Release access to resource A&lt;br /&gt;
}&lt;br /&gt;
void thread2() {&lt;br /&gt;
    down(semaphoreB);   // Gain access to resource B&lt;br /&gt;
    down(semaphoreA);   // Gain access to resource A&lt;br /&gt;
&lt;br /&gt;
    // use A and B&lt;br /&gt;
&lt;br /&gt;
    up(semaphoreA);     // Release access to resource A&lt;br /&gt;
    up(semaphoreB);     // Release access to resource B&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A deadlock scenario with this example is when thread1 gets access to A, and then thread2 gets access to B before thread1 does.  Now thread1 is waiting for thread2 to release access to B, and thread2 is waiting for thread1 to release access to A.  Neither release will ever happen, so the threads are said to be deadlocked.&lt;br /&gt;
&lt;br /&gt;
In the case where there is a fixed set of controlled resources to be acquired by any given function, as long as all threads acquire them in the same order, a deadlock will be avoided (e.g. switch the first two lines of thread2).  However, this is not always possible in more complex situations.&lt;br /&gt;
&lt;br /&gt;
It is always possible to identify deadlocks when using binary semaphores if one extends them to keep track of which thread has access and which threads are waiting for access.  Conceptually, the state of the thread interaction can be represented by a directed graph, in which the nodes are threads and there is an edge from thread1 to thread2 if thread1 wants access that thread2 has.  Then any directed cycles represent deadlocks, and the nodes in the cycle are the deadlocked threads.  Unfortunately, there is no single way to avoid deadlocks in all cases.  Also note that identifying deadlocks is not the same as predicting them, which can be as difficult as avoiding them.&lt;br /&gt;
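&lt;br /&gt;
The wait-for graph can be sketched in C.  With binary semaphores extended as described, each blocked thread waits on exactly one holder, so the graph reduces to one entry per thread; the waits_for array and in_deadlock function below are assumptions for illustration.&lt;br /&gt;

```c
#include "assert.h"

#define NUM_THREADS 3

/* waits_for[i] is the thread that thread i is blocked on, or -1 if it
   is not waiting.  Here thread0 and thread1 wait on each other. */
int waits_for[NUM_THREADS] = { 1, 0, -1 };

/* A thread is deadlocked if following the chain of waits leads back to it. */
int in_deadlock(int start) {
    int t = waits_for[start];
    int steps = 1;
    while (t != -1) {
        if (t == start) return 1;            /* cycle found: deadlock */
        if (steps == NUM_THREADS) return 0;  /* chain longer than all threads */
        t = waits_for[t];
        steps = steps + 1;
    }
    return 0;                                /* chain ends at a running thread */
}
```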
&lt;br /&gt;
===Starvation===&lt;br /&gt;
Starvation is more general than deadlock; deadlock is one type of starvation.  Starvation refers to any situation in which one or more threads end up waiting either indefinitely or just unreasonably long to perform a particular operation.&lt;br /&gt;
&lt;br /&gt;
An example of starvation that is not normally called deadlock is when a thread gains access to a resource and then, because of scheduling or other decisions beyond its reasonable control, holds it for a very long time (possibly forever) without being able to release it.&lt;br /&gt;
&lt;br /&gt;
Consider three threads (on a single-processor system): thread1 is an idle priority thread (only run when nothing else is running), and both thread2 and thread3 are medium-to-high priority threads.  Then the following could occur:&lt;br /&gt;
# thread1 acquires mutually exclusive access to a resource&lt;br /&gt;
# thread2 starts running an intense computation that could take days, so thread1 is no longer running&lt;br /&gt;
# thread3 tries to acquire access to the resource that thread1 has&lt;br /&gt;
# thread1 and thread3 don&#039;t run again until thread2 has stopped, which could be a while&lt;br /&gt;
&lt;br /&gt;
This type of starvation can (almost) always be avoided if the operating system manages the thread concurrency.  Using the same extension of a binary semaphore as mentioned in the deadlocks section, the operating system knows that thread3 is waiting for thread1, so thread1 can be given thread3&#039;s higher priority until it releases the access, at which point it returns to its original priority.  This technique is known as priority inheritance.  It is still fair, because thread3 depends upon thread1 releasing access before it can continue, and the starvation issue is solved.&lt;br /&gt;
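&lt;br /&gt;
A minimal sketch of the priority computation, assuming the extended semaphore records its holder&#039;s priority and its waiters&#039; priorities (all names are illustrative):&lt;br /&gt;

```c
#include "assert.h"
#include "stdlib.h"

#define MAX_WAITERS 4

/* State of the extended semaphore in the scenario above. */
int holder_priority = 0;                   /* thread1: idle priority */
int waiter_priority[MAX_WAITERS] = { 7 };  /* thread3: high priority */
int num_waiters = 1;

int max2(int a, int b) {
    int d = a - b;
    if (d == abs(d)) return a;  /* d is non-negative, so a is at least b */
    return b;
}

/* Priority the holder should run at until it releases the semaphore. */
int effective_priority(void) {
    int best = holder_priority;
    int i;
    for (i = 0; i != num_waiters; i++) {
        best = max2(best, waiter_priority[i]);
    }
    return best;
}
```

Until the holder releases the semaphore, the scheduler would run it at effective_priority() instead of holder_priority, so thread1 is no longer stuck behind thread2.&lt;br /&gt;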
&lt;br /&gt;
However, there is no general solution to starvation problems, since they can be arbitrarily complex.  (Also, the lack of a general solution for deadlocks implies that there is a lack of a general solution for starvation, since deadlocks are a type of starvation.)&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1504</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1504"/>
		<updated>2007-10-12T04:01:14Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: fixed Starvation to be general type of starvation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches and issues involved with managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (i.e. using multiple threads), while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  For example, when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use a given drill at a time, or things get ugly.  Likewise, resources like a disk or a data structure often must be limited to one thread accessing them at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending a notification of action to the server and receiving the result of that action).  If this was done with all world data processing done on the server, a player wouldn&#039;t know whether or not an opponent had been defeated until the network responds with the result.  With data sharing, the client program can immediately determine, based on its copy of the local environment, that it is likely that the opponent has been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is this way of managing duplicated state via coordinated communication between the copies.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplicated state that can, in some circumstances, allow concurrent access, and that generally has less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
===A Broken Attempt===&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        // Just waiting here, so nothing goes in the loop&lt;br /&gt;
        // Could sleep, but this is a broken attempt anyway&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
    // this part of the code is known as a critical region&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What&#039;s the problem?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;TOCTOU&#039;&#039;: Time Of Check to Time Of Use&lt;br /&gt;
&lt;br /&gt;
If the thread went to sleep immediately before running the “using_list = 1;” line, another thread might also get access to the list, since using_list is still zero.&lt;br /&gt;
&lt;br /&gt;
===Fixing the Broken Attempt===&lt;br /&gt;
&#039;&#039;Atomic operation&#039;&#039;: an operation that cannot be “split apart”&lt;br /&gt;
&lt;br /&gt;
A single machine instruction can be made atomic easily, but sequences of instructions are not guaranteed to be atomic in hardware.  As such, special machine instructions are needed to handle this.&lt;br /&gt;
&lt;br /&gt;
====Single-Processor System====&lt;br /&gt;
* One can simply disable interrupts while acquiring access, since then no threads can jump in between the checking of using_list and the setting of using_list.&lt;br /&gt;
* Interrupts should only be disabled briefly, because devices depend on the system reacting to interrupts promptly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {        // disable interrupts&lt;br /&gt;
        cli&lt;br /&gt;
    }&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        asm {    // enable interrupts, because access wasn&#039;t acquired this time&lt;br /&gt;
            sti&lt;br /&gt;
        }&lt;br /&gt;
        sleep(0);// go to sleep so that other threads will run&lt;br /&gt;
        asm {    // disable interrupts to test again&lt;br /&gt;
            cli&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
    asm {        // enable interrupts, because this thread now has mutually exclusive access&lt;br /&gt;
        sti&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Multi-Processor System====&lt;br /&gt;
* For this, a special test-and-set instruction or a special swap instruction is needed.&lt;br /&gt;
* The operations need to be &#039;&#039;bus-locked&#039;&#039;, which the processor does using its &#039;&#039;bus snooping&#039;&#039; mechanism, to make sure that &#039;&#039;cache coherence&#039;&#039; is maintained (e.g. all caches get the updated value of using_list before the operation is considered complete).&lt;br /&gt;
* An alternative approach to the one presented below (which can also be implemented using test-and-set or swap) is the &#039;&#039;spinlock&#039;&#039;, which keeps checking repeatedly until the thread gets access.  This works for cases of “low contention”, i.e. when any other processor holding the access will only hold it for a short time, so the wait stays short.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;test-and-set&#039;&#039;: sets (to 1) a bit and indicates whether it was set (1) or clear (0) before&lt;br /&gt;
&#039;&#039;swap&#039;&#039;: one can swap a variable with the value 1, get back its previous value, and check whether it was 0 or 1 before&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using test-and-set:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        lock bts    using_list,0    ;bus-locked bit test-and-set on bit 0 of using_list&lt;br /&gt;
        jnc         GotAccess       ;got access if the bit was 0 at the beginning of the test-and-set&lt;br /&gt;
                                    ;previous value of bit is in carry flag, so no carry (nc) means bit was 0 before&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using swap:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        mov         eax,1&lt;br /&gt;
        lock xchg   using_list,eax  ;bus-locked exchange (swap)&lt;br /&gt;
        test        eax,eax         ;check if eax is 0 (i.e. check if using_list was 0)&lt;br /&gt;
        jz          GotAccess       ;got access if the using_list was 0 at the beginning of the exchange&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Making a massively parallel system with separate memories isn’t hard, but making it work with shared memory is very difficult.&lt;br /&gt;
&lt;br /&gt;
==Abstractions and High-Level Problems==&lt;br /&gt;
Test-and-set and swap are quite low-level and painful to use in complex cases, so we want to wrap them in an abstraction.&lt;br /&gt;
&lt;br /&gt;
===Semaphore===&lt;br /&gt;
A semaphore is the simplest abstraction.  It is a counter variable with two associated functions, &#039;&#039;up&#039;&#039; and &#039;&#039;down&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The initial value of the counter variable indicates how many threads are allowed to gain access concurrently.  This initial value is 1 for a &#039;&#039;binary semaphore&#039;&#039;, a.k.a. a &#039;&#039;mutex&#039;&#039;, a.k.a. a simple &#039;&#039;lock&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;down(sem)&#039;&#039;: This atomically tries to decrement the counter, and if it can&#039;t (i.e. the counter was already 0), it waits until it can (i.e. until the counter is above 0 again).  down is called to gain access.&lt;br /&gt;
&#039;&#039;up(sem)&#039;&#039;: This atomically increments the counter.  up is called to release access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         list_semaphore = 1;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    down(list_semaphore);   // Gain access to the list&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    up(list_semaphore);     // Release access to the list&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Producer-Consumer===&lt;br /&gt;
The (Bounded-Buffer) Producer-Consumer Problem itself is more of an abstraction of problems rather than an abstraction of solutions, but its common solutions are abstractions as well.  Suppose that there is a thread (or multiple threads) producing items in a buffer, and a thread (or multiple threads) consuming items in a buffer.  An example of this is an assembly line (referring to manufacturing, not assembly language).&lt;br /&gt;
&lt;br /&gt;
What needs to occur?&lt;br /&gt;
* The consumer needs to sleep when there is no work to be done (empty buffer), and wake up when there is once again work to be done (non-empty buffer).&lt;br /&gt;
* The producer needs to sleep when the buffer is full (it can&#039;t put any more in), and wake up when the buffer is no longer full (it can put more in again).&lt;br /&gt;
&lt;br /&gt;
A classic bug is for either the producer or the consumer to never wake up after some point.&lt;br /&gt;
&lt;br /&gt;
Modern OSs provide a simple solution to this problem, namely &#039;&#039;pipes&#039;&#039;.  All of the sleeping and waking up is handled by the OS when reading/writing from/to a pipe.  All that the producer must do is write to the pipe, and all that the consumer must do is read from the pipe.&lt;br /&gt;
&lt;br /&gt;
The following redirects the output of ls to the file foo.txt:&lt;br /&gt;
&amp;lt;code&amp;gt;ls &amp;amp;gt; foo.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following redirects foo.txt to the input of more:&lt;br /&gt;
&amp;lt;code&amp;gt;more &amp;amp;lt; foo.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following pipes the output of ls to sort, whose output is piped to tr, whose output is piped to more:&lt;br /&gt;
&amp;lt;code&amp;gt;ls | sort | tr -d &#039;_&#039; | more&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Why not let ls run to completion before calling sort?  What if ls never finishes?  Also, sort can be operating on a second processor while ls is still producing output.&lt;br /&gt;
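&lt;br /&gt;
The same mechanism is available to programs directly.  Below is a minimal C sketch of a pipe carrying one item from producer to consumer; one process plays both roles for brevity (pipe, write, and read are standard POSIX calls, but the demo itself is illustrative).&lt;br /&gt;

```c
#include "assert.h"
#include "string.h"
#include "unistd.h"

int fd[2];          /* fd[1] is the write end, fd[0] the read end */
char received[64];

int run_demo(void) {
    const char item[] = "work item";
    if (pipe(fd) != 0) return -1;
    write(fd[1], item, strlen(item));   /* producer: just write to the pipe */
    int n = read(fd[0], received, 63);  /* consumer: just read; would block if empty */
    received[n] = 0;
    close(fd[1]);
    close(fd[0]);
    return n;
}
```

All of the sleeping and waking (consumer blocking on an empty pipe, producer blocking on a full one) is handled by the OS inside read and write.&lt;br /&gt;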
&lt;br /&gt;
===Readers-Writers===&lt;br /&gt;
The Readers-Writers Problem is a case in which:&lt;br /&gt;
* Any number of readers are allowed access concurrently, and&lt;br /&gt;
* Only one writer is allowed access concurrently (i.e. mutual exclusion for writers)&lt;br /&gt;
&lt;br /&gt;
An example of this is an airline reservation system.  There is a database of seats sold, and one wants to allow as many people as possible to look at what seats are available, but only one person can buy a given seat at a time.  Then, as soon as someone buys a seat, everyone needs to see the updated info as soon as possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A solution to the problem:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When a writer wants to write:&lt;br /&gt;
* no new readers and no other writers&lt;br /&gt;
* wait for all readers to finish&lt;br /&gt;
* writer writes&lt;br /&gt;
* gives back access to readers and writers&lt;br /&gt;
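&lt;br /&gt;
The rules above can be sketched as admission logic.  This single-threaded C sketch only reports whether a thread may proceed; a real implementation would block and later wake threads instead of returning 0 (all names are illustrative).&lt;br /&gt;

```c
#include "assert.h"

int active_readers = 0;
int writer_active = 0;
int writer_waiting = 0;

int reader_try_enter(void) {
    if (writer_active != 0) return 0;   /* a writer is writing */
    if (writer_waiting != 0) return 0;  /* no new readers once a writer waits */
    active_readers = active_readers + 1;
    return 1;
}

void reader_exit(void) {
    active_readers = active_readers - 1;
}

int writer_try_enter(void) {
    if (writer_active != 0) return 0;   /* only one writer at a time */
    if (active_readers != 0) {          /* wait for all readers to finish */
        writer_waiting = 1;
        return 0;
    }
    writer_waiting = 0;
    writer_active = 1;
    return 1;
}

void writer_exit(void) {
    writer_active = 0;
}
```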
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A faster solution:&#039;&#039;&#039; (for cases where the data structure allows reading while writing)&lt;br /&gt;
&lt;br /&gt;
When a writer wants to write:&lt;br /&gt;
* no other writers&lt;br /&gt;
* writer writes&lt;br /&gt;
* gives back access to writers&lt;br /&gt;
&lt;br /&gt;
===Message Passing===&lt;br /&gt;
Producer-Consumer gets complicated with networking, because there are many separate Producer-Consumer relationships along the way from one process on one computer to another process on another computer.  This is a case where message passing is useful.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Network three-way handshake to open TCP connection:&#039;&#039;&#039;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Computer A&lt;br /&gt;
! Messages&lt;br /&gt;
! Computer B&lt;br /&gt;
|-&lt;br /&gt;
| closed&lt;br /&gt;
| &lt;br /&gt;
| closed&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| send SYN (synchronize) message &amp;amp;rarr;&lt;br /&gt;
| closed&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| &lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| &amp;amp;larr; send SYN ACK (acknowledge) message&lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| &lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| send ACK (acknowledge) message &amp;amp;rarr;&lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| &lt;br /&gt;
| open&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
This is robust and scalable (to the size of the Internet), but slow; shared memory is fast, but not scalable.&lt;br /&gt;
There are still many error cases to handle: this is code you don&#039;t want to mess with unless you really know what you&#039;re doing.&lt;br /&gt;
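&lt;br /&gt;
The handshake in the table above can be viewed as a small message-driven state machine.  The following is a minimal C sketch of that idea, not real networking code; the conn_state type and on_message function are assumptions for illustration.&lt;br /&gt;

```c
#include "assert.h"
#include "string.h"

/* Connection state of one endpoint, as in the table above. */
typedef enum { CLOSED, HALF_OPEN, OPEN } conn_state;

/* What the receiving endpoint does with each handshake message. */
conn_state on_message(conn_state s, const char msg[]) {
    if (strcmp(msg, "SYN") == 0) return HALF_OPEN;  /* B hears that A wants to connect */
    if (strcmp(msg, "SYN_ACK") == 0) return OPEN;   /* A knows B received its SYN */
    if (strcmp(msg, "ACK") == 0) return OPEN;       /* B knows A received its SYN ACK */
    return s;
}
```

A real implementation would also track the sending side (A becomes half-open as soon as it sends the SYN) and handle loss, retransmission, and the other error cases mentioned above.&lt;br /&gt;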
&lt;br /&gt;
==Advanced Issues==&lt;br /&gt;
&lt;br /&gt;
===Deadlocks===&lt;br /&gt;
A deadlock occurs when a set of threads gets into a state where each needs access that another thread in the set holds.  The simplest case is shown below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
int semaphoreA = 1;&lt;br /&gt;
int semaphoreB = 1;&lt;br /&gt;
&lt;br /&gt;
void thread1() {&lt;br /&gt;
    down(semaphoreA);   // Gain access to resource A&lt;br /&gt;
    down(semaphoreB);   // Gain access to resource B&lt;br /&gt;
&lt;br /&gt;
    // use A and B&lt;br /&gt;
&lt;br /&gt;
    up(semaphoreB);     // Release access to resource B&lt;br /&gt;
    up(semaphoreA);     // Release access to resource A&lt;br /&gt;
}&lt;br /&gt;
void thread2() {&lt;br /&gt;
    down(semaphoreB);   // Gain access to resource B&lt;br /&gt;
    down(semaphoreA);   // Gain access to resource A&lt;br /&gt;
&lt;br /&gt;
    // use A and B&lt;br /&gt;
&lt;br /&gt;
    up(semaphoreA);     // Release access to resource A&lt;br /&gt;
    up(semaphoreB);     // Release access to resource B&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A deadlock scenario with this example is when thread1 gets access to A, and then thread2 gets access to B before thread1 does.  Now thread1 is waiting for thread2 to release access to B, and thread2 is waiting for thread1 to release access to A.  Neither release will ever happen, so the threads are said to be deadlocked.&lt;br /&gt;
&lt;br /&gt;
In the case where there is a fixed set of controlled resources to be acquired by any given function, as long as all threads acquire them in the same order, a deadlock will be avoided (e.g. switch the first two lines of thread2).  However, this is not always possible in more complex situations.&lt;br /&gt;
&lt;br /&gt;
It is always possible to identify deadlocks when using binary semaphores if one extends them to keep track of which thread has access and which threads are waiting for access.  Conceptually, the state of the thread interaction can be represented by a directed graph, in which the nodes are threads and there is an edge from thread1 to thread2 if thread1 wants access that thread2 has.  Then any directed cycles represent deadlocks, and the nodes in the cycle are the deadlocked threads.  Unfortunately, there is no single way to avoid deadlocks in all cases.  Also note that identifying deadlocks is not the same as predicting them, which can be as difficult as avoiding them.&lt;br /&gt;
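&lt;br /&gt;
The wait-for graph can be sketched in C.  With binary semaphores extended as described, each blocked thread waits on exactly one holder, so the graph reduces to one entry per thread; the waits_for array and in_deadlock function below are assumptions for illustration.&lt;br /&gt;

```c
#include "assert.h"

#define NUM_THREADS 3

/* waits_for[i] is the thread that thread i is blocked on, or -1 if it
   is not waiting.  Here thread0 and thread1 wait on each other. */
int waits_for[NUM_THREADS] = { 1, 0, -1 };

/* A thread is deadlocked if following the chain of waits leads back to it. */
int in_deadlock(int start) {
    int t = waits_for[start];
    int steps = 1;
    while (t != -1) {
        if (t == start) return 1;            /* cycle found: deadlock */
        if (steps == NUM_THREADS) return 0;  /* chain longer than all threads */
        t = waits_for[t];
        steps = steps + 1;
    }
    return 0;                                /* chain ends at a running thread */
}
```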
&lt;br /&gt;
===Starvation===&lt;br /&gt;
Starvation is more general than deadlock; deadlock is one type of starvation.  Starvation refers to any situation in which one or more threads end up waiting either indefinitely or just unreasonably long to perform a particular operation.&lt;br /&gt;
&lt;br /&gt;
An example of starvation that is not normally called deadlock is when a thread gains access to a resource and then, because of scheduling or other decisions beyond its reasonable control, holds it for a very long time (possibly forever) without being able to release it.&lt;br /&gt;
&lt;br /&gt;
Consider three threads (on a single-processor system): thread1 is an idle priority thread (only run when nothing else is running), and both thread2 and thread3 are medium-to-high priority threads.  Then the following could occur:&lt;br /&gt;
# thread1 acquires mutually exclusive access to a resource&lt;br /&gt;
# thread2 starts running an intense computation that could take days, so thread1 is no longer running&lt;br /&gt;
# thread3 tries to acquire access to the resource that thread1 has&lt;br /&gt;
# thread1 and thread3 don&#039;t run again until thread2 has stopped, which could be a while&lt;br /&gt;
&lt;br /&gt;
This type of starvation can (almost) always be avoided if the operating system manages the thread concurrency.  Using the same extension of a binary semaphore as mentioned in the deadlocks section, the operating system knows that thread3 is waiting for thread1, so thread1 can be given thread3&#039;s higher priority until it releases the access, at which point it returns to its original priority.  This technique is known as priority inheritance.  It is still fair, because thread3 depends upon thread1 releasing access before it can continue, and the starvation issue is solved.&lt;br /&gt;
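&lt;br /&gt;
A minimal sketch of the priority computation, assuming the extended semaphore records its holder&#039;s priority and its waiters&#039; priorities (all names are illustrative):&lt;br /&gt;

```c
#include "assert.h"
#include "stdlib.h"

#define MAX_WAITERS 4

/* State of the extended semaphore in the scenario above. */
int holder_priority = 0;                   /* thread1: idle priority */
int waiter_priority[MAX_WAITERS] = { 7 };  /* thread3: high priority */
int num_waiters = 1;

int max2(int a, int b) {
    int d = a - b;
    if (d == abs(d)) return a;  /* d is non-negative, so a is at least b */
    return b;
}

/* Priority the holder should run at until it releases the semaphore. */
int effective_priority(void) {
    int best = holder_priority;
    int i;
    for (i = 0; i != num_waiters; i++) {
        best = max2(best, waiter_priority[i]);
    }
    return best;
}
```

Until the holder releases the semaphore, the scheduler would run it at effective_priority() instead of holder_priority, so thread1 is no longer stuck behind thread2.&lt;br /&gt;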
&lt;br /&gt;
However, there is no general solution to starvation problems, since they can be arbitrarily complex.  (Also, the lack of a general solution for deadlocks implies that there is a lack of a general solution for starvation, since deadlocks are a type of starvation.)&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1502</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1502"/>
		<updated>2007-10-05T01:46:55Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: added Advanced Issues section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches and issues involved with managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (i.e. using multiple threads), while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  For example, when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use a given drill at a time, or things get ugly.  Likewise, resources like a disk or a data structure often must be limited to one thread accessing them at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending a notification of action to the server and receiving the result of that action).  If this was done with all world data processing done on the server, a player wouldn&#039;t know whether or not an opponent had been defeated until the network responds with the result.  With data sharing, the client program can immediately determine, based on its copy of the local environment, that it is likely that the opponent has been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is this way of managing duplicated state via coordinated communication between the copies.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplicated state that can, in some circumstances, allow concurrent access, and that generally has less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
===A Broken Attempt===&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        // Just waiting here, so nothing goes in the loop&lt;br /&gt;
        // Could sleep, but this is a broken attempt anyway&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
    // this part of the code is known as a critical region&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What&#039;s the problem?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;TOCTOU&#039;&#039;: Time Of Check to Time Of Use&lt;br /&gt;
&lt;br /&gt;
If the thread went to sleep immediately before running the “using_list = 1;” line, another thread might also get access to the list, since using_list is still zero.&lt;br /&gt;
&lt;br /&gt;
===Fixing the Broken Attempt===&lt;br /&gt;
&#039;&#039;Atomic operation&#039;&#039;: an operation that cannot be “split apart”&lt;br /&gt;
&lt;br /&gt;
A single machine instruction can be made atomic easily, but sequences of instructions are not guaranteed to be atomic in hardware.  As such, special machine instructions are needed to handle this.&lt;br /&gt;
&lt;br /&gt;
====Single-Processor System====&lt;br /&gt;
* One can simply disable interrupts while acquiring access, since then no threads can jump in between the checking of using_list and the setting of using_list.&lt;br /&gt;
* Interrupts should only be disabled briefly, because devices depend on the system reacting to interrupts promptly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {        // disable interrupts&lt;br /&gt;
        cli&lt;br /&gt;
    }&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        asm {    // enable interrupts, because access wasn&#039;t acquired this time&lt;br /&gt;
            sti&lt;br /&gt;
        }&lt;br /&gt;
        sleep(0);// go to sleep so that other threads will run&lt;br /&gt;
        asm {    // disable interrupts to test again&lt;br /&gt;
            cli&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
    asm {        // enable interrupts, because this thread now has mutually exclusive access&lt;br /&gt;
        sti&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Multi-Processor System====&lt;br /&gt;
* For this, a special test-and-set instruction or a special swap instruction is needed.&lt;br /&gt;
* The operations need to be &#039;&#039;bus-locked&#039;&#039;, which the processor does using its &#039;&#039;bus snooping&#039;&#039; mechanism, to make sure that &#039;&#039;cache coherence&#039;&#039; is maintained (e.g. all caches get the updated value of using_list before the operation is considered complete).&lt;br /&gt;
* An alternative approach to the one presented below (which can also be implemented using test-and-set or swap) is the &#039;&#039;spinlock&#039;&#039;, which keeps checking repeatedly until the thread gets access.  This works for cases of “low contention”, i.e. when any other processor holding the access will only hold it for a short time, so the wait stays short.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;test-and-set&#039;&#039;: sets (to 1) a bit and indicates whether it was set (1) or clear (0) before&lt;br /&gt;
&#039;&#039;swap&#039;&#039;: one can swap a variable with the value 1, get back its previous value, and check whether it was 0 or 1 before&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using test-and-set:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        lock bts    using_list,0    ;bus-locked bit test-and-set on bit 0 of using_list&lt;br /&gt;
        jnc         GotAccess       ;got access if the bit was 0 at the beginning of the test-and-set&lt;br /&gt;
                                    ;previous value of bit is in carry flag, so no carry (nc) means bit was 0 before&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using swap:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        mov         eax,1&lt;br /&gt;
        lock xchg   using_list,eax  ;bus-locked exchange (swap); note xchg with a memory operand is implicitly locked&lt;br /&gt;
        test        eax,eax         ;check if eax is 0 (i.e. check if using_list was 0)&lt;br /&gt;
        jz          GotAccess       ;got access if the using_list was 0 at the beginning of the exchange&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Making a massively parallel system with separate memories isn’t hard, but making it work with shared memory is very difficult.&lt;br /&gt;
&lt;br /&gt;
==Abstractions and High-Level Problems==&lt;br /&gt;
Test-and-set and swap are quite low-level and painful to use in complex cases, so we want to wrap them in an abstraction.&lt;br /&gt;
&lt;br /&gt;
===Semaphore===&lt;br /&gt;
A semaphore is the simplest abstraction.  It is a counter variable with two associated functions, &#039;&#039;up&#039;&#039; and &#039;&#039;down&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The initial value of the counter variable indicates how many threads are allowed to gain access concurrently.  This initial value is 1 for a &#039;&#039;binary semaphore&#039;&#039;, a.k.a. a &#039;&#039;mutex&#039;&#039;, a.k.a. a simple &#039;&#039;lock&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;down(sem)&#039;&#039;: This atomically tries to decrement the counter, and if it can&#039;t (i.e. the counter was already 0), it waits until it can (i.e. until the counter is above 0 again).  down is called to gain access.&lt;br /&gt;
&#039;&#039;up(sem)&#039;&#039;: This atomically increments the counter.  up is called to release access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         list_semaphore = 1;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    down(list_semaphore);   // Gain access to the list&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    up(list_semaphore);     // Release access to the list&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Producer-Consumer===&lt;br /&gt;
The (Bounded-Buffer) Producer-Consumer Problem itself is more an abstraction of problems than an abstraction of solutions, but its common solutions are abstractions as well.  Suppose that there is a thread (or multiple threads) producing items into a buffer, and a thread (or multiple threads) consuming items from the same buffer.  An example of this is an assembly line (referring to manufacturing, not assembly language).&lt;br /&gt;
&lt;br /&gt;
What needs to occur?&lt;br /&gt;
* The consumer needs to sleep when there is no work to be done (empty buffer), and wake up when there is once again work to be done (non-empty buffer).&lt;br /&gt;
* The producer needs to sleep when the buffer is full (it can&#039;t put any more in), and wake up when the buffer is no longer full (it can put more in again).&lt;br /&gt;
&lt;br /&gt;
A classic bug is for either the producer or the consumer to never wake up after some point.&lt;br /&gt;
&lt;br /&gt;
Modern OSs provide a simple solution to this problem, namely &#039;&#039;pipes&#039;&#039;.  All of the sleeping and waking up is handled by the OS when reading/writing from/to a pipe.  All that the producer must do is write to the pipe, and all that the consumer must do is read from the pipe.&lt;br /&gt;
&lt;br /&gt;
The following redirects (rather than pipes) the output of ls to the file foo.txt:&lt;br /&gt;
&amp;lt;code&amp;gt;ls &amp;amp;gt; foo.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following redirects the contents of foo.txt to the input of more:&lt;br /&gt;
&amp;lt;code&amp;gt;more &amp;amp;lt; foo.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following pipes the output of ls to sort, whose output is piped to tr, whose output is piped to more:&lt;br /&gt;
&amp;lt;code&amp;gt;ls | sort | tr -d &#039;_&#039; | more&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Why not let ls run to completion before calling sort?  What if ls never finishes?  Also, sort can be operating on a second processor while ls is still producing output.&lt;br /&gt;
&lt;br /&gt;
===Readers-Writers===&lt;br /&gt;
The Readers-Writers Problem is a case in which:&lt;br /&gt;
* Any number of readers are allowed access concurrently, and&lt;br /&gt;
* Only one writer is allowed access concurrently (i.e. mutual exclusion for writers)&lt;br /&gt;
&lt;br /&gt;
An example of this is airline reservation.  There is a database of seats sold, and one wants to allow as many people as possible to look at what seats are available, but only one person can buy a seat at a time.  Then, as soon as someone buys a seat, that information needs to be propagated to everyone as quickly as possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A solution to the problem:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When a writer wants to write:&lt;br /&gt;
* no new readers and no other writers&lt;br /&gt;
* wait for all readers to finish&lt;br /&gt;
* writer writes&lt;br /&gt;
* gives back access to readers and writers&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A faster solution:&#039;&#039;&#039; (for cases where the data structure allows reading while writing)&lt;br /&gt;
&lt;br /&gt;
When a writer wants to write:&lt;br /&gt;
* no other writers&lt;br /&gt;
* writer writes&lt;br /&gt;
* gives back access to writers&lt;br /&gt;
&lt;br /&gt;
===Message Passing===&lt;br /&gt;
Producer-Consumer gets complicated with networking, because there are many separate Producer-Consumer relationships along the way from one process on one computer to another process on another computer.  This is a case where message passing is useful.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Network three-way handshake to open TCP connection:&#039;&#039;&#039;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Computer A&lt;br /&gt;
! Messages&lt;br /&gt;
! Computer B&lt;br /&gt;
|-&lt;br /&gt;
| closed&lt;br /&gt;
| &lt;br /&gt;
| closed&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| send SYN (synchronize) message &amp;amp;rarr;&lt;br /&gt;
| closed&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| &lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| &amp;amp;larr; send SYN ACK (acknowledge) message&lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| &lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| send ACK (acknowledge) message &amp;amp;rarr;&lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| &lt;br /&gt;
| open&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
This is robust (it scales to the size of the Internet) but slow, whereas shared memory is fast but doesn’t scale.&lt;br /&gt;
There are still a lot of error cases to be handled: this is code you don’t want to mess with unless you really know what you’re doing.&lt;br /&gt;
&lt;br /&gt;
==Advanced Issues==&lt;br /&gt;
&lt;br /&gt;
===Deadlocks===&lt;br /&gt;
A deadlock is when a set of threads get into a state where they all need access that other threads in the set have.  The simplest case is shown below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
int semaphoreA = 1;&lt;br /&gt;
int semaphoreB = 1;&lt;br /&gt;
&lt;br /&gt;
void thread1() {&lt;br /&gt;
    down(semaphoreA);   // Gain access to resource A&lt;br /&gt;
    down(semaphoreB);   // Gain access to resource B&lt;br /&gt;
&lt;br /&gt;
    // use A and B&lt;br /&gt;
&lt;br /&gt;
    up(semaphoreB);     // Release access to resource B&lt;br /&gt;
    up(semaphoreA);     // Release access to resource A&lt;br /&gt;
}&lt;br /&gt;
void thread2() {&lt;br /&gt;
    down(semaphoreB);   // Gain access to resource B&lt;br /&gt;
    down(semaphoreA);   // Gain access to resource A&lt;br /&gt;
&lt;br /&gt;
    // use A and B&lt;br /&gt;
&lt;br /&gt;
    up(semaphoreA);     // Release access to resource A&lt;br /&gt;
    up(semaphoreB);     // Release access to resource B&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A deadlock scenario with this example is when thread1 gets access to A, then thread2 gets access to B before thread1 gets access to B.  This means that thread1 is waiting for thread2 to release access to B, and thread2 is waiting for thread1 to release access to A, and neither will ever happen, thus the threads are said to be deadlocked.&lt;br /&gt;
&lt;br /&gt;
In the case where there is a fixed set of controlled resources to be acquired by any given function, as long as all threads acquire them in the same order, a deadlock will be avoided (e.g. switch the first two lines of thread2).  However, this is not always possible in more complex situations.&lt;br /&gt;
&lt;br /&gt;
It is always possible to identify deadlocks when using binary semaphores if one extends them to keep track of which thread has access and which threads are waiting for access.  Conceptually, the state of the thread interaction can be represented by a directed graph, in which the nodes are threads and there is an edge from thread1 to thread2 if thread1 wants access that thread2 has.  Then any directed cycles represent deadlocks, and the nodes in the cycle are the deadlocked threads.  Unfortunately, there is no single way to avoid deadlocks in all cases.  Also note that identifying deadlocks is not the same as predicting them, which can be as difficult as avoiding them.&lt;br /&gt;
&lt;br /&gt;
===Starvation===&lt;br /&gt;
Starvation in the case of concurrency is when a thread gets access to a resource and then, because of scheduling or other decisions beyond its reasonable control, cannot run to release that access for a long time (possibly forever), starving the threads waiting for the resource.&lt;br /&gt;
&lt;br /&gt;
Consider three threads (on a single-processor system): thread1 is an idle priority thread (only run when nothing else is running), and both thread2 and thread3 are medium-to-high priority threads.  Then the following could occur:&lt;br /&gt;
# thread1 acquires mutually exclusive access to a resource&lt;br /&gt;
# thread2 starts running an intense computation that could take days, so thread1 is no longer running&lt;br /&gt;
# thread3 tries to acquire access to the resource that thread1 has&lt;br /&gt;
# thread1 and thread3 don&#039;t run again until thread2 has stopped, which could be a while&lt;br /&gt;
&lt;br /&gt;
Unlike deadlocks, this can (almost) always be avoided if the operating system manages the thread concurrency.  Using the same extension of a binary semaphore as mentioned in the deadlocks section, the operating system can know that thread3 is waiting for thread1, and as such, thread1 should be given the higher priority of thread3 until it releases the access, at which point it returns to its original priority (a technique known as &#039;&#039;priority inheritance&#039;&#039;).  This is still fair, because thread3 depends upon thread1 releasing access before it can continue, and the starvation issue is solved.&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1501</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1501"/>
		<updated>2007-10-04T19:53:54Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: edited Readers-Writers&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches and issues involved with managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (i.e. using multiple threads), while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  An example of this is that when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use the same drill at the same time, or things get ugly.  Likewise, things like a disk or a data structure often require a limit of one thread accessing it at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending a notification of action to the server and receiving the result of that action).  If this was done with all world data processing done on the server, a player wouldn&#039;t know whether or not an opponent had been defeated until the network responds with the result.  With data sharing, the client program can immediately determine, based on its copy of the local environment, that it is likely that the opponent has been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is this way of managing duplicated state via coordinated communication between the copies.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplicated state that, in some circumstances, can allow concurrent access, and generally with less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
===A Broken Attempt===&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        // Just waiting here, so nothing goes in the loop&lt;br /&gt;
        // Could sleep, but this is a broken attempt anyway&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
    // this part of the code is known as a critical region&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What&#039;s the problem?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;TOCTOU&#039;&#039;: Time Of Check to Time Of Use&lt;br /&gt;
&lt;br /&gt;
If the thread went to sleep immediately before running the “using_list = 1;” line, another thread might also get access to the list, since using_list is still zero.&lt;br /&gt;
&lt;br /&gt;
===Fixing the Broken Attempt===&lt;br /&gt;
&#039;&#039;Atomic operation&#039;&#039;: an operation that cannot be “split apart”&lt;br /&gt;
&lt;br /&gt;
A single machine instruction can be made atomic easily, but a sequence of instructions is not guaranteed to be atomic in hardware.  As such, special machine instructions are needed to handle this.&lt;br /&gt;
&lt;br /&gt;
====Single-Processor System====&lt;br /&gt;
* One can simply disable interrupts while acquiring access, since then no threads can jump in between the checking of using_list and the setting of using_list.&lt;br /&gt;
* Interrupts should only be disabled briefly, because devices depend on the system reacting to interrupts promptly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {        // disable interrupts&lt;br /&gt;
        cli&lt;br /&gt;
    }&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        asm {    // enable interrupts, because access wasn&#039;t acquired this time&lt;br /&gt;
            sti&lt;br /&gt;
        }&lt;br /&gt;
        sleep(0);// go to sleep so that other threads will run&lt;br /&gt;
        asm {    // disable interrupts to test again&lt;br /&gt;
            cli&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
    asm {        // enable interrupts, because this thread now has mutually exclusive access&lt;br /&gt;
        sti&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Multi-Processor System====&lt;br /&gt;
* For this, a special test-and-set instruction or a special swap instruction is needed.&lt;br /&gt;
* The operations need to be &#039;&#039;bus-locked&#039;&#039;, which the processor does using its &#039;&#039;bus snooping&#039;&#039; mechanism, to make sure that &#039;&#039;cache coherence&#039;&#039; is maintained (e.g. all caches get the updated value of using_list before the operation is considered complete).&lt;br /&gt;
* An alternative approach to the one presented below (one that can also be implemented using test-and-set or swap) is the &#039;&#039;spinlock&#039;&#039;: keep checking repeatedly until the thread gets access.  This works well under “low contention”, i.e. when the thread won’t have to check for long, because the processor that currently has access will only hold it for a short period of time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;test-and-set&#039;&#039;: sets (to 1) a bit and indicates whether it was set (1) or clear (0) before&lt;br /&gt;
&#039;&#039;swap&#039;&#039;: one can swap a variable with the value 1, get back its previous value, and check whether it was 0 or 1 before&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using test-and-set:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        lock bts    using_list,0    ;bus-locked bit test-and-set on bit 0 of using_list&lt;br /&gt;
        jnc         GotAccess       ;got access if the bit was 0 at the beginning of the test-and-set&lt;br /&gt;
                                    ;previous value of bit is in carry flag, so no carry (nc) means bit was 0 before&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using swap:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        mov         eax,1&lt;br /&gt;
        lock xchg   using_list,eax  ;bus-locked exchange (swap); note xchg with a memory operand is implicitly locked&lt;br /&gt;
        test        eax,eax         ;check if eax is 0 (i.e. check if using_list was 0)&lt;br /&gt;
        jz          GotAccess       ;got access if the using_list was 0 at the beginning of the exchange&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Making a massively parallel system with separate memories isn’t hard, but making it work with shared memory is very difficult.&lt;br /&gt;
&lt;br /&gt;
==Abstractions and High-Level Problems==&lt;br /&gt;
Test-and-set and swap are quite low-level and painful to use in complex cases, so we want to wrap them in an abstraction.&lt;br /&gt;
&lt;br /&gt;
===Semaphore===&lt;br /&gt;
A semaphore is the simplest abstraction.  It is a counter variable with two associated functions, &#039;&#039;up&#039;&#039; and &#039;&#039;down&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The initial value of the counter variable indicates how many threads are allowed to gain access concurrently.  This initial value is 1 for a &#039;&#039;binary semaphore&#039;&#039;, a.k.a. a &#039;&#039;mutex&#039;&#039;, a.k.a. a simple &#039;&#039;lock&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;down(sem)&#039;&#039;: This atomically tries to decrement the counter, and if it can&#039;t (i.e. the counter was already 0), it waits until it can (i.e. until the counter is above 0 again).  down is called to gain access.&lt;br /&gt;
&#039;&#039;up(sem)&#039;&#039;: This atomically increments the counter.  up is called to release access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         list_semaphore = 1;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    down(list_semaphore);   // Gain access to the list&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    up(list_semaphore);     // Release access to the list&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Producer-Consumer===&lt;br /&gt;
The (Bounded-Buffer) Producer-Consumer Problem itself is more an abstraction of problems than an abstraction of solutions, but its common solutions are abstractions as well.  Suppose that there is a thread (or multiple threads) producing items into a buffer, and a thread (or multiple threads) consuming items from the same buffer.  An example of this is an assembly line (referring to manufacturing, not assembly language).&lt;br /&gt;
&lt;br /&gt;
What needs to occur?&lt;br /&gt;
* The consumer needs to sleep when there is no work to be done (empty buffer), and wake up when there is once again work to be done (non-empty buffer).&lt;br /&gt;
* The producer needs to sleep when the buffer is full (it can&#039;t put any more in), and wake up when the buffer is no longer full (it can put more in again).&lt;br /&gt;
&lt;br /&gt;
A classic bug is for either the producer or the consumer to never wake up after some point.&lt;br /&gt;
&lt;br /&gt;
Modern OSs provide a simple solution to this problem, namely &#039;&#039;pipes&#039;&#039;.  All of the sleeping and waking up is handled by the OS when reading/writing from/to a pipe.  All that the producer must do is write to the pipe, and all that the consumer must do is read from the pipe.&lt;br /&gt;
&lt;br /&gt;
The following redirects (rather than pipes) the output of ls to the file foo.txt:&lt;br /&gt;
&amp;lt;code&amp;gt;ls &amp;amp;gt; foo.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following redirects the contents of foo.txt to the input of more:&lt;br /&gt;
&amp;lt;code&amp;gt;more &amp;amp;lt; foo.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following pipes the output of ls to sort, whose output is piped to tr, whose output is piped to more:&lt;br /&gt;
&amp;lt;code&amp;gt;ls | sort | tr -d &#039;_&#039; | more&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Why not let ls run to completion before calling sort?  What if ls never finishes?  Also, sort can be operating on a second processor while ls is still producing output.&lt;br /&gt;
&lt;br /&gt;
===Readers-Writers===&lt;br /&gt;
The Readers-Writers Problem is a case in which:&lt;br /&gt;
* Any number of readers are allowed access concurrently, and&lt;br /&gt;
* Only one writer is allowed access concurrently (i.e. mutual exclusion for writers)&lt;br /&gt;
&lt;br /&gt;
An example of this is airline reservation.  There is a database of seats sold, and one wants to allow as many people as possible to look at what seats are available, but only one person can buy a seat at a time.  Then, as soon as someone buys a seat, that information needs to be propagated to everyone as quickly as possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A solution to the problem:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When a writer wants to write:&lt;br /&gt;
* no new readers and no other writers&lt;br /&gt;
* wait for all readers to finish&lt;br /&gt;
* writer writes&lt;br /&gt;
* gives back access to readers and writers&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A faster solution:&#039;&#039;&#039; (for cases where the data structure allows reading while writing)&lt;br /&gt;
&lt;br /&gt;
When a writer wants to write:&lt;br /&gt;
* no other writers&lt;br /&gt;
* writer writes&lt;br /&gt;
* gives back access to writers&lt;br /&gt;
&lt;br /&gt;
===Message Passing===&lt;br /&gt;
Producer-Consumer gets complicated with networking, because there are many separate Producer-Consumer relationships along the way from one process on one computer to another process on another computer.  This is a case where message passing is useful.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Network three-way handshake to open TCP connection:&#039;&#039;&#039;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Computer A&lt;br /&gt;
! Messages&lt;br /&gt;
! Computer B&lt;br /&gt;
|-&lt;br /&gt;
| closed&lt;br /&gt;
| &lt;br /&gt;
| closed&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| send SYN (synchronize) message &amp;amp;rarr;&lt;br /&gt;
| closed&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| &lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| &amp;amp;larr; send SYN ACK (acknowledge) message&lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| &lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| send ACK (acknowledge) message &amp;amp;rarr;&lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| &lt;br /&gt;
| open&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
This is robust (it scales to the size of the Internet) but slow, whereas shared memory is fast but doesn’t scale.&lt;br /&gt;
There are still a lot of error cases to be handled: this is code you don’t want to mess with unless you really know what you’re doing.&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1500</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1500"/>
		<updated>2007-10-04T19:52:23Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: edited Abstractions&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches and issues involved with managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (i.e. using multiple threads), while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  An example of this is that when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use the same drill at the same time, or things get ugly.  Likewise, things like a disk or a data structure often require a limit of one thread accessing it at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending a notification of action to the server and receiving the result of that action).  If this was done with all world data processing done on the server, a player wouldn&#039;t know whether or not an opponent had been defeated until the network responds with the result.  With data sharing, the client program can immediately determine, based on its copy of the local environment, that it is likely that the opponent has been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is this way of managing duplicated state via coordinated communication between the copies.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplicated state that, in some circumstances, can allow concurrent access, and generally with less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
===A Broken Attempt===&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        // Just waiting here, so nothing goes in the loop&lt;br /&gt;
        // Could sleep, but this is a broken attempt anyway&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
    // this part of the code is known as a critical region&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What&#039;s the problem?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;TOCTOU&#039;&#039;: Time Of Check to Time Of Use&lt;br /&gt;
&lt;br /&gt;
If the thread went to sleep immediately before running the “using_list = 1;” line, another thread might also get access to the list, since using_list is still zero.&lt;br /&gt;
&lt;br /&gt;
===Fixing the Broken Attempt===&lt;br /&gt;
&#039;&#039;Atomic operation&#039;&#039;: an operation that cannot be “split apart”&lt;br /&gt;
&lt;br /&gt;
A single machine instruction can easily be made atomic, but sequences of multiple instructions are not guaranteed to be atomic by the hardware.  As such, special machine instructions are needed to handle this.&lt;br /&gt;
&lt;br /&gt;
====Single-Processor System====&lt;br /&gt;
* One can simply disable interrupts while acquiring access, since then no threads can jump in between the checking of using_list and the setting of using_list.&lt;br /&gt;
* Interrupts should only be disabled briefly, however, because devices depend on the system reacting to interrupts promptly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {        // disable interrupts&lt;br /&gt;
        cli&lt;br /&gt;
    }&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        asm {    // enable interrupts, because access wasn&#039;t acquired this time&lt;br /&gt;
            sti&lt;br /&gt;
        }&lt;br /&gt;
        sleep(0);// go to sleep so that other threads will run&lt;br /&gt;
        asm {    // disable interrupts to test again&lt;br /&gt;
            cli&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
    asm {        // enable interrupts, because this thread now has mutually exclusive access&lt;br /&gt;
        sti&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Multi-Processor System====&lt;br /&gt;
* For this, a special test-and-set instruction or a special swap instruction is needed.&lt;br /&gt;
* The operations need to be &#039;&#039;bus-locked&#039;&#039;, which the processor does using its &#039;&#039;bus snooping&#039;&#039; mechanism, to make sure that &#039;&#039;cache coherence&#039;&#039; is maintained (e.g. all caches get the updated value of using_list before the operation is considered complete).&lt;br /&gt;
* An alternative approach to the one presented below (which can likewise be implemented using test-and-set or swap) is the &#039;&#039;spinlock&#039;&#039;: keep checking repeatedly, without sleeping, until the thread gets access.  This works well under “low contention”, i.e. when the wait will be short because whichever processor currently has access will only hold it for a short period of time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;test-and-set&#039;&#039;: sets (to 1) a bit and indicates whether it was set (1) or clear (0) before&lt;br /&gt;
&#039;&#039;swap&#039;&#039;: one can swap a variable with the value 1, get back its previous value, and check whether it was 0 or 1 before&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using test-and-set:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        lock bts    using_list,0    ;bus-locked bit test-and-set on bit 0 of using_list&lt;br /&gt;
        jnc         GotAccess       ;got access if the bit was 0 at the beginning of the test-and-set&lt;br /&gt;
                                    ;previous value of bit is in carry flag, so no carry (nc) means bit was 0 before&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using swap:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        mov         eax,1&lt;br /&gt;
        lock xchg   using_list,eax  ;bus-locked exchange (swap)&lt;br /&gt;
        test        eax,eax         ;check if eax is 0 (i.e. check if using_list was 0)&lt;br /&gt;
        jz          GotAccess       ;got access if the using_list was 0 at the beginning of the exchange&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Making a massively parallel system with separate memories isn’t hard, but making it work with shared memory is very difficult.&lt;br /&gt;
&lt;br /&gt;
==Abstractions and High-Level Problems==&lt;br /&gt;
Test-and-set and swap are quite low-level and painful to use in complex cases, so we want to wrap them in an abstraction.&lt;br /&gt;
&lt;br /&gt;
===Semaphore===&lt;br /&gt;
A semaphore is the simplest abstraction.  It is a counter variable with two associated functions, &#039;&#039;up&#039;&#039; and &#039;&#039;down&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The initial value of the counter variable indicates how many threads are allowed to gain access concurrently.  This initial value is 1 for a &#039;&#039;binary semaphore&#039;&#039;, a.k.a. a &#039;&#039;mutex&#039;&#039;, a.k.a. a simple &#039;&#039;lock&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;down(sem)&#039;&#039;: This atomically tries to decrement the counter, and if it can&#039;t (i.e. the counter was already 0), it waits until it can (i.e. until the counter is above 0 again).  down is called to gain access.&lt;br /&gt;
&#039;&#039;up(sem)&#039;&#039;: This atomically increments the counter.  up is called to release access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         list_semaphore = 1;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    down(list_semaphore);   // Gain access to the list&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    up(list_semaphore);     // Release access to the list&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Producer-Consumer===&lt;br /&gt;
The (Bounded-Buffer) Producer-Consumer Problem is more an abstraction of problems than of solutions, but its common solutions are abstractions as well.  Suppose that one thread (or several) produces items into a buffer, and another thread (or several) consumes items from that buffer.  An example of this is an assembly line (referring to manufacturing, not assembly language).&lt;br /&gt;
&lt;br /&gt;
What needs to occur?&lt;br /&gt;
* The consumer needs to sleep when there is no work to be done (empty buffer), and wake up when there is once again work to be done (non-empty buffer).&lt;br /&gt;
* The producer needs to sleep when the buffer is full (it can&#039;t put any more in), and wake up when the buffer is no longer full (it can put more in again).&lt;br /&gt;
&lt;br /&gt;
A classic bug is for either the producer or the consumer to never wake up after some point.&lt;br /&gt;
&lt;br /&gt;
Modern OSs provide a simple solution to this problem, namely &#039;&#039;pipes&#039;&#039;.  All of the sleeping and waking up is handled by the OS when reading/writing from/to a pipe.  All that the producer must do is write to the pipe, and all that the consumer must do is read from the pipe.&lt;br /&gt;
&lt;br /&gt;
The following redirects the output of ls to the file foo.txt:&lt;br /&gt;
&amp;lt;code&amp;gt;ls &amp;amp;gt; foo.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following redirects foo.txt to the input of more:&lt;br /&gt;
&amp;lt;code&amp;gt;more &amp;amp;lt; foo.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following pipes the output of ls to sort, whose output is piped to tr, whose output is piped to more:&lt;br /&gt;
&amp;lt;code&amp;gt;ls | sort | tr -d &#039;_&#039; | more&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Why not let ls run to completion before calling sort?  What if ls never finishes?  Also, sort can be operating on a second processor while ls is still producing output.&lt;br /&gt;
&lt;br /&gt;
===Readers-Writers===&lt;br /&gt;
The Readers-Writers Problem is a case in which:&lt;br /&gt;
* Any number of readers are allowed access concurrently, and&lt;br /&gt;
* Only one writer is allowed access concurrently (i.e. mutual exclusion for writers)&lt;br /&gt;
&lt;br /&gt;
An example of this is airline reservation.  There is a database of seats sold, and one wants to allow as many people as possible to look at which seats are available, but only one person can buy a seat at a time.  Then, as soon as someone buys a seat, that change needs to be propagated to everyone as soon as possible.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A solution to the problem:&#039;&#039;&#039;&lt;br /&gt;
When a writer wants to write:&lt;br /&gt;
* no new readers and no other writers&lt;br /&gt;
* wait for all readers to finish&lt;br /&gt;
* writer writes&lt;br /&gt;
* gives back access to readers and writers&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A faster solution:&#039;&#039;&#039; (for cases where the data structure allows reading while writing)&lt;br /&gt;
When a writer wants to write:&lt;br /&gt;
* no other writers&lt;br /&gt;
* writer writes&lt;br /&gt;
* gives back access to writers&lt;br /&gt;
&lt;br /&gt;
===Message Passing===&lt;br /&gt;
Producer-Consumer gets complicated with networking, because there are many separate Producer-Consumer relationships along the way from one process on one computer to another process on another computer.  This is a case where message passing is useful.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Network three-way handshake to open TCP connection:&#039;&#039;&#039;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Computer A&lt;br /&gt;
! Messages&lt;br /&gt;
! Computer B&lt;br /&gt;
|-&lt;br /&gt;
| closed&lt;br /&gt;
| &lt;br /&gt;
| closed&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| send SYN (synchronize) message &amp;amp;rarr;&lt;br /&gt;
| closed&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| &lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| &amp;amp;larr; send SYN ACK (acknowledge) message&lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| &lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| send ACK (acknowledge) message &amp;amp;rarr;&lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| &lt;br /&gt;
| open&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
This is robust (it scales to the size of the Internet) but slow, whereas shared memory is fast but doesn’t scale.&lt;br /&gt;
There are still a lot of error cases to be handled: this is code you don’t want to mess with unless you really know what you’re doing.&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1499</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1499"/>
		<updated>2007-10-04T19:49:26Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: added Message Passing&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches and issues involved with managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (i.e. using multiple threads), while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  For example, when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use the same drill at the same time, or things get ugly.  Likewise, things like a disk or a data structure often require a limit of one thread accessing it at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending notification of an action to the server and receiving the result of that action).  If all world-data processing were done on the server, a player wouldn&#039;t know whether an opponent had been defeated until the network responded with the result.  With data sharing, the client program can immediately determine, based on its copy of the local environment, that the opponent has likely been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is this way of managing duplication of state via coordinated communication between copies.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplication of state that, in some circumstances, can allow concurrent access, and that generally has less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
===A Broken Attempt===&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        // Just waiting here, so nothing goes in the loop&lt;br /&gt;
        // Could sleep, but this is a broken attempt anyway&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
    // this part of the code is known as a critical region&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What&#039;s the problem?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;TOCTOU&#039;&#039;: Time Of Check to Time Of Use&lt;br /&gt;
&lt;br /&gt;
If the thread went to sleep immediately before running the “using_list = 1;” line, another thread might also get access to the list, since using_list is still zero.&lt;br /&gt;
&lt;br /&gt;
===Fixing the Broken Attempt===&lt;br /&gt;
&#039;&#039;Atomic operation&#039;&#039;: an operation that cannot be “split apart”&lt;br /&gt;
&lt;br /&gt;
A single machine instruction can easily be made atomic, but sequences of multiple instructions are not guaranteed to be atomic by the hardware.  As such, special machine instructions are needed to handle this.&lt;br /&gt;
&lt;br /&gt;
====Single-Processor System====&lt;br /&gt;
* One can simply disable interrupts while acquiring access, since then no threads can jump in between the checking of using_list and the setting of using_list.&lt;br /&gt;
* Interrupts should only be disabled briefly, however, because devices depend on the system reacting to interrupts promptly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {        // disable interrupts&lt;br /&gt;
        cli&lt;br /&gt;
    }&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        asm {    // enable interrupts, because access wasn&#039;t acquired this time&lt;br /&gt;
            sti&lt;br /&gt;
        }&lt;br /&gt;
        sleep(0);// go to sleep so that other threads will run&lt;br /&gt;
        asm {    // disable interrupts to test again&lt;br /&gt;
            cli&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
    asm {        // enable interrupts, because this thread now has mutually exclusive access&lt;br /&gt;
        sti&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Multi-Processor System====&lt;br /&gt;
* For this, a special test-and-set instruction or a special swap instruction is needed.&lt;br /&gt;
* The operations need to be &#039;&#039;bus-locked&#039;&#039;, which the processor does using its &#039;&#039;bus snooping&#039;&#039; mechanism, to make sure that &#039;&#039;cache coherence&#039;&#039; is maintained (e.g. all caches get the updated value of using_list before the operation is considered complete).&lt;br /&gt;
* An alternative approach to the one presented below (which can likewise be implemented using test-and-set or swap) is the &#039;&#039;spinlock&#039;&#039;: keep checking repeatedly, without sleeping, until the thread gets access.  This works well under “low contention”, i.e. when the wait will be short because whichever processor currently has access will only hold it for a short period of time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;test-and-set&#039;&#039;: sets (to 1) a bit and indicates whether it was set (1) or clear (0) before&lt;br /&gt;
&#039;&#039;swap&#039;&#039;: one can swap a variable with the value 1, get back its previous value, and check whether it was 0 or 1 before&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using test-and-set:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        lock bts    using_list,0    ;bus-locked bit test-and-set on bit 0 of using_list&lt;br /&gt;
        jnc         GotAccess       ;got access if the bit was 0 at the beginning of the test-and-set&lt;br /&gt;
                                    ;previous value of bit is in carry flag, so no carry (nc) means bit was 0 before&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using swap:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        mov         eax,1&lt;br /&gt;
        lock xchg   using_list,eax  ;bus-locked exchange (swap)&lt;br /&gt;
        test        eax,eax         ;check if eax is 0 (i.e. check if using_list was 0)&lt;br /&gt;
        jz          GotAccess       ;got access if the using_list was 0 at the beginning of the exchange&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Making a massively parallel system with separate memories isn’t hard, but making it work with shared memory is very difficult.&lt;br /&gt;
&lt;br /&gt;
==Abstractions==&lt;br /&gt;
Test-and-set and swap are quite low-level and painful to use in complex cases, so we want to wrap them in an abstraction.&lt;br /&gt;
&lt;br /&gt;
===Semaphore===&lt;br /&gt;
A semaphore is the simplest abstraction.  It is a counter variable with two associated functions, &#039;&#039;up&#039;&#039; and &#039;&#039;down&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The initial value of the counter variable indicates how many threads are allowed to gain access concurrently.  This initial value is 1 for a &#039;&#039;binary semaphore&#039;&#039;, a.k.a. a &#039;&#039;mutex&#039;&#039;, a.k.a. a simple &#039;&#039;lock&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;down(sem)&#039;&#039;: This atomically tries to decrement the counter, and if it can&#039;t (i.e. the counter was already 0), it waits until it can (i.e. until the counter is above 0 again).  down is called to gain access.&lt;br /&gt;
&#039;&#039;up(sem)&#039;&#039;: This atomically increments the counter.  up is called to release access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         list_semaphore = 1;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    down(list_semaphore);   // Gain access to the list&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    up(list_semaphore);     // Release access to the list&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Producer-Consumer===&lt;br /&gt;
The (Bounded-Buffer) Producer-Consumer Problem is more an abstraction of problems than of solutions, but its common solutions are abstractions as well.  Suppose that one thread (or several) produces items into a buffer, and another thread (or several) consumes items from that buffer.  An example of this is an assembly line (referring to manufacturing, not assembly language).&lt;br /&gt;
&lt;br /&gt;
What needs to occur?&lt;br /&gt;
* The consumer needs to sleep when there is no work to be done (empty buffer), and wake up when there is once again work to be done (non-empty buffer).&lt;br /&gt;
* The producer needs to sleep when the buffer is full (it can&#039;t put any more in), and wake up when the buffer is no longer full (it can put more in again).&lt;br /&gt;
&lt;br /&gt;
A classic bug is for either the producer or the consumer to never wake up after some point.&lt;br /&gt;
&lt;br /&gt;
Modern OSs provide a simple solution to this problem, namely &#039;&#039;pipes&#039;&#039;.  All of the sleeping and waking up is handled by the OS when reading/writing from/to a pipe.  All that the producer must do is write to the pipe, and all that the consumer must do is read from the pipe.&lt;br /&gt;
&lt;br /&gt;
The following redirects the output of ls to the file foo.txt:&lt;br /&gt;
&amp;lt;code&amp;gt;ls &amp;amp;gt; foo.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following redirects foo.txt to the input of more:&lt;br /&gt;
&amp;lt;code&amp;gt;more &amp;amp;lt; foo.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following pipes the output of ls to sort, whose output is piped to tr, whose output is piped to more:&lt;br /&gt;
&amp;lt;code&amp;gt;ls | sort | tr -d &#039;_&#039; | more&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Why not let ls run to completion before calling sort?  What if ls never finishes?  Also, sort can be operating on a second processor while ls is still producing output.&lt;br /&gt;
&lt;br /&gt;
===Readers-Writers===&lt;br /&gt;
The Readers-Writers Problem is a case in which:&lt;br /&gt;
* Any number of readers are allowed access concurrently, and&lt;br /&gt;
* Only one writer is allowed access concurrently (i.e. mutual exclusion for writers)&lt;br /&gt;
&lt;br /&gt;
An example of this is airline reservation.  There is a database of seats sold, and one wants to allow as many people as possible to look at which seats are available, but only one person can buy a seat at a time.  Then, as soon as someone buys a seat, that change needs to be propagated to everyone as soon as possible.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A solution to the problem:&#039;&#039;&#039;&lt;br /&gt;
When a writer wants to write:&lt;br /&gt;
* no new readers and no other writers&lt;br /&gt;
* wait for all readers to finish&lt;br /&gt;
* writer writes&lt;br /&gt;
* gives back access to readers and writers&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A faster solution:&#039;&#039;&#039; (for cases where the data structure allows reading while writing)&lt;br /&gt;
When a writer wants to write:&lt;br /&gt;
* no other writers&lt;br /&gt;
* writer writes&lt;br /&gt;
* gives back access to writers&lt;br /&gt;
&lt;br /&gt;
===Message Passing===&lt;br /&gt;
Producer-Consumer gets complicated with networking, because there are many separate Producer-Consumer relationships along the way from one process on one computer to another process on another computer.  This is a case where message passing is useful.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Network three-way handshake to open TCP connection:&#039;&#039;&#039;&lt;br /&gt;
{|&lt;br /&gt;
! Computer A&lt;br /&gt;
! Messages&lt;br /&gt;
! Computer B&lt;br /&gt;
|-&lt;br /&gt;
| closed&lt;br /&gt;
| &lt;br /&gt;
| closed&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| send SYN (synchronize) message &amp;amp;rarr;&lt;br /&gt;
| closed&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| &lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| half-open&lt;br /&gt;
| &amp;amp;larr; send SYN ACK (acknowledge) message&lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| &lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| send ACK (acknowledge) message &amp;amp;rarr;&lt;br /&gt;
| half-open&lt;br /&gt;
|-&lt;br /&gt;
| open&lt;br /&gt;
| &lt;br /&gt;
| open&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
This is robust (it scales to the size of the Internet) but slow, whereas shared memory is fast but doesn’t scale.&lt;br /&gt;
There are still a lot of error cases to be handled: this is code you don’t want to mess with unless you really know what you’re doing.&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1498</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1498"/>
		<updated>2007-10-04T19:37:29Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: finished Producer-Consumer and Readers-Writers&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches and issues involved with managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (i.e. using multiple threads), while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  For example, when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use the same drill at the same time, or things get ugly.  Likewise, things like a disk or a data structure often require a limit of one thread accessing it at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending notification of an action to the server and receiving the result of that action).  If all world-data processing were done on the server, a player wouldn&#039;t know whether an opponent had been defeated until the network responded with the result.  With data sharing, the client program can immediately determine, based on its copy of the local environment, that the opponent has likely been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is this way of managing duplication of state via coordinated communication between copies.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplication of state that, in some circumstances, can allow concurrent access, and that generally has less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
===A Broken Attempt===&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        // Just waiting here, so nothing goes in the loop&lt;br /&gt;
        // Could sleep, but this is a broken attempt anyway&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
    // this part of the code is known as a critical region&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What&#039;s the problem?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;TOCTOU&#039;&#039;: Time Of Check to Time Of Use&lt;br /&gt;
&lt;br /&gt;
If the thread went to sleep immediately before running the “using_list = 1;” line, another thread might also get access to the list, since using_list is still zero.&lt;br /&gt;
&lt;br /&gt;
===Fixing the Broken Attempt===&lt;br /&gt;
&#039;&#039;Atomic operation&#039;&#039;: an operation that cannot be “split apart”&lt;br /&gt;
&lt;br /&gt;
A single machine instruction can easily be made atomic, but sequences of instructions are not guaranteed to be atomic in hardware.  As such, special machine instructions are needed to handle this.&lt;br /&gt;
&lt;br /&gt;
====Single-Processor System====&lt;br /&gt;
* One can simply disable interrupts while acquiring access, since then no threads can jump in between the checking of using_list and the setting of using_list.&lt;br /&gt;
* Interrupts should only be disabled briefly, because devices depend on the system reacting to interrupts promptly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {        // disable interrupts&lt;br /&gt;
        cli&lt;br /&gt;
    }&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        asm {    // enable interrupts, because access wasn&#039;t acquired this time&lt;br /&gt;
            sti&lt;br /&gt;
        }&lt;br /&gt;
        sleep(0);// go to sleep so that other threads will run&lt;br /&gt;
        asm {    // disable interrupts to test again&lt;br /&gt;
            cli&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
    asm {        // enable interrupts, because this thread now has mutually exclusive access&lt;br /&gt;
        sti&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Multi-Processor System====&lt;br /&gt;
* For this, a special test-and-set instruction or a special swap instruction is needed.&lt;br /&gt;
* The operations need to be &#039;&#039;bus-locked&#039;&#039;, which the processor does using its &#039;&#039;bus snooping&#039;&#039; mechanism, to make sure that &#039;&#039;cache coherence&#039;&#039; is maintained (e.g. all caches get the updated value of using_list before the operation is considered complete).&lt;br /&gt;
* An alternative approach to the one presented below (which can also be implemented using test-and-set or swap) is the &#039;&#039;spinlock&#039;&#039;, which keeps checking repeatedly until the thread gets access.  This works in cases of “low contention”, i.e. when the wait will be short because the other processor holds access only for a short period of time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;test-and-set&#039;&#039;: sets (to 1) a bit and indicates whether it was set (1) or clear (0) before&lt;br /&gt;
&#039;&#039;swap&#039;&#039;: one can swap a variable with the value 1, get back its previous value, and check whether it was 0 or 1 before&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using test-and-set:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        lock bts    using_list,0    ;bus-locked bit test-and-set on bit 0 of using_list&lt;br /&gt;
        jnc         GotAccess       ;got access if the bit was 0 at the beginning of the test-and-set&lt;br /&gt;
                                    ;previous value of bit is in carry flag, so no carry (nc) means bit was 0 before&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using swap:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        mov         eax,1&lt;br /&gt;
        lock xchg   using_list,eax  ;bus-locked exchange (swap)&lt;br /&gt;
        test        eax,eax         ;check if eax is 0 (i.e. check if using_list was 0)&lt;br /&gt;
        jz          GotAccess       ;got access if the using_list was 0 at the beginning of the exchange&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Making a massively parallel system with separate memories isn’t hard, but making it work with shared memory is very difficult.&lt;br /&gt;
&lt;br /&gt;
==Abstractions==&lt;br /&gt;
Test-and-set and swap are quite low-level and painful to use in complex cases, so we want to wrap them in an abstraction.&lt;br /&gt;
&lt;br /&gt;
===Semaphore===&lt;br /&gt;
A semaphore is the simplest abstraction.  It is a counter variable with two associated functions, &#039;&#039;up&#039;&#039; and &#039;&#039;down&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The initial value of the counter variable indicates how many threads are allowed to gain access concurrently.  This initial value is 1 for a &#039;&#039;binary semaphore&#039;&#039;, a.k.a. a &#039;&#039;mutex&#039;&#039;, a.k.a. a simple &#039;&#039;lock&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;down(sem)&#039;&#039;: This atomically tries to decrement the counter, and if it can&#039;t (i.e. the counter was already 0), it waits until it can (i.e. until the counter is above 0 again).  down is called to gain access.&lt;br /&gt;
&#039;&#039;up(sem)&#039;&#039;: This atomically increments the counter.  up is called to release access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         list_semaphore = 1;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    down(list_semaphore);   // Gain access to the list&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    up(list_semaphore);     // Release access to the list&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Producer-Consumer===&lt;br /&gt;
The (Bounded-Buffer) Producer-Consumer Problem is itself more an abstraction of problems than an abstraction of solutions, but its common solutions are abstractions as well.  Suppose that one thread (or several) produces items into a buffer, and another thread (or several) consumes items from that buffer.  An example of this is an assembly line (referring to manufacturing, not assembly language).&lt;br /&gt;
&lt;br /&gt;
What needs to occur?&lt;br /&gt;
* The consumer needs to sleep when there is no work to be done (empty buffer), and wake up when there is once again work to be done (non-empty buffer).&lt;br /&gt;
* The producer needs to sleep when the buffer is full (it can&#039;t put any more in), and wake up when the buffer is no longer full (it can put more in again).&lt;br /&gt;
&lt;br /&gt;
A classic bug is for either the producer or the consumer to never wake up after some point.&lt;br /&gt;
&lt;br /&gt;
Modern OSs provide a simple solution to this problem, namely &#039;&#039;pipes&#039;&#039;.  All of the sleeping and waking up is handled by the OS when reading/writing from/to a pipe.  All that the producer must do is write to the pipe, and all that the consumer must do is read from the pipe.&lt;br /&gt;
&lt;br /&gt;
The following redirects the output of ls to the file foo.txt:&lt;br /&gt;
&amp;lt;code&amp;gt;ls &amp;amp;gt; foo.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following redirects foo.txt to the input of more:&lt;br /&gt;
&amp;lt;code&amp;gt;more &amp;amp;lt; foo.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following pipes the output of ls to sort, whose output is piped to tr, whose output is piped to more:&lt;br /&gt;
&amp;lt;code&amp;gt;ls | sort | tr -d &#039;_&#039; | more&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Why not let ls run to completion before calling sort?  For one, ls might never finish.  Also, sort can be running on a second processor while ls is still producing output.&lt;br /&gt;
&lt;br /&gt;
===Readers-Writers===&lt;br /&gt;
The Readers-Writers Problem is a case in which:&lt;br /&gt;
* Any number of readers are allowed access concurrently, and&lt;br /&gt;
* Only one writer is allowed access concurrently (i.e. mutual exclusion for writers)&lt;br /&gt;
&lt;br /&gt;
An example of this is an airline reservation system.  There is a database of seats sold, and one wants to allow as many people as possible to see which seats are available, but only one person can buy a given seat at a time.  Then, as soon as someone buys a seat, everyone needs to see the updated information as soon as possible.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A solution to the problem&#039;&#039;&#039;&lt;br /&gt;
When a writer wants to write:&lt;br /&gt;
* no new readers are admitted, and no other writers may start&lt;br /&gt;
* the writer waits for all current readers to finish&lt;br /&gt;
* the writer writes&lt;br /&gt;
* access is then given back to readers and writers&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A faster solution&#039;&#039;&#039; (for cases where the data structure allows reading while writing)&lt;br /&gt;
When a writer wants to write:&lt;br /&gt;
* no other writers may start&lt;br /&gt;
* the writer writes&lt;br /&gt;
* access is then given back to writers&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE: I&#039;ll add the rest of the notes later today.&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1497</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1497"/>
		<updated>2007-10-04T19:14:20Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: more Producer-Consumer&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches and issues involved with managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (i.e. using multiple threads), while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  An example of this is that when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use a given drill at a time, or things get ugly.  Likewise, things like a disk or a data structure often require a limit of one thread accessing them at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending a notification of action to the server and receiving the result of that action).  If this was done with all world data processing done on the server, a player wouldn&#039;t know whether or not an opponent had been defeated until the network responds with the result.  With data sharing, the client program can immediately determine, based on its copy of the local environment, that it is likely that the opponent has been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is a way of managing this duplication of state via coordinated communication between the copies.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplication of state that, in some circumstances, allows concurrent access, and generally has less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
===A Broken Attempt===&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        // Just waiting here, so nothing goes in the loop&lt;br /&gt;
        // Could sleep, but this is a broken attempt anyway&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
    // this part of the code is known as a critical region&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What&#039;s the problem?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;TOCTOU&#039;&#039;: Time Of Check to Time Of Use&lt;br /&gt;
&lt;br /&gt;
If the thread went to sleep immediately before running the “using_list = 1;” line, another thread might also get access to the list, since using_list is still zero.&lt;br /&gt;
&lt;br /&gt;
===Fixing the Broken Attempt===&lt;br /&gt;
&#039;&#039;Atomic operation&#039;&#039;: an operation that cannot be “split apart”&lt;br /&gt;
&lt;br /&gt;
A single machine instruction can easily be made atomic, but sequences of instructions are not guaranteed to be atomic in hardware.  As such, special machine instructions are needed to handle this.&lt;br /&gt;
&lt;br /&gt;
====Single-Processor System====&lt;br /&gt;
* One can simply disable interrupts while acquiring access, since then no threads can jump in between the checking of using_list and the setting of using_list.&lt;br /&gt;
* Interrupts should only be disabled briefly, because devices depend on the system reacting to interrupts promptly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {        // disable interrupts&lt;br /&gt;
        cli&lt;br /&gt;
    }&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        asm {    // enable interrupts, because access wasn&#039;t acquired this time&lt;br /&gt;
            sti&lt;br /&gt;
        }&lt;br /&gt;
        sleep(0);// go to sleep so that other threads will run&lt;br /&gt;
        asm {    // disable interrupts to test again&lt;br /&gt;
            cli&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
    asm {        // enable interrupts, because this thread now has mutually exclusive access&lt;br /&gt;
        sti&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Multi-Processor System====&lt;br /&gt;
* For this, a special test-and-set instruction or a special swap instruction is needed.&lt;br /&gt;
* The operations need to be &#039;&#039;bus-locked&#039;&#039;, which the processor does using its &#039;&#039;bus snooping&#039;&#039; mechanism, to make sure that &#039;&#039;cache coherence&#039;&#039; is maintained (e.g. all caches get the updated value of using_list before the operation is considered complete).&lt;br /&gt;
* An alternative approach to the one presented below (which can also be implemented using test-and-set or swap) is the &#039;&#039;spinlock&#039;&#039;, which keeps checking repeatedly until the thread gets access.  This works in cases of “low contention”, i.e. when the wait will be short because the other processor holds access only for a short period of time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;test-and-set&#039;&#039;: sets (to 1) a bit and indicates whether it was set (1) or clear (0) before&lt;br /&gt;
&#039;&#039;swap&#039;&#039;: one can swap a variable with the value 1, get back its previous value, and check whether it was 0 or 1 before&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using test-and-set:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        lock bts    using_list,0    ;bus-locked bit test-and-set on bit 0 of using_list&lt;br /&gt;
        jnc         GotAccess       ;got access if the bit was 0 at the beginning of the test-and-set&lt;br /&gt;
                                    ;previous value of bit is in carry flag, so no carry (nc) means bit was 0 before&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using swap:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        mov         eax,1&lt;br /&gt;
        lock xchg   using_list,eax  ;bus-locked exchange (swap)&lt;br /&gt;
        test        eax,eax         ;check if eax is 0 (i.e. check if using_list was 0)&lt;br /&gt;
        jz          GotAccess       ;got access if the using_list was 0 at the beginning of the exchange&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Making a massively parallel system with separate memories isn’t hard, but making it work with shared memory is very difficult.&lt;br /&gt;
&lt;br /&gt;
==Abstractions==&lt;br /&gt;
Test-and-set and swap are quite low-level and painful to use in complex cases, so we want to wrap them in an abstraction.&lt;br /&gt;
&lt;br /&gt;
===Semaphore===&lt;br /&gt;
A semaphore is the simplest abstraction.  It is a counter variable with two associated functions, &#039;&#039;up&#039;&#039; and &#039;&#039;down&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The initial value of the counter variable indicates how many threads are allowed to gain access concurrently.  This initial value is 1 for a &#039;&#039;binary semaphore&#039;&#039;, a.k.a. a &#039;&#039;mutex&#039;&#039;, a.k.a. a simple &#039;&#039;lock&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;down(sem)&#039;&#039;: This atomically tries to decrement the counter, and if it can&#039;t (i.e. the counter was already 0), it waits until it can (i.e. until the counter is above 0 again).  down is called to gain access.&lt;br /&gt;
&#039;&#039;up(sem)&#039;&#039;: This atomically increments the counter.  up is called to release access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         list_semaphore = 1;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    down(list_semaphore);   // Gain access to the list&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    up(list_semaphore);     // Release access to the list&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Producer-Consumer===&lt;br /&gt;
The (Bounded-Buffer) Producer-Consumer Problem is itself more an abstraction of problems than an abstraction of solutions, but its common solutions are abstractions as well.  Suppose that one thread (or several) produces items into a buffer, and another thread (or several) consumes items from that buffer.  An example of this is an assembly line (referring to manufacturing, not assembly language).&lt;br /&gt;
&lt;br /&gt;
What needs to occur?&lt;br /&gt;
* The consumer needs to sleep when there is no work to be done (empty buffer), and wake up when there is once again work to be done (non-empty buffer).&lt;br /&gt;
* The producer needs to sleep when the buffer is full (it can&#039;t put any more in), and wake up when the buffer is no longer full (it can put more in again).&lt;br /&gt;
&lt;br /&gt;
A classic bug is for either the producer or the consumer to never wake up after some point.&lt;br /&gt;
&lt;br /&gt;
Modern OSs provide a simple solution to this problem, namely &#039;&#039;pipes&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE: I&#039;ll add the rest of the notes later today.&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1496</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1496"/>
		<updated>2007-10-04T19:08:10Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: added Semaphore and started on Producer-Consumer&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches and issues involved with managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (i.e. using multiple threads), while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  An example of this is that when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use a given drill at a time, or things get ugly.  Likewise, things like a disk or a data structure often require a limit of one thread accessing them at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending a notification of action to the server and receiving the result of that action).  If this was done with all world data processing done on the server, a player wouldn&#039;t know whether or not an opponent had been defeated until the network responds with the result.  With data sharing, the client program can immediately determine, based on its copy of the local environment, that it is likely that the opponent has been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is a way of managing this duplication of state via coordinated communication between the copies.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplication of state that, in some circumstances, allows concurrent access, and generally has less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
===A Broken Attempt===&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        // Just waiting here, so nothing goes in the loop&lt;br /&gt;
        // Could sleep, but this is a broken attempt anyway&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
    // this part of the code is known as a critical region&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What&#039;s the problem?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;TOCTOU&#039;&#039;: Time Of Check to Time Of Use&lt;br /&gt;
&lt;br /&gt;
If the thread went to sleep immediately before running the “using_list = 1;” line, another thread might also get access to the list, since using_list is still zero.&lt;br /&gt;
&lt;br /&gt;
===Fixing the Broken Attempt===&lt;br /&gt;
&#039;&#039;Atomic operation&#039;&#039;: an operation that cannot be “split apart”&lt;br /&gt;
&lt;br /&gt;
A single machine instruction can easily be made atomic, but sequences of instructions are not guaranteed to be atomic in hardware.  As such, special machine instructions are needed to handle this.&lt;br /&gt;
&lt;br /&gt;
====Single-Processor System====&lt;br /&gt;
* One can simply disable interrupts while acquiring access, since then no threads can jump in between the checking of using_list and the setting of using_list.&lt;br /&gt;
* Interrupts should only be disabled briefly, because devices depend on the system reacting to interrupts promptly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {        // disable interrupts&lt;br /&gt;
        cli&lt;br /&gt;
    }&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        asm {    // enable interrupts, because access wasn&#039;t acquired this time&lt;br /&gt;
            sti&lt;br /&gt;
        }&lt;br /&gt;
        sleep(0);// go to sleep so that other threads will run&lt;br /&gt;
        asm {    // disable interrupts to test again&lt;br /&gt;
            cli&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
    asm {        // enable interrupts, because this thread now has mutually exclusive access&lt;br /&gt;
        sti&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Multi-Processor System====&lt;br /&gt;
* For this, a special test-and-set instruction or a special swap instruction is needed.&lt;br /&gt;
* The operations need to be &#039;&#039;bus-locked&#039;&#039;, which the processor does using its &#039;&#039;bus snooping&#039;&#039; mechanism, to make sure that &#039;&#039;cache coherence&#039;&#039; is maintained (e.g. all caches get the updated value of using_list before the operation is considered complete).&lt;br /&gt;
* An alternative approach to the one presented below (which can also be implemented using test-and-set or swap) is the &#039;&#039;spinlock&#039;&#039;, which keeps checking repeatedly until the thread gets access.  This works in cases of “low contention”, i.e. when the wait will be short because the other processor holds access only for a short period of time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;test-and-set&#039;&#039;: sets (to 1) a bit and indicates whether it was set (1) or clear (0) before&lt;br /&gt;
&#039;&#039;swap&#039;&#039;: one can swap a variable with the value 1, get back its previous value, and check whether it was 0 or 1 before&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using test-and-set:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        lock bts    using_list,0    ;bus-locked bit test-and-set on bit 0 of using_list&lt;br /&gt;
        jnc         GotAccess       ;got access if the bit was 0 at the beginning of the test-and-set&lt;br /&gt;
                                    ;previous value of bit is in carry flag, so no carry (nc) means bit was 0 before&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using swap:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        mov         eax,1&lt;br /&gt;
        lock xchg   using_list,eax  ;bus-locked exchange (swap)&lt;br /&gt;
        test        eax,eax         ;check if eax is 0 (i.e. check if using_list was 0)&lt;br /&gt;
        jz          GotAccess       ;got access if the using_list was 0 at the beginning of the exchange&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Making a massively parallel system with separate memories isn’t hard, but making one work correctly with shared memory is very difficult.&lt;br /&gt;
&lt;br /&gt;
==Abstractions==&lt;br /&gt;
Test-and-set and swap are quite low-level and painful to use in complex cases, so we want to wrap them in an abstraction.&lt;br /&gt;
&lt;br /&gt;
===Semaphore===&lt;br /&gt;
A semaphore is the simplest abstraction.  It is a counter variable with two associated functions, &#039;&#039;up&#039;&#039; and &#039;&#039;down&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The initial value of the counter variable indicates how many threads are allowed to gain access concurrently.  This initial value is 1 for a &#039;&#039;binary semaphore&#039;&#039;, a.k.a. a &#039;&#039;mutex&#039;&#039;, a.k.a. a simple &#039;&#039;lock&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;down(sem)&#039;&#039;: This atomically tries to decrement the counter; if it can&#039;t (i.e. the counter is already 0), it waits until it can (i.e. until the counter is above 0 again).  &#039;&#039;down&#039;&#039; is called to gain access.&lt;br /&gt;
&#039;&#039;up(sem)&#039;&#039;: This atomically increments the counter.  &#039;&#039;up&#039;&#039; is called to release access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         list_semaphore = 1;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    down(list_semaphore);   // Gain access to the list&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    up(list_semaphore);     // Release access to the list&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
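The &#039;&#039;down&#039;&#039; and &#039;&#039;up&#039;&#039; operations can themselves be built from the low-level primitives above.  The following sketch (same pseudocode style as the examples above) assumes a test_and_set() helper wrapping the bus-locked bts sequence shown earlier; a real kernel semaphore would put waiting threads on a queue and wake one in up, rather than spinning and sleeping.&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
int sem   = 1;  // the semaphore counter (starts at 1, i.e. a binary semaphore)&lt;br /&gt;
int guard = 0;  // low-level lock protecting the counter itself&lt;br /&gt;
&lt;br /&gt;
void down() {&lt;br /&gt;
    while (1) {&lt;br /&gt;
        while (test_and_set(guard)) sleep(0);  // gain access to the counter&lt;br /&gt;
        if (sem &amp;gt; 0) {&lt;br /&gt;
            sem = sem - 1;  // counter was positive: take one unit and return&lt;br /&gt;
            guard = 0;&lt;br /&gt;
            return;&lt;br /&gt;
        }&lt;br /&gt;
        guard = 0;          // counter was 0: release the guard and retry later&lt;br /&gt;
        sleep(0);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void up() {&lt;br /&gt;
    while (test_and_set(guard)) sleep(0);&lt;br /&gt;
    sem = sem + 1;&lt;br /&gt;
    guard = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;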
&lt;br /&gt;
==Producer-Consumer==&lt;br /&gt;
The (Bounded-Buffer) Producer-Consumer problem is itself more an abstraction of problems than an abstraction of solutions, but its common solutions are abstractions as well.  Suppose that one thread (or several) is producing items into a buffer, and another thread (or several) is consuming items from that buffer.  An example of this is an assembly line (referring to manufacturing, not assembly language).&lt;br /&gt;
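The classic semaphore-based solution can be sketched as follows (the &#039;&#039;item&#039;&#039; type and the produce_item / consume_item helpers are placeholders): one counting semaphore tracks free slots, another tracks filled slots, and a binary semaphore guards the buffer itself.&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
#define N 16            // buffer capacity&lt;br /&gt;
&lt;br /&gt;
item buffer[N];&lt;br /&gt;
int  in  = 0;           // next slot for the producer to fill&lt;br /&gt;
int  out = 0;           // next slot for the consumer to empty&lt;br /&gt;
&lt;br /&gt;
int  empty_slots = N;   // counting semaphore: free slots in the buffer&lt;br /&gt;
int  full_slots  = 0;   // counting semaphore: items waiting in the buffer&lt;br /&gt;
int  mutex       = 1;   // binary semaphore guarding buffer, in, and out&lt;br /&gt;
&lt;br /&gt;
void producer() {&lt;br /&gt;
    item it = produce_item();&lt;br /&gt;
    down(empty_slots);  // wait for a free slot&lt;br /&gt;
    down(mutex);        // gain exclusive access to the buffer&lt;br /&gt;
    buffer[in] = it;&lt;br /&gt;
    in = (in + 1) % N;&lt;br /&gt;
    up(mutex);&lt;br /&gt;
    up(full_slots);     // announce that an item is ready&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void consumer() {&lt;br /&gt;
    down(full_slots);   // wait for an item&lt;br /&gt;
    down(mutex);&lt;br /&gt;
    item it = buffer[out];&lt;br /&gt;
    out = (out + 1) % N;&lt;br /&gt;
    up(mutex);&lt;br /&gt;
    up(empty_slots);    // announce that a slot is free&lt;br /&gt;
    consume_item(it);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that calling down(empty_slots) before down(mutex) matters: in the reverse order, a producer could hold the mutex while waiting for a free slot, deadlocking with the consumer.&lt;br /&gt;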
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE: I&#039;ll add the rest of the notes later today.&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1495</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1495"/>
		<updated>2007-10-04T16:00:49Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: added second example to Multi-Processor System&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches and issues involved with managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (i.e. using multiple threads), while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  For example, when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use a given drill at a time, or things get ugly.  Likewise, resources such as a disk or a data structure often must be limited to one thread accessing them at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending notification of an action to the server and receiving the result of that action).  If all world-data processing were done on the server, a player wouldn&#039;t know whether or not an opponent had been defeated until the server responded with the result.  With data sharing, the client program can immediately determine, based on its copy of the local environment, that the opponent has likely been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is this approach of managing duplicated state via coordinated communication between the copies.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplicated state that can, in some circumstances, allow concurrent access, and generally has less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
===A Broken Attempt===&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        // Just waiting here, so nothing goes in the loop&lt;br /&gt;
        // Could sleep, but this is a broken attempt anyway&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
    // this part of the code is known as a critical region&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What&#039;s the problem?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;TOCTOU&#039;&#039;: Time Of Check to Time Of Use&lt;br /&gt;
&lt;br /&gt;
If the thread went to sleep immediately before running the “using_list = 1;” line, another thread might also get access to the list, since using_list is still zero.&lt;br /&gt;
&lt;br /&gt;
===Fixing the Broken Attempt===&lt;br /&gt;
&#039;&#039;Atomic operation&#039;&#039;: an operation that cannot be “split apart”&lt;br /&gt;
&lt;br /&gt;
A single machine instruction can easily be made atomic, but sequences of multiple instructions are not guaranteed to be atomic by the hardware.  As such, special machine instructions are needed to handle this.&lt;br /&gt;
&lt;br /&gt;
====Single-Processor System====&lt;br /&gt;
* One can simply disable interrupts while acquiring access, since then no threads can jump in between the checking of using_list and the setting of using_list.&lt;br /&gt;
* Interrupts should be disabled only briefly, because devices depend on the system reacting to interrupts promptly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {        // disable interrupts&lt;br /&gt;
        cli&lt;br /&gt;
    }&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        asm {    // enable interrupts, because access wasn&#039;t acquired this time&lt;br /&gt;
            sti&lt;br /&gt;
        }&lt;br /&gt;
        sleep(0);// go to sleep so that other threads will run&lt;br /&gt;
        asm {    // disable interrupts to test again&lt;br /&gt;
            cli&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
    asm {        // enable interrupts, because this thread now has mutually exclusive access&lt;br /&gt;
        sti&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Multi-Processor System====&lt;br /&gt;
* For this, a special test-and-set instruction or a special swap instruction is needed.&lt;br /&gt;
* The operations need to be &#039;&#039;bus-locked&#039;&#039;, which the processor does using its &#039;&#039;bus snooping&#039;&#039; mechanism, to make sure that &#039;&#039;cache coherence&#039;&#039; is maintained (e.g. all caches get the updated value of using_list before the operation is considered complete).&lt;br /&gt;
* An alternative approach to the one presented below (which can also be implemented using test-and-set or swap) is known as a &#039;&#039;spinlock&#039;&#039;: keep checking repeatedly, without sleeping, until the thread gets access.  This works for cases of “low contention”, i.e. when the wait will be short because the processor that currently has access will hold it only for a short period of time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;test-and-set&#039;&#039;: sets (to 1) a bit and indicates whether it was set (1) or clear (0) before&lt;br /&gt;
&#039;&#039;swap&#039;&#039;: one can swap a variable with the value 1, get back its previous value, and check whether it was 0 or 1 before&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using test-and-set:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        lock bts    using_list,0    ;bus-locked bit test-and-set on bit 0 of using_list&lt;br /&gt;
        jnc         GotAccess       ;got access if the bit was 0 at the beginning of the test-and-set&lt;br /&gt;
                                    ;previous value of bit is in carry flag, so no carry (nc) means bit was 0 before&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Using swap:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        mov         eax,1&lt;br /&gt;
        lock xchg   using_list,eax  ;bus-locked exchange (swap)&lt;br /&gt;
        test        eax,eax         ;check if eax is 0 (i.e. check if using_list was 0)&lt;br /&gt;
        jz          GotAccess       ;got access if the using_list was 0 at the beginning of the exchange&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE: I&#039;ll add the rest of the notes later today.&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1494</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1494"/>
		<updated>2007-10-04T15:44:19Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: added Multi-Processor System&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches and issues involved with managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (i.e. using multiple threads), while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  For example, when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use a given drill at a time, or things get ugly.  Likewise, resources such as a disk or a data structure often must be limited to one thread accessing them at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending notification of an action to the server and receiving the result of that action).  If all world-data processing were done on the server, a player wouldn&#039;t know whether or not an opponent had been defeated until the server responded with the result.  With data sharing, the client program can immediately determine, based on its copy of the local environment, that the opponent has likely been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is this approach of managing duplicated state via coordinated communication between the copies.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplicated state that can, in some circumstances, allow concurrent access, and generally has less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
===A Broken Attempt===&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        // Just waiting here, so nothing goes in the loop&lt;br /&gt;
        // Could sleep, but this is a broken attempt anyway&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
    // this part of the code is known as a critical region&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What&#039;s the problem?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;TOCTOU&#039;&#039;: Time Of Check to Time Of Use&lt;br /&gt;
&lt;br /&gt;
If the thread went to sleep immediately before running the “using_list = 1;” line, another thread might also get access to the list, since using_list is still zero.&lt;br /&gt;
&lt;br /&gt;
===Fixing the Broken Attempt===&lt;br /&gt;
&#039;&#039;Atomic operation&#039;&#039;: an operation that cannot be “split apart”&lt;br /&gt;
&lt;br /&gt;
A single machine instruction can easily be made atomic, but sequences of multiple instructions are not guaranteed to be atomic by the hardware.  As such, special machine instructions are needed to handle this.&lt;br /&gt;
&lt;br /&gt;
====Single-Processor System====&lt;br /&gt;
* One can simply disable interrupts while acquiring access, since then no threads can jump in between the checking of using_list and the setting of using_list.&lt;br /&gt;
* Interrupts should be disabled only briefly, because devices depend on the system reacting to interrupts promptly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {        // disable interrupts&lt;br /&gt;
        cli&lt;br /&gt;
    }&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        asm {    // enable interrupts, because access wasn&#039;t acquired this time&lt;br /&gt;
            sti&lt;br /&gt;
        }&lt;br /&gt;
        sleep(0);// go to sleep so that other threads will run&lt;br /&gt;
        asm {    // disable interrupts to test again&lt;br /&gt;
            cli&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
    asm {        // enable interrupts, because this thread now has mutually exclusive access&lt;br /&gt;
        sti&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Multi-Processor System====&lt;br /&gt;
* For this, a special test-and-set instruction or a special swap instruction is needed.&lt;br /&gt;
* The operations need to be &#039;&#039;bus-locked&#039;&#039;, which the processor does using its &#039;&#039;bus snooping&#039;&#039; mechanism, to make sure that &#039;&#039;cache coherence&#039;&#039; is maintained (e.g. all caches get the updated value of using_list before the operation is considered complete).&lt;br /&gt;
* An alternative approach to the one presented below (which can also be implemented using test-and-set or swap) is known as a &#039;&#039;spinlock&#039;&#039;: keep checking repeatedly, without sleeping, until the thread gets access.  This works for cases of “low contention”, i.e. when the wait will be short because the processor that currently has access will hold it only for a short period of time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;test-and-set&#039;&#039;: sets (to 1) a bit and indicates whether it was set (1) or clear (0) before&lt;br /&gt;
&#039;&#039;swap&#039;&#039;: one can swap a variable with the value 1, get back its previous value, and check whether it was 0 or 1 before&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {&lt;br /&gt;
    CheckForAccess:&lt;br /&gt;
        lock bts    using_list,0    ;bus-locked bit test-and-set on bit 0 of using_list&lt;br /&gt;
        jnc         GotAccess       ;got access if the bit was 0 at the beginning of the test-and-set&lt;br /&gt;
                                    ;previous value of bit is in carry flag, so no carry (nc) means bit was 0 before&lt;br /&gt;
    }&lt;br /&gt;
    sleep(0);                       // go to sleep so that other threads will run&lt;br /&gt;
    asm {&lt;br /&gt;
        jmp         CheckForAccess&lt;br /&gt;
    GotAccess:&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE: I&#039;ll add the rest of the notes later today.&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1493</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1493"/>
		<updated>2007-10-04T15:23:16Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: fixed Single-Processor System example&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches and issues involved with managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (i.e. using multiple threads), while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  For example, when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use a given drill at a time, or things get ugly.  Likewise, resources such as a disk or a data structure often must be limited to one thread accessing them at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending notification of an action to the server and receiving the result of that action).  If all world-data processing were done on the server, a player wouldn&#039;t know whether or not an opponent had been defeated until the server responded with the result.  With data sharing, the client program can immediately determine, based on its copy of the local environment, that the opponent has likely been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is this approach of managing duplicated state via coordinated communication between the copies.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplicated state that can, in some circumstances, allow concurrent access, and generally has less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
===A Broken Attempt===&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        // Just waiting here, so nothing goes in the loop&lt;br /&gt;
        // Could sleep, but this is a broken attempt anyway&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
    // this part of the code is known as a critical region&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What&#039;s the problem?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;TOCTOU&#039;&#039;: Time Of Check to Time Of Use&lt;br /&gt;
&lt;br /&gt;
If the thread went to sleep immediately before running the “using_list = 1;” line, another thread might also get access to the list, since using_list is still zero.&lt;br /&gt;
&lt;br /&gt;
===Fixing the Broken Attempt===&lt;br /&gt;
&#039;&#039;Atomic operation&#039;&#039;: an operation that cannot be “split apart”&lt;br /&gt;
&lt;br /&gt;
A single machine instruction can easily be made atomic, but sequences of multiple instructions are not guaranteed to be atomic by the hardware.  As such, special machine instructions are needed to handle this.&lt;br /&gt;
&lt;br /&gt;
====Single-Processor System====&lt;br /&gt;
* One can simply disable interrupts while acquiring access, since then no threads can jump in between the checking of using_list and the setting of using_list.&lt;br /&gt;
* Interrupts should be disabled only briefly, because devices depend on the system reacting to interrupts promptly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    asm {        // disable interrupts&lt;br /&gt;
        cli&lt;br /&gt;
    }&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        asm {    // enable interrupts, because access wasn&#039;t acquired this time&lt;br /&gt;
            sti&lt;br /&gt;
        }&lt;br /&gt;
        sleep(0);// go to sleep so that other threads will run&lt;br /&gt;
        asm {    // disable interrupts to test again&lt;br /&gt;
            cli&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
    asm {        // enable interrupts, because this thread now has mutually exclusive access&lt;br /&gt;
        sti&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE: I&#039;ll add the rest of the notes later today.&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1492</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1492"/>
		<updated>2007-10-04T15:18:12Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: added part of Tackling Mutual Exclusion&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches and issues involved with managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (i.e. using multiple threads), while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  For example, when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use a given drill at a time, or things get ugly.  Likewise, resources such as a disk or a data structure often must be limited to one thread accessing them at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending notification of an action to the server and receiving the result of that action).  If all world-data processing were done on the server, a player wouldn&#039;t know whether or not an opponent had been defeated until the server responded with the result.  With data sharing, the client program can immediately determine, based on its copy of the local environment, that the opponent has likely been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is this approach of managing duplicated state via coordinated communication between the copies.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplicated state that can, in some circumstances, allow concurrent access, and generally has less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
===A Broken Attempt===&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    while (using_list) {&lt;br /&gt;
        // Just waiting here, so nothing goes in the loop&lt;br /&gt;
        // Could sleep, but this is a broken attempt anyway&lt;br /&gt;
    }&lt;br /&gt;
    using_list = 1;&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
    // this part of the code is known as a critical region&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What&#039;s the problem?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;TOCTOU&#039;&#039;: Time Of Check to Time Of Use&lt;br /&gt;
&lt;br /&gt;
If the thread went to sleep immediately before running the “using_list = 1;” line, another thread might also get access to the list, since using_list is still zero.&lt;br /&gt;
&lt;br /&gt;
===Fixing the Broken Attempt===&lt;br /&gt;
&#039;&#039;Atomic operation&#039;&#039;: an operation that cannot be “split apart”&lt;br /&gt;
&lt;br /&gt;
A single machine instruction can easily be made atomic, but sequences of multiple instructions are not guaranteed to be atomic by the hardware.  As such, special machine instructions are needed to handle this.&lt;br /&gt;
&lt;br /&gt;
====Single-Processor System====&lt;br /&gt;
* One can simply disable interrupts while acquiring access, since then no threads can jump in between the checking of using_list and the setting of using_list.&lt;br /&gt;
* Interrupts should be disabled only briefly, because devices depend on the system reacting to interrupts promptly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
linked_list list;&lt;br /&gt;
int         using_list = 0;&lt;br /&gt;
&lt;br /&gt;
void use_list() {&lt;br /&gt;
    while (1) {&lt;br /&gt;
        asm {        // disable interrupts&lt;br /&gt;
            cli&lt;br /&gt;
        }&lt;br /&gt;
        if (!using_list) {&lt;br /&gt;
            using_list = 1;    // claim the list; nothing can interleave here&lt;br /&gt;
            asm {    // enable interrupts&lt;br /&gt;
                sti&lt;br /&gt;
            }&lt;br /&gt;
            break;&lt;br /&gt;
        }&lt;br /&gt;
        asm {        // enable interrupts&lt;br /&gt;
            sti&lt;br /&gt;
        }&lt;br /&gt;
        sleep(0);    // let other threads run, then check again&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // use the list&lt;br /&gt;
&lt;br /&gt;
    using_list = 0;    // release the list&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE: I&#039;ll add the rest of the notes later today.&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1491</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1491"/>
		<updated>2007-10-04T05:05:17Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: rewording to avoid confusion&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches to, and issues involved in, managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (e.g. using multiple threads) while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  For example, when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use the same drill at the same time, or things get ugly.  Likewise, resources like a disk or a data structure often must be limited to one thread accessing them at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending a notification of an action to the server and receiving the result of that action).  If all world-data processing were done on the server, a player would not know whether an opponent had been defeated until the network responded with the result.  With data sharing, the client program can immediately determine, based on its local copy of the environment, that the opponent has likely been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is this way of managing duplicated state: coordinated communication keeps the copies up to date.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplicated state: all threads access one shared copy, which in some circumstances allows concurrent access, and generally with less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
&#039;&#039;&#039;NOTE: I&#039;ll add the rest of the notes tomorrow.&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1490</id>
		<title>Basic Synchronization Principles</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Basic_Synchronization_Principles&amp;diff=1490"/>
		<updated>2007-10-04T05:03:44Z</updated>

		<summary type="html">&lt;p&gt;Ndickson: Added part of the article&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article discusses approaches to, and issues involved in, managing &#039;&#039;concurrency&#039;&#039;.  The objective is to be able to do multiple things at the same time (e.g. using multiple threads) while avoiding coordination problems.&lt;br /&gt;
&lt;br /&gt;
==Three Concurrency Problems to Address==&lt;br /&gt;
===Synchronization===&lt;br /&gt;
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one at a time) instead of all at once, because of dependencies between the tasks.  For example, when building a house, the person building the basement needs to tell the person building the ground floor when the basement is done, so that work on the ground floor can start.&lt;br /&gt;
&lt;br /&gt;
In a more complex case, it may be that parts of the ground floor are only dependent on parts of the basement, so the person building the ground floor may be able to begin work sooner.  This more complex case is related to the Producer-Consumer pattern discussed below.&lt;br /&gt;
&lt;br /&gt;
===Mutual Exclusion===&lt;br /&gt;
Mutual exclusion is controlling access to shared resources.  Continuing with the construction example, only one person can use the same drill at the same time, or things get ugly.  Likewise, resources like a disk or a data structure often must be limited to one thread accessing them at a time to ensure the desired effect.&lt;br /&gt;
&lt;br /&gt;
===Data Sharing===&lt;br /&gt;
Data sharing, which is closely related to mutual exclusion, is managing the duplication of data structures or devices.  These duplicated resources would in turn send updates to a manager of sorts, and would be informed of updates by the manager.&lt;br /&gt;
&lt;br /&gt;
An example of this is what Massively-Multiplayer Online (MMO) games do to avoid certain problems associated with network lag (the delay between sending a notification of an action to the server and receiving the result of that action).  If all world-data processing were done on the server, a player would not know whether an opponent had been defeated until the network responded with the result.  With data sharing, the client program can immediately determine, based on its local copy of the environment, that the opponent has likely been defeated, and then receive either confirmation or rejection of that assumption when the server responds with its &amp;quot;official version&amp;quot; of events.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Message Passing&#039;&#039; is this way of managing duplicated state: coordinated communication keeps the copies up to date.&lt;br /&gt;
&#039;&#039;Shared Memory&#039;&#039; is a way to simulate duplicated state: all threads access one shared copy, which in some circumstances allows concurrent access, and generally with less overhead than message passing.&lt;br /&gt;
&lt;br /&gt;
==Tackling Mutual Exclusion==&lt;br /&gt;
&#039;&#039;&#039;NOTE: I&#039;ll add the rest of the notes tomorrow.&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Ndickson</name></author>
	</entry>
</feed>