Difference between revisions of "COMP 3000 Essay 1 2010 Question 6"

From Soma-notes

Revision as of 01:37, 15 October 2010

Question

What are some examples of notable systems that have failed due to flawed efforts at mutual exclusion and/or race conditions? How significant was the failure in each case?

Answer

Overview

A race condition occurs when two or more processes access shared data concurrently and at least one of them writes to it. The outcome then depends on the exact timing of those processes, producing unpredictable results that can escalate into a major system failure.
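The lost-update pattern behind the failures examined below can be sketched deterministically. This is an illustrative model, not code from any of the systems discussed: the process names and step schedules are invented, and the "scheduler" is just a list of interleaved steps.

```python
# Deterministic simulation of a lost-update race: two "processes" each
# read a shared value, then write back read_value + 1. If their steps
# interleave, one update is lost.
def run(schedule):
    shared = 0
    local = {}
    for proc, step in schedule:
        if step == "read":
            local[proc] = shared        # process takes a private copy
        else:                           # "write"
            shared = local[proc] + 1    # writes back its (possibly stale) copy
    return shared

# Serialized: A reads and writes, then B does — both updates land.
print(run([("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")]))  # 2

# Interleaved: both read 0 before either writes — B's write clobbers A's.
print(run([("A", "read"), ("B", "read"), ("A", "write"), ("B", "write")]))  # 1
```

The same code produces different results purely because of ordering, which is why these bugs are so timing-sensitive and hard to reproduce.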

Introduction

Race conditions are notorious in the history of software bugs. Examples range from a section of Java code halting an application, to the corruption of web services, to the failure of a life-critical system with fatal consequences. The system failures caused by race conditions share common patterns, and all stem from inadequate management of shared memory.

During development of these systems, programmers typically do not realize that their designs incorporate a race condition until one occurs. Race conditions are unexpected and infrequent, and the specific failure conditions are difficult to reproduce, so tracing the origin of a failure can take weeks or even years, depending on the complexity of the system. A lack of testing before deployment may also be responsible.

Race conditions occasionally recur in the same software, for example when a race condition is mistaken for a different problem, or when a system contains multiple race conditions. The systems discussed below were also all written in languages where memory management is the programmer's responsibility, such as assembly and C/C++.

In this article, we will examine the most well known cases involving race conditions. For each of the cases we will explain why the race condition occurred, its significance and the aftermath of the failure.


Examples

Therac-25

The Therac-25 was an x-ray machine developed in Canada by Atomic Energy of Canada Limited (AECL) and used to treat patients with radiation therapy. Between 1985 and 1987, six patients were given radiation overdoses by the machine; three of them died as a result. The incident is quite possibly the most infamous software bug involving a race condition. The cause of the incidents was traced back to a programming error that created a race condition. The Therac-25 software was written by a single programmer in PDP-11 assembly language, and portions of the code were reused from the earlier Therac-6 and Therac-20 machines. The main portion of the code runs a function called “Treat”, which determines which of the program's 8 main subroutines should be executing. The keyboard handler task ran concurrently with “Treat”.

Main Subroutines

The Therac-25 made use of 8 main subroutines. Datent had its own helper routine, called Magnet, which positioned the x-ray's magnets to administer the correct dosage of radiation.

  1. Reset
  2. Datent
    1. Magnet
  3. Set Up Done
  4. Set Up Test
  5. Patient Treatment
  6. Pause Treatment
  7. Terminate Treatment
  8. Date, Time, ID Changes


The Datent subroutine communicated with the keyboard handler task through a shared variable that signaled whether the operator had finished entering the necessary data. Once Datent set this flag, the main program was allowed to move on to the next subroutine. If the flag was not set, the “Treat” task rescheduled itself, in turn rescheduling the Datent subroutine. This continued until the shared data-entry flag was set.


The Datent subroutine was also responsible for preparing the x-ray to administer the correct radiation dosage. Before returning control to “Treat” to move on to the next of the 8 subroutines, Datent first called the “Magnet” subroutine, which parsed the operator's input and moved the x-ray machine's magnets into position to deliver the prescribed radiation. The Magnet subroutine took approximately 8 seconds to complete, and the keyboard handler continued to run while it did. If the operator modified the data before “Magnet” returned, the changes were not registered: the x-ray strength remained set to its prior value, ignoring the operator's changes.


Example Bug Situation

The situation below illustrates a chain of events that would result in an unintended dose of radiation being administered.

  1. Operator types up data, presses return
  2. (Magnet subroutine is initiated)
  3. Operator realizes there is an extra 0 in the radiation intensity field
  4. Operator quickly moves cursor up and fixes the error and presses return again.
  5. Magnets are set to the previous power level; the subroutine returns
  6. Program moves on to next subroutine without registering changes
  7. Patient is administered a lethal overdose of radiation
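The chain of events above can be modelled with a small sketch. This is an invented illustration of the latched-flag pattern, not the Therac-25's actual PDP-11 code: once the shared data-entry flag is latched, edits typed while “Magnet” is still running are never re-read.

```python
# Hypothetical model of the Therac-25 latched-flag bug (names invented):
# the first <return> latches the shared "data complete" flag, and the
# dose is read exactly once. Corrections typed while the magnets are
# being positioned arrive afterwards and are silently ignored.
def treat(entries):
    """entries: dose values typed over time; later entries represent
    corrections made while the Magnet subroutine is still running."""
    dose_in_use = None
    data_complete = False
    for dose in entries:
        if not data_complete:
            dose_in_use = dose       # Datent reads the entry once...
            data_complete = True     # ...and latches the shared flag
        # The keyboard handler keeps accepting input, but the latched
        # flag means the machine never re-reads `dose`.
    return dose_in_use

print(treat([200]))         # 200 — single entry, correct dose used
print(treat([2000, 200]))   # 2000 — correction ignored, stale dose used
```

The second call shows the fatal case: the operator's fix arrives after the flag is latched, so the machine proceeds with the stale value.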


Root Causes & Outcomes

A number of factors contributed to the failure of the Therac-25. The code was written by a single programmer, and no proper testing was conducted. In addition, code was reused from previous-generation machines without verifying that it was fully compatible with the new hardware: the earlier Therac-6 and Therac-20 had hardware interlocks that prevented these race conditions from causing harm. It is clear that proper planning and forethought could have prevented this incident.

Six incidents involving the Therac-25 took place between 1985 and 1987, and it took two years before the FDA took the machines out of service. The FDA forced AECL to make modifications to the Therac-25 before it was allowed back on the market. The software was fixed to suspend all other operations while the magnets positioned themselves to administer the correct radiation strength. In addition, a dead man's switch was added: a foot pedal that the operator must hold down to enable motion of the x-ray machine. This prevented the operator from being unaware of changes in the x-ray machine's state.

After these changes were made the Therac-25 was reintroduced into the market in 1988. Some of the machines are still in service today.


Black-out of 2003

An energy management system failed due to a race condition, ultimately leading to Ontario and parts of the United States experiencing a black-out.

The incident occurred on August 14th, 2003, when a power plant located in Eastlake, Ohio went offline. The system was set up so that if this were to occur, a warning would be sent to FirstEnergy's control center in Akron, Ohio. Upon receiving this warning, power would be re-routed through other plants to isolate the failure. However, no warning was received, and the resulting domino effect ultimately caused over 100 power plants to go offline.

FirstEnergy at the time was using General Electric's Unix-based XA/21 energy management system, which was responsible for alerting the operators of the control center whenever there was a problem. Unfortunately, a flaw in the software caused the system to crash. The energy management system crashed silently, so the operators at the control center had no idea they were missing alerts they would otherwise have received. Without any warnings, the operators did not know the power plant had gone offline, and so took no measures to prevent the cascading effect that led to the black-out.

Cause of Race Condition

After it was determined that the control center wasn't receiving alerts because the XA/21 had failed, GE conducted an investigation into the matter. Through extensive testing, the investigation revealed that, given a unique combination of events and alarm conditions on the equipment being monitored, a race condition was triggered.

Two processes came into contention when attempting to access the same data structure. Due to a programming error in one of these processes, both processes were able to obtain write access to that data structure. The data structure became corrupt, which caused the alarm process to enter an infinite loop.
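The details of GE's actual fix are not public, but the missing ingredient described above is mutual exclusion. A generic sketch, with an invented `AlarmQueue` class standing in for the shared alarm data structure, shows how a mutex prevents two writers from interleaving:

```python
import threading

# Generic sketch (not GE's actual XA/21 code): a shared alarm structure
# guarded by a mutex, so only one writer can touch it at a time.
class AlarmQueue:
    def __init__(self):
        self._lock = threading.Lock()
        self._events = []

    def post(self, event):
        with self._lock:            # mutual exclusion: one writer at a time
            self._events.append(event)

    def count(self):
        with self._lock:
            return len(self._events)

q = AlarmQueue()
threads = [threading.Thread(target=lambda: [q.post("alarm") for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(q.count())   # 4000 — no lost updates while the lock is held
```

Had both XA/21 processes been forced through a single lock like this, neither could have corrupted the structure mid-update.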

Events began to queue up after the alarm process crashed. Because they could not be processed, the server that hosted the alarm process went down after half an hour, unable to handle the burden. A backup server attempted to take over, but by then the accumulation of events was too much for it to handle.

The trigger for the XA/21 failure was three sagging power lines being tripped almost simultaneously. These three separate events then attempted to execute on shared state, causing the main system to fail. A back-up server went online to attempt to handle the requests, but by the time it kicked in, the accumulation of events since the main system's failure caused the back-up to fail as well.

Aftermath

With the system failure ultimately taking 256 plants offline, a massive black-out was experienced in the north-eastern USA and Ontario. An estimated 55 million people were affected by the black-out. Investigations in the aftermath revealed both negligence on FirstEnergy's part and the deeply embedded bug within the XA/21 energy management system. The bug has since been fixed with a patch.

The NASA Mars-Rover

The NASA Mars-Rover incident is another well-known case of system failure due to race conditions. The Rover is a six-wheel-driven, four-wheel-steered vehicle designed by NASA to navigate the surface of Mars in order to gather videos, images, samples and any other possible data about the planet. NASA landed two Rover vehicles, Spirit and Opportunity, on January 4 and January 25, 2004, respectively. The Rover was controlled on a daily basis by the NASA team on Earth, which sent it messages and tasks. Each solar day in the life of the Rover is called a Sol.

Hardware design and architecture

The vehicle's main operating equipment consists of a set of high-resolution cameras, a collection of specialized spectrometers and a set of radio antennas for transmitting and receiving data. The main computer was built around a BAE RAD-6000 CPU (Rad6k), RAM and non-volatile memory (a combination of FLASH and ROM).

Software design

The Rover is controlled by the VxWorks real-time operating system. The Rover flight software was mostly implemented in ANSI C, with some fragments of code written in C++ and assembly. The Rover relied on an autonomous system that enabled it to drive itself and carry out a number of self-maintenance operations. The system implements time-multiplexing, where all processes share and access resources on the single CPU. The Rover records progress through three primary log-file systems: event reports (EVRs), engineering data (EH&A) and data products.

System failures and vulnerabilities

The first race-condition bug occurred on Spirit Rover Sol 131. The initialization module (IM) process was preparing to increment a counter that keeps track of the number of times an initialization has occurred. To do this, the IM process must request permission and be granted access to write that counter to memory (a critical section). While the permission request was pending, another process was granted access to that very same piece of memory. This resulted in the IM process generating a fatal exception through its EVR log. The exception led to data loss and trouble transmitting data to the NASA team on Earth, which eventually left the Rover in a halted state for a few days. To keep the Rover functioning, the NASA team worked around the problem by restricting another module from operating during that time-frame, allowing the IM process enough time to carry out its task. However, the team was aware that the bug could resurface, and it did: later on Spirit Sol 209, and then on the Opportunity Rover on Sol 596 and Sol 622.
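The request-permission-then-write pattern described above can be sketched as follows. The names (`im_increment`, `counter_lock`, `init_count`) are invented for illustration and are not the Rover's actual flight code; a non-blocking lock acquisition stands in for the access grant, and a failed grant stands in for the fatal exception logged via the EVR.

```python
import threading

# Hypothetical sketch of the Sol 131 scenario: the IM "process" asks for
# write access to the init counter, but another task may already hold it.
counter_lock = threading.Lock()
init_count = 0

def im_increment():
    global init_count
    # Request permission: a non-blocking acquire fails if another
    # process already owns the critical section.
    if counter_lock.acquire(blocking=False):
        try:
            init_count += 1
            return "ok"
        finally:
            counter_lock.release()
    return "fatal exception"    # on the real Rover, logged via EVR

print(im_increment())           # ok — the lock was free
counter_lock.acquire()          # another task grabs the critical section
print(im_increment())           # fatal exception — access was denied
```

Failing loudly when the grant is denied is safer than writing anyway, but as the Sol 131 incident shows, an unhandled fatal exception can still halt the whole system.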

A similar type of error occurred on Spirit Sol 136, this time involving the Imaging Services Module (IMG). Just as the NASA team requested data from the Rover, the IMG entered a deactivation state: the IMG's read cycles from memory were interrupted by the deactivation process, which was attempting to power off the piece of memory associated with the IMG reading task. This resulted in a failure to return the requested data from the Rover.

Aftermath and current status

While these race-condition errors were clearly due to a lack of memory management and proper co-ordination among processes, they were largely unexpected and unforeseen. In contrast to the other cases mentioned so far, the consequences the NASA team had to deal with were not life-threatening, so their main concern was to keep the Rovers functioning in order to obtain as much information as possible; no effort was made to alter the software. One can also imagine that examining and debugging these errors was quite a challenge, since the team could not handle the Rovers physically: everything was done via transmitted messages. It is also worth noting that the single CPU in these Rovers had a great deal to handle beyond the usual software load. Had NASA considered a multiple-CPU design, things could have been different.

The Spirit Rover has experienced a number of problems since then. The most recent reports reveal that it has been largely inactive, with no data being received from it. The Opportunity Rover, on the other hand, continues to function successfully.

Windows Blue-Screens-Of-Death

When a problem in Windows forces the operating system to fail, the computer often displays an error screen, known as a Stop message, that describes the cause of the problem; most people call this a Blue Screen of Death (BSOD).

The Stop error 0x0000001A, MEMORY_MANAGEMENT, can occur because of a race condition in memory management. It is a hardware-related memory-management error; it is possible that the computer cannot deliver enough power to the memory in time for the process.

The BSOD has surfaced on a number of Windows versions, including Windows 7. It has also caused system failures in airports, ATMs and street hoardings. However, the most notable public incident happened during the opening ceremony of the 2008 Beijing Summer Olympics in China, when one of the projectors crashed because of a BSOD bug.

Conclusions

The main challenge with race-condition errors is that they are usually unpredictable and can be triggered in various ways depending on the processes involved, the software implementation, the hardware design and the surrounding environment. The human element plays a huge part as well: applying the required amount of testing, anticipating possible interleavings and considering the different situations in which an error might occur.

A handful of commercial software tools have been developed to detect race-condition errors. Most recently, a US software company named ReplaySolutions was awarded a US patent for a kit for debugging race conditions found in software.

As the industry strives for faster and more efficient performance through multi-processor systems and multi-core chips, this area remains a vast field for research and innovation in the computing world.
