COMP 3000 Essay 1 2010 Question 6
Revision as of 08:23, 15 October 2010
Question
What are some examples of notable systems that have failed due to flawed efforts at mutual exclusion and/or race conditions? How significant was the failure in each case?
Answer
Overview
A race condition occurs when two or more processes access shared data concurrently and at least one of them writes to it. The result depends on the exact timing of those accesses, so the program's behaviour becomes unpredictable; in the worst case, a major system failure can occur.
Introduction
Race conditions are notorious in the history of software bugs. Examples range from a section of Java code halting an application, to the corruption of web services, to the failure of a life-critical system with fatal consequences. The failures examined here follow common patterns, and all were caused by inadequate management of shared memory.
During development of these systems, programmers did not realize that their designs incorporated a race condition until one occurred. Race conditions are unexpected and infrequent, and the specific failure conditions are difficult to reproduce, so tracing a failure back to its origin can take anywhere from weeks to years, depending on the complexity of the system. A lack of testing before deployment may also be to blame.
Race conditions occasionally recur in the same software, either because a race condition is mistaken for some other problem or because the system contains more than one of them. The systems below share another trait: they were written in languages such as assembly and C/C++, in which managing memory is an important part of development.
In this article we examine some of the best-known failures involving race conditions. For each case we explain why the race condition occurred, how significant the failure was, and what happened in its aftermath.
Examples
Therac-25
The Therac-25 was a radiation therapy machine developed in Canada by Atomic Energy of Canada Limited (AECL). Between 1985 and 1987, the machine gave six patients overdoses of radiation; three of them died as a result. The incident is quite possibly the most infamous software failure attributable to a race condition. The cause was traced back to a programming bug. The Therac-25 software was written by a single programmer in PDP-11 assembly language, and portions of the code were reused from the earlier Therac-6 and Therac-20 machines. The main portion of the code ran a function called "Treat", which determined which of the program's eight main subroutines should be executing. The keyboard handler task ran concurrently with "Treat".
Main Subroutines
The Therac-25 made use of eight main subroutines. Datent had its own helper routine, Magnet, which positioned the machine's magnets to administer the prescribed dose of radiation.
- Reset
- Datent
- Magnet
- Set Up Done
- Set Up Test
- Patient Treatment
- Pause Treatment
- Terminate Treatment
- Date, Time, ID Changes
The Datent subroutine communicated with the keyboard handler task through a shared variable that signalled whether the operator had finished entering the necessary data. Once this data-entry flag was set, Datent allowed the main program to move on to the next subroutine. If the flag was not set, the "Treat" task rescheduled itself, in turn rescheduling the Datent subroutine. This continued until the shared data-entry flag was set.
The Datent subroutine was also responsible for preparing the machine to administer the correct radiation dosage. Before returning to "Treat" with the instruction to move on to the next subroutine, Datent first called the "Magnet" subroutine, which parsed the operator's input and moved the machine's magnets into position for the prescribed dose. Magnet took approximately eight seconds to complete, and the keyboard handler kept running while it did. If the operator modified the data before "Magnet" returned, the changes were not registered: the beam strength remained at its prior value, silently ignoring the operator's corrections.
Example Bug Situation
The situation below illustrates a chain of events that would result in an unintended dose of radiation being administered.
- Operator types up data, presses return
- (Magnet subroutine is initiated)
- Operator realizes there is an extra 0 in the radiation intensity field
- Operator quickly moves cursor up and fixes the error and presses return again.
- Magnets are set to the previous power level; the subroutine returns
- Program moves on to next subroutine without registering changes
- Patient is administered a lethal overdose of radiation
Root Causes & Outcomes
A number of factors contributed to the failure of the Therac-25. The code was put together by a single programmer, and no proper testing was conducted. In addition, code was reused from previous-generation machines without verifying that it was fully compatible with the new hardware: the earlier Therac-6 and Therac-20 had hardware interlocks that prevented such race conditions from causing harm, and the Therac-25 did not. It is clear that proper planning and forethought could have prevented this incident.
Six incidents involving the Therac-25 took place between 1985 and 1987, and it took two years for the FDA to take the machines out of service. The FDA forced AECL to make modifications to the Therac-25 before it was allowed back on the market. The software was fixed to suspend all other operations while the magnets positioned themselves to administer the correct radiation strength. In addition, a dead man's switch was added: a foot pedal that the operator had to hold down to enable motion of the machine, which kept the operator from being unaware of changes in the machine's state.
After these changes were made the Therac-25 was reintroduced into the market in 1988. Some of the machines are still in service today.
Black-out of 2003
On August 14th, 2003, a massive blackout affected Ontario and the northeastern United States. It all began with a power plant in Eastlake, Ohio going offline. This occurred during a time of high electrical demand, meaning the power had to come from elsewhere. Three power lines began to sag under the extra strain, bringing them into contact with overgrown trees. These three power lines failed, in turn putting more strain on other lines. The result was a cascading effect that ultimately took 256 power plants offline.
FirstEnergy's control center in Akron, Ohio was responsible for balancing the load on these power lines. However, it was not receiving any warnings, because its energy management system had silently crashed: the operators were not getting alarms and had no idea that anything was wrong. The energy management system in question was the Unix-based XA/21, created by General Electric. A software flaw in this system caused the failure, leaving the control center operators unaware of the power imbalances and therefore unable to prevent the blackout.
Cause of Race Condition
After it was revealed that the XA/21 system crash was responsible for the control center not receiving alerts, an investigation was launched to determine the cause. After eight weeks of testing, GE was able to recreate the unique combination of events and alarm conditions that triggered the bug. A race condition was discovered: two processes came into contention over the same data structure and, through a flaw in one process's code, both obtained write access to it. This corrupted the data needed to trigger an alarm, sending the alarm event into an infinite loop.
Because the alarm process had crashed, events could no longer be processed as they arrived at the control center. The backlog of events brought the energy management system's server down within thirty minutes of the alarm crash. A backup server kicked in to manage the load, but by then too many events were queued up to handle, and the backup server went down as well.
Aftermath
The cascading effect started by the three downed power lines eventually reached the New York City power grid, leaving an estimated 55 million people without power for two days. New York in particular felt the immediate effects of the blackout, with 3000 fires reported and emergency services responding to twice the average number of distress calls. Eleven fatalities have been attributed to the blackout, and its total cost has been estimated at 6 billion dollars.
A major impact of the blackout was that it revealed a serious yet subtle bug and exposed the shortcomings of the power grid of the time. GE has since released a patch that fixes the bug, along with instructions for properly installing the system. The US-Canada Power System Outage Task Force released a report containing 46 recommendations to prevent future blackouts, and Congress, upon receiving it, passed the Energy Policy Act of 2005. Among the standards that followed from this policy are requirements that operators at control centers be trained to deal with critical events, that trees be kept clear of transmission lines, and that any system involved in grid operations be able to handle a power-line fault as well as any other failure that could otherwise endanger the grid.
All in all, there is no conclusive evidence that these changes have helped prevent blackouts, as the numbers have remained fairly stable. However, with these new standards in place, it is unlikely that the events of the 2003 blackout will recur in the United States.
The NASA Mars-Rover
The NASA Mars Rover incident is another well-known case of system failure due to race conditions. The Mars Rover is a six-wheel-driven, four-wheel-steered vehicle designed by NASA to navigate the surface of Mars and gather video, images, samples, and any other available data about the planet. NASA landed two Rover vehicles, Spirit and Opportunity, on January 4 and January 25, 2004, respectively. The Rovers were controlled on a daily basis by the NASA team on Earth, which sent them messages and tasks. Each Martian solar day in the life of a Rover is called a Sol.
Hardware design and architecture
The vehicle's main operating equipment consists of a set of high-resolution cameras, a collection of specialized spectrometers and a set of radio antennas for transmitting and receiving data. The main computer was built around a BAE RAD-6000 CPU (Rad6k), RAM and non-volatile memory (a combination of FLASH and ROM).
Software design
The Rover runs the VxWorks real-time operating system. The flight software was mostly implemented in ANSI C, with some fragments written in C++ and assembly. The Rover relies on an autonomous system that enables it to drive itself and carry out a number of self-maintenance operations. The system is time-multiplexed: all processes share and access the resources of a single CPU. The Rover records its progress in three primary log-file systems: event reports (EVRs), engineering data (EH&A), and data products.
System failures and vulnerabilities
The first race-condition bug occurred on the Spirit Rover on Sol 131. The initialization module (IM) process was preparing to increment a counter that tracks the number of initializations; to do so, the IM process had to request permission and be granted access to write that counter to memory (its critical section). While the request was pending, another process was granted access to that very same piece of memory. As a result, the IM process generated a fatal exception through its EVR log. The exception led to lost data and trouble transmitting to the NASA team on Earth, and eventually left the Rover in a halted state for a few days. To keep the Rover functioning, the NASA team worked around the problem by restricting another module from operating during that time frame, leaving the IM process enough time to carry out its task. The team was aware, however, that the bug could resurface, and it did: on the Spirit Rover on Sol 209, and on the Opportunity Rover on Sols 596 and 622.
A similar type of error occurred on Spirit on Sol 136, this time involving the Imaging Services Module (IMG). Just as the NASA team requested data from the Rover, the IMG entered a deactivation state: the IMG's read cycles from memory were interrupted by the deactivation process, which was attempting to power off the piece of memory associated with the IMG read task. The result was a failure to return the requested data.
Aftermath and current status
While these race-condition errors were clearly due to a lack of memory management and of proper coordination among processes, they were largely unexpected and unforeseen. In contrast to the other cases mentioned so far, the consequences NASA had to deal with were not life-threatening, so the team's main concern was keeping the Rovers functioning in order to obtain as much information as possible; no effort was even made to alter the software. One can also imagine that examining and debugging these errors was quite a challenge, since the Rovers could not be handled physically and everything had to be done through transmitted commands and messages. Another thing to note is that the single CPU in the Rovers had a great deal to handle beyond the usual software load. Had NASA considered a multiple-CPU design, things could have been different.
The Spirit Rover has experienced a number of problems since then. The most recent reports reveal that it has been largely inactive, with no data being received from it. The Opportunity Rover, on the other hand, continues to function successfully.
Windows Blue-Screens-Of-Death
When a problem in Windows forces the operating system to fail, the computer often displays an error screen, known as a Stop message, that describes the cause of the problem; it is commonly called a Blue Screen of Death (BSOD).
One example is stop error 0x0000001A, MEMORY_MANAGEMENT, which indicates a severe memory-management problem. It can arise from a race condition in memory management, and it can also have hardware-related causes, such as memory that does not receive stable power in time for the process using it.
The BSOD has surfaced on a number of Windows versions, including Windows 7, and has caused system failures in airports, ATMs, and street hoardings. The most notable public incident, however, happened at the opening ceremony of the 2008 Beijing Summer Olympics in China, when one of the projection systems crashed with a BSOD.
Conclusions
The main challenge with race-condition errors is that they are usually unpredictable and can be triggered in various ways, depending on the processes involved, the software implementation, the hardware design, and the surrounding environment. The human element plays a huge part as well: applying the required amount of testing, anticipating possible interleavings, and imagining the different situations in which an error might occur.
A handful of commercial software tools have been developed to detect race-condition errors. Most recently, the US software company ReplaySolutions was awarded a patent for an innovative kit for debugging race conditions in software.
As the industry strives for ever faster and more efficient performance through multi-processor systems and multi-core chips, this area remains a vast field for research and innovation within the computing world.
References
- Nancy Leveson. July 1993. Medical Devices: The Therac-25
- Nancy Leveson and Clark Turner. July 1993. An Investigation of the Therac-25 Accidents
- Anne Marie Porrello. July 1993. Death and Denial: The Failure of the THERAC-25, A Medical Linear Accelerator
- Reeves and Snyder. 10 January 2006. An Overview of the Mars Exploration Rovers' Flight Software
- Matijevic and Dewell. 2006. Anomaly Recovery and the Mars Exploration Rovers
- Update: Spirit and Opportunity
- John Ross. 2006. It's Never Done That Before: A Guide to Troubleshooting Windows XP. No Starch Press
- John Chan. 12 August 2008. Dreaded Blue Screen of Death strikes Olympics
- Dr. Dobb's Journal. 9 June 2010. Patent Awarded for Debugging Race Conditions