COMP 3000 Essay 2 2010 Question 6
Paper
Effective Data-Race Detection for the Kernel
Paper: http://www.usenix.org/events/osdi10/tech/full_papers/Erickson.pdf
Video: http://homeostasis.scs.carleton.ca/osdi/video/erickson.mp4
Authors: John Erickson, Madanlal Musuvathi, Sebastian Burckhardt, Kirk Olynyk from Microsoft Research
Background Concepts
Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.
A data race is a potentially catastrophic event that is alarmingly common in modern concurrent systems. When one thread reads or writes a memory location at the same time that another thread writes to that location, a data race exists. If the race is not handled properly, it can have a wide range of negative consequences. In the best case, data is corrupted, rendering the affected files unreadable and useless; this may not be a major problem if archived, uncorrupted copies of the data exist. In the worst case, a process (possibly even the operating system itself) may behave unpredictably and crash, unable to decide what to do with the unexpected values it reads.
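As a concrete illustration (this example is ours, not taken from the paper), the following user-space C program has two threads increment a shared counter with no synchronization at all. The unsynchronized read-modify-write sequences race with each other, so increments can be lost and the final total is unpredictable.

/* Minimal illustration of a data race: two threads update the same
 * counter without synchronization, so increments can be lost. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;            /* shared, unprotected memory location */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                  /* read-modify-write is not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but racing updates usually produce less. */
    printf("counter = %ld\n", counter);
    return 0;
}

Built with gcc -pthread and run a few times, this typically prints totals below the expected 2000000 (how often depends on the compiler and optimization level).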
Traditional data-race detection programs have to monitor every memory access and synchronization operation in the running system and reason about which accesses could have overlapped, in order to find situations that would result in a data race. DataCollider instead temporarily sets breakpoints at randomly sampled memory accesses. When one of these accesses hits its breakpoint, DataCollider springs into action. The breakpoint postpones the memory-access instruction, which effectively sleeps until DataCollider has finished its job. That job is like taking before-and-after photographs: DataCollider records the data stored at the address the instruction is about to access, waits briefly, and then records the data again before allowing the instruction to execute. If the two records do not match, another thread has modified the data while this instruction was waiting to access it; this is precisely a data race.
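A rough sketch of that before-and-after comparison is shown below. The function name and the sleep-based pause are our own simplifications; the real DataCollider runs inside the Windows kernel and uses code and data breakpoints, which also lets it catch conflicting reads that a pure value comparison like this one would miss.

/* Sketch of the before-and-after check described above (our own
 * simplification, not the authors' code). */
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

/* Imagine this runs when a sampled memory-access instruction hits its
 * breakpoint; `addr` is the location the paused instruction will touch. */
static bool conflicting_write_detected(volatile uint32_t *addr)
{
    uint32_t before = *addr;                  /* first "photograph" */

    struct timespec pause = { 0, 1000000 };   /* hold the access for ~1 ms */
    nanosleep(&pause, NULL);

    uint32_t after = *addr;                   /* second "photograph" */

    /* A changed value means another thread wrote to the same location
     * while this access was paused: a data race. */
    return before != after;
}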
--Austin Bondio 01:56, 2 December 2010 (UTC)
Research problem
What is the research problem being addressed by the paper? How does this problem relate to past related work?
The research problem being addressed by this paper is the detection of erroneous data races inside the kernel without creating much overhead. The problem occurs because read/write access instructions in processes are not atomic with respect to one another (e.g., two read/write operations from different contexts may happen simultaneously). There are so many ways a data race can occur that it is very hard to catch them all.
The research team’s program, DataCollider, must detect races between the hardware and the kernel as well as races among the many execution contexts the kernel has to synchronize: threads running on behalf of user-mode processes, interrupts, and deferred procedure calls. As shown in the Background Concepts section, such races can cause serious problems in kernel modules. DataCollider places breakpoints on memory accesses to check whether two contexts are touching the same piece of memory at the same time. Past attempts at solving this problem ran in user mode, not kernel mode, and produced excessive overhead; there are many obstacles to applying those techniques to a kernel.
One technique that some past detectors have used is the “happens-before” method. It checks whether one access is ordered before the other by the program’s synchronization, or vice versa; if neither ordering holds, the two accesses could have happened simultaneously and are reported as a race. This method reports true data races but is very hard to implement.
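One common way to implement the happens-before relation is with vector clocks. The following sketch (our own illustration, not code from the paper) flags two accesses as racing when at least one is a write and neither access is ordered before the other.

/* Happens-before check using vector clocks (illustrative sketch only). */
#include <stdbool.h>

#define NTHREADS 4

typedef struct {
    int clock[NTHREADS];   /* vector-clock snapshot taken at the access */
    bool is_write;
} access_t;

/* a happens-before b if a's clock is <= b's clock in every component. */
static bool happens_before(const access_t *a, const access_t *b)
{
    for (int i = 0; i < NTHREADS; i++)
        if (a->clock[i] > b->clock[i])
            return false;
    return true;
}

/* Two accesses to the same location race if at least one is a write and
 * neither is ordered before the other. */
static bool is_race(const access_t *a, const access_t *b)
{
    if (!a->is_write && !b->is_write)
        return false;
    return !happens_before(a, b) && !happens_before(b, a);
}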
Another method is the “lockset” approach. It tracks the locks a thread holds at each access to a shared variable, and if the accesses to that variable do not all hold at least one lock in common, it issues a warning. This method raises many false alarms, since many variables are protected by mechanisms other than locks, or by locking schemes too complex for a lockset analysis to understand.
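The lockset idea can be sketched as follows (an Eraser-style simplification of our own, not the paper’s code): each shared variable keeps a candidate set of locks, every access intersects that set with the locks the thread currently holds, and an empty set triggers a warning.

/* Lockset sketch: shrink each variable's candidate lock set on every
 * access; an empty set means no single lock protects it consistently. */
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t lockset_t;   /* bit i set => lock i is in the set */

typedef struct {
    lockset_t candidates;     /* locks that protected every access so far */
    bool initialized;
} shared_var_t;

/* Called on each access; `held` is the set of locks the thread holds now.
 * Returns false when a warning should be raised. */
static bool lockset_check(shared_var_t *v, lockset_t held)
{
    if (!v->initialized) {
        v->candidates = held;  /* first access: start with the locks held now */
        v->initialized = true;
        return true;
    }
    v->candidates &= held;     /* keep only locks common to all accesses */
    return v->candidates != 0; /* empty set => potential data race */
}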
Both of these methods produce excessive overhead because they have to check every single memory access at runtime. In the next section we will discuss how DataCollider uses a new way of checking for data races that produces barely any overhead.
Just a few rough notes:
Research problem / challenges for traditional detectors:
- Data-race detectors run in user mode, whereas operating systems run in kernel mode (supervisor mode).
- There are a lot of different synchronization methods and a lot of ways to implement them, so it is nearly impossible to write a program that can catch all of them.
- Some kernel modules can "speak privately" with hardware components, so you can't make a program that just logs all of the kernel's interactions.
- Traditional data-race detectors incur massive time overhead because they have to keep an eye on every single memory transaction that occurs at runtime.
--Austin Bondio 01:57, 2 December 2010 (UTC)
Contribution
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)
Critique
What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.
Style
This paper is well put together. It has a strong flow, and nothing seems out of place. The authors start with an introduction and then immediately present the key definitions used throughout the paper. In the second section, which follows the introduction, they define a data race as it relates to their work. This is important, since it is a key concept required to understand the entire paper, and the definition is necessary because, as the authors state, there is no standard for exactly how to define a data race.[1] Any background information relevant to the paper is also presented at the beginning. The key idea on which the paper is based, in this case DataCollider and its implementation, is then explained, followed by an evaluation and a conclusion. The order of the sections makes sense, and the authors do not jump from one concept to another. The organization of the sections and the information provided make the paper easy to follow and understand.
Content
Data Collider:
References
[1] Erickson, Musuvathi, Burckhardt, Olynyk. Effective Data-Race Detection for the Kernel. Microsoft Research, 2010.