Talk:COMP 3000 Essay 2 2010 Question 1
Group members
Patrick Young Rannath@gmail.com
Methodology
We should probably have our work verified by at least one group member before posting to the actual page
To Do
- Improve the grammar/structure of the paper section
- Background Concepts -fill in info (fii)
- Research problem -fii
- Contribution -fii
- Critique -fii
- References -fii
Claim Sections
- I claim Exim and memcached for background and critique -Rannath
- also per-core data structures, false sharing and unnecessary locking for contribution -Rannath
Essay
Paper
The paper's title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.
Authors in order presented: Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich
affiliation: MIT CSAIL
An Analysis of Linux Scalability to Many Cores
Background Concepts
Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.
Ideas to explain:
- thread (maybe)
- Linux's move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)
- Summarize scalability tutorial (Section 4.1 of the paper)
- Describe the programs tested (what they do, how they're programmed (serial vs parallel), where they do their processing)
Exim
Exim is a mail server for Unix. It is fairly parallel: the server forks a new process for each connection, and forks twice more to deliver each message. On a single core it spends 69% of its time in the kernel.
memcached
memcached is an in-memory hash table. A single memcached instance is not parallel, but it can be made to scale by running multiple instances and having clients partition data among them. With small requests memcached does most of its processing in the network stack, about 80% of its time on one core.
Research problem
What is the research problem being addressed by the paper? How does this problem relate to past related work?
Problem being addressed: scalability of current generation OS architecture, using Linux as an example. (?)
Summarize related works (Section 2, include links, expand information to have at least a summary of some related work)
Contribution
What was implemented? Why is it any better than what came before?
Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn't unethical/illegal)?
Conclusion: we can make a traditional OS architecture scale (at least to 48 cores), we just have to remove bottlenecks.
Per-Core Data Structures
Three centralized data structures were causing bottlenecks: a per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each data structure was decentralized into per-core versions of itself. In the case of the vfsmount table the central data structure was kept, and any per-core misses were filled from the central table into the per-core table.
Eliminating false sharing
Variables that happen to share a cache line can cause different cores to fight over that line: a write by one core invalidates the line on a core that only reads an unrelated variable, and this happens often enough to significantly impact performance. Moving the often-written variable to its own cache line removed the bottleneck.
Avoiding unnecessary locking
Many locks/mutexes have special cases where they don't actually need to lock. Likewise, a mutex protecting a whole data structure can be split into finer-grained mutexes that each protect only part of it. Both of these changes remove or reduce bottlenecks.
Critique
What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.
Since this is a "my implementation is better than your implementation" paper, the "goodness" of the content can be impartially judged by the fairness of the tests and the honesty of the authors.
Fairness criteria:
- does the test accurately describe real-world use-cases (or some set thereof)?
- does the test put all tested implementations through the same test? (or their implementation through a harder test)
Style criteria (feel free to add; I'm not sure what should go here):
- does the paper present information out of order?
- does the paper present needless information?
- does the paper have any sections that are inherently confusing?
Testing Method
The same test was performed on both the original and new implementations for each program. This means that all tests were internally fair. All tests requiring IO operations were performed in an externally unfair manner, but this was explicitly stated in Section 5.1. (Reference entirety of section 5...)
Exim Section 5.2
This test is explicitly stated to ignore the real-world constraint of the IO bottleneck. The test likewise uses a relatively small number of connections, but that is also implied to be a non-issue: there is little slowdown as long as Exim has enough clients to keep it busy.
This test is not fair when compared to real-world scenarios. However, the same unfairness is present in both the stock and new implementations. The purpose of the test was not to test the elements that were stripped out in any case. Therefore the unfairness to real-world scenarios is unimportant.
References
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.