Talk:COMP 3000 Essay 2 2010 Question 1

Group members

Patrick Young (Rannath@gmail.com)

Methodology

We should probably have our work verified by at least one group member before posting to the actual page.

To Do

  • Improve the grammar/structure of the paper section
  • Background Concepts - fill in info (fii)
  • Research problem - fii
  • Contribution - fii
  • Critique - fii
  • References - fii

Claim Sections

  • I claim Exim and memcached for background and critique -Rannath
  • also per-core data structures, false sharing, and unnecessary locking for contribution -Rannath

Essay

Paper

The paper's title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.

Title: An Analysis of Linux Scalability to Many Cores

Authors (in order presented): Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich

Affiliation: MIT CSAIL

Background Concepts

Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.

Ideas to explain:

  1. thread (maybe)
  2. Linux's move towards scalability precedes this paper. (assert this, no explanation needed, maybe a few examples)
  3. Summarize scalability tutorial (Section 4.1 of the paper)
  4. Describe the programs tested (what they do, how they're programmed (serial vs parallel), and where they do their processing)
Exim: Section 3.1

Exim is a mail server for Unix. It is fairly parallel: the server forks a new process for each incoming connection, and forks twice more to deliver each message. On a single core it spends 69% of its time in the kernel.
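
Not Exim's actual source, but a minimal user-space sketch of the fork-per-connection pattern described above; the port number, error handling, and comments are invented for illustration:

/* fork-per-connection sketch: the parent only accepts connections and
 * forks a child to handle each one, mirroring Exim's process model. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(2525),      /* example port */
                                .sin_addr.s_addr = INADDR_ANY };
    if (listener < 0 || bind(listener, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(listener, 128) < 0) {
        perror("setup");
        return 1;
    }
    signal(SIGCHLD, SIG_IGN);              /* let the kernel reap children */

    for (;;) {
        int conn = accept(listener, NULL, NULL);
        if (conn < 0)
            continue;
        if (fork() == 0) {                 /* one child process per connection */
            close(listener);
            /* ... parse SMTP commands, then fork again to deliver ... */
            close(conn);
            _exit(0);
        }
        close(conn);                       /* parent keeps accepting */
    }
}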

memcached: Section 3.2

memcached is an in-memory hash table. memcached itself is not parallel, but it can be made to scale by running multiple instances and having the clients partition the data between them. With small requests, memcached does most of its processing in the network stack, spending about 80% of its time in the kernel on one core.
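
A minimal sketch of the client-side partitioning described above: each key is hashed to one of N independent instances, so the instances never have to coordinate. The hash function and instance count are assumptions for illustration, not memcached's actual client logic:

/* client-side partitioning sketch: map each key to one of several
 * independent memcached instances. */
#include <stdio.h>

#define NUM_INSTANCES 4        /* assumed number of memcached instances */

/* Simple string hash (djb2); real clients use stronger hashes. */
static unsigned long hash_key(const char *key)
{
    unsigned long h = 5381;
    while (*key)
        h = h * 33 + (unsigned char)*key++;
    return h;
}

/* Pick which instance owns a given key. */
static int instance_for(const char *key)
{
    return (int)(hash_key(key) % NUM_INSTANCES);
}

int main(void)
{
    const char *keys[] = { "user:42", "session:abc", "page:/index" };
    for (size_t i = 0; i < sizeof keys / sizeof keys[0]; i++)
        printf("%-12s -> instance %d\n", keys[i], instance_for(keys[i]));
    return 0;
}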

Research problem

What is the research problem being addressed by the paper? How does this problem relate to past related work?

Problem being addressed: the scalability of the current generation of OS architectures on many-core hardware, using Linux as a case study. (?)

Summarize related work (Section 2; include links, and expand this into at least a summary of some of the related work)

Contribution

What was implemented? Why is it any better than what came before?

Summarize info from Section 4.2 onwards; maybe put graphs from Section 5 here to support the claimed improvements (if that isn't unethical/illegal)?

Conclusion: a traditional OS architecture can be made to scale (at least to 48 cores); the bottlenecks just have to be removed.

Per-Core Data Structures

Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the free list of packet buffers. Each was decentralized into per-core versions of itself. In the case of the vfsmount table, the central table was kept, and on a per-core miss the entry is copied from the central table into the per-core table.
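
The kernel changes use per-CPU data; as a rough user-space sketch of the same idea (names invented, not the paper's code), each core gets its own free list and only falls back to the shared allocator when its list is empty:

/* per-core free-list sketch: the common case touches only core-local state. */
#include <stdio.h>
#include <stdlib.h>

#define NCORES     48
#define CACHE_LINE 64

struct buffer {
    struct buffer *next;
    char data[2048];
};

/* One free list per core, each padded to its own cache line so the
 * lists themselves do not share a line with each other. */
struct percore_freelist {
    struct buffer *head;
} __attribute__((aligned(CACHE_LINE)));

static struct percore_freelist freelist[NCORES];

static struct buffer *buf_alloc(int core)
{
    struct buffer *b = freelist[core].head;   /* fast path: core-local, no locks */
    if (b) {
        freelist[core].head = b->next;
        return b;
    }
    return malloc(sizeof *b);                 /* slow path: shared allocator */
}

static void buf_free(int core, struct buffer *b)
{
    b->next = freelist[core].head;            /* return to this core's own list */
    freelist[core].head = b;
}

int main(void)
{
    struct buffer *b = buf_alloc(0);
    buf_free(0, b);
    printf("per-core freelist entry size: %zu bytes\n",
           sizeof(struct percore_freelist));
    return 0;
}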

Eliminating false sharing

Variables that happen to share a cache line can cause different cores to read and write the same line at the same time often enough to significantly hurt performance. Moving the frequently written variable onto a different cache line removed the bottleneck.
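
A small sketch of the fix, with invented field names: padding and alignment move the frequently written counter onto its own cache line so it no longer shares a line with read-mostly fields.

/* false-sharing sketch: separate a hot, frequently written field from
 * read-mostly fields that happen to sit in the same structure. */
#include <stdio.h>

#define CACHE_LINE 64

/* Before: the frequently written counter shares a cache line with
 * read-mostly fields, so every write invalidates the readers' copies. */
struct stats_bad {
    long hits;            /* written on every request */
    long limit;           /* read-mostly */
    long timeout;         /* read-mostly */
};

/* After: aligning and padding the hot field gives it a line of its own. */
struct stats_good {
    long hits __attribute__((aligned(CACHE_LINE)));
    char pad[CACHE_LINE - sizeof(long)];
    long limit;
    long timeout;
};

int main(void)
{
    printf("bad: %zu bytes, good: %zu bytes\n",
           sizeof(struct stats_bad), sizeof(struct stats_good));
    return 0;
}

The trade-off is a little extra memory per structure, which is usually negligible compared to the cost of the line bouncing between cores.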

Avoiding unnecessary locking

Many locks/mutexes have special cases where locking is not actually needed. Likewise, a mutex that protects a whole data structure can often be split into finer-grained locks that each protect only part of it. Both changes remove or reduce bottlenecks.
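
A user-space sketch of the lock-splitting half of this idea (invented names, not the kernel's code): one lock per hash bucket instead of one lock for the whole table, so lookups of different keys no longer serialize on a single mutex.

/* finer-grained locking sketch: per-bucket locks in a hash table. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define NBUCKETS 256

struct entry {
    struct entry *next;
    const char *key;
    void *value;
};

/* One lock per bucket instead of one lock for the whole table. */
struct bucket {
    pthread_mutex_t lock;
    struct entry *head;
};

static struct bucket table[NBUCKETS];

static void table_init(void)
{
    for (int i = 0; i < NBUCKETS; i++)
        pthread_mutex_init(&table[i].lock, NULL);
}

static unsigned hash(const char *key)
{
    unsigned h = 5381;
    while (*key)
        h = h * 33 + (unsigned char)*key++;
    return h % NBUCKETS;
}

static void *lookup(const char *key)
{
    struct bucket *b = &table[hash(key)];
    void *val = NULL;

    pthread_mutex_lock(&b->lock);             /* only this bucket is locked */
    for (struct entry *e = b->head; e; e = e->next)
        if (strcmp(e->key, key) == 0) {
            val = e->value;
            break;
        }
    pthread_mutex_unlock(&b->lock);
    return val;
}

int main(void)
{
    table_init();
    printf("lookup(\"missing\") = %p\n", lookup("missing"));
    return 0;
}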

Critique

What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.

Since this is a "my implementation is better than your implementation" paper, the "goodness" of the content can be impartially judged by its fairness and the honesty of the authors.

Fairness criteria:

  1. does the test accurately describe real-world use cases (or some set thereof)? (external fairness; this may be ignored for testing and benchmarking purposes, and usually is)
  2. does the test put all tested implementations through the same test? (internal fairness)

Style criteria (feel free to add more; I have no idea what should go here):

  1. does the paper present information out of order?
  2. does the paper present needless information?
  3. does the paper have any sections that are inherently confusing?
Testing Method: Section 5(.1) (to be removed; just an FYI)

The same test was performed on both the original and the new implementation of each program, so all tests were internally fair. All tests involving I/O were performed in an externally unfair manner, but this was explicitly stated.

Exim: Section 5.2

Both the stock and new implementations use the same benchmarks, therefore internal fairness is preserved.

The test uses a relatively small number of connections, but the paper states that this is a non-issue - "as long as there are enough clients to keep Exim busy, the number of clients has little effect on performance."

This test is explicitly stated to ignore the real-world constraint of the I/O bottleneck, and is therefore unfair relative to real-world scenarios. However, the purpose was not to test the I/O bottleneck, so this unfairness is unimportant.

memcached: Section 5.3

memcached has no explicit or implicit fairness concerns with respect to real-world scenarios.

References

You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.