Difference between revisions of "Talk:COMP 3000 Essay 2 2010 Question 1"

From Soma-notes
==Research problem - DONE!!! ([[Dan B.]])==
  my references are just below because it is easier for numbering the data later.
 
As technology progresses, the number of cores a main processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue. There has to be a way for a standard Linux kernel and its user-level applications to scale to a 48-core system<sup>[[#Foot1|1]]</sup>. The problem is that a standard Linux OS is not designed for massive scalability, which will soon prove to be a limitation. The issue with scalability is that a core running alone performs much more work than a single core working alongside 47 others. Although dividing the work among 48 cores sounds reasonable, ideally each core should still do as much work as possible so that the information is processed as fast as possible.


To fix those scalability issues, it is necessary to focus on three major areas: the Linux kernel, user-level design, and how applications use kernel services. The Linux kernel can be improved by optimizing sharing and by building on the scalability features that recent kernel releases have already begun to add. At the user level, applications can be improved so that there is more focus on parallelism, since some programs have not yet taken advantage of those features. The final aspect of improving scalability is how an application uses kernel services: resources should be shared so that different parts of the program do not contend over the same services. All of the bottlenecks found are easy to identify and take only simple changes to correct or avoid.<sup>[[#Foot1|1]]</sup>

This research builds on a foundation of previous work on scalability in UNIX systems. Major developments, from shared-memory machines<sup>[[#Foot2|2]]</sup> and wait-free synchronization to fast message passing, have created a base set of techniques that can be used to improve scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update, an algorithm used to avoid the locks and atomic instructions that hurt scalability.<sup>[[#Foot3|3]]</sup> There is also an excellent base of existing Linux scalability studies on which this paper can model its testing standards; these include research on scalability on a 32-core machine.<sup>[[#Foot4|4]]</sup> In addition, that body of work can be used to improve the results of these experiments by learning from previous results, and it may also aid in identifying bottlenecks, which speeds up finding solutions to those problems.

[2] J. Kuskin, D. Ofelt, M. Heinrich, J. Heinlein, R. Simoni, K. Gharachorloo, J. Chapin, D. Nakahira, J. Baxter, M. Horowitz, A. Gupta, M. Rosenblum, and J. Hennessy. The Stanford FLASH multiprocessor. In Proc. of the 21st ISCA, pages 302–313, 1994.

[3] P. E. McKenney, D. Sarma, A. Arcangeli, A. Kleen, O. Krieger, and R. Russell. Read-copy-update. In Proceedings of the Linux Symposium 2002, pages 338–367, Ottawa, Ontario, June 2002.

[4] C. Yan, Y. Chen, and S. Yuanchun. OSMark: A benchmark suite for understanding parallel scalability of operating systems on large scale multi-cores. In 2009 2nd International Conference on Computer Science and Information Technology, pages 313–317, 2009.



Revision as of 14:35, 2 December 2010

Class and Notices

(Nov. 30, 2010) Prof. Anil stated that we should focus on the 3 easiest to understand parts in section 5 and elaborate on them.

- Also, I, Daniel B., work Thursday night, so I will be finishing up as much of my part as I can for the essay before Thursday morning's class, then maybe we can all meet up in a lab in HP and put the finishing touches on the essay. I will be available online Wednesday night from about 6:30pm onwards and will be in the game dev lab or CCSS lounge Wednesday morning from about 11am to 2pm if anyone would like to meet up with me at those times.

- I suggest we meet up Thursday morning after Operating Systems in order to discuss and finalize the essay. Maybe we can even designate a lab for the group to meet up in. Any suggestions? - Daniel B.

- HP 3115 since there won't be a class in there (as it's our tutorial and we know there won't be anyone there) -- Go to Wireless Lab next to CCSS Lounge. Andrew and Dan B. will be there.

- If it's all the same to you guys, mind if I just join you via MSN or IRC? Or phone if you really want. -Rannath

- I'm working today, but I'll be at a computer reading this page/contributing to my section. Depending on how busy I am, I should be able to get some significant writing in before 4pm today on my section and any additional sections required. RP


- I won't be there either. That does not mean I won't/can't contribute. I'll be on MSN or you can just email me. -kirill

Group members

Patrick Young Rannath@gmail.com

Daniel Beimers demongyro@gmail.com

Andrew Bown abown2@connect.carleton.ca

Kirill Kashigin k.kashigin@gmail.com

Rovic Perdon rperdon@gmail.com

Daniel Sont dan.sont@gmail.com

Methodology

We should probably have our work verified by at least one group member before posting to the actual page

To Do

  • Improve the grammar/structure of the paper section & add links to supplementary info
  • flesh out the whole lot

Claim Sections

Essay

Paper - DONE!!!

This paper was authored by Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich.

They all work at MIT CSAIL.

The paper: An Analysis of Linux Scalability to Many Cores

Background Concepts - DONE!!!

memcached: Section 3.2

memcached is an in-memory hash table server. A single instance of memcached running on many cores is bottlenecked by an internal lock, which the MIT team avoids by running one instance per core. Clients each connect to a single instance of memcached, allowing the server to simulate parallelism without needing major changes to the application or kernel. With few requests, memcached spends 80% of its time in the kernel on one core, mostly processing packets.[1]
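
A rough sketch of that arrangement (ours, not the MIT team's actual harness; the base port and helper names are made up): with one memcached instance per core, each client simply hashes its keys to pick which instance to talk to, so no instance's internal lock is ever shared across cores. The point is that the partitioning is done by the clients; the server code itself is unchanged.

 #include <stdint.h>
 #include <stdio.h>
 
 #define NCORES    48      /* one memcached instance pinned to each core   */
 #define BASE_PORT 11211   /* assumed: instance i listens on BASE_PORT + i */
 
 /* FNV-1a string hash; any reasonable hash of the key works here. */
 static uint32_t hash_key(const char *key)
 {
     uint32_t h = 2166136261u;
     while (*key) {
         h ^= (uint8_t)*key++;
         h *= 16777619u;
     }
     return h;
 }
 
 /* Each key maps to exactly one per-core instance, so the instances never
  * contend with each other on their internal hash-table lock. */
 static int port_for_key(const char *key)
 {
     return BASE_PORT + (int)(hash_key(key) % NCORES);
 }
 
 int main(void)
 {
     printf("key 'user:42' -> memcached instance on port %d\n",
            port_for_key("user:42"));
     return 0;
 }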

Apache: Section 3.3

Apache is a web server that has been used in previous Linux scalability studies. In this study, Apache is configured to run a separate process on each core. Each process, in turn, has multiple threads (making it a good example of parallel programming): one thread accepts incoming connections and the others process them. On a single core, Apache spends 60% of its execution time in the kernel.[1]
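
A minimal sketch of that per-process structure using plain POSIX sockets and pthreads (this is not Apache's actual MPM code, and the connection queue is deliberately simplistic): one thread accepts connections and hands them to worker threads, and in the paper's configuration the whole process is duplicated once per core.

 /* One acceptor thread plus a small pool of worker threads. */
 #include <arpa/inet.h>
 #include <netinet/in.h>
 #include <pthread.h>
 #include <string.h>
 #include <sys/socket.h>
 #include <unistd.h>
 
 #define NWORKERS 4
 #define QLEN     64
 
 static int conn_queue[QLEN];          /* ring of accepted sockets        */
 static unsigned qhead, qtail;         /* no overflow handling: a sketch  */
 static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
 static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;
 
 static void *worker(void *arg)
 {
     (void)arg;
     for (;;) {
         pthread_mutex_lock(&qlock);
         while (qhead == qtail)                /* wait for a connection   */
             pthread_cond_wait(&qcond, &qlock);
         int fd = conn_queue[qhead++ % QLEN];
         pthread_mutex_unlock(&qlock);
 
         const char *resp = "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";
         write(fd, resp, strlen(resp));        /* "process" the request   */
         close(fd);
     }
     return NULL;
 }
 
 int main(void)
 {
     int lfd = socket(AF_INET, SOCK_STREAM, 0);
     struct sockaddr_in addr = { 0 };
     addr.sin_family      = AF_INET;
     addr.sin_port        = htons(8080);
     addr.sin_addr.s_addr = htonl(INADDR_ANY);
     bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
     listen(lfd, 128);
 
     pthread_t tid;
     for (int i = 0; i < NWORKERS; i++)        /* threads that do the work */
         pthread_create(&tid, NULL, worker, NULL);
 
     for (;;) {                                /* the accepting thread     */
         int cfd = accept(lfd, NULL, NULL);
         if (cfd < 0)
             continue;
         pthread_mutex_lock(&qlock);
         conn_queue[qtail++ % QLEN] = cfd;
         pthread_cond_signal(&qcond);
         pthread_mutex_unlock(&qlock);
     }
 }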

gmake: Section 3.5

gmake is an unofficial default benchmark in the Linux community and is used in this paper to build the Linux kernel. gmake takes a file called a makefile and processes its recipes for the requisite files to determine how and when to remake or recompile code. With the -j (or --jobs) option, gmake can process many of these recipes in parallel, and since it creates more processes than there are cores, it can make proper use of multiple cores.[2] Because gmake involves a lot of reading and writing, the test cases use an in-memory filesystem, tmpfs, to prevent bottlenecks due to the filesystem or storage hardware, giving them a backdoor around those bottlenecks for testing purposes. gmake's scalability is also limited to a small degree by the serial processes that run at the beginning and end of its execution. gmake spends most of its execution time in the compiler, processing the recipes and recompiling code, but still spends 7.6% of its time in system time.[1]

[2] http://www.gnu.org/software/make/manual/make.html

Research problem - DONE!!!

As technology progresses, the number of cores a main processor can have is increasing at an impressive rate. Soon personal computers will have so many cores that scalability will be an issue. There has to be a way for a standard Linux kernel and its user-level applications to scale to a 48-core system.[1] The problem is that a standard Linux OS is not designed for massive scalability, which will soon prove to be a limitation. The issue with scalability is that a core running alone performs much more work than a single core working alongside 47 others. Although dividing the work among 48 cores sounds reasonable, ideally each core should still do as much work as possible so that the information is processed as fast as possible.

To fix those scalability issues, it is necessary to focus on three major areas: the Linux kernel, user-level design, and how applications use kernel services. The Linux kernel can be improved by optimizing sharing and by building on the scalability features that recent kernel releases have already begun to add. At the user level, applications can be improved so that there is more focus on parallelism, since some programs have not yet taken advantage of those features. The final aspect of improving scalability is how an application uses kernel services: resources should be shared so that different parts of the program do not contend over the same services. All of the bottlenecks found are easy to identify and take only simple changes to correct or avoid.[1]

This research builds on a foundation of previous work on scalability in UNIX systems. Major developments, from shared-memory machines [2] and wait-free synchronization to fast message passing, have created a base set of techniques that can be used to improve scalability. These techniques have been incorporated into all major operating systems, including Linux, Mac OS X and Windows. Linux has been improved with kernel subsystems such as Read-Copy-Update, an algorithm used to avoid the locks and atomic instructions that hurt scalability.[3] There is also an excellent base of existing Linux scalability studies on which this paper can model its testing standards; these include research on scalability on a 32-core machine.[4] In addition, that body of work can be used to improve the results of these experiments by learning from previous results, and it may also aid in identifying bottlenecks, which speeds up finding solutions to those problems.

Contribution

What was implemented? Why is it any better than what came before?
Summarize info from Section 4.2 onwards, maybe put graphs from Section 5 here to provide support for improvements (if that isn't unethical/illegal)?
 
- So long as we cite the paper and don't pretend the graphs are ours, we are OK, since we are writing an explanation/critique of the paper.

All contributions in this paper are the result of the identification and removal or marginalization of bottlenecks.

What hinders scalability: Section 4.1

  • The fraction of a program that must run serially has a lot to do with how much the application can be sped up by adding cores. This is Amdahl's Law.
    • Amdahl's Law states that the maximum speedup of a parallel program is the inverse of the proportion of the program that cannot be made parallel (e.g. a 25% (0.25) non-parallel portion limits the speedup to 4x); see the worked example below this list. (I can't get this to sound right, someone fix it please -Rannath <- I will fix it - Daniel B.)
  • Types of serializing interactions found in the MOSBENCH apps:
    • Locking of a shared data structure: as the number of cores increases, so does the time spent waiting for locks
    • Writing to shared memory: as the number of cores increases, so does the time spent in the cache coherence protocol
    • Competing for space in a shared hardware cache: as the number of cores increases, so does the cache miss rate
    • Competing for other shared hardware resources: as the number of cores increases, so does the time lost waiting for those resources
    • Too few tasks for the number of cores leads to idle cores
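
A worked example of the Amdahl's Law bullet above (our numbers, not the paper's): with serial fraction s and n cores, the speedup is 1 / (s + (1 - s) / n), and it can never exceed 1 / s.

 #include <stdio.h>
 
 /* Amdahl's Law: speedup on n cores with serial fraction s. */
 static double speedup(double s, int n)
 {
     return 1.0 / (s + (1.0 - s) / n);
 }
 
 int main(void)
 {
     /* If 25% of the program cannot be parallelized, 48 cores give only
      * about a 3.8x speedup, and no number of cores can beat 1/0.25 = 4x. */
     printf("48 cores: %.2fx\n", speedup(0.25, 48));   /* 3.76x */
     printf("limit:    %.2fx\n", 1.0 / 0.25);          /* 4.00x */
     return 0;
 }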

Multicore packet processing: Section 4.2

Linux's packet processing path requires packets to travel through several queues before they finally become available for the application to use. This technique works well for most general socket applications. In recent kernel releases, Linux takes advantage of multiple hardware queues (when available on the given network interface) or Receive Packet Steering[1] to direct packet flow onto different cores for processing. It can even go as far as directing packet flow to the core on which the application is running, using Receive Flow Steering[2], for even better performance. Linux also attempts to increase performance using a sampling technique where it checks every 20th outgoing packet and directs flow based on its hash. This poses a problem for short-lived connections like those associated with Apache, since there is great potential for packets to be misdirected.

In general this technique performs poorly when there are numerous open connections spread across multiple cores, due to mutex (mutual exclusion) delays and cache misses. In such scenarios it is better to process each connection, with its associated packets and queues, entirely on one core to avoid those issues. The patched kernel's implementation proposed in this article uses multiple hardware queues (which can be accomplished through Receive Packet Steering) to direct all packets from a given connection to the same core. In turn, Apache is modified to only accept a connection if the thread dedicated to processing it is on the same core. If the current core's queue is found to be empty, it will attempt to obtain work from queues located on other cores. This configuration is ideal for numerous short connections, as all the work for a connection is accomplished quickly on one core, avoiding unnecessary mutex delays associated with packet queues and inter-core cache misses.
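
A toy illustration of the steering idea (not the real RPS/RFS kernel code, which uses its own flow hash and per-flow tables): hash the connection's 4-tuple and use the result to pick the core, so every packet of a given connection is handled on the same core.

 #include <stdint.h>
 #include <stdio.h>
 
 #define NCORES 48
 
 /* Toy 4-tuple hash standing in for the kernel's flow hash. */
 static uint32_t flow_hash(uint32_t saddr, uint32_t daddr,
                           uint16_t sport, uint16_t dport)
 {
     uint32_t h = saddr ^ daddr ^ ((uint32_t)sport << 16 | dport);
     h ^= h >> 16;
     h *= 0x45d9f3bu;
     h ^= h >> 16;
     return h;
 }
 
 /* Every packet of a connection maps to one core, so its queue, socket
  * state and (with the Apache change) its worker thread stay core-local. */
 static int core_for_flow(uint32_t saddr, uint32_t daddr,
                          uint16_t sport, uint16_t dport)
 {
     return (int)(flow_hash(saddr, daddr, sport, dport) % NCORES);
 }
 
 int main(void)
 {
     printf("10.0.0.1:40000 -> 10.0.0.2:80 handled on core %d\n",
            core_for_flow(0x0a000001, 0x0a000002, 40000, 80));
     return 0;
 }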


[1] J. Corbet. Receive Packet Steering, November 2009. http://lwn.net/Articles/362339/.

[2] J. Edge. Receive Flow Steering, April 2010. http://lwn.net/Articles/382428/.

Sloppy counters: Section 4.3

Bottlenecks were encountered when the applications under test were referencing and updating shared counters across multiple cores. The solution in the paper is to use sloppy counters, which have each core track its own separate count of references, with a central shared counter used to keep the overall count consistent. This is ideal because each core updates its count by modifying its per-core counter, usually only needing access to its own local cache, which cuts down on waiting for locks or serialization. Sloppy counters are also backwards-compatible with existing shared-counter code, making them much easier to adopt. The main disadvantages of sloppy counters are that they perform poorly in situations where objects are de-allocated often, because de-allocating a sloppy counter is an expensive operation, and that the counters use space proportional to the number of cores.
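
A simplified user-space sketch of the sloppy-counter idea (the real ones live in the kernel and also handle decrements and object de-allocation; the cache-line padding assumes GCC/Clang attribute syntax): each core counts locally in its own cache line and only folds its batch into the shared central counter when a threshold is reached.

 #include <stdatomic.h>
 #include <stdio.h>
 
 #define NCORES    48
 #define THRESHOLD 64    /* how large a local batch may grow */
 
 /* Shared central counter: touched rarely, so little cache-line traffic. */
 static atomic_long central;
 
 /* Per-core counts, padded so each sits in its own cache line and no two
  * cores ever write to the same line (avoids false sharing). */
 struct percore { long n; } __attribute__((aligned(64)));
 static struct percore local[NCORES];
 
 /* Called by a core on its own slot, so no lock is needed. */
 static void sloppy_inc(int core)
 {
     if (++local[core].n >= THRESHOLD) {
         atomic_fetch_add(&central, local[core].n);  /* spill the batch */
         local[core].n = 0;
     }
 }
 
 /* Total = central value plus whatever is still batched per core. */
 static long sloppy_read(void)
 {
     long sum = atomic_load(&central);
     for (int i = 0; i < NCORES; i++)
         sum += local[i].n;
     return sum;
 }
 
 int main(void)
 {
     for (int i = 0; i < 1000; i++)   /* single-threaded demo of the API */
         sloppy_inc(i % NCORES);
     printf("total = %ld\n", sloppy_read());   /* prints 1000 */
     return 0;
 }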

Lock-free comparison: Section 4.4

This section describes a specific instance of unnecessary locking.

Per-Core Data Structures: Section 4.5

Three centralized data structures were causing bottlenecks: the per-superblock list of open files, the vfsmount table, and the packet buffer free list. Each data structure was decentralized into per-core versions of itself. In the case of the vfsmount table, the central data structure was maintained, and any per-core misses were filled in from the central table into the per-core table.
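
A generic sketch of that decentralization pattern (ours, not the kernel's code): each core allocates from its own free list without locking, and only falls back to the shared central list, under a lock, when its own list misses, much like the per-core vfsmount tables falling back to the central table.

 #include <pthread.h>
 #include <stdio.h>
 #include <stdlib.h>
 
 #define NCORES 48
 
 struct buf { struct buf *next; char data[256]; };
 
 /* Central free list: shared, lock-protected, touched only on misses. */
 static struct buf *central_list;
 static pthread_mutex_t central_lock = PTHREAD_MUTEX_INITIALIZER;
 
 /* Per-core free lists: each head padded to its own cache line and only
  * ever touched by its own core, so no locking in the common case. */
 struct percore_list { struct buf *head; } __attribute__((aligned(64)));
 static struct percore_list percore[NCORES];
 
 static struct buf *buf_alloc(int core)
 {
     struct buf *b = percore[core].head;
     if (b) {                                  /* fast path: core-local */
         percore[core].head = b->next;
         return b;
     }
     pthread_mutex_lock(&central_lock);        /* miss: go to shared list */
     b = central_list;
     if (b)
         central_list = b->next;
     pthread_mutex_unlock(&central_lock);
     return b ? b : calloc(1, sizeof(*b));
 }
 
 static void buf_free(int core, struct buf *b)
 {
     b->next = percore[core].head;             /* give it back locally */
     percore[core].head = b;
 }
 
 int main(void)
 {
     struct buf *b = buf_alloc(0);
     buf_free(0, b);
     printf("reused: %s\n", buf_alloc(0) == b ? "yes" : "no");
     return 0;
 }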

Eliminating false sharing: Section 4.6

Variables placed on the same cache line can cause different cores to request that line for reading and writing at the same time, often enough to significantly impact performance. By moving the often-written variable to another cache line, the bottleneck was removed.
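
A minimal before/after sketch of that fix (assuming GCC/Clang attribute syntax and a 64-byte cache line): the hot, frequently written field is moved onto its own cache line so writes to it stop invalidating the line that other cores only read.

 #include <stdint.h>
 #include <stdio.h>
 
 /* Before: both fields share one 64-byte cache line, so the core that
  * constantly writes 'hot_counter' keeps invalidating the cached copy of
  * 'read_mostly' on every other core -- false sharing. */
 struct shared_bad {
     uint64_t hot_counter;   /* written constantly by one core */
     uint64_t read_mostly;   /* read frequently by other cores */
 };
 
 /* After: each field is aligned to its own cache line, so writes to one
  * no longer evict the other from remote caches. */
 struct shared_good {
     uint64_t hot_counter __attribute__((aligned(64)));
     uint64_t read_mostly __attribute__((aligned(64)));
 };
 
 int main(void)
 {
     printf("bad: %zu bytes, good: %zu bytes\n",
            sizeof(struct shared_bad), sizeof(struct shared_good));
     return 0;
 }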

Avoiding unnecessary locking: Section 4.7

Many locks/mutexes have special cases where they do not actually need to lock. Likewise, a mutex that locks a whole data structure can be split into finer mutexes that each lock only a part of it. Both of these changes remove or reduce bottlenecks.
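
A small sketch of lock splitting in general (our example, not the kernel code the paper modifies): instead of one mutex over an entire hash table, each bucket gets its own mutex, so threads working on different buckets no longer serialize on a single lock.

 #include <pthread.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 
 #define NBUCKETS 128
 
 struct node { struct node *next; char key[32]; long val; };
 
 /* One lock per bucket instead of one lock for the whole table. */
 static struct node *bucket[NBUCKETS];
 static pthread_mutex_t bucket_lock[NBUCKETS];
 
 static unsigned hash(const char *key)
 {
     unsigned h = 5381;
     while (*key)
         h = h * 33 + (unsigned char)*key++;
     return h % NBUCKETS;
 }
 
 static void table_put(const char *key, long val)
 {
     unsigned b = hash(key);
     struct node *n = calloc(1, sizeof(*n));
     strncpy(n->key, key, sizeof(n->key) - 1);
     n->val = val;
     pthread_mutex_lock(&bucket_lock[b]);   /* only this bucket is held */
     n->next = bucket[b];
     bucket[b] = n;
     pthread_mutex_unlock(&bucket_lock[b]);
 }
 
 static long table_get(const char *key)
 {
     unsigned b = hash(key);
     long val = -1;
     pthread_mutex_lock(&bucket_lock[b]);   /* other buckets stay available */
     for (struct node *n = bucket[b]; n; n = n->next)
         if (strcmp(n->key, key) == 0) { val = n->val; break; }
     pthread_mutex_unlock(&bucket_lock[b]);
     return val;
 }
 
 int main(void)
 {
     for (int i = 0; i < NBUCKETS; i++)
         pthread_mutex_init(&bucket_lock[i], NULL);
     table_put("cores", 48);
     printf("cores = %ld\n", table_get("cores"));
     return 0;
 }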

Conclusion: Sections 6 & 7

Work in Progress

Rovic P.

-I'm just using this as a notepad, do not copy/paste this section, I will put in a properly written set of paragraphs which will fit with the contribution questions asked. -RP

This research contributes by evaluating the scalability discrepancies of applications programming and kernel programming. Key discoveries in this research show the effectiveness of the kernel in handling scaling amongst CPU cores. This has also shown that scaling in application programming should be more the focus. It has been shown that simple scaling techniques (list techniques) such as programming parallelism (look up more stuff to back this up and quotes). (Sloppy counter effectiveness, possible positive contributions, what has been used (internet search), what hasn’t been used.) Read conclusion, 2nd paragraph.

One reason the required changes are modest is that stock Linux already incorporates many modifications to improve scalability. More speculatively, perhaps it is the case that Linux’s system-call API is well suited to an implementation that avoids unnecessary contention over kernel objects.


Rannath

Everything so far indicates that the MOSBENCH applications can scale to 48 cores. This scaling required a few modest changes to remove bottlenecks. The MIT team speculates that this trend will continue as the number of cores increases. They also state that bottlenecks not caused by the CPU are harder to fix.

We can eliminate most kernel bottlenecks that the applications hit most often with minor changes. Most changes used well-known methodology, with the exception of sloppy counters. This study is limited by the removal of the I/O bottleneck, but it does suggest that traditional implementations can be made scalable.

Critique

What is good and not-so-good about this paper? You may discuss both the style and content;
be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.
Since this is a "my implementation is better than your implementation" paper, the "goodness" of the content can be impartially determined by the fairness and honesty of the authors.

Content (Fairness): Section 5

memcached: Section 5.3

memcached is treated with near perfect fairness in the paper. It's an in-memory service, so the ignored storage I/O bottleneck does not affect it at all. Likewise, the "stock" and "PK" implementations are given the same test suite, so no advantage is given to either. memcached itself is non-scalable, so the MIT team was forced to run one instance per core to keep up throughput. The FAQ on memcached's wiki suggests using multiple instances per server as a workaround to another problem, which implies that running multiple instances of the server is the same, or nearly the same, as running one larger server.[3] In the end memcached was bottlenecked by the network card.

Apache: Section 5.4

Linux has a built-in kernel flaw where network packets are forced to travel through multiple queues before they arrive at the queue where they can be processed by the application. This imposes significant costs on multi-core systems due to queue locking. The flaw inherently diminishes the performance of Apache on a multi-core system, because multiple threads spread across cores are forced to pay these mutex (mutual exclusion) costs. For the sake of this experiment, Apache ran a separate instance on every core, each listening on a different port, which is not a practical real-world configuration but merely an attempt to implement better parallel execution on a traditional kernel. The patched kernel's implementation of the network stack is also specific to the problem at hand, which is processing many short-lived connections across multiple cores. Although this provides a performance increase in the given scenario, network performance might suffer in more general applications. These tests were also rigged to avoid bottlenecks imposed by network and file storage hardware, meaning that making the proposed modifications to the kernel won't necessarily produce the same increase in performance as described in the article. This is very much evident in the test where performance degrades past 36 cores due to limitations of the networking hardware.

Which is not a problem, as the paper specifically states that they are testing what they can improve in spite of hardware limitations. - Rannath

gmake: Section 5.6

Since the inherent nature of gmake makes it quite parallel, the testing and updating attempted on gmake resulted in essentially the same scalability results for both the stock and modified kernels. The only change found was that gmake spent slightly less time at the system level because of the changes made to the system's caching. As stated in the paper, the execution time of gmake relies quite heavily on the compiler used with it, so depending on which compiler was chosen, gmake could run worse or even slightly better. In any case, there seem to be no fairness concerns when it comes to the scalability testing of gmake, as the same application load-out was used for all of the tests.

Conclusion: Sections 6 & 7

Given that all the tests are more or less fair for the purposes of the benchmarks, they support the hypothesis that Linux can be made to scale, at least to 48 cores. Thus the conclusion is fair iff the rest of the paper is fair.

Now you just have to fill in how fair the rest of the paper is.

Style

Style criteria (feel free to add, I have no idea what should go here):
- does the paper present information out of order?
- does the paper present needless information?
- does the paper have any sections that are inherently confusing? Wrong?
- is the paper easy to read through, or does it change subjects repeatedly?
- does the paper have too many "long-winded" sentences, making it seem like the authors are just trying to add extra words to make it seem more important? - I think maybe limit this to run-on sentences.
- Check for grammar

Everything seems to be in logical order. I couldn't find any needless info. Nothing inherently confusing or wrong. Nothing bad on the grammar front either. - Rannath

Some acronyms aren't explained before they are used, so some people reading the paper may get confused as to what they mean (e.g. Linux TLB). Since this paper is meant to be formal, acronyms should be explained, with some exceptions like OS and IBM. - Daniel B.

Your example has no impact on the paper, it was in the "look here for more info" section. Most people wouldn't know what a "translation look-aside buffer" is either.

References

[3] memcached's wiki: http://code.google.com/p/memcached/wiki/FAQ#Can_I_use_different_size_caches_across_servers_and_will_memcache

You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.

the paper itself doesn't need to be referenced more than once as this is a critique of the paper... [1] Silas Boyd-Wickizer et al. "An Analysis of Linux Scalability to Many Cores". In OSDI '10, 9th USENIX Symposium on OS Design and Implementation, Vancouver, BC, Canada, 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Boyd-Wickizer.pdf.


gmake:

gmake Manual

gmake Main Page

Deprecated

Background Concepts

  • Exim: Section 3.1:
    • Exim is a mail server for Unix. It is fairly parallel: the server forks a new process for each connection and forks twice to deliver each message. It spends 69% of its time in the kernel on a single core.
  • PostgreSQL: Section 3.4:
    • As implied by the name, PostgreSQL is a SQL database. PostgreSQL starts a separate process for each connection and uses kernel locking interfaces extensively to provide concurrent access to the database. Due to bottlenecks introduced in its own code and in the kernel code, the amount of time PostgreSQL spends in the kernel increases very rapidly with the addition of new cores. On a single-core system PostgreSQL spends only 1.5% of its time in the kernel; on a 48-core system that jumps to 82%.