DistOS-2011W General Purpose Frameworks for Performance-Portable Code


Introduction

The computer industry has changed drastically in the last 5-10 years. With the shift to multicore computing, we no longer see chips released with ever-increasing clock speeds; instead, the number of cores on a given chip increases over time. Moore's law nevertheless persists, which lets us expect machines with hundreds, if not thousands, of individual cores in the not-so-distant future. Because of this, we can also expect the physical makeup of individual machines to vary significantly and possibly be composed of heterogeneous smaller computational components. As described in the Berkeley View [2], this poses an enormous problem for the performance portability of code, as performance and portability are classically conflicting objectives in software development. This is also a major concern in distributed computing, since it is essential to have efficient code throughout the system regardless of the physical architecture it resides on. For this reason, having code that can 'auto-tune' itself to achieve near-optimal performance on any given hardware architecture will be a key issue moving forward within the distributed computing community, and it will be absolutely essential if the dream of a true distributed operating system running at Internet scale is ever to be realized.

Many problems within the world of linear algebra have auto-tuned libraries which achieve near-optimal performance on a wide range of machines. One such system is ATLAS [9], which is presented here as an example. The problem with these libraries is that they are not general enough to service all of the high-performance computing needs of the scientific community, and the optimizations they employ are too specific to be applied to other problems. To combat this, a new area of general purpose frameworks for performance-portable code is emerging. The purpose of this literature review is to give the reader an overview of the most popular such frameworks that exist today, give an idea of how they work, and briefly discuss their overall performance.
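As a brief illustration of what such a library looks like from the user's side, the sketch below (illustrative only; the matrix size and link flags are assumptions) calls the standard CBLAS dgemm routine, for which ATLAS generates a machine-tuned implementation at install time. The calling code is identical on every platform; only the tuned library underneath changes.

<syntaxhighlight lang="c">
/* Minimal sketch: calling a tuned BLAS kernel such as one built by ATLAS.
 * Assumes the program is linked against a CBLAS implementation, e.g.:
 *   cc dgemm_demo.c -lcblas -latlas      (library names vary by install) */
#include <stdio.h>
#include <cblas.h>

int main(void)
{
    enum { DIM = 512 };                       /* illustrative matrix dimension */
    static double A[DIM * DIM], B[DIM * DIM], C[DIM * DIM];

    /* Fill A and B with some arbitrary data. */
    for (int i = 0; i < DIM * DIM; i++) {
        A[i] = (double)(i % 100) / 100.0;
        B[i] = (double)((i * 7) % 100) / 100.0;
    }

    /* C = 1.0 * A * B + 0.0 * C, row-major, no transposition.
     * The call is the same on every machine; the performance comes from
     * the library implementation that was auto-tuned at install time. */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                DIM, DIM, DIM, 1.0, A, DIM, B, DIM, 0.0, C, DIM);

    printf("C[0][0] = %f\n", C[0]);
    return 0;
}
</syntaxhighlight>

This is precisely the appeal of such libraries, and also their limitation: the tuned interface covers only a fixed set of linear algebra kernels, which is what motivates the more general frameworks surveyed in this review.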

Motivation

The main motivation behind this work was presented in The Landscape of Parallel Computing Research: A View from Berkeley [2]. This document discusses the recent switch of the computer hardware world to multicore processors and the drastic changes the software world must make to adapt to the next generation of computers. The scope of that paper is much broader than can be covered here, so only the aspects that motivate the later discussion will be examined.

Moore's law describes a trend in computer hardware in which the number of transistors that can affordably be put on a chip doubles every 18 months to two years. For processors, this meant that one could expect faster and faster sequential computers to come out on a regular basis. This led to the conventional wisdom that there was no real need for major optimizations or for parallelizing sequential programs, because they would simply run faster once a new generation of processors came out. However, due to many factors, the performance of individual processors has not kept up with the exponential improvements of the past. In spite of this, Moore's law persists: instead of faster and faster individual cores, we now see the number of processor cores on a chip going up. It is currently not uncommon for a personal computer to have 2, 4 or even 8 processor cores. As this trend continues, we can expect thousands of individual cores on a single chip in the not-so-distant future. On top of this, it is not hard to imagine that the physical makeup of individual machines will vary significantly and possibly be made up of heterogeneous smaller computational components as well.

This introduces some major problems into the world of computer software. Software vendors can no longer rely on faster individual cores to improve the performance of their sequential programs. Many already-parallelized programs will not be able to scale as the number of cores increases dramatically, meaning they will effectively reach a performance barrier as well. Another equally, if not more, urgent problem is the issue of code performance portability. Currently, we write code and compile it to be efficient on a given machine. When this code needs to run on a different machine, significant changes may have to be made to the code and the compilation process to get near-optimal performance. This is undesirable because it demands a significant amount of time from highly trained experts to adjust and tune the code by hand. As the variety and complexity of heterogeneous systems increase, the time needed to hand-tune code for specific machines becomes unmanageable. This is a key concern within distributed systems, as overall performance is largely influenced by how quickly individual machines can execute code. If only a small number of machines have optimal code, then either the operating system will always try to use these machines for specific tasks, or it will run the tasks on machines with suboptimal implementations of the given code. Either way, this results in poor overall performance.

One potential solution to these problems is new programming models combined with auto-tuning frameworks. Programming models need to allow the programmer to balance productivity with implementation efficiency. One key way to do this is to abstract the underlying architecture away from the programmer, allowing them to focus on implementing their program and leave the low-level details to the system. However, regardless of the programming model used, the overall performance is determined by the quality of the generated code, which is currently the compiler's job. The compiler is responsible for selecting optimizations, choosing the parameters for those optimizations, and choosing between alternative implementations of library kernels. This search space is often too large for a compiler to make all of the right choices all of the time on every different computer architecture. It is also very difficult to add new optimizations to existing compilers, and doing so is often not worth the time required. Because of this, most optimal solutions have to be manually tuned for each system, which takes a significant human investment. This is where auto-tuners step in. An auto-tuned algorithm is one that effectively searches the optimization space (either at compile time or at run time) to produce efficient implementations on a wide array of hardware architectures. Auto-tuners have become popular in the last few years as a way to produce highly portable, efficient scientific code. Many of them work by generating many different variants of the algorithm for a given problem and then benchmarking each one on a specific machine. The search also tries many combinations of the optimizations available through the compiler. This is a lengthy process, sometimes lasting several hours, but it only needs to be performed once, at install time. The resulting implementation is often significantly faster than a naive implementation and sometimes even faster than vendor-supplied libraries. This is possible because the auto-tuner searches the optimization space in ways that a person would not, often finding unusual and unexpected optimization combinations. In the next section, an example of an auto-tuner implementing linear algebra libraries is presented.
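To make the generate-and-benchmark loop described above concrete, here is a minimal, hypothetical sketch (not taken from any of the surveyed frameworks): it times a blocked matrix-multiply kernel for a handful of candidate block sizes and keeps the fastest one. A real auto-tuner explores a far larger space of code variants, parameters and compiler flags, and would repeat each measurement to reduce noise, but the select-by-measurement principle is the same.

<syntaxhighlight lang="c">
/* Hypothetical install-time auto-tuning loop, for illustration only:
 * time one kernel variant per candidate block size and remember the
 * fastest.  Real auto-tuners such as ATLAS search a much larger space
 * of variants, parameters and compiler flags. */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define N 512

static double A[N][N], B[N][N], C[N][N];

/* One parameterized kernel variant: blocked (tiled) matrix multiply. */
static void matmul_blocked(int bs)
{
    memset(C, 0, sizeof C);
    for (int ii = 0; ii < N; ii += bs)
        for (int kk = 0; kk < N; kk += bs)
            for (int jj = 0; jj < N; jj += bs)
                for (int i = ii; i < ii + bs && i < N; i++)
                    for (int k = kk; k < kk + bs && k < N; k++)
                        for (int j = jj; j < jj + bs && j < N; j++)
                            C[i][j] += A[i][k] * B[k][j];
}

int main(void)
{
    const int candidates[] = { 8, 16, 32, 64, 128 };  /* the search space */
    int best_bs = candidates[0];
    double best_time = 1e30;

    for (size_t v = 0; v < sizeof candidates / sizeof candidates[0]; v++) {
        clock_t start = clock();
        matmul_blocked(candidates[v]);
        double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("block size %3d: %.3f s\n", candidates[v], elapsed);
        if (elapsed < best_time) {
            best_time = elapsed;
            best_bs = candidates[v];
        }
    }

    /* A real framework would now persist this choice (for example, by
     * generating a header or configuration file) so that later builds
     * use the winning variant without repeating the search. */
    printf("selected block size: %d\n", best_bs);
    return 0;
}
</syntaxhighlight>

Because the search is driven entirely by measurements on the target machine, the same source code selects different configurations on different hardware, which is exactly the performance portability the frameworks discussed here aim for.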

References

[1] Jason Ansel, Cy Chan, Yee Lok Wong, Marek Olszewski, Qin Zhao, Alan Edelman, and Saman Amarasinghe. Petabricks: a language and compiler for algorithmic choice. In Proceedings of the 2009 ACM SIGPLAN conference on Programming language design and implementation, PLDI ’09, pages 38–49, New York, NY, USA, 2009. ACM.

[2] Krste Asanovic, Ras Bodik, Bryan Christopher Catanzaro, Joseph James Gebis, Parry Husbands, Kurt Keutzer, David A. Patterson, William Lester Plishker, John Shalf, Samuel Webb Williams, and Katherine A. Yelick. The landscape of parallel computing research: A view from Berkeley. Technical Report UCB/EECS-2006-183, EECS Department, University of California, Berkeley, Dec 2006.

[3] Jee W. Choi, Amik Singh, and Richard W. Vuduc. Model-driven autotuning of sparse matrix-vector multiply on gpus. In Proceedings of the 15th ACM SIGPLAN symposium on Principles and practice of parallel programming, PPoPP ’10, pages 115–126, New York, NY, USA, 2010. ACM.

[4] James Demmel, Jack Dongarra, Viktor Eijkhout, Erika Fuentes, Antoine Petitet, Richard Vuduc, R. Clint Whaley, and Katherine Yelick. Self-adapting linear algebra algorithms and software. Proc. IEEE, 93(2):293–312, February 2005.

[5] Nathan Thomas, Gabriel Tanase, Olga Tkachyshyn, Jack Perdue, Nancy M. Amato, and Lawrence Rauchwerger. A framework for adaptive algorithm selection in stapl. In Proceedings of the tenth ACM SIGPLAN symposium on Principles and practice of parallel programming, PPoPP ’05, pages 277–288, New York, NY, USA, 2005. ACM.

[6] A. Tiwari, Chun Chen, J. Chame, M. Hall, and J.K. Hollingsworth. A scalable auto-tuning framework for compiler optimization. In Parallel Distributed Processing, 2009. IPDPS 2009. IEEE International Symposium on, pages 1 –12, May 2009.

[7] Sundaresan Venkatasubramanian and Richard W. Vuduc. Tuned and wildly asynchronous stencil kernels for hybrid cpu/gpu systems. In Proceedings of the 23rd international conference on Supercomputing, ICS ’09, pages 244–255, New York, NY, USA, 2009. ACM.

[8] Michael J. Voss and Rudolf Eigenmann. High-level adaptive program optimization with ADAPT. SIGPLAN Not., 36:93–102, June 2001.

[9] Clint Whaley, Antoine Petitet, and Jack J. Dongarra. Automated empirical optimization of software and the ATLAS project. Parallel Computing, 27, 2000.

[10] Qing Yi and R. Clint Whaley. Automated transformation for performance-critical kernels. In Proceedings of the 2007 Symposium on Library-Centric Software Design, LCSD ’07, pages 109–119, New York, NY, USA, 2007. ACM.