Talk:COMP 3000 Essay 2 2010 Question 10
Revision as of 03:31, 2 December 2010
mClock: Handling Throughput Variability for Hypervisor IO Scheduling
Notes to Group
We might as well work directly on the main page
I think I've moved all the existing text to the main page, as things were being edited in 2 places. Hope that's okay. --Dagar 23:01, 30 November 2010 (UTC)
that's fine! hey guys are we good with the information that we gathered? i think we should find external links on mClock if possible. --npatel1
I can't find any external links on mClock
Group Members
Please leave your name and email address if you are in the group
- Daniel Agar - dagar@scs.carleton.ca
- Xi Chen - xintai1985@gmail.com
- Niravkumar Patel - npatel1@connect.carleton.ca
- Tuan Pham - tpham3@connect.carleton.ca
- Aaron Leblanc - aellebla@connect.carleton.ca
- Nisrin Abou-Seido - naseido@connect.carleton.ca
Layout
Paper
Authors:
- Ajay Gulati VMware Inc. Palo Alto, CA, 94304 agulati@vmware.com
- Arif Merchant HP Labs Palo Alto, CA 94304 arif.merchant@acm.org
- Peter J. Varman Rice University Houston, TX, 77005 pjv@rice.edu
Link to the paper:
mClock: Handling Throughput Variability for Hypervisor IO Scheduling
Background Concepts
[old]
Hypervisors are responsible for multiplexing hardware resources among virtual machines while providing a degree of isolation through resource management. Three controls are used: a reservation, which sets a minimum bound; a limit, which sets a maximum upper bound on the allocation; and shares, which allocate resources proportionally according to the weight each VM has, subject to the reservation and limit. This is interesting because virtualization has been very successful; people are comfortable putting multiple VMs on one host without worrying about the effect of one VM's performance on another. However, contention for I/O resources can suddenly lower a VM's allocation; the available throughput changes over time, so allocations must be adjusted dynamically. mClock is a better alternative because it supports all three controls in a single algorithm, handles variable and unknown capacity, and is fast to compute. Notably, the limit control on VM allocation does not weaken as more VMs are added, and mClock's reservations are still met.
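The three controls can be made concrete with a small sketch. This is illustrative only (names and the simple clamping logic are assumptions, not from the paper; a real scheduler like mClock also redistributes capacity that clamping leaves unused):

```python
# Hypothetical sketch of the three per-VM controls described above.
# Names are illustrative; this is not the mClock algorithm itself.
from dataclasses import dataclass

@dataclass
class VMControls:
    reservation: float  # minimum IOPS guaranteed (lower bound)
    limit: float        # maximum IOPS allowed (upper bound)
    shares: float       # weight for proportional allocation

def proportional_allocation(vms, capacity):
    """Split capacity by shares, then clamp each VM to [reservation, limit]."""
    total_shares = sum(v.shares for v in vms)
    return [
        min(v.limit, max(v.reservation, capacity * v.shares / total_shares))
        for v in vms
    ]

vms = [VMControls(100, 500, 1), VMControls(200, 1000, 3)]
print(proportional_allocation(vms, 800))  # [200.0, 600.0]
```

With shares of 1:3 and 800 IOPS of capacity, the VMs get 200 and 600 IOPS; both values already lie within their [reservation, limit] bounds, so the clamps are inactive here.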
more about mclock here
mClock is a resource-allocation algorithm that helps hypervisors manage I/O requests from multiple virtual machines simultaneously. Essentially, mClock dynamically adjusts the proportions of resources each VM receives based on how active each VM currently is. While mClock constantly changes the physical resource allocation to each VM, it lets each VM hold onto the illusion that it has full control of all system resources. As a result, performance can be increased for VMs that need it, without letting the others know that “their” resources are being distributed to other machines.
- Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper. Here are some notes I took from the paper. I did not want to put them on the main page because they are copied directly from the paper:
The intuitive idea behind the mClock algorithm is to logically interleave a constraint-based scheduler and a weight-based scheduler in a fine-grained manner. The constraint-based scheduler ensures that VMs receive at least their minimum reserved service and no more than the upper limit in a time interval, while the weight-based scheduler allocates the remaining throughput to achieve proportional sharing.
mClock uses two main ideas: multiple real-time clocks and dynamic clock selection.
Each VM IO request is assigned three tags, one for each clock: a reservation tag R, a limit tag L, and a proportional share tag P for weight-based allocation. Different clocks are used to keep track of each of the three controls, and tags based on one of the clocks are dynamically chosen to do the constraint-based or weight-based scheduling.
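The tag mechanics above can be sketched in a few lines. This is a simplified illustration of the idea (the tag-update rule of "previous tag plus the inverse of the control, or the current time" follows the paper's description, but the data structures, names, and the omission of details like idle-VM tag synchronization are my own assumptions):

```python
# Simplified sketch of mClock's per-request tagging and two-phase scheduling.
# Each VM carries controls r (reservation), l (limit), w (weight/shares)
# and its previous tags R, L, P; each tag advances by 1/control per request,
# but never falls behind the current time.

def assign_tags(vm, now):
    """Assign the three tags for a VM's newly arrived request."""
    vm["R"] = max(vm["R"] + 1.0 / vm["r"], now)  # reservation clock
    vm["L"] = max(vm["L"] + 1.0 / vm["l"], now)  # limit clock
    vm["P"] = max(vm["P"] + 1.0 / vm["w"], now)  # proportional-share clock
    return vm["R"], vm["L"], vm["P"]

def schedule(vms, now):
    """Constraint-based phase: serve a VM whose reservation is behind.
    Otherwise, weight-based phase: smallest share tag among VMs under limit."""
    behind = [v for v in vms if v["R"] <= now]
    if behind:
        return min(behind, key=lambda v: v["R"])
    eligible = [v for v in vms if v["L"] <= now]
    return min(eligible, key=lambda v: v["P"]) if eligible else None
```

The dynamic clock selection shows up in `schedule`: reservation tags are consulted first to guarantee minimums, and only then does the share clock distribute spare throughput, with the limit clock gating who is eligible.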
Research problem
- What is the research problem being addressed by the paper? How does this problem relate to past related work?
[old]
Modern hypervisors today use a rather primitive form of IO resource allocation. Currently an algorithm called PARDA (Proportional Allocation of Resources in Distributed storage Access) [1] is used to allocate IO resources to each VM running on a particular storage device. Unfortunately, the IO resource allocation algorithm on each host uses a fair scheduler called SFQ (Start-time Fair Queuing) [2]. This means that PARDA allocates IO resources to VMs in proportion to the number of IO shares on the host, but each host's fair scheduler divides those IO shares among its VMs equally. The problem is that whenever another VM is added, or a background application is run on one of the VMs, all the other VMs suffer a large performance loss. In the case of adding another VM, there is a 40% performance drop. This is unacceptable when applications have minimum performance requirements to run effectively: an application with minimum resource requirements can be running fine on a given VM, but as soon as the load on the shared storage device increases, the application may run poorly or even crash.
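A quick bit of arithmetic makes the dilution problem concrete. The numbers here are hypothetical (only the 40% figure comes from the paper); the point is that an equal fair split gives every added VM a slice of everyone else's throughput:

```python
# Illustrative arithmetic (hypothetical numbers, not from the paper):
# an equal fair-share split dilutes every existing VM when a VM is added,
# regardless of any VM's minimum requirements.
device_iops = 1200

for n_vms in (2, 3, 4):
    per_vm = device_iops / n_vms
    print(f"{n_vms} VMs -> {per_vm:.0f} IOPS each")
# Going from 2 VMs to 3 cuts each VM's share from 600 to 400 IOPS (-33%),
# even if some application needs at least 500 IOPS to run acceptably.
```

With a reservation control, as in mClock, that 500 IOPS minimum would be protected and only the surplus above the reservations would shrink.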
Related work:
- http://www.fortuitous.com/docs/whitepapers/Linux2.6-Perf-Intro.pdf - Linux schedulers (CFQ, SFQ)
Contribution
- What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)
[old]
The article introduces the mClock algorithm, which handles multiple VMs in a variable-throughput environment. The Quality of Service (QoS) requirements for a VM are expressed as a minimum reservation, a maximum limit, and a proportional share, and mClock is able to meet these controls under varying capacity. Notably, the algorithm also proves efficient in clustered architectures, and it provides greater isolation between VMs.
In this paper there were many terms that were used but never explained, such as orders (used in the graphs), LUN, PARDA, etc. Also, I did not like the way the calculations were written out in sentences: "For a small reference IO size of 8KB and using typical values for mechanical delay Tm = 5ms and peak transfer rate, Bpeak = 60 MB/s, the numerator = Lat1*(1 + 8/300) ≈ Lat1". To me this was very messy and made me skip through the calculation parts of the sentences.
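The quoted calculation is easier to follow when written out. A quick numeric check of the paper's approximation (the 300 is Tm * Bpeak = 5ms * 60 MB/s = 300 KB, against the 8 KB reference IO size):

```python
# Checking the paper's quoted approximation numerically.
Tm = 5e-3        # mechanical delay, seconds (from the paper's text)
Bpeak = 60e3     # peak transfer rate, KB/s (60 MB/s)
io_size = 8      # reference IO size, KB

factor = 1 + io_size / (Tm * Bpeak)  # Tm * Bpeak = 300 KB
print(factor)  # 1.0266..., so Lat1 * factor ≈ Lat1, as the paper claims
```

So the correction term changes the numerator by under 3%, which is why the paper treats it as approximately Lat1.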
- Storage IO allocation is hard
- mClock contributions
• Supports reservation, limit and shares in one place
• Handles variable IO performance seen by the hypervisor
• Can be used for other resources such as CPU, memory & network IO allocation as well
- Future work
• Better estimation of reservation capacity in terms of IOPS
• Add priority control along with R, L, S
• Mechanisms to set R, L, S and other controls to meet application-level goals
Critique
- What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.
[old]
This paper addresses the current limitations of IO resource allocation for hypervisors by proposing a new and more efficient algorithm for allocating IO resources. Older methods were limited to providing proportional shares only; mClock incorporates proportional shares as well as a minimum reservation and a maximum limit on IO resources.
Older methods of IO resource allocation suffered a terrible performance loss: whenever the load on the shared storage device increased, or another VM was added, the performance of all VMs would drop considerably. Older methods thus provided unreliable IO management in hypervisors.
mClock was able to present VMs with a guaranteed minimum reservation of IO resources. This means that application performance will never drop below a certain point. This provides much better application stability on each of the VMs, and better overall performance.
"dmClock (used for cluster-based storage systems) runs a modified version of mClock at each server. There is only one modification to the algorithm to account for the distributed model in the Tag-Assignment component." - from the paper
- Tuan, I think the term PARDA was explained in the article. It stands for Proportional Allocation of Resources in Distributed storage Access. It was basically a priority queue for the storage devices and VMs.--Aaron Leblanc 22:47, 30 November 2010 (UTC)
- I see, I'll have to read it over again