DistOS 2014W Lecture 16
Public Resource Computing
Outline for upcoming lectures
All the papers that will be covered in upcoming lectures have been posted on the wiki. These papers will be more difficult than the ones we have covered so far, so we should be prepared to allot more time to studying them and come to class prepared. We may abandon the group-discussion format; instead, everyone will ask questions about what they did not understand from the paper, which should let us discuss the technical details in more depth. The professor will not be teaching the next class; instead, our TA will discuss two papers on how to conduct a literature survey, which should help with our projects. The rest of the papers deal with many closely related systems. In particular, we will be looking at distributed hash tables and systems that use distributed hash tables.
Building on today's material, we will also look at how to get the kind of distribution that public resource computing provides, but with greater flexibility.
Project proposal
There were 11 proposals, of which the professor found 4 to be in an acceptable state and graded them 10/10. The professor has emailed everyone with feedback on their project proposal so that the comments can be incorporated and the proposals resubmitted by this coming Saturday (the extended deadline). The deadline has been extended so that everyone can work out the flaws in their proposal and get the best grade (10/10). Project presentations will be held on April 1st and 3rd. Those who got 10/10 should be ready to present on Tuesday, as they are ahead and better prepared; there should be 6 presentations on Tuesday and the rest on Thursday. Undergrads will have their final exam on April 24th, which is also the date to turn in the final project report.
Public Resource Computing (March 11)
- Anderson et al., "SETI@home: An Experiment in Public-Resource Computing" (CACM 2002). DOI: http://dx.doi.org/10.1145/581571.581573 (Proxy: http://dl.acm.org.proxy.library.carleton.ca/citation.cfm?id=581573)
- Anderson, "BOINC: A System for Public-Resource Computing and Storage" (Grid Computing 2004). DOI: http://dx.doi.org/10.1109/GRID.2004.14 (Proxy: http://ieeexplore.ieee.org.proxy.library.carleton.ca/stamp/stamp.jsp?tp=&arnumber=1382809)
Keywords
BOINC & SETI@home: lowered entry barriers, master/slave relationship, work units, embarrassingly parallel (http://en.wikipedia.org/wiki/Embarrassingly_parallel), inverted use cases, gamification, redundant computing, consensual botnets, centralized authority, untrusted clients, replication as reliability, exponential backoff, limited server reliability engineering.
Embarrassingly parallel: ease of parallelization, communication-to-computation ratio, discrete work units, abstractions help, MapReduce.
Introduction
The papers assigned for reading were on SETI@home and BOINC. BOINC is the platform SETI@home is built upon; other projects, such as Folding@home, run on the same platform. In particular, we want to discuss the following: What is public resource computing? How does it relate to the various computational models and systems that we have seen this semester? How is it similar in design, purpose, and technologies? How is it different?
The main purpose of public resource computing was to have a universally accessible, easy-to-use way of sharing resources. This is interesting because it differs from some of the systems we have looked at, which deal with the sharing of information rather than resources.
For computational parallelism, you need a highly parallel problem; SETI@home and Folding@home are examples of such problems. In public resource computing, particularly with the BOINC system, you divide the problem into work units. People voluntarily install a client on their machines, and the client works on the work units sent to it in return for credits.
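To make the client's role concrete, here is a minimal, hypothetical sketch of the volunteer client's lifecycle in Python; fetch_work_unit, compute, and submit_result are placeholder names, not BOINC's actual API.

    # Hypothetical sketch of a volunteer client's lifecycle: fetch a work
    # unit, compute on it using spare cycles, and return the result in
    # exchange for credit. These names are placeholders, not BOINC's API.
    def run_client(fetch_work_unit, compute, submit_result):
        credits = 0
        while True:
            unit = fetch_work_unit()                 # server hands out one work unit
            if unit is None:
                break                                # no work available right now
            result = compute(unit)                   # done locally on donated cycles
            credits += submit_result(unit, result)   # server grants credit for the result
        return credits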
In the past, it has been institutions, such as universities, running services while other people connect in to use those services. Public resource computing turns this use case on its head: the institution (e.g., the university) is the one using the service, while other people voluntarily contribute to it. In the file systems we have covered so far, people want access to the files stored in a networked system; here, a system wants access to people's machines in order to utilize their processing power.
Since users contribute voluntarily, how do you make them care about the system? Gamification causes many users to become invested: people do work for credits, and those with the most credits are showcased as major contributors. Users can also see the amount of resources (e.g., processor cycles) they have devoted to the cause in the GUI of the installed client. When the client produces a result for the work unit it was processing, it sends the result to the server.
Important to the design of the BOINC platform is that it can be easily deployed by scientists (i.e., non-IT specialists). It was meant to lower the entry barrier for the types of scientific computing that lend themselves to being embarrassingly parallel. The platform uses a simple design with commodity software (PHP, Python, MySQL).
For fault tolerance against problems such as malicious clients or faulty processors, redundant computing is used: work units are processed multiple times. The server stops distributing a work unit in either of the following two cases:
- It has received the required number of results, n, for the work unit; it then takes the answer that the majority of clients gave.
- It has transmitted the work unit m times and has not gotten back the n expected responses.
It should be noted that, in doing this, it is possible that some work units are never processed. The probability of this happening can be reduced by increasing the value of m, though.
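As a rough illustration of this redundancy scheme, here is a minimal sketch of the server-side bookkeeping, assuming a quorum of n results and a cap of m transmissions; the function and data layout are illustrative, not BOINC's actual implementation.

    from collections import Counter

    # Illustrative sketch of the rules above: a work unit is finished once
    # n results are in and the majority answer is taken; it is retired if
    # it has been sent out m times without reaching that quorum.
    def check_work_unit(results, times_sent, n=3, m=10):
        if len(results) >= n:
            # Enough results: take the majority answer; disagreeing
            # results flag faulty or malicious clients.
            answer, votes = Counter(results).most_common(1)[0]
            outliers = len(results) - votes
            return ("done", answer, outliers)
        if times_sent >= m:
            # Sent m times without a quorum: give up on this unit, which
            # is why some work units may never be fully processed.
            return ("retired", None, None)
        return ("needs_more", None, None)

    # Example: two clients agree, one returns rubbish.
    print(check_work_unit(["candidate_signal", "candidate_signal", "junk"], times_sent=4))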
In the case of SETI@home, the number of available work units is fixed. The system scales by increasing the amount of redundant computing: if more clients join the system, they just end up getting the same work units.
Comparison to Botnets
So, given all this, how would we generally define public resource computing, or public interest computing? It is essentially using the public as a resource: you voluntarily give up your spare compute cycles for projects (this is a little like donating blood; public resource computing is a vampire). Looking at public resource computing this way, we can contrast it with a botnet. What is the difference? Both systems utilize client machines to perform or aid in some task.
The answer: consent.
You consensually contribute to a project rather than being (unknowingly) forced to. Other differences are the ends the resources are put to, as well as reliability. With a botnet, you can trust that a higher proportion of the machines are following your commands exactly (as their owners have no idea they are doing so). In public resource computing, how can you guarantee that clients are doing what you want? You can't. You can only verify the results.
General Comparisons
A basic comparison with the other file systems we have covered so far:
- Inverted use cases. In the file systems we have covered so far, clients want access to the files stored in a networked system; here, a system wants access to clients' machines in order to utilize their processing power. The flow is inverted.
- In other file systems it was about many clients sharing data; here it is more about sharing processing power. In Folding@home, the system can store some of its data on clients' storage, but that is not public resource computing's main focus.
- It is nothing like systems such as OceanStore, where there is no centralized authority. In BOINC, there is still a clear master/slave relationship between the centralized server and the clients installed across users' machines; in that sense it is more like GFS, which also had a centralized metadata server.
- Public resource systems are like botnets, but people install the clients with consent, and there is no need for communication between the clients (it is not a peer-to-peer network). Clients could be made to communicate peer to peer, but that would risk security, since clients are not trusted in the network.
- Skype was modelled much like a public resource computing network (before Microsoft took over). The whole model of Skype was that the infrastructure just ran on the computers of those who had downloaded the client (like a consensual botnet). Once a person downloaded the client, they became part of the system. As with public resource computing, you donated some of your resources in order to support the distributed infrastructure. The system did not assume that everyone was reliable, only that some people are reliable some of the time. The network would choose supernodes to act as routers; these supernodes would be the machines with higher reliability and better processing power. After Microsoft's takeover, the supernodes were centralized and supernode election was removed from the system.
Trust Model and Fault Tolerance
In this central model, you have a central resource and distribute work to clients who process the work and send back results. Once they do, you can send them more work. In this model, can you trust the client to complete the computation successfully? The answer is not necessarily--there could be untrustworthy clients sending back rubbish answers and bad results.
So, how does SETI address the question of fault tolerance? It uses replication for reliability and redundant computing: work units are assigned to multiple clients, and the results returned to the server can be analyzed for outliers in order to detect malicious users. That, however, only addresses fault tolerance on the client side.
However, SETI has a centralized server, which can go down. When it does, clients use exponential back-off, waiting progressively longer before trying to send their results again. Even so, whenever the server comes back up, many clients may try to access it at once and crash it again; essentially, the server manufactures its own DDoS attack through its own inadequacies. The exponential back-off approach is similar to the one used to resolve TCP congestion.
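A minimal sketch of such client-side exponential back-off, with jitter added so that clients do not all retry at the same instant, might look like the following; upload_result is a hypothetical callable that raises ConnectionError on failure, and the delay values are made up for illustration.

    import random
    import time

    # Retry an upload with exponential back-off plus jitter, so clients
    # spread out their retries instead of hammering a recovering server.
    def send_with_backoff(upload_result, base_delay=60, max_delay=4 * 3600):
        delay = base_delay
        while True:
            try:
                upload_result()
                return
            except ConnectionError:
                # Sleep somewhere in [0, delay], then double the window,
                # capped at max_delay.
                time.sleep(random.uniform(0, delay))
                delay = min(delay * 2, max_delay)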
It can be noted that there is almost no reliability engineering here, though. These are just standard servers running with one backup that is manually failed over to. This can give an idea of how asymmetric the relationship is.
One way to explain this is to look at the actual service and who runs it. Reliability matters most when a large number of people use a service and would be upset were it to go down. In this case, it is the university using the service, and the clients are just helping out by providing resources; if the service goes down, it is the university's problem, and it can deal with it on its own. It is interesting to compare this strategy to highly reliable systems like Ceph or OceanStore, which can recover data when a node crashes.
The idea of redundancy relates to OceanStore a little, but how would OceanStore map onto public resource computing? In place of the OceanStore metadata cluster, there is a central server; in place of the data store, there are machines doing computation. What maps most directly onto this model of public resource computing is the notion of one central node and a collection of outlying nodes. This is very much a master/slave relationship, though a voluntary one. In this relationship, CPU cycles are cheap but bandwidth is expensive, which is why work units are sent infrequently. Storage sits in between: sometimes data is pushed to the clients, and when this is done, the resemblance of public resource computing to OceanStore is stronger.
Embarrassingly Parallel
When you are doing parallel computations, you have a mixture of computation and communication. You do the computation separately, but you always have to do some communication. So how much communication do you have to do per unit of computation? In some cases there are many dependencies, meaning that a large amount of communication is required (e.g., weather system simulations).
Embarrassingly parallel means that a given problem requires a minimum of communication between the pieces of work. Typically you have a bunch of data to analyze, and it is all independent, so you can simply split it up and distribute the work for analysis. In an embarrassingly parallel problem, speedup is trivial: because so little communication is needed, the more processors you add, the faster the system runs. For problems that are not embarrassingly parallel, the system can actually slow down when more processors are added, because more communication is required. With distributed systems, you either accept the communication costs or modify your abstractions to get closer to an embarrassingly parallel formulation. Since speedup is trivial when the problem is embarrassingly parallel, you don't get much praise for achieving it.
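As a toy illustration, the sketch below splits independent work units across processes with Python's multiprocessing module; since the chunks are analyzed with no communication between workers, adding processes gives near-linear speedup. The analyze_chunk function and the data are stand-ins, not anything from SETI@home.

    from multiprocessing import Pool

    # Stand-in for real per-work-unit analysis (e.g., scanning a slice of
    # signal data for a peak). Each chunk is independent of the others.
    def analyze_chunk(chunk):
        return max(chunk)

    if __name__ == "__main__":
        # Pretend each work unit is a small slice of recorded data.
        work_units = [[i, i * 2, i * 3] for i in range(1, 9)]
        with Pool(processes=4) as pool:
            # No inter-worker communication, so the communication-to-
            # computation ratio is tiny and speedup is nearly linear.
            results = pool.map(analyze_chunk, work_units)
        print(results)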
SETI is an example of an "embarrassingly parallel" workload. The inherent nature of the problem lends itself to be divided into work-units and be computed in-parallel without any need to consolidate the results. It is called "embarrassingly parallel" because there is little to no effort required to distribute the work load in parallel.
One more example of "embarrassingly parallel" in what we have covered so far could be web-indexing in GFS. Any file system that we have discussed so far that doesn't trust the clients can be modeled to work as a public sharing system.
Note: Public resource computing is also very similar to MapReduce, which we will be discussing later in the course. Make sure to keep public resource computing in mind when we reach this.