Talk:COMP 3000 Essay 2 2010 Question 4
Latest revision as of 00:39, 2 December 2010
Group Essay 2
Hello Group. Please post your information here. I assume everybody read the email in their Connect account. Does anyone specifically want to send him the email with the group members? If not, I'll just go ahead tomorrow at about 13:00 and send the email with the group members who wrote their contact information here. - Sschnei1 03:25, 15 November 2010 (UTC)
Sebastian Schneider sschnei1@connect.carleton.ca
Matthew Chou mchou2@connect.carleton.ca
Mark Walts mwalts@connect.carleton.ca
Henry Irving hirving@connect.carleton.ca
Jean-Benoit Aubin jbaubin@connect.carleton.ca
Nishant Pradhan npradhan@connect.carleton.ca
Only Paul Cox didn't answer the email I sent this morning.
Paul Cox pcox
And I just sent an email to the teacher.
--Jean-Benoit
Paper
the paper's title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.
Title: Accountable Virtual Machines
Authors: Andreas Haeberlen, Paarijaat Aditya, Rodrigo Rodrigues, Peter Druschel
Affiliations: University of Pennsylvania, Max Planck Institute for Software Systems (MPI-SWS)
Link to Paper: Accountable Virtual Machines
Supplementary Information: Accountable distributed systems and the accountable cloud - background of similar AVM implementation for distributed systems.
Background Concepts
Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.
Accountable Virtual Machine (AVM)
Deterministic Replay: A machine can record its execution into a log file so that the execution can be replayed later, allowing an observer to follow exactly what was happening on the machine. Remus [1] contributed a highly efficient snapshotting mechanism that such replays can build on.
Accountability: In the context of this paper, accountability means that every action performed on the virtual machine is recorded and can be used to verify the correctness of the application. The AVM is responsible for its actions and must answer for them to an auditor.
Remote Fault Detection: Programs like GridCop [2] can monitor the progress and execution of a remotely executing program by requesting a beacon packet. When the remote computer is sending the packets, the receiving/logging computer must be trusted (hardware, software, OS) so that the receipt of packets remains consistent. To detect a fault in a remote system, every packet must arrive safely, and any interruptions during logging must be handled, or the inconsistencies will produce an inaccurate outcome. An AVM, by contrast, does not require trusted hardware and can be used over wide-area networks.
Cheat Detection: Cheating in games, or any specific modification of a program, can be either scanned for [3][4] or prevented [5][6] by certain programs. The issue with this scanning and preventative software is that it must already know about the specific cheats or situations it handles. An AVM is designed to counter any kind of cheat in general.
Integrity Violations: An integrity violation occurs when the observed operations of an execution do not match those of a trusted host/reference execution.
- The word "node" is used to refer to a computer or server in order to represent the interactions between one computer and another, or a computer and a server.
Research problem
What is the research problem being addressed by the paper? How does this problem relate to past related work?
- Possible alternative for the first part:
The research presented in this paper tackles a problem that has haunted computer scientists for a long time: how can you be sure that the software running on a remote machine is working correctly, or as intended? Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a trust relationship between users and a host. When a node (user or computer) expects some result or feedback from another node, it would hope that the interaction is independent of the particular node and depends only on the intended software. Say node A interacts with node B via execution exe1, and node A also interacts with node C via exe1, but node C has been modified and responds with exe2. The responses of B and C will then differ. Being able to prove, beyond doubt, that node C has been modified is the purpose of this paper.
- Let me know what you think about it. I removed the redundant part, and I think made it clearer and more concise. Jbaubin
- looks good to me, we'll put this part into the final essay instead of mine below --Mchou2 20:03, 22 November 2010 (UTC)
/// omit
Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a system of trust between users and a host. These examples require a certain amount of trust between one user and another, as well as between the user and the host. When a node (user or computer) expects some result or feedback from another node, it would hope that an interaction done with node A is the same as it would be with another node, node B. Say, for example, that node A interacts with node B via execution exe1. When nodes A and B interact with node C, they would both expect to interact via exe1; if node C instead behaves differently and executes exe2, it would be beneficial to be notified of this difference. Some examples make this concrete. Node A is playing a game with node B, and the game executed on node B is the same as on A; when node A plays with node C, node C is executing the same operations as node A plus a cheating program. Or: node A buys some products from node B's server, which processes the order and then deletes node A's sensitive information (execution 1); when node A buys from node C's server, the order is processed, but node A's sensitive information is also rerouted to another server so that it can be used without permission. These are only a few situations where the operations in an execution need to be logged and verified. The problem being handled here is to create a procedure by which a node can be held accountable, logging the operations in an execution to provide evidence of faults committed by a node.
////
Previous work in preventing or detecting integrity violations can be separated into different categories. The first is Cheat Detection: in many games there are cheats that users run to gain benefits that were not intended by the original game.[4] These detectors are not dynamic, in the sense that they do not actually detect whether a cheat is being used; rather, they check whether a previously logged cheating operation is running on the user's system. For example, if a known cheating program named aimbot.exe can be run in the background of a game such as Counter-Strike, and the PunkBuster system installed on the user's machine already has aimbot.exe logged as a cheating program by its developers, PunkBuster might notify the current game servers or even prevent the user from playing until the aimbot.exe process is no longer running.
Accountability is another important problem that many have already worked on. The main goal of an accountable system is to be able to determine, without a doubt, that a node is faulty, and to prove it with solid evidence. It can also be used to defend a node against false accusations. Numerous systems already use accountability, but most were tied to specific applications, where a point of reference must be used for comparison. For example, PeerReview [7], a system closely related to the research team's work, must be implemented inside the application, which makes it less portable; it cannot be deployed as easily as an AVM. PeerReview verifies the inbound and outbound packets to see whether the software is running as intended.
Another related problem is remote fault detection in a distributed system: how can we determine whether a remote node is running the code correctly, or whether the machine itself is working as intended? Inspecting network activity is a common solution; by looking at a node's inbound and outbound traffic, one can tell how the software, or in the case of an AVM the whole virtual machine, is operating. GridCop [8] is an example that inspects a small number of packets periodically. Another way to detect faults remotely is to use a trusted node, which can tell immediately when a fault occurs or when a modification is made where it should not have been.
-and anything else you would like to add or modify; or leave a note in the discussion section if you want me to relook at or change something. --Mchou2 20:10, 21 November 2010 (UTC)
The problem of logging and auditing the processes of an execution on a specific node (computer) depends greatly on the work done on deterministic replay. Deterministic replay programs can create a log file that can be used to replay the operations of some execution that occurred on a node. Replaying the operations can show what the node was doing, and this would seem sufficient for finding out whether a node was causing integrity violations. The concept of snapshotting/recording the operations is not the issue with deterministic replay; the issue is that the data written into the replay log may be tampered with by the node itself so that the replay shows optimal results. By faking the results of the operations, the node can make the auditing computer falsely believe that it is running all operations normally. The logging done by these recording programs is directly related to the work needed to detect integrity violations.
Contribution
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)
The most useful contribution of the accountable virtual machine (AVM) proposed in this paper is the implementation of the accountable virtual machine monitor (AVMM), which enables fault checking of virtual machines in a cloud computing environment. The AVMM can be broken down into parts: the virtual machine monitor (VMM), the tamper-evident log, and the auditing mechanisms. The VMM is based on the VMM found in VMware Workstation 6.5.1 [9], the tamper-evident log was adapted from code in PeerReview [7], and the audit tools were built from scratch.
The accountable virtual machine monitor relies on four assumptions:
1. All transmitted messages are eventually received, retransmitted if needed.
2. Machines and users have access to a hash function that is pre-image resistant, second pre-image resistant, and collision resistant.
3. All parties have a certified keypair that can be used to sign messages.
4. To audit a log, the user has a reference copy of the VM used.
The job of the AVMM is to record all incoming and outgoing messages in a tamper-evident log, along with enough information about the execution to enable deterministic replay.
The AVMM must record nondeterministic inputs (such as hardware interrupts); because such input is asynchronous, its exact timing must be recorded so that the inputs can be injected at the same moment during replay. Wall-clock time is not accurate enough for this, so the AVMM uses a combination of the instruction pointer, a branch counter, and additional registers. Not all inputs have to be recorded this way: software interrupts, for example, are requests issued by the AVM itself, and will be issued again during replay.
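As a rough illustration of why wall-clock time is insufficient, the execution point at which a nondeterministic input was delivered can be pinned by an (instruction pointer, branch counter) pair; during replay the input is injected only when the guest reaches exactly that point. This is a minimal sketch, not the AVMM's actual data layout, and the names are invented here:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Landmark:
    """Execution point at which a nondeterministic input was delivered.
    Wall-clock time is too coarse, so the position is pinned by the
    instruction pointer plus a branch counter (real systems also use
    extra registers to disambiguate repeated visits to the same IP)."""
    instruction_pointer: int
    branch_count: int

def should_inject(current: Landmark, recorded: Landmark) -> bool:
    """During replay, inject the logged input exactly when the guest
    reaches the recorded execution point."""
    return current == recorded
```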
Two parallel streams appear in the tamper-evident log: message exchanges and nondeterministic inputs. It is important for the AVMM to detect inconsistencies between the user's log and the machine's log (in case of foul play), so the AVMM simply cross-references messages and inputs during replay, easily detecting any discrepancies.
The AVMM periodically takes snapshots of the AVM's current state; this facilitates fine-grained audits for the user, but it also increases overhead. The overhead is lowered somewhat by making the snapshots incremental (only state that has changed since the last snapshot is saved). The user can authenticate a snapshot using a hash tree of the state (generated by the AVMM), which is updated after each snapshot.
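The hash tree over the snapshot state works like a standard Merkle tree: the root authenticates the whole snapshot, and after an incremental snapshot only the hashes on the path from each changed page to the root need recomputing. A minimal sketch, assuming the state is divided into pages and using SHA-256 as the hash function (the paper does not mandate a specific one):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(pages):
    """Compute the root of a hash tree over the snapshot's pages.
    Changing any single page changes the root, so the root alone
    is enough to authenticate the entire snapshot."""
    level = [H(p) for p in pages]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```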
Tamper-Evident Log
The log is made up of hash-chained entries. Each log entry has the form e = (s, t, c, h), where s is a monotonically increasing sequence number, t is the entry type, c is the data of that type, and h is a hash value.
The hash value is calculated as h = H(h_{i-1} || s || t || H(c)), where H() is a hash function and || stands for concatenation.
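The chain construction above can be sketched in a few lines. This is an illustration only: SHA-256 stands in for H(), and the byte encoding of the fields is an assumption, not the paper's wire format:

```python
import hashlib

def H(data: bytes) -> bytes:
    """Stand-in for the hash function H(); SHA-256 is an assumption here."""
    return hashlib.sha256(data).digest()

def log_entry(prev_hash: bytes, seq: int, etype: bytes, content: bytes):
    """Build one tamper-evident log entry e = (s, t, c, h) with
    h = H(h_{i-1} || s || t || H(c))."""
    h = H(prev_hash + str(seq).encode() + etype + H(content))
    return (seq, etype, content, h)

# Two chained entries: e2's hash covers e1's hash, so editing e1
# invalidates every later entry.
e1 = log_entry(b"\x00" * 32, 1, b"SEND", b"hello")
e2 = log_entry(e1[3], 2, b"RECV", b"ack")
```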
Each message sent is signed with a private key; the AVMM logs messages with the signature attached but removes it before passing the message to the AVM. To ensure nonrepudiation, an authenticator is attached to each outgoing message.
To detect when a message is dropped, each party sends an acknowledgement for each message it receives. If an acknowledgement is not received, the message is resent a few times; if the user stops receiving messages entirely, the machine is presumed to have failed.
To perform a log check, the user retrieves a pair of authenticators, then challenges the machine to produce the log segment between the two. It is computationally infeasible to edit the log without breaking the hash chain; thus, if the log has been tampered with, the hash chain will not match and the user will be notified of the tampering.
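The check itself amounts to recomputing the chain over the challenged segment and comparing the result against the second authenticator. A self-contained sketch (SHA-256 and the byte encoding are assumptions, as before):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def entry_hash(prev_hash: bytes, seq: int, etype: bytes, content: bytes) -> bytes:
    """h = H(h_{i-1} || s || t || H(c))"""
    return H(prev_hash + str(seq).encode() + etype + H(content))

def verify_segment(start_hash, entries, end_hash):
    """Recompute the hash chain over the challenged segment.  Any edit,
    insertion, or deletion breaks the chain, so the recomputed hash must
    match every entry's stored hash and the closing authenticator."""
    h = start_hash
    for seq, etype, content, claimed in entries:
        h = entry_hash(h, seq, etype, content)
        if h != claimed:
            return False       # this entry was tampered with
    return h == end_hash       # chain must end at the second authenticator
```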
Auditing Mechanism
From the VMM's perspective, the AVM's execution is deterministic given the logged inputs.
To perform an audit, the user:
1. obtains a segment of the machine's log and the authenticators
2. downloads a snapshot of the AVM at the beginning of the segment
3. replays the entire segment, starting from the snapshot, to verify that the events in the log correspond to a correct execution of the software.
The user can verify the execution of the software through three different checks: verifying the log, the snapshot, and the execution.
When the user wants to verify a log segment, the user retrieves from the machine the authenticators with sequence numbers in the range of the log segment. The user then downloads from the machine the portion of the log starting with the most recent snapshot before the segment and ending with the most recent snapshot before the end of the segment, and checks the authenticators for tampering. If this step succeeds, the user can assume the log segment is genuine. If the machine is faulty, the segment will be unavailable for download, or a corrupted log segment may be returned; either outcome can be used to convince a third party of the fault.
When the user wants to verify the snapshot, the user obtains a snapshot of the AVM's state at the beginning of the log segment. The user then downloads the snapshot from the machine and recomputes the hash tree. The new hash tree is compared to the hash tree contained in the original log segment. If any discrepancies are detected, the user can use them to convince a third party of the machine's faults.
In order for the user to verify the execution of a log segment, the user needs three inputs: the log segment, the snapshot, and the public keys of the machine and any users of the machine. The auditing tool performs two checks on the log segment: a syntactic check (determines whether the log is well-formed) and a semantic check (determines whether the information in the log shows a correct execution of the machine).
The syntactic check verifies that all log entries are in the proper format, that the signatures on each message and acknowledgement are valid, that each message was acknowledged, and that the sequence of sent and received messages matches the sequence of messages entering and exiting the AVM.
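A toy version of two of these well-formedness checks, monotonic sequence numbers and matched acknowledgements, might look like the following. The field names ('seq', 'type', 'msg_id') are invented for illustration and are not the paper's log format:

```python
def syntactic_check(entries):
    """Cheap well-formedness pass over a log segment: sequence numbers
    must be strictly increasing, and every SEND must have a matching
    ACK somewhere in the segment."""
    seqs = [e["seq"] for e in entries]
    if any(b <= a for a, b in zip(seqs, seqs[1:])):
        return False                       # out of order or duplicated
    sends = {e["msg_id"] for e in entries if e["type"] == "SEND"}
    acks = {e["msg_id"] for e in entries if e["type"] == "ACK"}
    return sends <= acks                   # every message acknowledged
```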
The semantic check creates a local VM that executes the machine's log segment; the VM is initialized with a snapshot from the machine if possible. The local VM then runs the log segment and the resulting data is recorded. The auditing tool checks the log segments, inputs, outputs, and snapshot hashes of the replayed execution against the original log. If any discrepancies are detected, the fault is reported and can be used as evidence against the machine.
Why is it better? [To Do]
- I read through it and fixed a few missing letters here and there, so if someone else could read it as well and then sign under me we can probably move it to the essay. Thanks . --Mchou2 23:53, 25 November 2010 (UTC)
-I just read it and fixed some small parts. Looks good. --Jbaubin
Critique
What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.
// first part of my writing; this is just part1 Sschnei1 00:35, 24 November 2010 (UTC)
The layout of the paper is essential to the reader's comprehension. The introduction clearly describes what the reader should expect in the following pages, especially which problems are addressed and how they are solved.
This paper gives multiple examples of the advantages and disadvantages of an AVM. A good example is cheat detection. Cheaters use programs to circumvent the original game code and gain a major advantage over other players. Since an AVM detects cheats generically, it covers a wider range of cheats than most other cheat-detection approaches. The logs also give the game a replay function: players using an AVM can see how other players play by replaying the game from a player's log.
The negative side is that the player may suffer from running the AVM. Everything is logged and stored on the hard drive, which takes a large amount of space; in the paper's example it is 148 MB per hour after compression. The logging also reduces the frame rate (fps), and the connection through the AVM increases the ping time to the server.
As a proof of concept, they used their AVM with the online game Counter-Strike and tried to detect cheats. They used "Dell Precision T1500 workstations, with 8 GB of memory and 2.8 GHz Intel Core i7 860 CPUs" [pg 10]. These machines are considerably more powerful than the system requirements of Counter-Strike, which are a "500 MHz processor, 96 MB RAM" [10]. A 10-year-old game [10] should use few resources on a Dell Precision T1500 workstation; newer games consume far more resources than Counter-Strike, leaving less room to run the AVM. A 13% slowdown [pg 12] in a game where you are only getting 30 to 40 fps is quite noticeable, and detrimental to gameplay, since over 60 fps is considered optimal performance.
In the paper the authors state that the AVM only generates an extra 5 ms of latency. While this does not seem like a lot, the measurement was taken over a LAN with all the computers connected to the same switch [pg. 12]. This sample does not accurately represent real-life situations and therefore lacks external validity: many of these online games are played over the Internet, with participants sometimes not even on the same continent, so the latency overhead of the AVM would certainly increase with the added distance [12].
Additional Critiques:
While the paper does test a slightly larger than one-to-one scenario, it certainly does not test a real-world environment where 16, 32, or even 64 players would be playing at the same time.
Spot checking can be used for applications that only require snapshots every x seconds. Even though this removes a lot of overhead and data storage, it only verifies that the application or user is working as intended every x seconds. Thus, someone could find the pattern of those snapshots and render the AVM useless.
AVMs are extremely effective against two types of cheating: cheats that produce incorrect network messages, and cheats that have to be loaded with the game. This fits tournament-style competitive play perfectly, but in the real world it would not be of much use. Games get patched, users download add-ons for the game, and so on; every patch or add-on would require a new reference AVM image, which is unreasonable given the number of people playing the game. A solution proposed by the team was to disable the right to install anything on the AVM. While this could work in a tournament environment, normal users at home would not be pleased with this limitation.
An AVM will not in any way catch a bug or exploit in a program that a malicious user could abuse, since the exploit would appear on both the user's and the monitor's systems and behave the same.
// more Critiques
For their use case, the authors did not mention that in Counter-Strike the user can record a demo of their current game. Some online leagues require every player to record their own demo and upload it to the website, where anyone in the league can watch it; without this demo, the team loses the match immediately. Additionally, some leagues require the player to run an extra program (e.g. Electronic Sports League WIRE), which checks the programs running in the background, takes random snapshots of the current player, compresses all information into a file, and uploads it to one of the league's servers, where it can be checked by any player.
References
You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.
[1] B. Cully, G. Lefebvre, D. Meyer, M. Feeley, N. Hutchinson, and A. Warfield. Remus: High availability via asynchronous virtual machine replication. In Proceedings of the USENIX Symposium on Networked Systems Design and Implementation (NSDI), Apr. 2008.
[2] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but verify: Monitoring remotely executing programs for progress and correctness. In Proceedings of the ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP), June 2005.
[3] G. Hoglund. 4.5 million copies of EULA-compliant spyware. http://www.rootkit.com/blog.php?newsid=358.
[4] PunkBuster web site. http://www.evenbalance.com/.
[5] N. E. Baughman, M. Liberatore, and B. N. Levine. Cheat-proof playout for centralized and peer-to-peer gaming. IEEE/ACM Transactions on Networking (ToN), 15(1):1–13, Feb. 2007.
[6] C. Mönch, G. Grimen, and R. Midtstraum. Protecting online games against cheating. In Proceedings of the Workshop on Network and Systems Support for Games (NetGames), Oct. 2006.
[7] A. Haeberlen, P. Kuznetsov, and P. Druschel. PeerReview: Practical accountability for distributed systems. In Proceedings of the ACM Symposium on Operating Systems Principles (SOSP), Oct. 2007.
[8] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but verify: Monitoring remotely executing programs for progress and correctness. In Proceedings of the ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP), June 2005.
[9] VMWare Workstation 6.5.1 web site. http://www.vmware.com/products/workstation/
[10] Counter-Strike http://store.steampowered.com/app/10/
[12] Larry L. Peterson and Bruce S. Davie. Computer Networks: A Systems Approach, 2007.
Discussion
We can use this area to discuss or leave notes on general ideas or whatever you want to write here.
-The current due date posted on the site for this essay is November 25th --Mchou2 05:18, 19 November 2010 (UTC)
-I think that since we are given the headings for this article, we can easily choose which parts each member would like to work on. Since there are more members than parts, multiple members will have to work on the same parts, or can work on all parts; it's really up to you. I know most people have a lot of projects coming up, so let's try to get this done asap, or at least bit by bit, so it's not something we have to worry too much about. --Mchou2 05:18, 19 November 2010 (UTC)
- I would like to do the Contribution or Critique. -- Sschnei1 02:40, 20 November 2010 (UTC)
- I can either work on Background Concepts, or Research problem. -Jbaubin
- I'm not sure whether the background concepts should be in point form or a paragraph, and whether it needs to be very long or not, but I shall work on both background concepts and research problem with you Jbaubin. --Mchou2 18:11, 21 November 2010 (UTC)
-Sounds good, and as I was going to post what I had for the research problem, I saw you posted a big chunk of it. I'll be out for a while, but tonight I'll take a serious look at what you wrote and add what I had written. - Jbaubin
- Sorry I didn't write anything yet to Critique. I'm making my notes and will post something tonight or tomorrow. -- Sschnei1 14:50, 22 November 2010 (UTC)
- I have started work on the contribution section. I'll have something up today or tomorrow. --Hirving 19:55, 23 November 2010 (UTC)
-if anyone has information that they are working on, they can just post it up so others can look at it and maybe build on it. I'm sure everyone is also aware of the extension we got, but let's try to finish this in the next few days --Mchou2 20:43, 23 November 2010 (UTC)
- I agree with finishing it in the next few days. Then we have more time to focus on other courses like 3004. I will post something later tonight. -- Sschnei1 21:29, 23 November 2010 (UTC)
- Just added my contribution section, can someone proof read and sign it before I move it over to the essay. I didn't do the "why is it better" part because I found the implementation took a lot of writing. For anyone that wants to do the other part, I'd suggest comparing AVMs to PunkBuster and/or VAC, and a cloud computing service (focusing on the auditing). Cheers --Hirving 19:44, 24 November 2010 (UTC)
- I started that what is better/worse part in the Critique section. I will add the comparison with AVMs to Punkbuster and/or VAC soon. I personally feel like there is not that much to write for the Critique section. -- Sschnei1 20:39, 24 November 2010 (UTC)
-Hey. I've got a bit to add to your Critique section. It's mostly expanding on your last paragraph, plus a bit on how the tests were performed. I'll post my stuff later tonight; I just need to find some sources for my argument.--Pcox 01:06, 25 November 2010 (UTC)
-I read through the critiques and will post some modifications. I was wondering: the last point of the critique says the authors didn't mention recording the game, but on page 2 they did: "However, replay by itself is not sufficient to detect faults on a remote machine, since the machine could record incorrect information in such a way that the replay looks correct, or provide inconsistent information to different auditors". So should we remove the last point? Also, the second and third paragraphs in the Critique don't really critique anything but rather restate the paper's contributions; should we keep them?
- Sure, post some modifications. What I meant in the first part is that the game has an internal recording mechanism to record a 1:1 video of your in-game screen, which can be replayed from in-game itself. I thought it was useful to put in, but if it's unnecessary for the paper, then we can take it out.
- I did some modification in critique and moved it to the front page. - Jbaubin
- Great. I apologize that I did not manage to get more written. Other courses kept me occupied. - Sschnei1