Distributed OS: Winter 2011 Reputation Systems Paper
What is reputation?
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image we hold of that individual or group. It is this image that helps us conclude whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The collective opinion that others hold of us is known as reputation.
A reputation system's main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to execute a task to our liking. It is important to note the word assumption. With the gathered information, we are able to generate an estimate of their actions; it is by no means exact. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual's reputation can vary widely between different observers. Some may have encountered the entity in a different context or held a different level of expectation compared to others <ref name="krukow" />. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.
How can reputation be used in a distributed environment?
Reputation can be useful in acquiring an understanding of how congruent one's own goals are with another's. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress, or, worse yet, halt it completely.
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity's behaviour in distributed applications. Reputation might be based on the entity's past ability to adhere to a mutual contract with another entity <ref name="krukow">Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science University of Southampton, UK [March 3, 2011]</ref>. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are able - to a fairly high degree - to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There are always outliers that will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour of computational entities.
The ideas of enforcing rules and of generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the emerge vs. impose problem. Do we maintain records based on a fixed set of imposed rules? Or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.
What systems are currently in place?
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers<ref name="ebayreputation">Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]</ref>.
Policy-based systems can be found in a variety of application frameworks. Two examples include Java and Android. These systems require a developer to state the intentions of the application in what is known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system<ref name="javapolicy">Default Policy Implementation and Policy File Syntax. Oracle. http://download.oracle.com/javase/1.3/docs/guide/security/PolicyFiles.html [March 7, 2011]</ref>. For mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file<ref name="android">Android. Google. http://developer.android.com/index.html [March 28, 2011]</ref>. Otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a "stop-watch" application might appear extremely suspicious if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS<ref name="ios">iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]</ref> also use an emergent reputation system. They provide a means to rate and review applications similar to the buyer-seller reputation systems provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, the greater public opinion will converge and polarize toward negative opinions - eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.
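To make the "stop-watch" example concrete, the following sketch flags declared permissions that look unnecessary for an application's stated category. The permission and category names are illustrative inventions, not real Android identifiers, and a real framework would consult the actual policy file rather than a hard-coded table.

```python
# Illustrative permissions that a given category of application has no
# plausible need for. Names are made up for this sketch.
SUSPICIOUS_FOR_CATEGORY = {
    # A stop-watch has no plausible need for contacts or network access.
    "stopwatch": {"READ_CONTACTS", "INTERNET"},
}

def flag_suspicious(category, declared_permissions):
    """Return the declared permissions that look unnecessary for this
    application category, so a user can be warned before installing."""
    return set(declared_permissions) & SUSPICIOUS_FOR_CATEGORY.get(category, set())

flags = flag_suspicious("stopwatch", {"VIBRATE", "READ_CONTACTS", "INTERNET"})
```

A user-facing installer could then display `flags` alongside the rating and review information the emergent side of the system provides.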
How can we improve on existing systems?
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as the example provided, eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. However, to make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order<ref name="ebayreputation" />. This level of justice is not easily attainable in large distributed systems. Although we can assume an adequate level of justice, in order for a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In the case where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.
Our assumptions
In this system, we will make a set of assumptions. Without these, a system of this size either would not function, or would be too broad in scope to ever be acceptable.
The justice assumption holds that some other system or set of rules will govern when reputation information needs to be updated and exchanged. Our system will not determine when an exchange of information is required, only what information should be exchanged. Similarly, since each system will likely have its own perspective on what is right and wrong, no assumption is made that there is a single fixed set of rules governing the operation of the system of justice as a whole. This means that the system should be adaptable to different purposes without compromising the integrity of the internet at large. Two opposing systems of justice issuing opposing reputation information will eventually result in the two segments of the network ignoring each other's information, leading to an eventual stable, if not consistent, state. This is appropriate, given the diversity of the internet at large.
The attribution assumption holds that all actions are correctly attributed. This includes assuming that information being exchanged between two peers can be properly sourced. Originally, a section on public-key infrastructure (PKI) was going to be included, but it was decided that this would ultimately be out of scope for this system.
In order to make sure that a system of this scale is feasible, it is necessary to make a public good assumption. This means that it will be assumed that resources are available on the whole system to maintain the reputation information necessary for the system to function. This assumption is generally valid considering the capacity of the modern internet, and the exponential growth of technology.
Finally, the security-in-the-majority assumption is made. It is assumed that in a sufficiently large system, even if a given number of nodes are acting maliciously, the large number of non-malicious nodes will eventually overwhelm the fraudulent messages, resulting in a generally good outcome. It would be impossible to design a system that did not rely on this assumption: if a majority of the nodes were acting against the general good of the system, it would fail regardless of the safety of the system itself.
Generating reputation
How do we represent reputation?
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form utilizes a numerical scale for reputation; these are known as EigenTrust systems<ref name="krukow" />. In essence, they store and aggregate data into numerical form. These values are easy to compare and, because primitive data types can be used, they require very little storage space. Despite these attractive advantages, there are significant drawbacks to such a system. First, information is typically lost in the abstraction process. Concrete data is acquired and then converted down to a minimal form; once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction process is irreversible. Likewise, the process can result in ambiguity. For example, a reputation of 0 might be interpreted as having no reputation history or as having an average reputation rating of 0. And, as a result of this irreversibility, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.
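The ambiguity of a score of 0 can be shown in a few lines. The mean-based aggregation below stands in for whatever numeric collapse an EigenTrust-style system performs; two entities with entirely different histories become indistinguishable.

```python
def aggregate(ratings):
    """Collapse a rating history into one number (here, the mean), as a
    numeric reputation system might; returns 0 for an empty history."""
    return sum(ratings) / len(ratings) if ratings else 0

no_history = aggregate([])           # entity that has never interacted
mixed_history = aggregate([+1, -1])  # entity with equal praise and complaints

# Both collapse to 0: the abstraction is irreversible and ambiguous,
# and nothing in the score reveals which situation produced it.
```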
Another interesting form of reputation is one that was proposed by Shmatikov and Talcott<ref name="krukow" />. They represented reputation as the history of an entity: a set of time-stamped events. The key difference between EigenTrust and their solution is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of related actions that correspond to an entity's computational session. This provides a querying entity or a justice system with crucial information to make their respective decision. Clearly, there are some ethical and privacy issues that arise from this; we tackle this issue more closely in a following section.
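The session-extended event history described above might be represented as follows. The field and action names are our own invention for illustration, not Shmatikov and Talcott's notation; the point is that the concrete events remain queryable rather than being collapsed into a number.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ReputationEvent:
    entity: str      # who performed the action
    session: str     # groups related actions from one computational session
    action: str      # e.g. "completed_transfer", "broke_contract"
    timestamp: float = field(default_factory=time.time)

history = []
history.append(ReputationEvent("node-17", "session-a", "completed_transfer"))
history.append(ReputationEvent("node-17", "session-a", "broke_contract"))

# Unlike a single numeric score, the concrete history supports questions
# such as "what did node-17 do during session-a?":
session_a = [e for e in history if e.session == "session-a"]
```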
How do we gather reputation?
Gathering reputation information in these kinds of systems will generally follow a push model. When a node receives reputation information deemed important and reliable enough to be disseminated, it pushes the information to its peers or superiors. This system can be either automated or policy-based.
In the case where reputation information for a given system is required, the information would be queried as outlined below, then stored and/or disseminated to peers if deemed important enough. What constitutes "important enough" will vary depending on the specific context, but either way the information would be retrieved, stored until deemed no longer relevant, and then discarded.
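The push model described above can be sketched as follows. The importance threshold and scores are placeholders for a real, context-specific policy (what is "important enough"), and duplicate suppression stands in for whatever mechanism keeps items from circulating forever.

```python
class Node:
    def __init__(self, name, importance_threshold=0.5):
        self.name = name
        self.peers = []                 # peers or superiors to push to
        self.store = []                 # locally held reputation items
        self.importance_threshold = importance_threshold

    def receive(self, item, importance):
        if item in self.store:
            return                      # already seen; stop re-flooding
        self.store.append(item)
        if importance >= self.importance_threshold:
            for peer in self.peers:     # push onward only if it matters
                peer.receive(item, importance)

a, b, c = Node("a"), Node("b"), Node("c")
a.peers, b.peers = [b], [c]
a.receive("node-9 joined a DDoS", importance=0.9)   # propagates to b and c
a.receive("node-9 was slow once", importance=0.1)   # stays local to a
```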
Where do we store reputation?
Reputation information will be stored at each individual host, giving every system or group of systems its own perspective. This is both appropriate and efficient, given that each system or grouping of systems is likely to have a different objective and context.
Some hosts may also, optionally, act as repositories for this information. These might be elected (in an emergent system) or imposed (in a hierarchy, or publish-subscribe model). These systems will provide a public good, in that they will become query-able repositories of information.
It would be impractical for information to be stored at every node indefinitely, and eventually given reputation entries must be discarded. When this occurs would depend on a variety of factors: for example, whether a piece of reputation information was requested often by other nodes, or whether it indicated an extreme state. Essentially, the more important or relevant a piece of information is, the more likely it is to be retained. This provides good localization and excellent overall reliability of information, while still allowing given systems to be forgiven.
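The retention policy above can be sketched as a relevance-scored eviction. The scoring weights here are arbitrary placeholders; the only claim is structural, namely that frequently requested or extreme entries outlive routine ones, which is how a node "forgives" minor incidents.

```python
def relevance(entry):
    """Score an entry: often-requested or extreme entries rank highest.
    The +100 bonus for extreme states is an arbitrary illustrative weight."""
    return entry["request_count"] + (100 if entry["extreme"] else 0)

def evict(store, capacity):
    """Keep only the `capacity` most relevant entries; the rest are forgotten."""
    return sorted(store, key=relevance, reverse=True)[:capacity]

store = [
    {"id": "minor-lag", "request_count": 1, "extreme": False},
    {"id": "ddos-incident", "request_count": 2, "extreme": True},
    {"id": "popular-query", "request_count": 40, "extreme": False},
]
kept = evict(store, capacity=2)   # "minor-lag" is discarded (forgiveness)
```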
How do we maintain reputation?
As stated earlier, we need to store an adequate level of information about interactions between entities. This "adequate" level can be quite large in terms of actual storage space. This brings us to the problem of how to maintain reputation history, since in a distributed system this is crucial to the scalability and success of the entire system. A solution here is to use the notion of dynamic model-checking, by Havelund and Rosu<ref name="krukow" />. They came up with a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a "reduce" function in the sense of Google's Map/Reduce algorithm<ref name="mapreduce">Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html [March 3, 2011]</ref>. We generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be "reduced": significant negative events, such as participation in DDoS attacks, will likely need to be retained indefinitely in case justice systems need sufficient proof of a specific incident. This solution works well: we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that processing time for searching through sets of reputation history items is negligible, then we would clearly not have to implement this type of "reduce" mechanism.
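The append/reduce idea above can be sketched as follows. Routine events for an entity are merged into aggregate counts, while events on an illustrative "severe" list are kept verbatim as evidence for justice systems. The action names and the severity set are our own examples, not part of the cited work.

```python
from collections import Counter

# Events that must never be reduced away; kept verbatim as evidence.
SEVERE = {"ddos_participation", "malware_distribution"}

def reduce_history(events):
    """events: list of (entity, action) tuples. Returns (kept, counts):
    severe events retained verbatim, routine events merged into counts."""
    kept, counts = [], Counter()
    for entity, action in events:
        if action in SEVERE:
            kept.append((entity, action))    # retain indefinitely
        else:
            counts[(entity, action)] += 1    # merge eligible data
    return kept, counts

events = [
    ("node-3", "completed_transfer"),
    ("node-3", "completed_transfer"),
    ("node-3", "ddos_participation"),
]
kept, counts = reduce_history(events)
```

Two routine transfers collapse into one counter entry, while the DDoS record survives untouched; storage shrinks without losing the evidence that matters.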
How is reputation disseminated?
The dissemination of reputation information is a core concern of reputation systems in general. This vital exchange of information is what allows these systems to function. Ideally, methods of information exchange should provide a given set of features. First, the information needs to be reliable: it must be as immune as possible to gaming, and stored securely. Second, there needs to be good localization of the data to ensure it is where it is needed, when it is needed. Finally, the system needs to be scalable and flexible. While the aforementioned reasons form the technical requirements of the system, there is one additional non-functional requirement that must be considered: level of trust.
In general, there are three common modes of disseminating information of this type that would need to be supported in order to make a reputation system feasible: Hierarchy, Publish/Subscribe, and Peer-to-Peer.
In a hierarchy, there are pre-set or elected nodes that are responsible for maintaining an authoritative list. A good example of this technology in practice is the Domain Name System (DNS). These systems allow for a great deal of control over the information in the system, at the expense of scalability and flexibility. They are very common in the corporate world today and align well with organizational structure. It also means that if a flaw is detected in the information, manual intervention is possible. Unfortunately, these systems tend to be rife with single points of failure and scalability issues. In addition, implementing this kind of system on an internet scale would mean designating a single authority for all reputation information, which would form a natural bottleneck despite advances in caching. Finally, there would be the issue of trust in such a system. While hierarchies are ideal where an overall system architecture is imposed and trust is mandated, they are much less palatable at internet scale because it would be impossible to establish a single authority that everyone would trust. Also, if there is a single set of authorities, there is the added issue of security: compromising one system would taint the reputation information across the entire reputation system.
Publish/subscribe is a model of dissemination that relies on central repositories, which are then queried by each client when an update is needed. Common examples in technology include Really Simple Syndication (RSS) feeds and bulletin board systems (BBS). Outside modern technology, analogies can be drawn between the publish/subscribe model and common sources of information like newspapers, magazines, and other periodicals. First the source publishes an update, and then "subscribers" receive updates through either a push from the publisher or a query for updates. This technology has several attractive features and has been broadly researched over the last 10 years, especially in the area of how it can be applied to wireless networks <ref name="wifipublishsubscribe">Gajic, B.; Riihijärvi, J.; Mähönen, P.; , "Evaluation of publish-subscribe based communication over WiMAX network," Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2010 International Congress on , vol., no., pp.38-43, 18-20 Oct. 2010 </ref>. Being data-centric, it can be a very helpful way of exchanging information. Unfortunately, it requires some kind of fixed infrastructure in most cases, using either fixed reference points (like a base station) or elected coordinating nodes arranged in a distributed hash table (DHT) <ref name="p2ppublishsubscribe">Dongcai Shi; Jianwei Yin; Zhaohui Wu; Jinxiang Dong; , "A Peer-to-Peer Approach to Large-Scale Content-Based Publish-Subscribe," Web Intelligence and Intelligent Agent Technology Workshops, 2006. WI-IAT 2006 Workshops. 2006 IEEE/WIC/ACM International Conference on , vol., no., pp.172-175, 18-22 Dec. 2006</ref>. There are, however, further drawbacks: the model involves pre-selected or elected nodes that act as authorities, which creates points of failure and means that some nodes must trust others with authoritative information.
While it is entirely possible that there will be publish-subscribe components in a complete reputation system, the information from such repositories must be interpreted within the context of the source node's own reputation. This means that if a given repository has been a source of unreliable information in the past, its own negative reputation would likely lead most other nodes to disregard its information, further diminishing the possible benefits of hosting such a repository. These types of systems also do not provide good localization of data, meaning nodes may have to search longer for relevant information, leading to greater overhead and latency in the system as a whole.
Finally, peer-to-peer is, perhaps, the newest method of disseminating information. While there are many ways to exchange information in a peer-to-peer fashion, gossiping is among the most relevant: each node periodically shares what it knows with a small, often random, subset of its peers, so that information spreads epidemically through the network. This requires no fixed infrastructure and has no single point of failure, at the cost of providing only eventual, rather than immediate, consistency.
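A gossip-style peer-to-peer exchange can be sketched as follows. The fanout, round count, and data shapes are arbitrary choices for illustration; the structural point is that items spread without any designated authority node.

```python
import random

def gossip_round(stores, fanout=2, rng=random):
    """stores: dict mapping node id -> set of known reputation items.
    One synchronous round: every node pushes what it knew at the start
    of the round to `fanout` randomly chosen peers."""
    snapshot = {n: set(s) for n, s in stores.items()}
    for node, known in snapshot.items():
        peers = rng.sample([n for n in stores if n != node],
                           min(fanout, len(stores) - 1))
        for peer in peers:
            stores[peer] |= known     # nodes with nothing to say push nothing

stores = {i: set() for i in range(8)}
stores[0].add("node-5 broke a contract")   # one node witnesses an event
for _ in range(5):
    gossip_round(stores)
spread = sum("node-5 broke a contract" in s for s in stores.values())
```

After a handful of rounds the item has typically reached most or all of the eight nodes, though gossip makes no hard guarantee about exactly when.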
In application, all of these methods of information dissemination would likely need to be supported in some fashion. Very few governments or organizations would be willing to accept updates from the cloud blindly, and it is similarly unlikely that such organizations would be willing to publish or otherwise share all of their information with the cloud at large. This means that any dissemination solution would have to be a hybrid, allowing for the definition of fixed, strict hierarchies as well as immensely scalable and dynamic peer-to-peer solutions. Where the line between these two is drawn is not fixed: some organizations may opt to make almost all information public, while others may publish little or nothing and accept no external information.
How is reputation queried?
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There will need to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.
In the greater scheme of a reputation system, querying assumes that some systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying on its own is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.
But what does a system for querying need to address? It needs to be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system needs to be able to handle any given entity sending out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.
As previously mentioned in this paper, there are two primary layouts for a reputation system, hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its 'views', or interpretation of the reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers and analyzed to determine whether to connect or not.
The actual process of querying should be fairly simple. A given entity or node in the system needs to decide whether it should contact another node in the system. First, it must check its local representation of reputation data to see if it already has enough up-to-date information on the node. If it does, it can move toward making a decision, which is discussed later. If, however, the information needed is not already held by the node, it will need to be queried.
This brings us back to the two primary types of reputation systems, hierarchical and distributed. In a hierarchical system the process is incredibly simple: ask your superior node, and wait for a response. The superior node might have enough information on hand to decide, or it might ask its peers or superiors. Either way, the response received from the superior node will be used by the original querying node.
Distributed querying is a little more complex. The querying node will need to decide whom to ask: perhaps nodes it trusts, if it has been operating in the reputation system for a while, or simply any nearby node. It may ask for just a quick reputation value, or perhaps a snapshot of relevant historical events. In any case, it will use the evidence collected (if any) to ultimately make a decision. In a way, this node is its own authority node.
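The two querying styles just described can be sketched side by side. In the hierarchical case a node walks its chain of superiors until one has an answer; in the distributed case it polls trusted peers and judges the evidence itself. The dictionary-based data shapes are illustrative only.

```python
def query_hierarchical(node, entity):
    """Ask the chain of superiors until one has an answer (or none do)."""
    while node is not None:
        if entity in node.get("store", {}):
            return node["store"][entity]
        node = node.get("superior")
    return None   # no reputation information exists anywhere up the chain

def query_distributed(peers, entity):
    """Collect whatever evidence trusted peers hold; the querier then
    interprets it itself, acting as its own authority node."""
    evidence = [p["store"][entity] for p in peers if entity in p["store"]]
    return evidence or None   # None models "no information available"

root = {"store": {"node-9": "negative: broke contracts"}, "superior": None}
leaf = {"store": {}, "superior": root}
answer = query_hierarchical(leaf, "node-9")   # deferred up to the root
```

Both functions return `None` when nothing is known, which is exactly the "no reputation information" case an entity must be prepared to handle.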
Making decisions
How do we make decisions based on reputation?
Every entity will have its own interpretation of reputation data. There will most likely be a common set of events considered bad in essentially any system, such as one entity participating in a DDoS on another, the distribution of malware, and so on. Other things are more abstract and unique to certain groups. Distributing unverifiable claims, for instance, might be considered a negative reputation event by a reputable news source, perfectly acceptable by a tabloid, and irrelevant to the average entity representing a single person's personal computer. Entities will need to decide what is important to them, most likely via a human defining which events are worth taking note of and which are not. It is entirely possible, and likely, that some entities will not record events that other entities would consider noteworthy. It would therefore be beneficial to have multiple parties using the same rule set (though a unique rule set is not completely useless, as an entity can still record personal instances of these events for its own history store).
Once an entity has obtained this information, either via the regular process of dissemination, querying, or witnessing an event firsthand, it needs to make a decision. This is, ultimately, very open-ended and up to each entity. For example, a very simple mechanism would be to only communicate with entities that have no negative reputation events of any kind and that are viewed only neutrally or positively by other entities. Another would be to ignore other entities' opinions, assign a weight to each type of reputation event, and perform a calculation based on the evidence. These are only two options among many; there is no need for a standardized process. In short, the process and details of actually making the decision are not that important, as long as what is decided upon is something other entities can understand. That is, an entity uses a collection of stored evidence to form an opinion that other entities can query it on, and decides whether, and under what conditions, to connect to the other entity.
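The second mechanism mentioned above, weighting event types and computing a score, can be sketched in a few lines. The weights and threshold are purely local policy invented for this example; as argued, what happens inside this function is arbitrary, so long as its inputs (evidence) and output (a connect/reject decision) are understandable to others.

```python
# Per-event-type weights: local policy, not a standardized scale.
WEIGHTS = {
    "completed_transfer": +1,
    "broke_contract": -5,
    "ddos_participation": -100,
}

def should_connect(observed_events, threshold=0):
    """Accept the connection only if the weighted sum of observed
    evidence clears the (locally chosen) threshold."""
    score = sum(WEIGHTS.get(e, 0) for e in observed_events)
    return score >= threshold

ok = should_connect(["completed_transfer", "completed_transfer"])    # True
bad = should_connect(["completed_transfer", "ddos_participation"])   # False
```

A different entity might use entirely different weights, or the first mechanism (reject on any negative event); the interface, evidence in and a decision out, is what must remain common.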
Implementation
Can we achieve this through incremental updates?
We believe this is possible through incremental updates. Where adequate emergent information is not yet available, imposed rules or existing infrastructure can be used in its place. In this way the system can be deployed and updated incrementally, and as reputation histories accumulate, the imposed rules can gradually give way to a full-fledged emergent reputation system.
Conclusion
References
<references />