<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mdpless2</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mdpless2"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Mdpless2"/>
	<updated>2026-04-22T22:28:34Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9507</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9507"/>
		<updated>2011-04-12T03:55:14Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of certain behavioural actions that they take. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of that individual or group. It is this image that helps us conclude whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The aggregate of the opinions that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to execute a task to our liking. It is important to note the word assumption: with the gathered information we can only estimate an entity&#039;s future actions, and that estimate is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have encountered the entity in a different context or held a different level of expectation than others &amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used in a distributed environment?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress. Or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity &amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science, University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are - to a fairly high degree - able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There are always outliers that will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks. Two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system&amp;lt;ref name=&amp;quot;javapolicy&amp;quot;&amp;gt;Default Policy Implementation and Policy File Syntax. Oracle. http://download.oracle.com/javase/1.3/docs/guide/security/PolicyFiles.html [March 7, 2011]&amp;lt;/ref&amp;gt;. For mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;. Otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious to a user if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system. They provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge on negative ratings, eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large distributed systems. For a reputation system to be plausible at such a scale, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In the case where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&lt;br /&gt;
In this system, we will make a set of assumptions. Without these, a system of this size either would not function or would be too broad, in terms of scope, to ever be acceptable.&lt;br /&gt;
&lt;br /&gt;
The justice assumption is the assumption that some other system or set of rules will govern when reputation information needs to be updated and exchanged. Our system will not determine when an exchange of information is required, only what information should be exchanged. Similarly, since each system will likely have its own perspective on what is right and wrong, no assumption is made that there is a single fixed set of rules governing the operation of the system of justice as a whole. This means that the system should be adaptable to different purposes without compromising the integrity of the internet at large. Two opposing systems of justice issuing opposing reputation information will eventually result in the two segments of the network ignoring each other&#039;s information, leading to an eventual stable, and consistent, state. This is appropriate, given the diversity of the internet at large.&lt;br /&gt;
&lt;br /&gt;
In the attribution assumption it is assumed that all actions are being correctly attributed. This also includes assuming that information being exchanged between two peers can be properly sourced. Originally, a section on public-key infrastructure (PKI) was going to be included, but it was decided that this would be ultimately out of scope for this system.&lt;br /&gt;
&lt;br /&gt;
In order to make sure that a system of this scale is feasible, it is necessary to make a public good assumption. This means that it will be assumed that resources are available on the whole system to maintain the reputation information necessary for the system to function. This assumption is generally valid considering the capacity of the modern internet, and the exponential growth of technology.&lt;br /&gt;
&lt;br /&gt;
Finally, we make the security-in-the-majority assumption: in a sufficiently large system, even if a given number of nodes are acting maliciously, the large number of non-malicious nodes will eventually overwhelm the fraudulent messages, resulting in a generally good outcome. It would be impossible to design a system that did not rely on this assumption, since if a majority of the nodes were acting against the general good of the system, it would fail regardless of the system&#039;s overall safety. In this context, majority takes on a very specific meaning. Since each node is only going to trust trustworthy nodes, we rely on the security in the majority of the opinions of trusted nodes. This gives the system its own kind of inertia, helping to safeguard it against gaming in the long term.&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is one that utilizes a numerical scale for reputation. These are known as EigenTrust systems&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. In essence, they store and aggregate data into a numerical form. These values are easy to compare and, because primitive data types can be used, they require very little storage space. Despite these attractive advantages, there are some significant drawbacks to such a system. Firstly, information is typically lost in the abstraction process. Concrete data is acquired and then converted down to a minimal form. Once this conversion is done, there is little one can do to recover the concrete data it was generated from; in other words, the abstraction process is irreversible. Likewise, the process can result in ambiguity. For example, a reputation of 0 might be interpreted as having no reputation history or as having an average reputation rating of 0. And, as a result of this irreversibility, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
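The ambiguity described above can be made concrete with a small sketch. The class below is a hypothetical numeric store in the EigenTrust style; the names and the averaging rule are assumptions for illustration, not any real system&#039;s API.&lt;br /&gt;

```python
# Hypothetical sketch: a numeric reputation store in the EigenTrust style.
# Class and method names are illustrative assumptions.

class NumericReputation:
    """Aggregate feedback into a single score, as numeric systems do."""
    def __init__(self):
        self.scores = {}   # entity name -> running total of ratings
        self.counts = {}   # entity name -> number of ratings seen

    def rate(self, entity, value):
        self.scores[entity] = self.scores.get(entity, 0) + value
        self.counts[entity] = self.counts.get(entity, 0) + 1

    def reputation(self, entity):
        # The abstraction is lossy: a 0 here may mean "no history at all"
        # or "equal positive and negative feedback". They are indistinguishable.
        if entity not in self.counts:
            return 0
        return self.scores[entity] / self.counts[entity]

rep = NumericReputation()
rep.rate("alice", 1)
rep.rate("alice", -1)
# "alice" has two ratings averaging 0; "bob" has none. Both report 0.
assert rep.reputation("alice") == rep.reputation("bob") == 0
```

The final assertion is the point: once concrete feedback is collapsed to a number, the querier cannot tell an unknown entity from a controversial one.&lt;br /&gt;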
&lt;br /&gt;
Another interesting form of reputation is one that was proposed by Shmatikov and Talcott&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They attributed reputation to encompass the history of entities as a set of time-stamped events. The key difference between EigenTrust and their solution is that we can store data in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information to make their respective decision. Clearly, there are some ethical and privacy issues that arise from this; we tackle this issue more closely in a following section.&lt;br /&gt;
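The event-history representation with sessions, as modified above, can be sketched as follows. The event names, fields, and tuple layout are assumptions made for this example only.&lt;br /&gt;

```python
# Illustrative sketch: reputation as a set of time-stamped events, extended
# with a session id so related actions can be grouped. Field names are
# assumptions, not taken from Shmatikov and Talcott's formal model.

import time

class EventHistory:
    def __init__(self):
        self.events = []  # list of (timestamp, entity, session, action)

    def record(self, entity, session, action, timestamp=None):
        ts = timestamp if timestamp is not None else time.time()
        self.events.append((ts, entity, session, action))

    def session_view(self, entity, session):
        """All actions an entity performed in one computational session,
        in time order: the concrete data a querier or justice system needs."""
        hits = [e for e in self.events if e[1] == entity and e[2] == session]
        return [action for (_, _, _, action) in sorted(hits)]

h = EventHistory()
h.record("node-7", "s1", "open_connection", timestamp=1.0)
h.record("node-7", "s1", "send_spam", timestamp=2.0)
h.record("node-7", "s2", "open_connection", timestamp=3.0)
assert h.session_view("node-7", "s1") == ["open_connection", "send_spam"]
```

Unlike the numeric form, nothing is thrown away here: the concrete events remain available for later inspection.&lt;br /&gt;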
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
&lt;br /&gt;
Gathering reputation information in these kinds of systems will generally follow a push model. When a node receives reputation information deemed important and reliable enough to be disseminated, it pushes the information to its peers, or superiors. This process can be either automated or policy-based.&lt;br /&gt;
&lt;br /&gt;
In the case where reputation information for a given system is required the information would be queried as outlined below, then stored and/or disseminated to its peers if deemed important enough.  What constitutes &amp;quot;important enough&amp;quot; will vary depending on the specific context, but either way the information would be retrieved, and stored until deemed no longer relevant, and then discarded.&lt;br /&gt;
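The push model above can be sketched in a few lines. The importance threshold is the made-up policy knob standing in for &amp;quot;important enough&amp;quot;; the node structure is an assumption for the example.&lt;br /&gt;

```python
# Minimal sketch of the push model: a node stores what it receives and
# forwards only items that clear its own importance policy. All names and
# the 0.5 threshold are illustrative assumptions.

class Node:
    def __init__(self, name, importance_threshold=0.5):
        self.name = name
        self.peers = []
        self.store = []  # reputation items this node has retained
        self.importance_threshold = importance_threshold

    def receive(self, item, importance):
        self.store.append(item)
        # Push onward only if the item clears this node's policy.
        if importance > self.importance_threshold:
            for peer in self.peers:
                if item not in peer.store:  # avoid re-sending known items
                    peer.receive(item, importance)

a, b, c = Node("a"), Node("b"), Node("c")
a.peers = [b]
b.peers = [c]
a.receive("node-9 joined a DDoS", importance=0.9)   # important: propagates
a.receive("node-4 was slow once", importance=0.1)   # minor: stays local
assert "node-9 joined a DDoS" in c.store
assert "node-4 was slow once" not in c.store
```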
&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation information will be stored at each individual host giving every system or group of systems their own perspective. This is both appropriate, and efficient given how each system or grouping of systems is likely to have a different objective and context.&lt;br /&gt;
&lt;br /&gt;
Some hosts may also, optionally, act as repositories for this information. These might be elected (in an emergent system) or imposed (in a hierarchy, or publish-subscribe model). These systems will provide a public good, in that they will become query-able repositories of information.&lt;br /&gt;
&lt;br /&gt;
It would be impractical for information to be stored at every node indefinitely, so eventually individual reputation entries must be discarded. When this happens would depend on a variety of factors. First, if a piece of reputation information was requested frequently by other nodes, it would be regarded as highly valuable and therefore kept for future reference. If a piece of reputation information was used very infrequently, it might be removed or labelled for deletion at some future point. Essentially, the more important or relevant a piece of information is, the more likely it is to be stored. This provides good localization and excellent overall reliability of information, while still allowing systems to maintain a level of forgiveness.&lt;br /&gt;
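A retention policy of the kind just described might look like the sketch below; the query counter and the keep-threshold are assumptions standing in for whatever relevance measure a real node would use.&lt;br /&gt;

```python
# Sketch of frequency-based retention: entries queried often are kept,
# rarely used ones are discarded. Names and the threshold are assumptions.

class ReputationCache:
    def __init__(self, keep_threshold=2):
        self.entries = {}        # entity -> reputation data
        self.query_counts = {}   # entity -> times other nodes asked for it
        self.keep_threshold = keep_threshold

    def put(self, entity, data):
        self.entries[entity] = data
        self.query_counts.setdefault(entity, 0)

    def query(self, entity):
        if entity in self.entries:
            self.query_counts[entity] += 1
        return self.entries.get(entity)

    def expire(self):
        """Discard entries queried fewer times than the threshold."""
        stale = [e for e, n in self.query_counts.items()
                 if self.keep_threshold > n]
        for e in stale:
            del self.entries[e]
            del self.query_counts[e]

cache = ReputationCache(keep_threshold=2)
cache.put("popular", "good standing")
cache.put("obscure", "unknown history")
cache.query("popular")
cache.query("popular")
cache.query("obscure")
cache.expire()
assert "popular" in cache.entries
assert "obscure" not in cache.entries
```

Forgiveness falls out for free: an entity whose old entries are never asked about eventually has them expire.&lt;br /&gt;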
&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space. This brings us to the problem of how to maintain reputation history, since in a distributed system this is crucial to the scalability and success of the entire system. A solution here is to use the notion of Dynamic Model-Checking, by Havelund and Rosu&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They came up with a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s Map/Reduce algorithm&amp;lt;ref name=&amp;quot;mapreduce&amp;quot;&amp;gt;Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html [March 3, 2011]&amp;lt;/ref&amp;gt;. We generate and store sets of events related to particular entities (this is an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Significant negative reputation, for instance, such as DDoS attacks will likely need to be retained indefinitely in case justice systems need sufficient proof of a specific incident. This solution will work quite well as we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we can assume that space will never be an issue or that processing time for searching through sets of reputation history items is negligible, then we would clearly not have to worry about implementing this type of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
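The append/reduce idea above can be sketched directly: routine events are merged into compact counts, while severe events are kept verbatim for justice systems. The event names and the severity set are invented for the example.&lt;br /&gt;

```python
# Sketch of append + reduce maintenance of event histories. Routine events
# are compacted to (entity, action, count) summaries; severe events (the
# labels below are assumptions) are retained verbatim as evidence.

SEVERE = {"ddos_attack", "malware_distribution"}  # never reduced away

def append_event(history, entity, action):
    history.append((entity, action))

def reduce_history(history):
    """Lossy but space-saving compaction of a history list."""
    counts = {}
    kept = []
    for entity, action in history:
        if action in SEVERE:
            kept.append((entity, action))  # keep concrete proof
        else:
            key = (entity, action)
            counts[key] = counts.get(key, 0) + 1
    summaries = [(e, a, n) for (e, a), n in counts.items()]
    return kept + summaries

log = []
for _ in range(3):
    append_event(log, "node-2", "late_response")
append_event(log, "node-5", "ddos_attack")
reduced = reduce_history(log)
assert ("node-5", "ddos_attack") in reduced        # severe: kept verbatim
assert ("node-2", "late_response", 3) in reduced   # routine: merged to a count
```

As the text notes, if storage and search time were truly negligible, this reduce step could simply be skipped.&lt;br /&gt;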
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
&lt;br /&gt;
The dissemination of reputation information is a core concern of reputation systems in general. This vital exchange of information is what allows these systems to function. Ideally, methods of information exchange should provide a given set of features. First, the information needs to be reliable, meaning it must be as immune as possible to gaming and stored securely. Second, there needs to be good localization of the data, to ensure it is where it is needed, when it is needed. Finally, the system needs to be scalable and flexible. While the aforementioned features form the technical requirements of the system, there is one additional non-functional requirement that must be considered: the level of trust.&lt;br /&gt;
&lt;br /&gt;
In general, there are three common modes of disseminating information of this type that would need to be supported in order to make a reputation system feasible:  Hierarchy, Publish/Subscribe, and Peer-to-Peer.&lt;br /&gt;
&lt;br /&gt;
In a hierarchy, there are pre-set or elected nodes that are responsible for maintaining an authoritative list. A good example of this technology in practice is the domain name system (DNS). These systems allow a great deal of control over the information in the system, at the expense of scalability and flexibility. They are very common in the corporate world today, and align well with organizational structure. They also mean that if a flaw is detected in the information, manual intervention is possible. Unfortunately, these systems tend to be rife with single points of failure and scalability issues. In addition, implementing this kind of system at internet scale would mean designating a single authority for all reputation information, which would form a natural bottleneck despite advances in caching. Finally, there would be the issue of trust in such a system. While hierarchies are ideal where an overall system architecture is imposed and trust is mandated, they are much less palatable at internet scale because it would be impossible to establish a single authority that everyone would trust. Also, if there is a single set of authorities, then there is the added issue of security: compromising one system would taint the reputation information across the entire reputation system.&lt;br /&gt;
&lt;br /&gt;
Publish/subscribe is a model of information dissemination that relies on central repositories, which are then queried by each client when an update is needed. Common technological examples include Really Simple Syndication (RSS) feeds and bulletin board systems (BBS). Outside modern technology, analogies can be drawn between the publish/subscribe model and common sources of information like newspapers, magazines, and other periodicals. First the source publishes an update, and then &amp;quot;subscribers&amp;quot; receive updates through either a push from the publisher or a query for updates. This technology has a couple of attractive features, and has been broadly researched over the last 10 years, especially regarding how it can be applied to wireless networks &amp;lt;ref name=&amp;quot;wifipublishsubscribe&amp;quot;&amp;gt;Gajic, B.; Riihijärvi, J.; Mähönen, P., &amp;quot;Evaluation of publish-subscribe based communication over WiMAX network,&amp;quot; Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2010 International Congress on, pp. 38-43, 18-20 Oct. 2010&amp;lt;/ref&amp;gt;. Being data-centric, these systems can be a very helpful way of exchanging information. Unfortunately, in most cases they require some kind of fixed infrastructure, using either fixed reference points (like a base station) or elected coordinating nodes arranged in a distributed hash table (DHT) &amp;lt;ref name=&amp;quot;p2ppublishsubscribe&amp;quot;&amp;gt;Dongcai Shi; Jianwei Yin; Zhaohui Wu; Jinxiang Dong, &amp;quot;A Peer-to-Peer Approach to Large-Scale Content-Based Publish-Subscribe,&amp;quot; Web Intelligence and Intelligent Agent Technology Workshops, 2006. WI-IAT 2006 Workshops. 2006 IEEE/WIC/ACM International Conference on, pp. 172-175, 18-22 Dec. 2006&amp;lt;/ref&amp;gt;. There are further drawbacks to these technologies. Mainly, they involve pre-selected or elected nodes that act as authorities. This creates points of failure, and means that some nodes must trust others with their authority information. While it is entirely possible that there will be publish-subscribe components in a complete reputation system, the information from such repositories must be interpreted within the context of the source node&#039;s reputation. This means that if a given information repository has been a source of unreliable information in the past, its own negative reputation would likely lead most other nodes to disregard its information, further diminishing the possible benefits of hosting such a repository. These types of systems also do not provide good localization of data, meaning nodes may have to search longer for relevant information, leading to greater overhead and latency in the system as a whole.&lt;br /&gt;
&lt;br /&gt;
Finally, peer-to-peer is perhaps the newest method of disseminating information. While there are many ways to exchange information in a peer-to-peer fashion, gossiping is the most relevant of these &amp;lt;ref name=&amp;quot;gossipreputation&amp;quot;&amp;gt;Zhou, R.; Hwang, K., &amp;quot;Gossip-based Reputation Aggregation for Unstructured Peer-to-Peer Networks,&amp;quot; Parallel and Distributed Processing Symposium, 2007. IPDPS 2007. IEEE International, pp. 1-10, 26-30 March 2007&amp;lt;/ref&amp;gt;. In a gossip-based system, sets of peers exchange information in a semi-random way. It has been found in practice that this form of information exchange provides not only good localization but also excellent scalability. The major issues surrounding gossip-based systems are that information about &amp;quot;far away&amp;quot; nodes would need to be queried, and that fraudulent information may be exchanged (meaning the system has to rely on the safety of the consensus of the majority). A further disadvantage of such a system is that it is unstructured, and if an error is propagated, it can take a while for a corrected, consistent picture to appear across the network.&lt;br /&gt;
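A toy gossip round makes the semi-random exchange concrete. The fan-out of 2, the merge rule (newer timestamp wins), and the node layout are all assumptions for this sketch, not taken from the cited gossip protocols.&lt;br /&gt;

```python
# Toy gossip: each node shares its reputation map with a few randomly
# chosen peers each round. Fan-out and merge rule are assumptions.

import random

def merge(target, source):
    # Newer information (higher timestamp) overwrites older entries.
    for entity, (value, ts) in source.items():
        if entity not in target or ts > target[entity][1]:
            target[entity] = (value, ts)

def gossip_round(nodes, fanout=2, rng=None):
    rng = rng or random.Random(42)  # seeded for reproducibility in the sketch
    for node in nodes:
        others = [n for n in nodes if n is not node]
        for peer in rng.sample(others, min(fanout, len(others))):
            merge(peer["rep"], node["rep"])

nodes = [{"name": i, "rep": {}} for i in range(5)]
nodes[0]["rep"]["node-x"] = ("misbehaved", 10)  # one node witnesses an event
for _ in range(3):  # a few rounds spread the report through the group
    gossip_round(nodes)
informed = sum(1 for n in nodes if "node-x" in n["rep"])
assert informed >= 3  # the report has spread well beyond its origin
```

A real system would layer authentication and majority-consensus checks on top of this bare exchange, for the fraud reasons noted above.&lt;br /&gt;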
&lt;br /&gt;
In application, all of these methods of information dissemination would likely need to be supported in some fashion. Very few governments or organizations would be willing to accept updates from the cloud blindly, and similarly it is very unlikely that such organizations would be willing to publish or otherwise share all of their information with the cloud at large. This means that any dissemination solution would have to be a hybrid, allowing for the definition of fixed, strict hierarchies as well as immensely scalable and dynamic peer-to-peer exchange. Where the line between these two is drawn is not fixed: some organizations may opt to make almost all information public, while others may allow no information to be published externally.&lt;br /&gt;
&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data that it does not already have about another entity in the system. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity to be able to store all of this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes some systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited for the gradual distribution of information. In short, for there to be querying of reputation, you need to have something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system needs to be able to handle any given entity sending out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to each other. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or interpretation of the reputation data, will be absolute. In this scheme, nothing is lost if a node were to leave the network.&amp;lt;ref name=&amp;quot;repest&amp;quot;&amp;gt;Xing Jin, S.-H. Gary Chan, &amp;quot;Reputation Estimation and Query in Peer-to-Peer Networks&amp;quot;, IEEE Communications Magazine, April 2010. http://www.chennaisunday.com/ieee%202010/Reputation%20Estimation%20and%20Query%20in%20Peer-to-Peer%20Networks.pdf&amp;lt;/ref&amp;gt; In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers and analyzed to determine whether to connect or not.&lt;br /&gt;
&lt;br /&gt;
The actual process of querying should be fairly simple. A given entity or node in the system needs to decide if it should contact another node. First, it must check its local representation of reputation data to see if it already has enough up-to-date information on that node. If it does, it can move toward making a decision, which is discussed later. If, however, the information needed is not already held by the node, it will need to be queried. This would be similar to the XREP system used in some peer-to-peer file-sharing networks, which can “Query” and “Poll” peers to decide whom to obtain resources from &amp;lt;ref name=&amp;quot;repest&amp;quot; /&amp;gt;. Another similar concept is a “TrustNet”, wherein an “Agent”, after determining that another “Agent” is not already acquainted with it, will query all of its neighbours about the second agent&#039;s trustworthiness&amp;lt;ref name=&amp;quot;EviMod&amp;quot;&amp;gt;Bin Yu, Munindar P. Singh, &amp;quot;An Evidential Model of Distributed Reputation Management&amp;quot;, AAMAS’02, July 19, 2002, http://portal.acm.org/citation.cfm?id=544809&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=17527626&amp;amp;CFTOKEN=24792561&amp;amp;retn=1#Fulltext &amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This brings us back to the two primary types of reputation systems, hierarchical and distributed. In a hierarchical system the process is incredibly simple: ask your superior node, and wait for a response. The superior node might have enough information on hand to decide, or it might ask its peers or superiors. Either way, the response received from the superior node will be used by the original querying node.&lt;br /&gt;
&lt;br /&gt;
Distributed querying is a little more complex. The querying node will need to decide whom to ask: perhaps nodes it trusts, if it has been operating in the reputation system for a while, or just any nearby node in general. It might ask for just a quick reputation value, or a snapshot of relevant historical events. In any case, it will use the evidence collected (if any) to ultimately make a decision. In a way, this node is its own authority node.&lt;br /&gt;
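The local-first query flow described in this section can be sketched as a single function. The freshness check, the peer structure, and the field names are illustrative assumptions; the fall-through to None models the likely case of no information existing anywhere.&lt;br /&gt;

```python
# Sketch of the query flow: check local data first, then ask peers, and
# handle the case of no information at all. Names and the max_age freshness
# rule are assumptions for this example.

def query_reputation(node, entity, now, max_age=100):
    # 1. Local check: is a cached entry present and fresh enough?
    local = node["cache"].get(entity)
    if local is not None and max_age > (now - local["ts"]):
        return local["data"]
    # 2. Not held locally (or stale): ask peers, XREP-style.
    for peer in node["peers"]:
        answer = peer["cache"].get(entity)
        if answer is not None:
            node["cache"][entity] = answer  # retain for future decisions
            return answer["data"]
    # 3. No reputation information exists; the caller must handle this.
    return None

alice = {"cache": {}, "peers": []}
bob = {"cache": {"node-z": {"data": "trusted", "ts": 50}}, "peers": []}
alice["peers"] = [bob]
assert query_reputation(alice, "node-z", now=60) == "trusted"
assert query_reputation(alice, "node-q", now=60) is None
```

In the hierarchical case, step 2 would simply ask the single superior node instead of iterating over peers.&lt;br /&gt;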
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
Every entity will have its own interpretation of reputation data. There will most likely be a common set of events considered bad for essentially any system, such as one entity participating in a DDoS on another entity, the distribution of malware, and so on. Other things are more abstract and unique to certain groups. Distributing unverifiable claims, for example, might be considered a negative reputation event by a reputable news source, perfectly acceptable by a tabloid, and irrelevant to the average entity representing a single person&#039;s personal computer. Entities will need to decide what is important to them, most likely via a human defining which events are worth taking note of and which are not. It is entirely possible, and likely, that different entities will not record events that other entities would consider noteworthy. It would therefore be beneficial for multiple entities to use the same rule set (though a unique rule set is not completely useless, as one can still record personal instances of these events in one&#039;s own history store).&lt;br /&gt;
&lt;br /&gt;
Once an entity has obtained this information, either via the regular process of dissemination, querying, or witnessing an event firsthand, it needs to make a decision. This is ultimately very open ended and up to each entity. A very simple mechanism would be to communicate only with entities that have no negative reputation events of any kind and that are viewed neutrally or positively by other entities. Another would be to ignore other entities&#039; opinions, assign a weight to each type of reputation event, and compute a score from the evidence. These are only two options among many; there is no need for a standardized process. In short, the details of actually making the decision are not that important, as long as the result is something other entities can understand: a stored collection of evidence used to form an opinion that other entities can query, and a decision about whether, and under what conditions, to connect to another entity. &lt;br /&gt;
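As a hedged sketch of the second mechanism above (every event name, weight, and threshold here is an invented example, not part of any standard), weighting recorded reputation events and summing them might look like:

```python
# Hypothetical sketch: score an entity by weighting its recorded
# reputation events. Event names and weights are invented examples.
EVENT_WEIGHTS = {
    "ddos_participation": -50.0,
    "malware_distribution": -100.0,
    "successful_transaction": 5.0,
    "unverifiable_claim": -2.0,
}

def reputation_score(events):
    """Sum the weights of known events; unknown events are ignored,
    mirroring the idea that each entity records only what it cares about."""
    return sum(EVENT_WEIGHTS.get(e, 0.0) for e in events)

def decide(events, threshold=0.0):
    # Connect only if the weighted evidence meets the threshold.
    return reputation_score(events) >= threshold

history = ["successful_transaction", "successful_transaction", "unverifiable_claim"]
print(decide(history))  # two +5.0 events outweigh one -2.0 event
```

The point is that the rule set is local policy: another entity could use entirely different events and weights, as long as the evidence it stores can still be queried and understood.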
&lt;br /&gt;
=Implementation=&lt;br /&gt;
&lt;br /&gt;
The implementation and deployment of such a reputation system is a very difficult task. Ideally, all systems would simultaneously switch over to a new protocol for reputation management, but on a distributed system as large as the web this is highly improbable. Typically, updates and layers built on top of the web&#039;s existing architecture succeed because they are incrementally deployable: since changes roll out gradually, the system as a whole never suffers a system-wide blackout.&lt;br /&gt;
&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
The key question is whether we can deploy this reputation system using incremental updates. Obviously, a large-scale wholesale changeover wouldn&#039;t be palatable to anyone. Organizations and individuals are historically, and understandably, reluctant to change. That said, such a large change in operating mentality will very likely require adoption at both the corporate and the individual level.&lt;br /&gt;
&lt;br /&gt;
Phasing this in will rely on companies deciding that running the system locally is in their own best interests. Individuals within the greater organization would then have to decide to switch to the gossip-based solution, and eventually an emergent, cohesive system would appear. Reputation is currently facilitated by justice systems and rules imposed on entities within systems; we can continue to rely on imposed rules or existing infrastructure where adequate emergent information is lacking. This way we can incrementally update the environment and eventually arrive at a full-fledged emergent reputation system. This evolutionary approach is much preferable to a revolutionary one because it avoids the disruption that a revolutionary change necessitates.  &lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
This paper has covered what reputation is and how it can be applied to computer networks. We have discussed what constitutes reputation and how it can be used to judge a member of a reputation system. This leads to why reputation is useful: it provides a means for quickly judging another system, and a distributed reputation system would allow its members to judge one another based upon past actions. Such a system would need to allow for imposed rules to punish actions that are universally shunned, such as the distribution of malware. It would also need to allow for emergent rules, as many entities in a system will have different views as to what constitutes a reputation-lowering event. Existing reputation systems, which are peer-to-peer and policy-based, are not suitable for a completely open, large distributed network such as the internet. &lt;br /&gt;
&lt;br /&gt;
The central idea of a reputation system is the need for a distributed means of storing the history of reputation events, to provide evidence that can be used to justify a reputation. For a history store to work, however, the reputation events must first be observed by an entity. These events would need to be stored and maintained in a way that allows for massive scalability, and then disseminated in some fashion. This could be a hierarchy, where information is regulated by an authority; a publish/subscribe model, where reputation information is published by one entity and subscribed to by others; or a peer-to-peer system, where information is disseminated in a gossip-based fashion. Situations will nevertheless arise when a member of the system doesn&#039;t know everything it needs to make a decision, in which case it will need a means to query its peers for information. When an entity has enough information, either via dissemination or querying, it can safely connect to another entity, leading to a successful implementation of a reputation system.  &lt;br /&gt;
&lt;br /&gt;
Reputation systems already form an important part of the internet in some areas. It is very likely that they will continue to do so, and that their scope will increase. This paper presented an overview of current reputation systems, as well as an outline of how a reputation system can be implemented at internet scale. By dividing the problem of designing and implementing a reputation system into several smaller components, this paper tackled the complicated questions associated with the overall architecture of such a system and how it can be created in a way that satisfies the multitude of stakeholders that exist in the cloud. While such a system might not be immediately implementable, it would likely provide a tangible long-term benefit in the future.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Reputation&amp;diff=9485</id>
		<title>DistOS-2011W Reputation</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Reputation&amp;diff=9485"/>
		<updated>2011-04-12T02:39:17Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* PAPER */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==PAPER==&lt;br /&gt;
&lt;br /&gt;
Our final paper can be found here (over the past week, the brunt of our efforts is displayed on the &amp;quot;final-paper&amp;quot; wiki page).&lt;br /&gt;
* [[Distributed OS: Winter 2011 Reputation Systems Paper]]&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
* Waheed Ahmed&lt;br /&gt;
* Trevor Gelowsky&lt;br /&gt;
** MSN: Gelowt@gmail.com&lt;br /&gt;
** E-Mail:  tgelowsk@sce.carleton.ca&lt;br /&gt;
* Michael Du Plessis&lt;br /&gt;
* Nicolas Lessard (nick.lessard @t gmail.com / nlessard @t carleton.connect.ca)&lt;br /&gt;
&lt;br /&gt;
==Our presentation==&lt;br /&gt;
Our current presentation can be viewed at the following link: https://docs.google.com/present/edit?id=0ASS7kj9hfc1aZGRiMjMzOHJfNGhnNzhuamRr&amp;amp;hl=en&amp;amp;authkey=CMHi3KAD&lt;br /&gt;
&lt;br /&gt;
==The problem==&lt;br /&gt;
* Emerge vs. Impose reputation on the system?&lt;br /&gt;
** Probably both, how do we account for both systems?&lt;br /&gt;
* Where do you store the data?&lt;br /&gt;
* Where is the data queried from?&lt;br /&gt;
* What defines good/bad reputation?&lt;br /&gt;
* Who provides the good/bad reputation?&lt;br /&gt;
* Who do we trust for this information?&lt;br /&gt;
* Should reputation be mutable? Can we be pardoned, or can reputations be reversed?&lt;br /&gt;
* What entities are able to contribute to reputations?&lt;br /&gt;
* How do we access reputation about entities?&lt;br /&gt;
* Who is authorized to access particular reputations? How much to reveal? (Information flow)&lt;br /&gt;
&lt;br /&gt;
==What technologies currently exist?==&lt;br /&gt;
* Digital signatures&lt;br /&gt;
** Certificates signed by trusted organizations&lt;br /&gt;
&lt;br /&gt;
* Black hole- email, spam,&lt;br /&gt;
* Google - search reputation&lt;br /&gt;
* Credit bureaus&lt;br /&gt;
* Yellow pages&lt;br /&gt;
* Better business bureau&lt;br /&gt;
* CRC - criminal records&lt;br /&gt;
&lt;br /&gt;
== What technologies don&#039;t currently exist?==&lt;br /&gt;
&lt;br /&gt;
==Guaranteeing Authenticity/Public Key Infrastructure==&lt;br /&gt;
&lt;br /&gt;
In our paper we must explain why PKI/Authentication fits into reputation. Why must it be handled by both Attribution and Reputation systems?&lt;br /&gt;
&lt;br /&gt;
===Problem Domain===&lt;br /&gt;
&lt;br /&gt;
This portion of a reputation system answers the core question of how the reputation information being exchanged is guaranteed to be authentic.&lt;br /&gt;
&lt;br /&gt;
*How do we ensure the information exchanged between peers is authentic and not tampered with?&lt;br /&gt;
&lt;br /&gt;
*How do we attribute information exchanged?&lt;br /&gt;
&lt;br /&gt;
*How do we do this in a highly decentralized, distributed system?&lt;br /&gt;
&lt;br /&gt;
*How can we make sure the information is timely?&lt;br /&gt;
&lt;br /&gt;
===Introduction===&lt;br /&gt;
In the past few years, the Internet has provided the platform for a global marketplace, and both business and private users realize that the revolutionary communication opportunities it provides will give way to a large spectrum of business and private applications. Today, online users face a multitude of problems: vulnerability to viruses and worms, exposure to sniffers and spoofing of their private sessions, and invasion of privacy through the multitude of spyware available for monitoring how they behave. Activities over the internet range from access to information and entertainment to financial services, product services, and even socializing. The frequent use of the internet as an important business tool has led to a major increase in deliberate abuse and criminal activity. Every organization operating and trading electronically exposes its own information and IT systems to a wide range of security threats. The most common protocols, such as IP, TCP, and UDP, are primary targets for potential attackers, largely because they lack a proper authentication mechanism for incoming data.  &lt;br /&gt;
&lt;br /&gt;
To build a secure chain of trust, Public-Key Infrastructure (PKI) is used for internet-based communication. It consists of a security policy, a certificate authority, a registration authority, a certificate distribution system, and PKI-enabled applications.&lt;br /&gt;
&lt;br /&gt;
===PKI===&lt;br /&gt;
Modern e-commerce businesses, which have minimal face-to-face customer interaction, demand greater security and integrity. Online stores where huge numbers of transactions take place need to assure customers that their information is confidential and processed through a secure channel. This is where PKI steps in, providing mechanisms to ensure trusted relationships are established and maintained. The specific security functions for which a PKI can provide a foundation are confidentiality, integrity, non-repudiation, and authentication.&lt;br /&gt;
&lt;br /&gt;
PKI provides a means of guaranteeing authenticity by issuing digital signatures, which ensure that an electronic document is authentic: we know who created the document and that it hasn&#039;t been modified. Digital signatures are commonly used for software distribution and financial transactions, and to detect forgery or tampering; they rely on public-key encryption. Digital certificates apply this encryption at scale, for example on a secure web server. A certificate is issued by a trusted third party called a Certification Authority (CA), which acts as a middleman that both computers trust. This helps avoid man-in-the-middle attacks: the CA confirms that each computer is who it claims to be and provides each computer&#039;s public key to the other. A digital certificate contains the public key of the entity identified in the certificate, binding that key to a particular individual, and the certificate&#039;s authenticity is guaranteed by the issuer. It thus solves the problem of how to find a user&#039;s public key and know that it is valid: a user obtains another user&#039;s public key from the certificate and knows it is valid because a trusted CA issued it. For their own authentication, digital certificates rely on public-key cryptography: the CA signs each certificate with its own private key when the certificate is issued. To validate the authenticity of a certificate, a user can obtain the CA&#039;s public key and use it to verify that the certificate was indeed signed by the CA.&lt;br /&gt;
&lt;br /&gt;
===Issues Faced by DoD using PKI===&lt;br /&gt;
There are many different implementations of PKI, and each focuses on its own issues and solutions. For example, the PKI used by the DoD has the following issues:&lt;br /&gt;
&lt;br /&gt;
*Lack of PKI-enabled eCommerce applications and lack of interoperability among PKI applications&lt;br /&gt;
&lt;br /&gt;
*DoD is developing a single high assurance PKI&lt;br /&gt;
&lt;br /&gt;
*Very High Cost Impact to the EC/EB community.&lt;br /&gt;
&lt;br /&gt;
*The PKI community lacks metrics for mapping trust models between the DoD “high assurance” C2 and EC/EB domains&lt;br /&gt;
&lt;br /&gt;
*Education of everyone (policy maker through user) to a common level of understanding is a huge challenge.&lt;br /&gt;
&lt;br /&gt;
*While the purpose of using PKI in EC/EB is to provide additional trust, allowing the Internet to serve as a vehicle for legally binding transactions, problems still exist with the methodologies associated with establishing a long-term burden of proof. Specifically, there are no widely adopted, time-tested industry standards for the maintenance of electronic signatures or for authenticated timestamps for record maintenance. These processes are untried, and case law has not yet been established to convince users that there are no issues with their enforcement. An additional barrier to EC/EB within this space is the current DoD certificate policy in which DoD accepts&lt;br /&gt;
&lt;br /&gt;
===Common Issues With PKI Implementation===&lt;br /&gt;
&lt;br /&gt;
*Commercial Off-The-Shelf (COTS) versus customised applications: The choice between COTS and customised products is usually one of cost versus usability, and error messages deserve particular attention. If PKI is built into applications (transparent to users) this is fine; if not, users will require some understanding of the use of keys, certificates, Certificate Revocation Lists (CRLs), and directories/certificate repositories so that they can make informed decisions.&lt;br /&gt;
&lt;br /&gt;
*Token logistics (smart cards): The point where keys and certificates are linked to their owner is a very critical point in a PKI. If a fraudulent certificate is issued by a registration officer and the certificate holder uses it to commit a crime or prank, trust in the whole PKI hierarchy may be lost. The physical security requirements are high, and the registration officer, whether a person or a smartcard bureau, must be subject to strict security policies and practices, as with the DoD issues mentioned in the section above.&lt;br /&gt;
&lt;br /&gt;
*Network issues - traffic: There is no doubt that implementing PKI will add to the network load, although how much depends on the system architecture. Potential additional traffic includes certificate issuance, email usage, CRLs, and directory replication.&lt;br /&gt;
&lt;br /&gt;
*Network issues - encryption: Many organisations implement anti-virus software and content inspection on servers at the perimeter of their networks, and some have security policies that reject or quarantine encrypted traffic. To provide user-to-user confidentiality, messages will traverse networks with their payload hidden from inspection by virus and content checking.&lt;br /&gt;
&lt;br /&gt;
*Email address in certificate: To use certificates for S/MIME signed/encrypted email, the user’s email address must be in the certificate, yet most people change their email address more frequently than their certificate. Unless a solution allows users to keep the same email address over a long period, certificates would have to be re-issued every time a user changes address. S/MIME v3 stipulates that the receiving application must check the From: or Sender: field in the mail header and compare it to an email address in the sender’s certificate. If the check does not match, the mail application should perform another explicit check to ensure that the person who signed the message is indeed the person who sent it. As usual, the ‘devil is in the detail’ when it comes to implementation.&lt;br /&gt;
&lt;br /&gt;
*Certificate validity checking: CRLs have been the conventional method of certificate validity checking. CRLs do not scale very well, as discussed earlier, but are usually kept for backward compatibility, archiving/historical verification, and off-line use. The other issue with CRLs is that they are generally issued at fixed intervals of 6, 12, or 24 hours, causing a lag between the time a certificate is revoked and the time it appears on the published CRL. This may present a security risk, as a certificate may verify correctly after it has been reported as compromised and revoked (though some would argue that the lag from actual compromise to its discovery and reporting is usually more significant). The Online Certificate Status Protocol (OCSP, RFC 2560) allows a client to query an OCSP responder for the current status of a certificate. This saves searching through a large CRL and can save bandwidth if the CRL would normally be downloaded, although it may increase network traffic. Most OCSP responders are themselves based on CRLs and thus do not solve the time-lag problem outlined above.&lt;br /&gt;
&lt;br /&gt;
*Availability and storage of reliable user information: For an identity certificate scheme, names in certificates need to be unique, meaningful, and correct. Few large user communities have all their member details in a central, accurate database or directory, and the exercise of consolidating, checking, and updating all user data can turn into a massive and expensive one.&lt;br /&gt;
&lt;br /&gt;
*Archiving/historic verification: Digital signatures need to be verifiable even after the keys used to sign have expired. Likewise, we need to be able to verify that the certificate was valid at the time the data was signed. This means we would need to archive: the signed file, the public key certificate of the signer, the CRL that was valid at the time of signing, a reliable timestamp to prove the accuracy of the time of signing, and a hardware environment that can run the software that was used at the time.&lt;br /&gt;
&lt;br /&gt;
===What are the solutions to these problems?===&lt;br /&gt;
&lt;br /&gt;
*Identity-Based Encryption (IBE): enables senders to encrypt messages for recipients without requiring that a recipient&#039;s key first be established, certified, and published.&lt;br /&gt;
&lt;br /&gt;
*Certificate-based encryption: incorporates IBE methods, but uses double encryption so that the CA can&#039;t decrypt on behalf of the user.&lt;br /&gt;
&lt;br /&gt;
*Certificateless public-key cryptography: incorporates IBE methods, using partial private keys so the private key generator (PKG) can&#039;t decrypt on behalf of the user.&lt;br /&gt;
&lt;br /&gt;
*Distributed computation: methods exist that distribute cryptographic operations so that the cooperative contribution of a number of entities is required to perform an operation such as a signature or decryption. This gives tighter protection at servers versus clients, but implies that users must fully trust servers to apply keys appropriately.&lt;br /&gt;
&lt;br /&gt;
*Alternative validation strategies - hash trees: a hash tree offers a compact, protected representation of the status of a large number of certificates. It is especially valuable when a PKI is operated at large scale, where it is more beneficial than a Certificate Revocation List, since CRLs only reflect status information at fixed intervals.&lt;br /&gt;
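As a hedged illustration of the hash-tree idea (the serial numbers and status strings are invented examples, and this is not the actual CRL or OCSP wire format), a Merkle tree commits the status of many certificates to a single root hash, so a change to any one status changes the root:

```python
import hashlib

# Minimal Merkle (hash) tree sketch: the root compactly commits to the
# status of many certificates, so a verifier needs only the root plus a
# short authentication path instead of a full CRL.
def h(data):
    return hashlib.sha256(data).hexdigest()

def merkle_root(leaves):
    """Fold a list of leaf strings up to a single root hash."""
    level = [h(leaf.encode()) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

statuses = ["serial=1001:valid", "serial=1002:revoked", "serial=1003:valid"]
root = merkle_root(statuses)
print(root)  # changing any single status string changes this root
```

The compactness comes from the fact that proving one certificate's status needs only the sibling hashes on the path to the root, logarithmic in the number of certificates.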
&lt;br /&gt;
==Dissemination==&lt;br /&gt;
&lt;br /&gt;
===The Problem Domain===&lt;br /&gt;
&lt;br /&gt;
===Random Ramblings on Reputation Management and Distribution===&lt;br /&gt;
&lt;br /&gt;
Publish/Subscribe?&lt;br /&gt;
&lt;br /&gt;
This system has unique distribution requirements compared to most distributed systems. We cannot assume that there will be a universally agreed-upon definition of good or bad. Similarly, the system must be self-policing: it would be up to each group of autonomous systems to decide which updates to accept and reject. Updates themselves also should not cause the network to DDoS itself. Lastly, it would be impossible for every system to know the reputation of every other system, so the system must disseminate information in a way that is queryable and localizes reputation information where required.&lt;br /&gt;
&lt;br /&gt;
To this end, we need a way of spreading information that while reliable, does not depend on one universally agreed-upon set of reputations.&lt;br /&gt;
&lt;br /&gt;
For example, on an internet-scale operating system it would be entirely reasonable for one group of systems to not want to accept updates, or want to avoid communication with a given series of systems.&lt;br /&gt;
&lt;br /&gt;
Any solution would assume that the problems of attribution are solved.&lt;br /&gt;
&lt;br /&gt;
===Current Examples of Reputation Dissemination===&lt;br /&gt;
&lt;br /&gt;
The first protocol that immediately comes to mind in this situation is a gossip-based protocol.  These protocols are designed to operate in highly decentralized, large-scale systems.&lt;br /&gt;
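As a hedged toy model of the gossip idea (the initial opinion values are invented, and real gossip reputation protocols add trust weighting, push-pull exchange, and failure handling), repeated pairwise averaging drives every node's estimate toward a shared value with no central authority:

```python
import random

# Toy gossip-round sketch: each round, two randomly chosen nodes average
# their current estimates of a target's reputation, so all estimates
# converge toward the global mean without any coordinator.
def gossip(estimates, rounds=200, seed=42):
    rng = random.Random(seed)     # fixed seed for a reproducible run
    values = list(estimates)
    n = len(values)
    for _ in range(rounds):
        i, j = rng.randrange(n), rng.randrange(n)
        avg = (values[i] + values[j]) / 2.0   # pairwise averaging step
        values[i] = values[j] = avg
    return values

initial = [1.0, 0.0, 0.5, 0.5]   # each node's first-hand opinion of a target
final = gossip(initial)
print(final)  # all nodes end up near the global mean of 0.5
```

Pairwise averaging preserves the sum of the estimates, which is why the fixed point is the mean of the initial opinions; the surveyed protocols refine this basic mechanism.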
&lt;br /&gt;
Here&#039;s a nice overview:&lt;br /&gt;
* http://ieeexplore.ieee.org.proxy.library.carleton.ca/xpls/abs_all.jsp?arnumber=4537308 &amp;quot;Reputation management in distributed systems&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Examples are as follows:&lt;br /&gt;
* http://ieeexplore.ieee.org.proxy.library.carleton.ca/xpls/abs_all.jsp?arnumber=4228013 &amp;quot;Gossip-based Reputation Aggregation for Unstructured Peer-to-Peer Networks&amp;quot;&lt;br /&gt;
* http://ieeexplore.ieee.org.proxy.library.carleton.ca/xpls/abs_all.jsp?arnumber=5569965 &amp;quot;Improving Accuracy and Coverage in an Internet-Deployed Reputation Mechanism&amp;quot;&lt;br /&gt;
* http://ieeexplore.ieee.org.proxy.library.carleton.ca/xpls/abs_all.jsp?arnumber=4459326 &amp;quot;GossipTrust for Fast Reputation Aggregation in Peer-to-Peer Networks&amp;quot;&lt;br /&gt;
* http://ieeexplore.ieee.org.proxy.library.carleton.ca/xpls/abs_all.jsp?arnumber=4777496 &amp;quot;Adaptive trust management in P2P networks using gossip protocol&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Another possibility is using &amp;quot;Reputation chains&amp;quot;&lt;br /&gt;
* http://dx.doi.org.proxy.library.carleton.ca/10.1109/TKDE.2009.45 &amp;quot;P2P Reputation Management Using Distributed Identities and Decentralized Recommendation Chains&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Maintaining History==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Problem domain===&lt;br /&gt;
&lt;br /&gt;
* Emerge vs. Impose reputation on the system?&lt;br /&gt;
** Probably both, how do we account for both systems?&lt;br /&gt;
*** Do we maintain records based on a fixed set of imposed rules? Or do we build rules as the system emerges and reputations are formed?&lt;br /&gt;
* Where do you store the data?&lt;br /&gt;
** Distributed storage systems. Reputation in real life is stored in the interactions that an entity has with others; it is not stored centrally. Reputation is most often a shared view of an entity by the masses, but sometimes an entity&#039;s reputation can be disjoint among the masses: many different entities holding differing views of the same entity.&lt;br /&gt;
* Where is the data queried from?&lt;br /&gt;
** (should I mention this?)&lt;br /&gt;
* What defines good/bad reputation?&lt;br /&gt;
** (should I mention this?)&lt;br /&gt;
* Who provides the good/bad reputation?&lt;br /&gt;
** Impose/Emerge problem: reputation for an interaction can be calculated immediately or can be a function of time.&lt;br /&gt;
* Who do we trust for this information?&lt;br /&gt;
** Trusting the masses is generally a good way of ensuring trustworthiness. Imposed rules will not always fit every situation well - could potentially set bad reputation to a &amp;quot;good&amp;quot; entity.&lt;br /&gt;
* Should reputation be mutable? Can we be pardoned, or can reputations be reversed?&lt;br /&gt;
** Do we maintain an ever-growing set of history items for interactions between entities? Do we focus on the bad reputations?&lt;br /&gt;
* What entities are able to contribute to reputations?&lt;br /&gt;
* How do we access reputation about entities?&lt;br /&gt;
* Who is authorized to access particular reputations? How much to reveal? (Information flow)&lt;br /&gt;
* What assumptions will we make?&lt;br /&gt;
* Privacy issues? What will we reveal? Will centralized systems have a know-all mentality?&lt;br /&gt;
** Fine grained information will never be revealed (privacy concerns and user rights)&lt;br /&gt;
* Which history should I maintain? What to take as important, what to disregard?&lt;br /&gt;
* Immutable data structure. Who could add data? Who could remove data? Authority&lt;br /&gt;
&lt;br /&gt;
===Reputation systems===&lt;br /&gt;
* record, aggregate, distribute information about an entity&#039;s behaviour in distributed applications&lt;br /&gt;
&lt;br /&gt;
* reputation might be based on the entity&#039;s past ability to adhere to a license agreement (mutual contract between issuer and licensee)&lt;br /&gt;
&lt;br /&gt;
===History-based access control systems===&lt;br /&gt;
* make decision based on an entity&#039;s past security-sensitive actions&lt;br /&gt;
&lt;br /&gt;
===Examples of reputation systems (trust-informing technologies)===&lt;br /&gt;
* eBay - Feedback forum (positive, neutral, negative)&lt;br /&gt;
&lt;br /&gt;
===Do reputation systems have some validity?===&lt;br /&gt;
&lt;br /&gt;
Resnick et al. argue that reputation systems foster an incentive for principals to behave well because of “the expectation of reciprocity or retaliation in future interactions”.&lt;br /&gt;
&lt;br /&gt;
Abstractions are used to model the aggregated information about each entity. These abstractions may not encompass the full details of transactions or provide context for specific issues relating to feedback. In turn, we can end up with ambiguous values.&lt;br /&gt;
&lt;br /&gt;
So we need a system that provides sufficient information in order to verify the precise properties of a past behaviour.&lt;br /&gt;
&lt;br /&gt;
* Krukow, K. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science University of Southampton, UK. (March 3, 2011) [http://www.brics.dk/~krukow/research/publications/online_papers/concrete-jcs.pdf]&lt;br /&gt;
&lt;br /&gt;
====Abstract====&lt;br /&gt;
Reputation systems are meta systems that record, aggregate and distribute information about principals’ behaviour in distributed applications. Similarly, history-based access control systems make decisions based on programs’ past security-sensitive actions. While the applications are distinct, the two types of systems are fundamentally making decisions based on information about the past behaviour of an entity. A logical policy-centric framework for such behaviour-based decision-making is presented. In the framework, principals specify policies which state precise requirements on the past behaviour of other principals that must be fulfilled in order for interaction to take place. The framework consists of a formal model of behaviour, based on event structures; a declarative logical language for specifying properties of past behaviour; and efficient dynamic algorithms for checking whether a particular behaviour satisfies a property from the language. It is shown how the framework can be extended in several ways, most notably to encompass parameterized events and quantification over parameters. In an extended application, it is illustrated how the framework can be applied for dynamic history-based access control for safe execution of unknown and untrusted programs.&lt;br /&gt;
&lt;br /&gt;
* Khosrow-Pour, M. Emerging trends and challenges in information technology management (March 7, 2011) [http://books.google.ca/books?id=ybzS-yylJfAC&amp;amp;lpg=PA822&amp;amp;ots=V7hn_RzqXA&amp;amp;dq=maintaining%20history%20in%20reputation%20systems&amp;amp;pg=PA822#v=onepage&amp;amp;q=maintaining%20history%20in%20reputation%20systems&amp;amp;f=false]&lt;br /&gt;
&lt;br /&gt;
====Abstract====&lt;br /&gt;
&lt;br /&gt;
* Bolton, G. et al. How Effective are Electronic Reputation Mechanisms?  (March 10, 2011) [http://ccs.mit.edu/dell/reputation/BKOMSsub.pdf]&lt;br /&gt;
&lt;br /&gt;
====Abstract====&lt;br /&gt;
&lt;br /&gt;
Electronic reputation or “feedback” mechanisms aim to mitigate the moral hazard problems associated with exchange among strangers by providing the type of information available in more traditional close-knit groups, where members are frequently involved in one another’s dealings. In this paper, we compare trading in a market with electronic feedback (as implemented by many Internet markets) to a market without, as well as to a market in which the same people interact with one another repeatedly (partners market). We find that, while the feedback mechanism induces quite a substantial improvement in transaction efficiency, it also exhibits a kind of public goods problem in that, unlike the partners market, the benefits of trust and trustworthy behavior go to the whole community and are not completely internalized. We discuss the implications of this perspective for improving these systems.&lt;br /&gt;
&lt;br /&gt;
==Problem domain:==&lt;br /&gt;
&lt;br /&gt;
This portion of a reputation system answers the core question of how reputation is generated from the information exchanged between systems and how/where it is stored.&lt;br /&gt;
&lt;br /&gt;
* Emerge vs. Impose reputation on the system?&lt;br /&gt;
** Probably both, how do we account for both systems?&lt;br /&gt;
*** Do we maintain records based on a fixed set of imposed rules? Or do we build rules as the system emerges and reputations are formed?&lt;br /&gt;
* Where do you store the data?&lt;br /&gt;
** Distributed storage systems. Reputation in real life is stored in the interactions that an entity has with others; it is not stored centrally. Reputation is most often a shared view of an entity by the masses, but sometimes an entity&#039;s reputation can be disjoint among the masses: many different entities holding differing views of the same entity.&lt;br /&gt;
* Who do we trust for this information?&lt;br /&gt;
** Trusting the masses is generally a good way of ensuring trustworthiness. Imposed rules will not always fit every situation well - they could potentially assign a bad reputation to a &amp;quot;good&amp;quot; entity.&lt;br /&gt;
* Should reputation be mutable? Can we be pardoned, or can reputations be reversed?&lt;br /&gt;
** Do we maintain an ever-growing set of history items for interactions between entities? Do we focus on the bad reputations?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Existing systems===&lt;br /&gt;
* Peer-based systems (emerge)&lt;br /&gt;
** eBay - positive/negative rating system&lt;br /&gt;
** YouTube - like/dislike/spam comment system&lt;br /&gt;
* Policy-based systems (impose)&lt;br /&gt;
** Java - policy-based security&lt;br /&gt;
** Android - policy-based security&lt;br /&gt;
These two kinds of systems are on opposite ends of the Emerge-Impose spectrum.&lt;br /&gt;
===EigenTrust system===&lt;br /&gt;
The EigenTrust system utilizes a numerical scale for reputation storage.&lt;br /&gt;
Advantages:&lt;br /&gt;
* Numerical values are easy to compare.&lt;br /&gt;
* Little storage space is required.&lt;br /&gt;
Disadvantages:&lt;br /&gt;
* Information is lost in the abstraction process.&lt;br /&gt;
** No concrete data&lt;br /&gt;
* Ambiguity&lt;br /&gt;
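&lt;br /&gt;
To make the numerical approach concrete, here is a minimal sketch of EigenTrust-style aggregation: each peer&#039;s local trust values are normalized into a stochastic matrix, and a global trust vector is obtained by repeatedly applying that matrix. The trust values are illustrative, and the sketch omits the pre-trusted peers and security mechanisms of the full algorithm.&lt;br /&gt;

```python
# Minimal sketch of EigenTrust-style global trust aggregation.
# local[i][j] is peer i's raw (non-negative) trust in peer j; the values
# below are illustrative, not taken from any real deployment.

def eigentrust(local, iterations=50):
    n = len(local)
    # Normalize each peer's local trust so every row sums to 1 (c_ij).
    c = []
    for row in local:
        s = sum(row)
        c.append([v / s if s else 1.0 / n for v in row])
    # Start from a uniform vector and repeatedly apply t <- C^T t.
    t = [1.0 / n] * n
    for _ in range(iterations):
        t = [sum(c[i][j] * t[i] for i in range(n)) for j in range(n)]
    return t

local = [
    [0, 4, 1],   # peer 0 has had mostly good dealings with peer 1
    [2, 0, 2],
    [3, 3, 0],
]
scores = eigentrust(local)   # one easy-to-compare number per peer
```

The result is exactly the kind of compact numerical summary described above - one score per peer - which also demonstrates the disadvantage: nothing about the underlying transactions survives the abstraction.&lt;br /&gt;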
===Storing concrete data===&lt;br /&gt;
Hence, with the information a numerical reputation system gives us, we cannot generate an accurate profile of the entity. We need a system that represents reputation in a concrete form:&lt;br /&gt;
“If principal p gains access to resource r at time t, then the past behavior of p up until time t satisfies requirement ψr.”&lt;br /&gt;
Advantages:&lt;br /&gt;
* A sufficient amount of information is available to build a profile of an entity.&lt;br /&gt;
Disadvantages:&lt;br /&gt;
* Storage space&lt;br /&gt;
===Shmatikov and Talcott===&lt;br /&gt;
Histories are sets of time-stamped events. Reputation is based on an entity&#039;s ability to adhere to licenses. Licenses might permit certain actions or require certain actions to be performed.&lt;br /&gt;
Advantages:&lt;br /&gt;
* Data is stored in concrete form.&lt;br /&gt;
Disadvantages:&lt;br /&gt;
* No notion of sessions (logically connected sets of events)&lt;br /&gt;
===Representation of reputation===&lt;br /&gt;
If we consider reputation information to encompass the events and actions of an entity, then we can model reputation as a set of events. An interesting new problem is how to re-evaluate policies efficiently when new information becomes available. This problem is known as “dynamic model-checking”.&lt;br /&gt;
===Dynamic model-checking===&lt;br /&gt;
We want a way of summarizing past reputations. A solution here is the approach of Havelund and Rosu, which is based on the technique of dynamic programming and is used for runtime verification.&lt;br /&gt;
* Given some information on an entity, how do we convert/abstract this to reputation?&lt;br /&gt;
* Is it necessary to maintain all the information?&lt;br /&gt;
* Now that we have the reputation information, what can we do with it?&lt;br /&gt;
** What can we compare it with?&lt;br /&gt;
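&lt;br /&gt;
The dynamic-programming flavour of Havelund and Rosu&#039;s technique can be sketched as follows: instead of re-checking the entire event history whenever a new event arrives, the checker carries forward only the summary state needed to evaluate the policy. The policy here (every &amp;quot;access&amp;quot; must be preceded by an &amp;quot;auth&amp;quot;) is an illustrative stand-in, not an example from their paper.&lt;br /&gt;

```python
# Sketch of dynamic-programming-style runtime verification: process the
# event stream once, keeping only the truth values needed so far, rather
# than re-examining the whole history on every new event.

def make_checker():
    state = {"authed": False, "ok": True}
    def step(event):
        if event == "auth":
            state["authed"] = True
        elif event == "access" and not state["authed"]:
            state["ok"] = False   # an access with no prior auth: violation
        return state["ok"]        # does the history so far satisfy the policy?
    return step

step = make_checker()
results = [step(e) for e in ["auth", "access", "access"]]   # policy holds
```

Each call is constant-time in the length of the history, which is what makes re-evaluating policies on every new piece of reputation information affordable.&lt;br /&gt;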
===Implementation===&lt;br /&gt;
Desired functionality of the system:&lt;br /&gt;
* new() - append new reputation information&lt;br /&gt;
* update() - update and summarize past behavior (this is a reduce function)&lt;br /&gt;
* check() - analyze whether the given reputation satisfies the criteria of the policy&lt;br /&gt;
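&lt;br /&gt;
A minimal sketch of this interface, with reputation stored as time-stamped events; all names, fields, and the sample policy are illustrative.&lt;br /&gt;

```python
# Minimal sketch of the new()/update()/check() interface described above,
# with reputation kept as a list of time-stamped events.

class ReputationRecord:
    def __init__(self):
        self.events = []                     # full concrete history
        self.summary = {"good": 0, "bad": 0}

    def new(self, timestamp, action, outcome):
        """Append new reputation information."""
        self.events.append((timestamp, action, outcome))

    def update(self):
        """Summarize past behavior (a reduce over the event history)."""
        self.summary = {"good": 0, "bad": 0}
        for _, _, outcome in self.events:
            self.summary["good" if outcome else "bad"] += 1
        return self.summary

    def check(self, policy):
        """Does the summarized reputation satisfy the given policy?"""
        return policy(self.update())

rec = ReputationRecord()
rec.new(1, "trade", True)
rec.new(2, "trade", True)
rec.new(3, "trade", False)
# Policy: at least twice as many good interactions as bad ones.
trusted = rec.check(lambda s: s["good"] >= 2 * s["bad"])
```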
&lt;br /&gt;
==Querying Reputation==&lt;br /&gt;
&lt;br /&gt;
=== Problems ===&lt;br /&gt;
&lt;br /&gt;
* Emerge vs. Impose reputation on the system?&lt;br /&gt;
** Probably both; how do we account for both systems?&lt;br /&gt;
** If you want to know someone&#039;s reputation, you either need to start asking around for it, imposing yourself, or the data needs to be sent around so that you already have access to it: emergent.&lt;br /&gt;
* Where do you store the data?&lt;br /&gt;
** You need to know who has the data in order to ask them for it, or to go get it yourself.&lt;br /&gt;
* Where is the data queried from?&lt;br /&gt;
** First you need to know who is storing it. Then you need to know whether you are allowed to ask that node directly, or whether you must ask an intermediary keeper of the data. Will you even need to query - that is, do you already have everything you need on hand? You need not get the latest updates on a node if every other node that has ever talked to it got DDoSed. (Or do you?)&lt;br /&gt;
* What defines good/bad reputation?&lt;br /&gt;
** Should I make my own definition of bad reputation and query whether someone engaged in activities I consider bad, or should there be a globally agreed-upon reputation?&lt;br /&gt;
* Who provides the good/bad reputation?&lt;br /&gt;
** Who should I ask for information?&lt;br /&gt;
* Who do we trust for this information?&lt;br /&gt;
** Whoever you trust - presumably their opinion on a given node is more important than that of a node you trust less.&lt;br /&gt;
* Should reputation be mutable? Can we be pardoned, or can reputations be reversed?&lt;br /&gt;
** Practically, would you bother asking for 10-year-old reputation data on a node if it has been a model citizen for the last 9?&lt;br /&gt;
* What entities are able to contribute to reputations?&lt;br /&gt;
** Should I ask everyone I trust for an opinion on a given node, or just certain keepers of trust data?&lt;br /&gt;
* How do we access reputation about entities?&lt;br /&gt;
** You query someone in the know whom you trust and are allowed to query.&lt;br /&gt;
** You could, say, ask everyone you know and trust, and ask them to ask the people they know and trust (and so on, if they are willing) until you find a node with the information you need.&lt;br /&gt;
** In a more centralized system you need to ask some kind of keeper of information for the information you want, and that keeper may or may not provide you with the reputation information you ask for.&lt;br /&gt;
* Who is authorized to access particular reputations? How much to reveal? (Information flow)&lt;br /&gt;
** The ability to control this depends on how centralized a system you have. In a truly distributed system, where every node has an opinion on any other node it has talked to, you will be able to find somebody who can tell you about the CIA node; but in a more centralized system the keepers of information might be less willing to give Joe Six-Cores information on who Iran is DDoSing.&lt;br /&gt;
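&lt;br /&gt;
The &amp;quot;ask the people you trust, and have them ask in turn&amp;quot; strategy above can be sketched as a bounded breadth-first search over a trust graph. The graph, opinions, and hop limit are illustrative; a real system would also weight each answer by how much the path to it is trusted.&lt;br /&gt;

```python
# Sketch of querying reputation through trusted peers: search outward from
# yourself through the people you trust until someone has an opinion on
# the target, up to a hop limit.
from collections import deque

def query_reputation(trust_graph, opinions, start, target, max_hops=3):
    """Breadth-first search through trusted peers for an opinion on target."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if target in opinions.get(node, {}):
            return opinions[node][target]    # first trusted answer wins
        if hops < max_hops:
            for peer in trust_graph.get(node, []):
                if peer not in seen:
                    seen.add(peer)
                    frontier.append((peer, hops + 1))
    return None                               # nobody in reach knows

trust_graph = {"me": ["alice"], "alice": ["bob"], "bob": []}
opinions = {"bob": {"mallory": "bad"}}
verdict = query_reputation(trust_graph, opinions, "me", "mallory")
```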
&lt;br /&gt;
===Maybe References===&lt;br /&gt;
http://www.kirkarts.com/wiki/images/1/13/Resnick_eBay.pdf - &#039;&#039;Trust Among Strangers in Internet Transactions:&lt;br /&gt;
Empirical Analysis of eBay’s Reputation System&#039;&#039; (maybe not too relevant)&lt;br /&gt;
&lt;br /&gt;
http://portal.acm.org/citation.cfm?id=544741.544809 - &#039;&#039;An Evidential Model of Distributed Reputation Management&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
http://portal.acm.org/citation.cfm?id=775152.775242&amp;amp;type=series%EF%BF%BD%C3%9C -- &#039;&#039;The EigenTrust Algorithm for Reputation Management in&lt;br /&gt;
P2P Networks&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.4.2297&amp;amp;rep=rep1&amp;amp;type=pdf -- &#039;&#039;A Robust Reputation System for Mobile Ad-hoc&lt;br /&gt;
Networks&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.125.8729&amp;amp;rep=rep1&amp;amp;type=pdf -- &#039;&#039;EigenRep: Reputation Management in P2P Networks&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
http://www.chennaisunday.com/ieee%202010/Reputation%20Estimation%20and%20Query%20in%20Peer-to-Peer%20Networks.pdf -- &#039;&#039;Reputation Estimation and Query in Peer-to-Peer Networks&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Here is another paper that might be interesting for you. -- Lester&lt;br /&gt;
http://dcg.ethz.ch/publications/netecon06.pdf&lt;br /&gt;
&lt;br /&gt;
==Possible implementations==&lt;br /&gt;
==Implementation Requirements==&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* Joel Weise: &amp;quot;Public Key Infrastructure Overview&amp;quot; http://www.sun.com/blueprints/0801/publickey.pdf Accessed 2nd March 2011&lt;br /&gt;
&lt;br /&gt;
* Security Glossary: http://www.cafesoft.com/support/security-glossary.html Accessed 2nd March 2011&lt;br /&gt;
&lt;br /&gt;
* Mattila, Anssi; and Mattila, Minna: &amp;quot;What is the Effect of Product Attributes on Public-Key Infrastructure adoption?&amp;quot; http://internetjournals.net/journals/tir/2006/January/Paper%2003.pdf Accessed 2nd March 2011&lt;br /&gt;
&lt;br /&gt;
* Electronic Commerce Conference, PKI Sub-Group, Issue Paper: http://www.defense.gov/dodreform/ecwg/pki.pdf Accessed 5th March 2011&lt;br /&gt;
&lt;br /&gt;
* SANS Institute InfoSec Reading Room, &amp;quot;Common issues in PKI implementations - climbing the Slope of Enlightenment&amp;quot;: http://www.sans.org/reading_room/whitepapers/authentication/common-issues-pki-implementations-climbing-slope-enlightenment_1198 Accessed 15th March 2011&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Understanding Digital Certificate&amp;quot; by Microsoft: http://technet.microsoft.com/en-us/library/bb123848%28EXCHG.65%29.aspx Accessed 3rd April 2011&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;How digital certificate works&amp;quot; by IBM: http://publib.boulder.ibm.com/infocenter/wmqv6/v6r0/index.jsp?topic=/com.ibm.mq.csqzas.doc/sy10580_.htm Accessed 3rd April 2011&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Reputation&amp;diff=9484</id>
		<title>DistOS-2011W Reputation</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Reputation&amp;diff=9484"/>
		<updated>2011-04-12T02:37:28Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* PAPER */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==PAPER==&lt;br /&gt;
&lt;br /&gt;
Our final paper can be found here (over the past week, the bulk of our efforts has gone into this final paper page).&lt;br /&gt;
* [[Distributed OS: Winter 2011 Reputation Systems Paper]]&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
* Waheed Ahmed&lt;br /&gt;
* Trevor Gelowsky&lt;br /&gt;
** MSN: Gelowt@gmail.com&lt;br /&gt;
** E-Mail:  tgelowsk@sce.carleton.ca&lt;br /&gt;
* Michael Du Plessis&lt;br /&gt;
* Nicolas Lessard (nick.lessard @t gmail.com / nlessard @t carleton.connect.ca)&lt;br /&gt;
&lt;br /&gt;
==Our presentation==&lt;br /&gt;
Our current presentation can be viewed at the following link: https://docs.google.com/present/edit?id=0ASS7kj9hfc1aZGRiMjMzOHJfNGhnNzhuamRr&amp;amp;hl=en&amp;amp;authkey=CMHi3KAD&lt;br /&gt;
&lt;br /&gt;
==The problem==&lt;br /&gt;
* Emerge vs. Impose reputation on the system?&lt;br /&gt;
** Probably both, how do we account for both systems?&lt;br /&gt;
* Where do you store the data?&lt;br /&gt;
* Where is the data queried from?&lt;br /&gt;
* What defines good/bad reputation?&lt;br /&gt;
* Who provides the good/bad reputation?&lt;br /&gt;
* Who do we trust for this information?&lt;br /&gt;
* Should reputation be mutable? Can we be pardoned, or can reputations be reversed?&lt;br /&gt;
* What entities are able to contribute to reputations?&lt;br /&gt;
* How do we access reputation about entities?&lt;br /&gt;
* Who is authorized to access particular reputations? How much to reveal? (Information flow)&lt;br /&gt;
&lt;br /&gt;
==What technologies currently exist?==&lt;br /&gt;
* Digital signatures&lt;br /&gt;
** Certificates signed by trusted organizations&lt;br /&gt;
&lt;br /&gt;
* Black hole- email, spam,&lt;br /&gt;
* Google - search reputation&lt;br /&gt;
* Credit bureaus&lt;br /&gt;
* Yellow pages&lt;br /&gt;
* Better business bureau&lt;br /&gt;
* CRC - criminal records&lt;br /&gt;
&lt;br /&gt;
== What technologies don&#039;t currently exist?==&lt;br /&gt;
&lt;br /&gt;
==Guaranteeing Authenticity/Public Key Infrastructure==&lt;br /&gt;
&lt;br /&gt;
In our paper we must explain why PKI/Authentication fits into reputation. Why must it be handled by both Attribution and Reputation systems?&lt;br /&gt;
&lt;br /&gt;
===Problem Domain===&lt;br /&gt;
&lt;br /&gt;
This portion of a reputation system answers the core question of how reputation information being exchanged is guaranteed to be authentic.&lt;br /&gt;
&lt;br /&gt;
*How do we ensure that the information exchanged between peers is authentic and has not been tampered with?&lt;br /&gt;
&lt;br /&gt;
*How do we attribute the information exchanged?&lt;br /&gt;
&lt;br /&gt;
*How do we do this in a highly decentralized, distributed system?&lt;br /&gt;
&lt;br /&gt;
*How can we make sure the information is timely?&lt;br /&gt;
&lt;br /&gt;
===Introduction===&lt;br /&gt;
In the past few years, the Internet has provided a platform for a global marketplace, and both business and private users realize that the revolutionary communication opportunities it provides will give way to a large spectrum of business and private applications. Today&#039;s online users face a multitude of problems: vulnerability to viruses and worms, exposure to sniffers, spoofing of their private sessions, and invasion of privacy through the multitude of spyware available for monitoring how they behave. Activity on the Internet today ranges from access to information and entertainment to financial services, product services, and even socializing. The frequent use of the Internet as an important business tool has led to a major increase in deliberate abuse and criminal activity. Every organization that operates and trades electronically exposes its own information and IT systems to a wide range of security threats. The most common protocols, such as IP, TCP, and UDP, are the main targets of potential attackers, largely because they lack a proper authentication mechanism for incoming data.&lt;br /&gt;
&lt;br /&gt;
In order to build a secure chain of trust for Internet-based communication, a Public-Key Infrastructure (PKI) is used. It consists of several components: a security policy, a certificate authority, a registration authority, a certificate distribution system, and PKI-enabled applications.&lt;br /&gt;
&lt;br /&gt;
===PKI===&lt;br /&gt;
With the development of modern e-commerce businesses, which have minimal face-to-face customer interaction, there is growing demand for security and integrity. Online stores where huge numbers of transactions take place need to assure customers that their information is confidential and processed through a secure channel. This is where an implementation of PKI steps in, providing mechanisms to ensure that trusted relationships are established and maintained. The specific security functions for which a PKI can provide a foundation are confidentiality, integrity, non-repudiation, and authentication.&lt;br /&gt;
&lt;br /&gt;
PKI provides a means of guaranteeing authenticity by issuing digital signatures, which ensure that an electronic document is authentic: we know who created the document and that it has not been modified since. Digital signatures are commonly used for software distribution and financial transactions, and to detect forgery or tampering. To provide this authentication, a digital signature relies on encryption, and digital certificates apply that encryption on a large scale, for example to secure web servers. A digital certificate is validated by the PKI by verifying the certificate&#039;s authenticity, its validity, and its trustworthiness. Certificates are issued by an authority referred to as a Certification Authority (CA). The CA acts as a middleman that both computers trust: it confirms that each computer is in fact who it says it is, and then provides the public key of each computer to the other, which helps avoid man-in-the-middle attacks. A digital certificate contains the public key of the entity identified in the certificate, matching a public key to a particular individual, and the certificate&#039;s authenticity is guaranteed by the issuer. The certificate thus solves the problem of how to find a user&#039;s public key and know that it is valid: a user obtains another user&#039;s public key from the digital certificate, and knows it is valid because a trusted certification authority issued the certificate. For their own authentication, digital certificates rely on public-key cryptography: the certification authority signs the certificate with its own private key when the certificate is issued. To validate the authenticity of a digital certificate, a user can obtain the certification authority&#039;s public key and use it against the certificate to determine whether it was signed by that certification authority.&lt;br /&gt;
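&lt;br /&gt;
The validation flow described above can be illustrated with a toy certificate issuer and verifier. Note that the signing primitive here is an HMAC with a single CA key, used only to keep the example self-contained; a real PKI uses asymmetric signatures (e.g. RSA or ECDSA) so that a verifier needs only the CA&#039;s public key.&lt;br /&gt;

```python
# Toy illustration of certificate issuance and validation. The CA signs the
# certificate body, and a verifier re-checks that signature before trusting
# the public key inside. NOTE: the HMAC below uses a single shared CA key
# only to keep the example self-contained; a real PKI uses asymmetric
# signatures so verifiers need only the CA's *public* key.
import hashlib
import hmac
import json

CA_KEY = b"demo-ca-key"   # stand-in for the CA's signing key

def issue_certificate(subject, subject_public_key):
    body = {"subject": subject, "public_key": subject_public_key}
    payload = json.dumps(body, sort_keys=True).encode()
    signature = hmac.new(CA_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "signature": signature}

def verify_certificate(cert):
    payload = json.dumps(cert["body"], sort_keys=True).encode()
    expected = hmac.new(CA_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

cert = issue_certificate("www.example.com", "PUBKEY-123")
ok = verify_certificate(cert)                   # untampered: verifies
cert["body"]["subject"] = "www.evil.example"    # tampering breaks the signature
tampered_ok = verify_certificate(cert)
```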
&lt;br /&gt;
===Issues Faced by DoD using PKI===&lt;br /&gt;
I found out that there are many different implementations of PKI, and each focuses on its own issues and solutions. For example, the PKI used by the DoD faces the following issues:&lt;br /&gt;
&lt;br /&gt;
*Lack of PKI-enabled eCommerce applications and lack of interoperability among PKI applications&lt;br /&gt;
&lt;br /&gt;
*DoD is developing a single high assurance PKI&lt;br /&gt;
&lt;br /&gt;
*Very High Cost Impact to the EC/EB community.&lt;br /&gt;
&lt;br /&gt;
*The PKI community lacks metrics for mapping trust models between the DoD “high assurance” C2 and EC/EB domains.&lt;br /&gt;
&lt;br /&gt;
*Education of everyone (policy maker through user) to a common level of understanding is a huge challenge.&lt;br /&gt;
&lt;br /&gt;
*While the purpose of using PKI in EC/EB is to provide additional trust, allowing the Internet to serve as a vehicle for legally binding transactions, problems still exist with the methodologies associated with establishing a long-term burden of proof. Specifically, there are no widely adopted industry standards for the maintenance of electronic signatures or for authenticated timestamps for record maintenance that have stood the test of time. These processes are untried, and the case law has not yet been established to convince users that there are no issues with enforcement of these new processes. An additional barrier to EC/EB within this space is the current DoD Certificate policy in which DoD accepts&lt;br /&gt;
&lt;br /&gt;
===Common Issues With PKI Implementation===&lt;br /&gt;
&lt;br /&gt;
*Commercial Off-The-Shelf (COTS) versus customised applications: The choice between COTS and customised products is usually one of cost versus usability. On the usability side, particular attention should be paid to error messages. If PKI is built into applications (transparent to users), this is fine; if not, users will need some understanding of the use of keys, certificates, Certificate Revocation Lists (CRLs) and directories/certificate repositories so that they can make informed decisions.&lt;br /&gt;
&lt;br /&gt;
*Token logistics (smart cards): The point where keys and certificates are linked to their owner is a very critical point in a PKI. If a fraudulent certificate is issued by a registration officer and the certificate holder uses it to commit a crime or prank, trust in the whole PKI hierarchy may be lost. The physical security requirements are high, and the registration officer, whether a person or a smartcard bureau, must be subject to strict security policies and practices. This was one of the DoD&#039;s problems mentioned in the section above.&lt;br /&gt;
&lt;br /&gt;
*Network issues - traffic: There is no doubt that implementing PKI will add to the network load, although just how much depends on the system architecture. Potential additional traffic that should be considered includes certificate issuance, email usage, CRLs, and directory replication.&lt;br /&gt;
&lt;br /&gt;
*Network issues - encryption: Many organisations implement anti-virus software and content inspection on servers at the perimeter of their networks. Some have security policies that reject or quarantine encrypted traffic. To provide user-to-user confidentiality, messages will traverse networks with their payload hidden from inspection by virus and content checking.&lt;br /&gt;
&lt;br /&gt;
*Email address in certificate: In order to use certificates for S/MIME signed/encrypted email, the user’s email address must be in the certificate. Most people change their email addresses more frequently than their certificates. Unless a solution is built that allows users to keep the same email address over a long period, certificates would have to be re-issued every time a user changes email address. S/MIME v3 stipulates that the receiving application must check the From: or Sender: field in the mail header and compare it to an email address in the sender’s certificate. If the check does not match, the mail application should perform another explicit check to ensure that the person who signed the message is indeed the person who sent it. As usual, the ‘devil is in the detail’ when it comes to implementation.&lt;br /&gt;
&lt;br /&gt;
*Certificate validity checking: CRLs have been the conventional method of providing certificate validity checking. CRLs do not scale very well, as discussed earlier, but are usually kept for backward compatibility, archiving/historical verification, and use in off-line mode. The other issue with CRLs is that they are generally issued at intervals of 6, 12 or 24 hours, causing a time lag from the moment a certificate is revoked until it appears on the published CRL. This may present a security risk, as a certificate may verify correctly after it has been reported as compromised and revoked (however, some would argue that the time from actual compromise until its discovery and reporting would in most cases be a more significant lag). The Online Certificate Status Protocol (OCSP) (RFC 2560) allows a client to query an OCSP responder for the current status of a certificate. This saves searching through a large CRL and can save bandwidth if the CRL would normally be downloaded, although it may increase network traffic. Most OCSP responders are based on CRLs and thus do not solve the time-lag problem outlined above.&lt;br /&gt;
&lt;br /&gt;
*Availability and storage of reliable user information : For an identity certificate scheme, names in certificates need to be unique, meaningful - and correct. Few large user communities have all their member details in a central and accurate database or directory, and the exercise of consolidating, checking and updating all user data can turn into a massive and expensive exercise.&lt;br /&gt;
&lt;br /&gt;
*Archiving/historic verification: Digital signatures need to be verifiable even after the keys used to sign have expired. Likewise, we need to be able to verify that the certificate was valid at the time the data was signed. This means we would need to archive: the signed file, the public-key certificate of the signer, the CRL that was valid at the time of signing, a reliable timestamp to prove the accuracy of the time of signing and the hardware environment that can run the software that was used at the time.&lt;br /&gt;
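&lt;br /&gt;
The S/MIME sender check mentioned in the email-address bullet above can be sketched as a comparison between the From: header and the addresses embedded in the sender&#039;s certificate; the certificate structure and all addresses are illustrative.&lt;br /&gt;

```python
# Sketch of the S/MIME v3 sender check: the address in the From: header
# must match an email address embedded in the sender's certificate.
# The certificate dict and all addresses are illustrative.
from email.utils import parseaddr

def sender_matches_certificate(from_header, certificate):
    _, address = parseaddr(from_header)
    return address.lower() in [e.lower() for e in certificate["emails"]]

cert = {"subject": "Alice Example", "emails": ["alice@example.com"]}
ok = sender_matches_certificate("Alice <Alice@Example.com>", cert)          # matches
spoofed = sender_matches_certificate("Mallory <mallory@evil.test>", cert)   # does not
```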
&lt;br /&gt;
===What are the solutions to these problems?===&lt;br /&gt;
&lt;br /&gt;
*Identity-Based Encryption (IBE): enables senders to encrypt messages for recipients without requiring that a recipient&#039;s key first be established, certified, and published.&lt;br /&gt;
&lt;br /&gt;
*Certificate-based encryption: incorporates IBE methods, but uses double encryption so that the CA cannot decrypt on behalf of the user.&lt;br /&gt;
&lt;br /&gt;
*Certificateless public-key cryptography: incorporates IBE methods, using partial private keys so that the private key generator (PKG) cannot decrypt on behalf of the user.&lt;br /&gt;
&lt;br /&gt;
*Distributed computation: methods exist that distribute cryptographic operations so that the cooperative contribution of a number of entities is required in order to perform an operation such as a signature or decryption. This allows tighter protection at servers versus clients, but implies that users must fully trust servers to apply keys appropriately.&lt;br /&gt;
&lt;br /&gt;
*Alternative validation strategies - hash trees: a hash tree offers a compact, protected representation of the status of a large number of certificates. This is highly valuable if the PKI is operated at large scale, and is more beneficial than a Certificate Revocation List, which reflects status information only at fixed intervals.&lt;br /&gt;
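&lt;br /&gt;
The hash-tree idea can be sketched with a small Merkle tree: leaves hash per-certificate status records, inner nodes hash their children, and the root compactly commits to the status of every certificate at once. The status strings are illustrative.&lt;br /&gt;

```python
# Sketch of a Merkle hash tree over per-certificate status records: the
# root is a compact commitment to every certificate's status, and any
# change to any record changes the root.
import hashlib

def h(data):
    return hashlib.sha256(data).hexdigest()

def merkle_root(leaves):
    level = [h(leaf.encode()) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last node if odd
            level.append(level[-1])
        level = [h((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

statuses = ["cert1:valid", "cert2:revoked", "cert3:valid", "cert4:valid"]
root = merkle_root(statuses)
# Silently un-revoking cert2 changes the root, so tampering is detectable.
changed = merkle_root(["cert1:valid", "cert2:valid", "cert3:valid", "cert4:valid"])
```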
&lt;br /&gt;
==Dissemination==&lt;br /&gt;
&lt;br /&gt;
===The Problem Domain===&lt;br /&gt;
&lt;br /&gt;
===Random Ramblings on Reputation Management and Distribution===&lt;br /&gt;
&lt;br /&gt;
Publish/Subscribe?&lt;br /&gt;
&lt;br /&gt;
This system has unique distribution requirements as compared to most distributed systems in general.  In this system, we cannot assume that there will be a universally agreed-upon definition of good, or bad.  Similarly, the system must be self-policing.  It would be up to each and every group of autonomous systems to decide which updates to accept and reject.  Updates themselves also should not cause the network to DDoS itself.  Lastly, it would be impossible for every system to know what the reputation for a given system is.  Therefore the system must disseminate information in some way that is query-able and localizes reputation information where required.&lt;br /&gt;
&lt;br /&gt;
To this end, we need a way of spreading information that while reliable, does not depend on one universally agreed-upon set of reputations.&lt;br /&gt;
&lt;br /&gt;
For example, on an internet-scale operating system it would be entirely reasonable for one group of systems to not want to accept updates, or want to avoid communication with a given series of systems.&lt;br /&gt;
&lt;br /&gt;
Any solution would assume that the problems of attribution are solved.&lt;br /&gt;
&lt;br /&gt;
===Current Examples of Reputation Dissemination===&lt;br /&gt;
&lt;br /&gt;
The first protocol that immediately comes to mind in this situation is a gossip-based protocol.  These protocols are designed to operate in highly decentralized, large-scale systems.&lt;br /&gt;
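&lt;br /&gt;
The core of such a protocol can be sketched with pairwise averaging: in each round, every node exchanges its current estimate of a target peer&#039;s reputation with a gossip partner and both adopt the average, so opinions converge without any central aggregator. A real gossip protocol picks partners at random; this sketch pairs each node with its ring neighbour to stay reproducible, and all values are illustrative.&lt;br /&gt;

```python
# Sketch of gossip-style aggregation of reputation estimates. Each round,
# every node averages its estimate with a partner's; here partners are
# ring neighbours for reproducibility, where a real gossip protocol
# would pick them at random.

def gossip_round(scores):
    n = len(scores)
    for i in range(n):
        j = (i + 1) % n                       # this node's gossip partner
        avg = (scores[i] + scores[j]) / 2.0   # exchange and average opinions
        scores[i] = scores[j] = avg

# Four nodes' initial local opinions (in [0, 1]) of the same target peer.
scores = [1.0, 0.0, 0.5, 0.9]
for _ in range(30):
    gossip_round(scores)
spread = max(scores) - min(scores)   # estimates converge to a shared value
```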
&lt;br /&gt;
Here&#039;s a nice overview:&lt;br /&gt;
* http://ieeexplore.ieee.org.proxy.library.carleton.ca/xpls/abs_all.jsp?arnumber=4537308 &amp;quot;Reputation management in distributed systems&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Examples are as follows:&lt;br /&gt;
* http://ieeexplore.ieee.org.proxy.library.carleton.ca/xpls/abs_all.jsp?arnumber=4228013 &amp;quot;Gossip-based Reputation Aggregation for Unstructured Peer-to-Peer Networks&amp;quot;&lt;br /&gt;
* http://ieeexplore.ieee.org.proxy.library.carleton.ca/xpls/abs_all.jsp?arnumber=5569965 &amp;quot;Improving Accuracy and Coverage in an Internet-Deployed Reputation Mechanism&amp;quot;&lt;br /&gt;
* http://ieeexplore.ieee.org.proxy.library.carleton.ca/xpls/abs_all.jsp?arnumber=4459326 &amp;quot;GossipTrust for Fast Reputation Aggregation in Peer-to-Peer Networks&amp;quot;&lt;br /&gt;
* http://ieeexplore.ieee.org.proxy.library.carleton.ca/xpls/abs_all.jsp?arnumber=4777496 &amp;quot;Adaptive trust management in P2P networks using gossip protocol&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Another possibility is using &amp;quot;Reputation chains&amp;quot;&lt;br /&gt;
* http://dx.doi.org.proxy.library.carleton.ca/10.1109/TKDE.2009.45 &amp;quot;P2P Reputation Management Using Distributed Identities and Decentralized Recommendation Chains&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Maintaining History==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Problem domain===&lt;br /&gt;
&lt;br /&gt;
* Emerge vs. Impose reputation on the system?&lt;br /&gt;
** Probably both, how do we account for both systems?&lt;br /&gt;
*** Do we maintain records based on a fixed set of imposed rules? Or do we build rules as the system emerges and reputations are formed?&lt;br /&gt;
* Where do you store the data?&lt;br /&gt;
** Distributed storage systems. Reputation in real life is stored in the interactions that an entity has with others; it is not stored centrally. Reputation is most often a shared view of an entity by the masses, but sometimes an entity&#039;s reputation can be disjoint among the masses: many different entities holding differing views of the same entity&#039;s reputation.&lt;br /&gt;
* Where is the data queried from?&lt;br /&gt;
** (should I mention this?)&lt;br /&gt;
* What defines good/bad reputation?&lt;br /&gt;
** (should I mention this?)&lt;br /&gt;
* Who provides the good/bad reputation?&lt;br /&gt;
** Impose/Emerge problem: reputation for an interaction can be calculated immediately or can be a function of time.&lt;br /&gt;
* Who do we trust for this information?&lt;br /&gt;
** Trusting the masses is generally a good way of ensuring trustworthiness. Imposed rules will not always fit every situation well - they could potentially assign a bad reputation to a &amp;quot;good&amp;quot; entity.&lt;br /&gt;
* Should reputation be mutable? Can we be pardoned, or can reputations be reversed?&lt;br /&gt;
** Do we maintain an ever-growing set of history items for interactions between entities? Do we focus on the bad reputations?&lt;br /&gt;
* What entities are able to contribute to reputations?&lt;br /&gt;
* How do we access reputation about entities?&lt;br /&gt;
* Who is authorized to access particular reputations? How much to reveal? (Information flow)&lt;br /&gt;
* What assumptions will we make?&lt;br /&gt;
* Privacy issues? What will we reveal? Will centralized systems have a know-all mentality?&lt;br /&gt;
** Fine grained information will never be revealed (privacy concerns and user rights)&lt;br /&gt;
* Which history should I maintain? What to take as important, what to disregard?&lt;br /&gt;
* Immutable data structure. Who could add data? Who could remove data? Authority&lt;br /&gt;
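&lt;br /&gt;
One concrete answer to the mutability and ever-growing-history questions above is to weight each recorded interaction by an exponential decay in its age, so that recent behaviour dominates and an entity is gradually &amp;quot;pardoned&amp;quot; for old misdeeds. The half-life and event data are illustrative.&lt;br /&gt;

```python
# Sketch of time-decayed reputation: each interaction's score is weighted
# by an exponential decay in its age, so old events fade without being
# deleted from the history.

def decayed_reputation(events, now, half_life=365.0):
    """events: list of (timestamp_in_days, score in [-1, 1]) pairs."""
    num = den = 0.0
    for when, score in events:
        weight = 0.5 ** ((now - when) / half_life)   # halves every half_life
        num += weight * score
        den += weight
    return num / den if den else 0.0

history = [(0, -1.0),      # a bad interaction roughly ten years ago
           (3600, 1.0),    # recent good behaviour
           (3640, 1.0)]
score = decayed_reputation(history, now=3650)   # dominated by recent events
```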
&lt;br /&gt;
===Reputation systems===&lt;br /&gt;
* record, aggregate, distribute information about an entity&#039;s behaviour in distributed applications&lt;br /&gt;
&lt;br /&gt;
* reputation might be based on the entity&#039;s past ability to adhere to a license agreement (mutual contract between issuer and licensee)&lt;br /&gt;
&lt;br /&gt;
===History-based access control systems===&lt;br /&gt;
* make decision based on an entity&#039;s past security-sensitive actions&lt;br /&gt;
&lt;br /&gt;
===Examples of reputation systems (trust-informing technologies)===&lt;br /&gt;
* eBay - Feedback forum (positive, neutral, negative)&lt;br /&gt;
&lt;br /&gt;
===Do reputation systems have some validity?===&lt;br /&gt;
&lt;br /&gt;
Resnick et al. argue that reputation systems foster an incentive for principals to behave well because of “the expectation of reciprocity or retaliation in future interactions”.&lt;br /&gt;
&lt;br /&gt;
Abstractions are used to model the aggregated information about each entity. These abstractions may not encompass the full details of transactions or provide context for specific issues relating to feedback. In turn, we can end up with ambiguous values.&lt;br /&gt;
&lt;br /&gt;
So we need a system that provides sufficient information in order to verify the precise properties of a past behaviour.&lt;br /&gt;
&lt;br /&gt;
* Krukow, K. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science University of Southampton, UK. (March 3, 2011) [http://www.brics.dk/~krukow/research/publications/online_papers/concrete-jcs.pdf]&lt;br /&gt;
&lt;br /&gt;
====Abstract====&lt;br /&gt;
Reputation systems are meta systems that record, aggregate and distribute information about principals’ behaviour in distributed applications. Similarly, history-based access control systems make decisions based on programs’ past security-sensitive actions. While the applications are distinct, the two types of systems are fundamentally making decisions based on information about the past behaviour of an entity. A logical policy-centric framework for such behaviour-based decision-making is presented. In the framework, principals specify policies which state precise requirements on the past behaviour of other principals that must be fulfilled in order for interaction to take place. The framework consists of a formal model of behaviour, based on event structures; a declarative logical language for specifying properties of past behaviour; and efficient dynamic algorithms for checking whether a particular behaviour satisfies a property from the language. It is shown how the framework can be extended in several ways, most notably to encompass parameterized events and quantification over parameters. In an extended application, it is illustrated how the framework can be applied for dynamic history-based access control for safe execution of unknown and untrusted programs.&lt;br /&gt;
&lt;br /&gt;
* Khosrow-Pour, M. Emerging trends and challenges in information technology management (March 7, 2011) [http://books.google.ca/books?id=ybzS-yylJfAC&amp;amp;lpg=PA822&amp;amp;ots=V7hn_RzqXA&amp;amp;dq=maintaining%20history%20in%20reputation%20systems&amp;amp;pg=PA822#v=onepage&amp;amp;q=maintaining%20history%20in%20reputation%20systems&amp;amp;f=false]&lt;br /&gt;
&lt;br /&gt;
====Abstract====&lt;br /&gt;
&lt;br /&gt;
* Bolton, G. et al. How Effective are Electronic Reputation Mechanisms?  (March 10, 2011) [http://ccs.mit.edu/dell/reputation/BKOMSsub.pdf]&lt;br /&gt;
&lt;br /&gt;
====Abstract====&lt;br /&gt;
&lt;br /&gt;
Electronic reputation or “feedback” mechanisms aim to mitigate the moral hazard problems associated with exchange among strangers by providing the type of information available in more traditional close-knit groups, where members are frequently involved in one another’s dealings. In this paper, we compare trading in a market with electronic feedback (as implemented by many Internet markets) to a market without, as well as to a market in which the same people interact with one another repeatedly (partners market). We find that, while the feedback mechanism induces quite a substantial improvement in transaction efficiency, it also exhibits a kind of public goods problem in that, unlike the partners market, the benefits of trust and trustworthy behavior go to the whole community and are not completely internalized. We discuss the implications of this perspective for improving these systems.&lt;br /&gt;
&lt;br /&gt;
==Problem domain==&lt;br /&gt;
&lt;br /&gt;
This portion of a reputation system answers the core question of how reputation is generated from the information exchanged between systems and how/where it is stored.&lt;br /&gt;
&lt;br /&gt;
* Emerge vs. Impose reputation on the system?&lt;br /&gt;
** Probably both; how do we account for both approaches? Do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed?&lt;br /&gt;
* Where do you store the data?&lt;br /&gt;
** In distributed storage systems. Reputation in real life is stored in the interactions that an entity has with others; it is not stored centrally. Reputation is most often a shared view of an entity held by the masses, but sometimes an entity&#039;s reputation can be disjoint among the masses: many different entities holding differing views of the same entity.&lt;br /&gt;
* Who do we trust for this information?&lt;br /&gt;
** Trusting the masses is generally a good way of ensuring trustworthiness. Imposed rules will not always fit every situation well, and could assign a bad reputation to a &amp;quot;good&amp;quot; entity.&lt;br /&gt;
* Should reputation be mutable? Can we be pardoned, or can reputations be reversed?&lt;br /&gt;
* Do we maintain an ever-growing set of history items for interactions between entities, or do we focus on the bad reputations?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Existing systems===&lt;br /&gt;
* Peer-based systems (emerge)&lt;br /&gt;
** eBay - positive/negative rating system&lt;br /&gt;
** Youtube - like/dislike/spam comment system&lt;br /&gt;
* Policy-based systems (impose)&lt;br /&gt;
** Java - policy-based security&lt;br /&gt;
** Android - policy-based security&lt;br /&gt;
These two kinds of systems are on opposite ends of the emerge-impose spectrum.&lt;br /&gt;
&lt;br /&gt;
===EigenTrust system===&lt;br /&gt;
The EigenTrust system utilizes a numerical scale for reputation storage.&lt;br /&gt;
Advantages:&lt;br /&gt;
* Numerical values are easy to compare.&lt;br /&gt;
* Little storage space is required.&lt;br /&gt;
Disadvantages:&lt;br /&gt;
* Information is lost in the abstraction process.&lt;br /&gt;
** No concrete data&lt;br /&gt;
* Ambiguity&lt;br /&gt;
===Storing concrete data===&lt;br /&gt;
With only the abstracted information from such a reputation system, we cannot generate an accurate profile of the entity. We need a system that represents reputation in a concrete form:&lt;br /&gt;
“If principal p gains access to resource r at time t, then the past behavior of p up until time t satisfies requirement ψr.”&lt;br /&gt;
Advantages:&lt;br /&gt;
* A sufficient amount of information is available to build a profile of an entity&lt;br /&gt;
Disadvantages:&lt;br /&gt;
* Storage space&lt;br /&gt;
===Shmatikov and Talcott===&lt;br /&gt;
Histories are sets of time-stamped events. Reputation is based on an entity&#039;s ability to adhere to licenses. Licenses might permit certain actions, or require that certain actions be performed.&lt;br /&gt;
Advantages:&lt;br /&gt;
* Data is stored in concrete form&lt;br /&gt;
Disadvantages:&lt;br /&gt;
* No notion of sessions (a logically connected set of events)&lt;br /&gt;
===Representation of reputation===&lt;br /&gt;
If we consider reputation information to encompass the events and actions of an entity, then we can model reputation as a set of events.&lt;br /&gt;
An interesting new problem is how to re-evaluate policies efficiently when new information becomes available. This problem is known as “dynamic model-checking”.&lt;br /&gt;
&lt;br /&gt;
===Dynamic model-checking===&lt;br /&gt;
We want a way of summarizing past reputations. A solution here is the approach of Havelund and Rosu, based on the technique of dynamic programming and used for runtime verification.&lt;br /&gt;
* Given some information on an entity, how do we convert/abstract this into reputation?&lt;br /&gt;
* Is it necessary to maintain all of the information?&lt;br /&gt;
* Now that we have the reputation information, what can we do with it?&lt;br /&gt;
** What can we compare it with?&lt;br /&gt;
===Implementation===&lt;br /&gt;
Desired functionality of the system:&lt;br /&gt;
* new() - append new reputation information&lt;br /&gt;
* update() - update and summarize past behaviour; this is a reduce function&lt;br /&gt;
* check() - analyze whether the given reputation satisfies the criteria of the policy&lt;br /&gt;
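The new()/update()/check() interface above can be sketched in Python; this is a minimal illustration under the assumption that reputation is a list of time-stamped events and a policy is a predicate over the summarized history (the class and event names here are hypothetical).&lt;br /&gt;

```python
# Minimal sketch of the new()/update()/check() interface described above.
# Assumes reputation is a list of (timestamp, event) pairs and a policy is
# a predicate over the summarized history; all names here are hypothetical.

class ReputationStore:
    def __init__(self):
        self.events = []          # concrete time-stamped events
        self.summary = {}         # reduced view: event type -> count

    def new(self, timestamp, event):
        """Append new reputation information."""
        self.events.append((timestamp, event))

    def update(self):
        """Summarize past behaviour (the 'reduce' step): fold events
        into per-type counts and clear the concrete event list."""
        for _, event in self.events:
            self.summary[event] = self.summary.get(event, 0) + 1
        self.events = []

    def check(self, policy):
        """Return True if the summarized reputation satisfies the policy."""
        return policy(self.summary)

store = ReputationStore()
store.new(1, 'served_file')
store.new(2, 'served_file')
store.new(3, 'dropped_connection')
store.update()
ok = store.check(lambda s: s.get('dropped_connection', 0) <= s.get('served_file', 0))
```

Here update() plays the role of the reduce function: concrete events are folded into a compact summary that check() can evaluate cheaply.&lt;br /&gt;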
&lt;br /&gt;
==Querying Reputation==&lt;br /&gt;
&lt;br /&gt;
=== Problems ===&lt;br /&gt;
&lt;br /&gt;
* Emerge vs. Impose reputation on the system?&lt;br /&gt;
** Probably both, how do we account for both systems?&lt;br /&gt;
*** If you want to know someone&#039;s reputation, you either need to start asking around for it, imposing yourself, or you need the data to be sent around so you already have access to it: emergent.&lt;br /&gt;
* Where do you store the data?&lt;br /&gt;
** You need to know who has the data, either to ask them for it or to go get it yourself.&lt;br /&gt;
* Where is the data queried from?&lt;br /&gt;
** First you need to know who&#039;s storing it. Then you need to know whether you&#039;re allowed to ask that node directly, or whether you ask an intermediary keeper of data. Will you even need to query -- that is, do you already have all you need to know on hand? You need not get the latest updates on a node if every other node who&#039;s ever talked to it got DDoSed. (Or do you?)&lt;br /&gt;
* What defines good/bad reputation?&lt;br /&gt;
** Should I make my own definition of bad reputation and query whether someone engaged in activities I consider bad, or should there be a globally agreed-upon reputation?&lt;br /&gt;
* Who provides the good/bad reputation?&lt;br /&gt;
** Who should I ask for information?&lt;br /&gt;
* Who do we trust for this information?&lt;br /&gt;
** Whoever you trust; presumably their opinion on a given node is more important than that of a node you trust less.&lt;br /&gt;
* Should reputation be mutable? Can we be pardoned, or can reputations be reversed?&lt;br /&gt;
** Topically, would you bother asking for 10-year-old reputation data on a node if it&#039;s been a model citizen for the last 9?&lt;br /&gt;
* What entities are able to contribute to reputations?&lt;br /&gt;
** Should I ask everyone I trust for an opinion on a given node, or just certain keepers of trust data?&lt;br /&gt;
* How do we access reputation about entities?&lt;br /&gt;
** You query someone in the know whom you trust and are allowed to query.&lt;br /&gt;
** You could, say, ask everyone you know and trust, and ask them to ask people they know and trust, and so on (if they&#039;re willing) until you find a node with the information you need.&lt;br /&gt;
** In a more centralized system you need to ask some kind of keeper of information for what you want, and that keeper may or may not provide you with the reputation info you want.&lt;br /&gt;
* Who is authorized to access particular reputations? How much to reveal? (Information flow)&lt;br /&gt;
** The ability to control this would depend on how centralized a system you have. In a truly distributed system, where every node has an opinion on any other node they&#039;ve talked to, you&#039;ll be able to find somebody who can tell you about the CIA node, but in a more centralized system the keepers of information might be less... willing to give Joe 6 cores information on who Iran is DDoSing.&lt;br /&gt;
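The ask-your-trusted-peers-and-let-them-ask-theirs approach above can be sketched as a depth-limited search over a trust graph; this is a minimal illustration with hypothetical structures (trusted_peers, opinions), not a full query protocol.&lt;br /&gt;

```python
# Sketch of querying reputation transitively through trusted peers, as
# described above: ask nodes you trust, and let them ask nodes they trust,
# up to a depth limit. All structures here are hypothetical placeholders.

def query_reputation(start, target, trusted_peers, opinions, max_depth=3):
    """Return the first opinion found on `target`, searching outward from
    `start` through chains of trusted peers, or None if nobody knows."""
    visited = set()
    frontier = [(start, 0)]
    while frontier:
        node, depth = frontier.pop(0)      # breadth-first: closest peers first
        if node in visited or depth > max_depth:
            continue
        visited.add(node)
        if target in opinions.get(node, {}):
            return opinions[node][target]   # this peer has an opinion
        for peer in trusted_peers.get(node, []):
            frontier.append((peer, depth + 1))
    return None

trusted_peers = {'A': ['B'], 'B': ['C'], 'C': []}
opinions = {'C': {'X': 'negative'}}
result = query_reputation('A', 'X', trusted_peers, opinions)
```

Breadth-first order means the opinion of a closer (and presumably more trusted) peer is preferred over one reached through a longer chain.&lt;br /&gt;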
&lt;br /&gt;
===Maybe References===&lt;br /&gt;
http://www.kirkarts.com/wiki/images/1/13/Resnick_eBay.pdf - &#039;&#039;Trust Among Strangers in Internet Transactions:&lt;br /&gt;
Empirical Analysis of eBay’s Reputation System&#039;&#039; (maybe not too relevant)&lt;br /&gt;
&lt;br /&gt;
http://portal.acm.org/citation.cfm?id=544741.544809 - &#039;&#039;An Evidential Model of Distributed Reputation Management&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
http://portal.acm.org/citation.cfm?id=775152.775242&amp;amp;type=series%EF%BF%BD%C3%9C -- &#039;&#039;The EigenTrust Algorithm for Reputation Management in&lt;br /&gt;
P2P Networks&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.4.2297&amp;amp;rep=rep1&amp;amp;type=pdf -- &#039;&#039;A Robust Reputation System for Mobile Ad-hoc&lt;br /&gt;
Networks&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.125.8729&amp;amp;rep=rep1&amp;amp;type=pdf -- &#039;&#039;EigenRep: Reputation Management in P2P Networks&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
http://www.chennaisunday.com/ieee%202010/Reputation%20Estimation%20and%20Query%20in%20Peer-to-Peer%20Networks.pdf -- &#039;&#039;Reputation Estimation and Query in Peer-to-Peer Networks&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Here is another paper that might be interesting for you. -- Lester&lt;br /&gt;
http://dcg.ethz.ch/publications/netecon06.pdf&lt;br /&gt;
&lt;br /&gt;
==Possible implementations==&lt;br /&gt;
==Implementation Requirements==&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9474</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9474"/>
		<updated>2011-04-12T02:06:23Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of the individual or group. It is this image that helps us conclude whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The collective opinion that others have of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can have that a particular person, or party in a situation, will execute a task to our liking. It is important to note the word assumption: with the gathered information, we can only generate an estimate of future actions; it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between different observers. Some may have had contact with the entity in a different context, or held a different level of expectation, compared to others &amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used in a distributed environment?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress. Or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity &amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science, University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are able - to a fairly high degree - to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrong-doing. There are always outliers who will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental. There is no perfect solution to maintaining social order in reality and, likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the emerge vs. impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and Youtube utilize rating and comment systems. Particularly, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks. Two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system&amp;lt;ref name=&amp;quot;javapolicy&amp;quot;&amp;gt;Default Policy Implementation and Policy File Syntax. Oracle. http://download.oracle.com/javase/1.3/docs/guide/security/PolicyFiles.html [March 7, 2011]&amp;lt;/ref&amp;gt;. For mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious about an application needing access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious to a user if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation system on eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize toward negative opinions, eventually leaving the application a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
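The impose model described above can be illustrated with a toy permission check: an application declares its intentions up front, and any access outside that declaration is refused. This is a hypothetical sketch of the idea, not the actual Android or Java security mechanism.&lt;br /&gt;

```python
# Toy illustration of the policy-file idea described above: an application
# declares its intended permissions up front, and any access to a sensitive
# resource is checked against that declaration. This is a hypothetical
# sketch, not the real Android or Java security mechanism.

class PolicyViolation(Exception):
    pass

class Application:
    def __init__(self, name, declared_permissions):
        self.name = name
        self.declared = set(declared_permissions)  # the "policy file"

    def access(self, resource):
        """Allow access only to resources declared in the policy."""
        if resource not in self.declared:
            raise PolicyViolation(f'{self.name} never declared {resource}')
        return f'{self.name} accessed {resource}'

stopwatch = Application('stopwatch', ['vibrate'])
result = stopwatch.access('vibrate')
try:
    stopwatch.access('contacts')   # suspicious for a stop-watch app
    violated = False
except PolicyViolation:
    violated = True
```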
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large distributed systems. In order for a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In the case where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&lt;br /&gt;
In this system, we will make a set of assumptions. Without these, a system of this size either would not function or would be too broad, in terms of scope, to ever be acceptable.&lt;br /&gt;
&lt;br /&gt;
The justice assumption is that some other system or set of rules will govern when reputation information needs to be updated and exchanged. Our system will not determine when an exchange of information is required, only what information should be exchanged. Similarly, since each system will likely have its own perspective on what is right and wrong, no assumption is made that a single fixed set of rules governs the system of justice as a whole. This means that the system should be adaptable to different purposes without compromising the integrity of the internet at large. Two opposing systems of justice issuing opposing reputation information will eventually result in the two segments of the network ignoring the opposing information, leading to an eventual stable, and consistent, state. This is appropriate, given the diversity of the internet at large.&lt;br /&gt;
&lt;br /&gt;
The attribution assumption is that all actions are being correctly attributed. This includes assuming that information exchanged between two peers can be properly sourced. Originally, a section on public-key infrastructure (PKI) was going to be included, but it was decided that this would ultimately be out of scope for this system.&lt;br /&gt;
&lt;br /&gt;
In order to make sure that a system of this scale is feasible, it is necessary to make a public-good assumption: we assume that resources are available across the system as a whole to maintain the reputation information necessary for it to function. This assumption is generally valid considering the capacity of the modern internet and the exponential growth of technology.&lt;br /&gt;
&lt;br /&gt;
Finally, the security-in-the-majority assumption is made. It is assumed that in a sufficiently large system, even if a given number of nodes are acting maliciously at any time, the large number of non-malicious nodes will eventually overwhelm the fraudulent messages, resulting in a generally good outcome. It would be impossible to design a system that did not rely on this assumption: if a majority of nodes were acting against the general good, the system would fail regardless of its overall safety. In this context, majority takes on a specific meaning. Since, for obvious reasons, each node will only trust trustworthy nodes, we rely on the majority of the opinions of trusted nodes. This gives the system its own kind of inertia, helping to safeguard it against gaming in the long term.&lt;br /&gt;
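The security-in-the-majority idea can be sketched as a trust-weighted vote over the opinions of trusted nodes; this is a minimal illustration, with the peer names, weights, and opinion tables all made up for the example.&lt;br /&gt;

```python
# Sketch of the security-in-the-majority assumption described above: a node
# weighs the opinions of its trusted peers by how much it trusts each of
# them, so a few malicious reporters are overwhelmed by the honest majority.
# All names and weights here are hypothetical.

def majority_opinion(trust_in, opinion_of, target):
    """Return True if the trust-weighted majority considers `target` good."""
    good, bad = 0.0, 0.0
    for peer, weight in trust_in.items():
        verdict = opinion_of.get(peer, {}).get(target)
        if verdict == 'good':
            good += weight
        elif verdict == 'bad':
            bad += weight
    return good > bad

trust_in = {'honest1': 0.9, 'honest2': 0.8, 'shady': 0.2}
opinion_of = {
    'honest1': {'X': 'good'},
    'honest2': {'X': 'good'},
    'shady':   {'X': 'bad'},   # a malicious minority report
}
verdict = majority_opinion(trust_in, opinion_of, 'X')
```

Weighting by trust is what gives the system its inertia: an attacker must first earn weight before its votes can sway a verdict.&lt;br /&gt;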
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is one that utilizes a numerical scale, as in the EigenTrust system&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. In essence, such systems store and aggregate data into a numerical form. These values are easy to compare and, because primitive data types can be used, they require very little storage space. Despite these attractive advantages, there are some significant drawbacks. Firstly, information is typically lost in the abstraction process: concrete data is acquired and then converted down to a minimal form, and once this conversion is done there is little one can do to recover the concrete data it was generated from. In other words, the abstraction process is irreversible. Likewise, the process can result in ambiguity among data. For example, a reputation of 0 might be interpreted as having no reputation history, or as having an average reputation rating of 0. And, of course, as a result of the irreversibility of numerical data, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
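The numerical-scale aggregation discussed above can be sketched in the style of EigenTrust: each peer normalizes its local trust scores, and global trust emerges as the fixed point of repeated aggregation. This is a simplified sketch (no pre-trusted peers or convergence test, and the local scores are made-up values), not the full published algorithm.&lt;br /&gt;

```python
# Sketch of EigenTrust-style numerical aggregation, as discussed above:
# each peer normalizes its local trust scores, and global trust is the
# stationary vector of repeated aggregation (power iteration). The local
# scores below are made-up illustrative values.

def eigentrust(local_trust, iterations=50):
    """local_trust[i][j] = how much peer i trusts peer j (nonnegative).
    Returns a global trust score per peer, summing to 1."""
    peers = sorted(local_trust)
    # Normalize each row so each peer's outgoing trust sums to 1.
    C = {}
    for i in peers:
        total = sum(local_trust[i].values())
        C[i] = {j: local_trust[i].get(j, 0.0) / total for j in peers}
    # Start from uniform trust and iterate t <- C^T t.
    t = {i: 1.0 / len(peers) for i in peers}
    for _ in range(iterations):
        t = {j: sum(C[i][j] * t[i] for i in peers) for j in peers}
    return t

local = {
    'A': {'B': 4.0, 'C': 1.0},
    'B': {'A': 2.0, 'C': 2.0},
    'C': {'A': 1.0, 'B': 1.0},
}
global_trust = eigentrust(local)
```

Note how the output is exactly the kind of compact numerical abstraction criticized above: easy to compare and cheap to store, but the concrete interactions behind each score are gone.&lt;br /&gt;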
&lt;br /&gt;
Another interesting form of reputation was proposed by Shmatikov and Talcott&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They took reputation to encompass the history of an entity as a set of time-stamped events. The key difference between EigenTrust and their solution is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of the related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, there are some ethical and privacy issues that arise from this; we tackle this issue more closely in a following section.&lt;br /&gt;
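The representation just described can be sketched as a set of time-stamped events, extended with a session identifier as the text suggests; the event names, session ids, and the license check are hypothetical illustrations.&lt;br /&gt;

```python
# Sketch of the Shmatikov-Talcott-style representation discussed above:
# a history is a set of time-stamped events, extended (as the text
# suggests) with a session identifier so logically connected events can
# be grouped. The event names and session ids are hypothetical.

from collections import namedtuple

Event = namedtuple('Event', ['timestamp', 'session', 'action'])

history = {
    Event(100, 's1', 'open_connection'),
    Event(105, 's1', 'transfer_file'),
    Event(110, 's1', 'close_connection'),
    Event(200, 's2', 'open_connection'),
    Event(201, 's2', 'port_scan'),
}

def session_events(history, session):
    """Return the events of one session, ordered by time."""
    return sorted((e for e in history if e.session == session),
                  key=lambda e: e.timestamp)

def violates(history, forbidden):
    """Check adherence to a 'license' forbidding certain actions."""
    return any(e.action in forbidden for e in history)

s1 = session_events(history, 's1')
bad = violates(history, {'port_scan'})
```

Unlike a numerical score, this concrete history lets a querying entity or justice system see exactly which actions, in which session, led to a judgment.&lt;br /&gt;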
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
&lt;br /&gt;
Gathering reputation information in these kinds of systems will generally follow a push model. When a node receives reputation information deemed important and reliable enough to be disseminated, it will push the information to its peers or superiors. This process can be either automated or policy-based.&lt;br /&gt;
&lt;br /&gt;
In the case where reputation information for a given system is required, the information would be queried as outlined below, then stored and/or disseminated to peers if deemed important enough. What constitutes &amp;quot;important enough&amp;quot; will vary depending on the specific context; either way, the information would be retrieved, stored until deemed no longer relevant, and then discarded.&lt;br /&gt;
&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation information will be stored at each individual host, giving every system or group of systems its own perspective. This is both appropriate and efficient, given that each system or grouping of systems is likely to have a different objective and context.&lt;br /&gt;
&lt;br /&gt;
Some hosts may also, optionally, act as repositories for this information. These might be elected (in an emergent system) or imposed (in a hierarchy, or publish-subscribe model). These systems will provide a public good, in that they will become queryable repositories of information.&lt;br /&gt;
&lt;br /&gt;
It would be impractical for information to be stored at every node indefinitely, and eventually reputation entries must be discarded. When this happens would depend on a variety of factors. First, if a piece of reputation information is requested frequently by other nodes, it is regarded as highly valuable and therefore kept for future reference. If a piece of reputation information is used very infrequently, it might be removed or labelled for deletion at some future point. Essentially, the more important or relevant a piece of information is, the more likely it is to be stored. This provides good localization and excellent overall reliability of information, while still allowing systems to maintain a level of forgiveness.&lt;br /&gt;
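The relevance-based eviction described above can be sketched as a small frequency-aware cache; the capacity, entry names, and eviction rule here are hypothetical simplifications of the idea.&lt;br /&gt;

```python
# Sketch of the relevance-based eviction described above: entries that are
# queried often are kept, rarely used ones are discarded first. This is a
# simple illustration; the capacity and structures are hypothetical.

class ReputationCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}     # entity -> reputation record
        self.hits = {}        # entity -> times this entry was requested

    def put(self, entity, record):
        if len(self.entries) >= self.capacity and entity not in self.entries:
            self.evict()
        self.entries[entity] = record
        self.hits.setdefault(entity, 0)

    def get(self, entity):
        if entity in self.entries:
            self.hits[entity] += 1      # frequent use marks it as relevant
            return self.entries[entity]
        return None

    def evict(self):
        """Discard the least frequently requested entry."""
        victim = min(self.entries, key=lambda e: self.hits[e])
        del self.entries[victim]
        del self.hits[victim]

cache = ReputationCache(capacity=2)
cache.put('A', 'good')
cache.put('B', 'bad')
cache.get('A')                 # A is now more relevant than B
cache.put('C', 'neutral')      # forces eviction of the least-used entry
```

Evicting the least-requested entry is also what provides forgiveness: stale judgments that nobody asks about eventually disappear.&lt;br /&gt;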
&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space, which brings us to the problem of how to maintain reputation history; in a distributed system this is crucial to the scalability and success of the entire system. A solution here is the notion of dynamic model-checking, by Havelund and Rosu&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce algorithm&amp;lt;ref name=&amp;quot;mapreduce&amp;quot;&amp;gt;Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html [March 3, 2011]&amp;lt;/ref&amp;gt;. We generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;: significant negative reputation, such as evidence of DDoS attacks, will likely need to be retained indefinitely in case justice systems need proof of a specific incident. This solution works well because we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the processing time for searching through sets of reputation history items is negligible, then we would not need to implement this type of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
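The append/reduce maintenance scheme described above can be sketched as follows: routine events are folded into per-type counts, while serious negative events are kept in concrete form for any justice system. The event names and the set of &amp;quot;serious&amp;quot; events are hypothetical.&lt;br /&gt;

```python
# Sketch of the append/reduce maintenance scheme described above: routine
# events are folded into per-type counts to save space, while serious
# negative events (e.g. DDoS evidence) are kept in concrete form for any
# justice system. Event names and the SERIOUS set are hypothetical.

SERIOUS = {'ddos_attack'}

def append(history, timestamp, event):
    history.append((timestamp, event))

def reduce_history(history, counts):
    """Fold routine events into counts; keep serious events verbatim."""
    kept = []
    for timestamp, event in history:
        if event in SERIOUS:
            kept.append((timestamp, event))   # retain concrete evidence
        else:
            counts[event] = counts.get(event, 0) + 1
    return kept

history, counts = [], {}
append(history, 1, 'served_file')
append(history, 2, 'served_file')
append(history, 3, 'ddos_attack')
history = reduce_history(history, counts)
```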
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
&lt;br /&gt;
The dissemination of reputation information is a core concern of reputation systems in general.  This vital exchange of information is what allows these systems to function.  Ideally, methods of information exchange should provide a given set of features.  First, the information needs to be reliable, meaning it must be stored securely and be as immune as possible to gaming.  Second, there needs to be good localization of the data to ensure it is where it is needed, when it is needed.  Finally, the system needs to be scalable and flexible.  While the aforementioned features form the technical requirements of the system, there is one additional non-functional requirement that must be considered: the level of trust.  &lt;br /&gt;
&lt;br /&gt;
In general, there are three common modes of disseminating information of this type that would need to be supported in order to make a reputation system feasible:  Hierarchy, Publish/Subscribe, and Peer-to-Peer.&lt;br /&gt;
&lt;br /&gt;
In a hierarchy, there are pre-set or elected nodes that are responsible for maintaining an authoritative list.  A good example of this technology in practice is the domain name system (DNS).  These systems allow for a great deal of control over the information in the system, at the expense of scalability and flexibility.  They are very common in the corporate world today, and align well with organizational structure.  A hierarchy also means that if a flaw is detected in the information, manual intervention is possible.  Unfortunately, these systems tend to be rife with single points of failure and scalability issues.  In addition, implementing this kind of system on an internet scale would mean designating a single authority for all reputation information, which would form a natural bottleneck despite advances in caching.  Finally, there would be the issue of trust in such a system.  While hierarchies are ideal where an overall system architecture is imposed and trust is mandated, they are much less palatable on the internet scale because it would be impossible to establish a single authority that everyone would trust.  Also, if there is a single set of authorities, then there is the added issue of security: compromising one system would taint the reputation information across the entire reputation system.&lt;br /&gt;
&lt;br /&gt;
Publish/subscribe is a model of information dissemination that relies on central repositories, which are then queried by each client when an update is needed.  Common examples in technology include Really Simple Syndication (RSS) feeds and bulletin board systems (BBSs).  Outside modern technology, analogies can be drawn between the publish/subscribe model and common sources of information like newspapers, magazines, and other periodicals.  First the source publishes an update, and then &amp;quot;subscribers&amp;quot; receive updates through either a push from the publisher or a query for updates.  This technology has some attractive features and has been broadly researched over the last ten years, especially in the area of how it can be applied to wireless networks &amp;lt;ref name=&amp;quot;wifipublishsubscribe&amp;quot;&amp;gt;Gajic, B.; Riihijärvi, J.; Mähönen, P.; , &amp;quot;Evaluation of publish-subscribe based communication over WiMAX network,&amp;quot; Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2010 International Congress on , vol., no., pp.38-43, 18-20 Oct. 2010 &amp;lt;/ref&amp;gt;.  Being data-centric, these systems can be a very helpful way of exchanging information.  Unfortunately, in most cases they require some kind of fixed infrastructure, using either fixed reference points (like a base station) or elected coordinating nodes arranged in a distributed hash table (DHT) &amp;lt;ref name=&amp;quot;p2ppublishsubscribe&amp;quot;&amp;gt;Dongcai Shi; Jianwei Yin; Zhaohui Wu; Jinxiang Dong; , &amp;quot;A Peer-to-Peer Approach to Large-Scale Content-Based Publish-Subscribe,&amp;quot; Web Intelligence and Intelligent Agent Technology Workshops, 2006. WI-IAT 2006 Workshops. 2006 IEEE/WIC/ACM International Conference on , vol., no., pp.172-175, 18-22 Dec. 2006&amp;lt;/ref&amp;gt;.  There are further drawbacks to these technologies: mainly, they involve pre-selected or elected nodes that act as authorities.  This creates points of failure, and means that some nodes need to trust others with authoritative information.  While it is entirely possible that there will be publish/subscribe components in a complete reputation system, the information from such repositories must be interpreted within the context of the source node&#039;s reputation.  This means that if a given repository has been a source of unreliable information in the past, its own negative reputation would likely lead most other nodes to disregard its information, further diminishing the possible benefits of hosting such a repository.  These types of systems also do not provide good localization of data, meaning nodes may have to search longer for relevant information, leading to greater overhead and latency in the system as a whole.&lt;br /&gt;
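&lt;br /&gt;
A minimal sketch of the publish/subscribe mode, assuming a simple callback-based repository; the class and method names are illustrative and not taken from the cited papers.&lt;br /&gt;

```python
class ReputationTopic:
    """Hypothetical repository: publishers push updates, and every
    subscriber's callback receives each published update."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        # Register a callable to receive future updates.
        self.subscribers.append(callback)

    def publish(self, update):
        # Push the update to every current subscriber.
        for callback in self.subscribers:
            callback(update)
```

In a real deployment the push would cross the network, and subscribers would still weigh each update by the repository&#039;s own reputation, as discussed above.&lt;br /&gt;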
&lt;br /&gt;
Finally, peer-to-peer is perhaps the newest method of disseminating information.  While there are many ways to exchange information in a peer-to-peer fashion, gossiping is the most relevant of these &amp;lt;ref name=&amp;quot;gossipreputation&amp;quot;&amp;gt; Zhou, R.; Hwang, K.; , &amp;quot;Gossip-based Reputation Aggregation for Unstructured Peer-to-Peer Networks,&amp;quot; Parallel and Distributed Processing Symposium, 2007. IPDPS 2007. IEEE International , vol., no., pp.1-10, 26-30 March 2007 &amp;lt;/ref&amp;gt;.  In a gossip-based system, sets of peers exchange information in a semi-random way.  It has been found in practice that this system of information exchange provides not only good localization, but also excellent scalability.  The major issues surrounding gossip-based systems are that information for &amp;quot;far away&amp;quot; nodes would need to be queried, and that there is the possibility of fraudulent information being exchanged (meaning that the system would have to rely on the safety of the consensus of the majority).  The disadvantage of such a system is that it is unstructured, and if an error is propagated, it can take a while for a corrected, consistent picture to appear across the network.&lt;br /&gt;
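&lt;br /&gt;
One round of the semi-random exchange described above might be sketched as follows; the pairing rule and the keep-the-newest-entry merge are simplifying assumptions of ours, not the aggregation scheme of the cited paper.&lt;br /&gt;

```python
import random

def gossip_round(nodes, rng=random.Random(0)):
    """One semi-random gossip round: each node exchanges its reputation map
    with one random peer, and both keep the newer entry for each entity.
    `nodes` maps node_id -> {entity_id: (timestamp, score)}."""
    ids = list(nodes)
    for a in ids:
        b = rng.choice([n for n in ids if n != a])
        # Merge: for every entity either side knows, keep the newest entry.
        for entity in set(nodes[a]) | set(nodes[b]):
            ea = nodes[a].get(entity)
            eb = nodes[b].get(entity)
            newest = max(e for e in (ea, eb) if e is not None)
            nodes[a][entity] = newest
            nodes[b][entity] = newest
```

After enough rounds, an update known to one node spreads to all, which is why a propagated error also takes a while to be corrected everywhere.&lt;br /&gt;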
&lt;br /&gt;
In application, all of these methods of information dissemination would likely need to be supported in some fashion.  Very few governments or organizations would be willing to accept updates from the cloud blindly, and similarly it is very unlikely that such organizations would be willing to publish or otherwise share all information with the cloud at large.  This means that any dissemination solution would have to be a hybrid, allowing for the definition of fixed, strict hierarchies as well as immensely scalable and dynamic peer-to-peer solutions.  Where the line between these two will be drawn is not fixed.  Some organizations may opt to make almost all information public, while others may share very little and allow no internal information to be published externally.&lt;br /&gt;
&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There will need to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes some systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying alone is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there needs to be something worth querying. &lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system needs to allow any given entity to send out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
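&lt;br /&gt;
The request/response cycle above, including the case where no information exists, can be sketched as follows; the message fields and the &amp;quot;cautious&amp;quot; fallback policy are illustrative assumptions of ours.&lt;br /&gt;

```python
def handle_query(local_store, entity_id):
    """Respond to a reputation query. An explicit 'unknown' status lets the
    requester distinguish 'no information' from 'no reply'."""
    record = local_store.get(entity_id)
    if record is None:
        return {"entity": entity_id, "status": "unknown"}
    return {"entity": entity_id, "status": "known", "events": record}

def interpret_response(response, default_policy="cautious"):
    """The requester must still decide what to do when nothing is known."""
    if response["status"] == "unknown":
        # No history at all: fall back on a local default behaviour.
        return default_policy
    # Otherwise hand the events to the local decision mechanism.
    return "evaluate"
```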
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical-centralized system, there is a hierarchy of nodes that defer to each other. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as the subordinate nodes are concerned, its &#039;views&#039;, or interpretation of the reputation data, will be absolute. In this scheme nothing is lost if a node leaves the network.&amp;lt;ref name=&amp;quot;repest&amp;quot;&amp;gt;Xing Jin, S.-H. Gary Chan, &amp;quot;Reputation Estimation and Query in Peer-to-Peer Networks&amp;quot;, IEEE Communications Magazine, April 2010. http://www.chennaisunday.com/ieee%202010/Reputation%20Estimation%20and%20Query%20in%20Peer-to-Peer%20Networks.pdf &amp;lt;/ref&amp;gt; In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers and analyzed to determine whether to connect or not. &lt;br /&gt;
&lt;br /&gt;
The actual process of querying should be fairly simple. A given entity or node in the system needs to decide if it should contact another node in the system. First, it must check its local representation of reputation data to see if it already has enough up-to-date information on that node. If it does, it can move toward making a decision, which is discussed later. If, however, the information needed is not already held by the node, it will need to be queried. This would be similar to the XREP system used in some peer-to-peer file-sharing networks, which can “Query” and “Poll” peers to decide whom to obtain resources from. &amp;lt;ref name=&amp;quot;repest&amp;quot; /&amp;gt; Another similar concept is a “TrustNet”, wherein an “Agent”, after determining that another “Agent” is not already acquainted with it, will query all of its neighbours on the second agent&#039;s trustworthiness.&amp;lt;ref name=&amp;quot;EviMod&amp;quot;&amp;gt;Bin Yu, Munindar P. Singh, &amp;quot;An Evidential Model of Distributed Reputation Management&amp;quot;, AAMAS’02, July 19, 2002, http://portal.acm.org/citation.cfm?id=544809&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;CFID=17527626&amp;amp;CFTOKEN=24792561&amp;amp;retn=1#Fulltext &amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This brings us back to the two primary types of reputation systems, hierarchical and distributed. In a hierarchical system the process is incredibly simple: ask your superior node, and wait for a response. The superior node might have enough information on hand to decide, or it might ask its peers or superiors. Either way, the response received from the superior node will be used by the original querying node.&lt;br /&gt;
&lt;br /&gt;
Distributed querying is a little more complex. The querying node will need to decide whom to ask, perhaps asking nodes it trusts if it has been operating in the reputation system for a while, or otherwise any nearby node. It might ask for just a quick reputation value, or for a snapshot of relevant historical events. In any case, it will use the evidence collected (if any) to ultimately make a decision. In a way, this node is its own authority node. &lt;br /&gt;
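&lt;br /&gt;
A distributed query of this kind might look like the following sketch, where a node asks its trusted neighbours and combines whatever opinions come back, weighted by trust. The trust weights and the weighted-average combination are our own illustrative choices, not a scheme from the cited papers.&lt;br /&gt;

```python
def query_neighbours(neighbours, entity_id):
    """Ask each trusted neighbour for its opinion of entity_id.
    `neighbours` is a list of (trust_weight, opinion_fn) pairs, where
    opinion_fn returns a score in [-1, 1] or None if it knows nothing."""
    evidence = []
    for trust, opinion_of in neighbours:
        opinion = opinion_of(entity_id)
        if opinion is not None:
            evidence.append((trust, opinion))
    if not evidence:
        # No neighbour knows anything: the querier falls back on its default.
        return None
    total = sum(t for t, _ in evidence)
    # Trust-weighted average of the collected opinions.
    return sum(t * o for t, o in evidence) / total
```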
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
Every entity will have its own interpretation of reputation data. There will most likely be a common set of events considered bad for essentially any system, such as one entity participating in a DDoS attack on another entity, the distribution of malware, and so on. Other things are more abstract and unique to certain groups. Distributing unverifiable claims, for instance, might be considered a negative reputation event by a reputable news source, perfectly acceptable by a tabloid, and irrelevant to the average entity representing a single person&#039;s personal computer. Entities will need to decide what is important to them, most likely via a human defining which events are worth taking note of and which are not. It is entirely possible, and likely, that different entities will not record events that other entities would consider noteworthy. It would therefore be beneficial to have multiple entities using the same rule set, though a private rule set is not completely useless, as an entity can still record personal instances of these events for its own history store.&lt;br /&gt;
&lt;br /&gt;
Once an entity has obtained this information, either via the regular process of dissemination, querying, or witnessing an event firsthand, it needs to make a decision. This is, ultimately, very open-ended and up to each entity. For example, a very simple mechanism would be to only communicate with entities that have no negative reputation events of any kind, and that are viewed only neutrally or positively by other entities. Another would be to ignore other entities&#039; opinions, assign a weight to each type of reputation event, and do a calculation based on the evidence. These are only two options among many; there is no need for a standardized process. In short, the process and details of actually making the decision are not that important, as long as what is decided upon is something that other entities can understand: using a stored collection of evidence to form an opinion that other entities can query you on, and deciding whether, and under what conditions, to connect to the other entity. &lt;br /&gt;
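&lt;br /&gt;
The two example mechanisms just mentioned can be sketched side by side; the event categories, weights, and threshold are arbitrary illustrative values, since, as noted, each entity chooses its own.&lt;br /&gt;

```python
# Illustrative per-entity weighting of reputation event types.
EVENT_WEIGHTS = {
    "ddos": -10.0,
    "malware": -10.0,
    "spam": -1.0,
    "good_transaction": 1.0,
}

def zero_tolerance(events):
    """Policy 1: refuse to connect on any negative event at all."""
    return all(EVENT_WEIGHTS.get(e, 0.0) >= 0.0 for e in events)

def weighted_decision(events, threshold=-2.0):
    """Policy 2: weigh each event type and connect only if the total
    stays above a locally chosen threshold."""
    return sum(EVENT_WEIGHTS.get(e, 0.0) for e in events) > threshold
```

Note that the same history can yield opposite decisions under the two policies, which is exactly why the decision process itself need not be standardized.&lt;br /&gt;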
&lt;br /&gt;
=Implementation=&lt;br /&gt;
&lt;br /&gt;
The implementation and deployment of such a reputation system is a very difficult task. Ideally, all systems would simultaneously switch over to a new protocol for reputation management. On a distributed system as large as the web, this is highly improbable. Typically, the success of updates and layers built on top of the web&#039;s existing architecture comes down to the fact that they are incrementally deployable. Updates are incremental, and so the entire system is never subjected to a system-wide blackout.&lt;br /&gt;
&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
The key question is whether we can deploy this reputation system using incremental updates.&lt;br /&gt;
&lt;br /&gt;
Basically, phasing this in will rely on companies deciding that it is in their own best interests to have this running locally. Individuals within the greater organization would then have to decide to switch to the gossip-based solution. Eventually, an emergent and cohesive system would appear. Reputation is currently facilitated by justice systems and imposed rules for entities within systems. We can continue to use imposed rules or existing infrastructure where we do not have adequate emergent information. This way we can incrementally update the environment and eventually have a full-fledged emergent reputation system.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9449</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9449"/>
		<updated>2011-04-11T23:41:04Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* DELETE */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of certain behavioural actions that they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions on others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of the individual or group. It is this image we generate that helps us reach conclusions as to whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The global opinion that others have of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for assumptions to be made about the level of trust one can have in a particular person or situation to execute a task to our liking. It is important to note the importance of the word assumption. With the gathered information, we are able to generate an estimate of their actions; it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between different observers. Some may have had contact with the entity in a different context, or had a different level of expectation compared to others &amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used in a distributed environment?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress, or, worse yet, halt our progress completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity &amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are - to a fairly high degree - able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There are always outliers that will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to it. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem. Do we maintain records based on a fixed set of imposed rules? Or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks. Two examples include Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system&amp;lt;ref name=&amp;quot;javapolicy&amp;quot;&amp;gt;Default Policy Implementation and Policy File Syntax. Oracle. http://download.oracle.com/javase/1.3/docs/guide/security/PolicyFiles.html [March 7, 2011]&amp;lt;/ref&amp;gt;. For mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;. Otherwise, the application cannot be deployed. Furthermore, items in this policy file are presented to the user, and if a user is suspicious about an application needing access to unnecessary utilities, they can choose not to install the application. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious to a user if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent-based reputation system. They provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, the greater public opinion will merge and polarize to negative opinions - eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large distributed systems. Although we can assume an adequate level of justice, in order for a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&lt;br /&gt;
In this system, we will make a set of assumptions. Without these, a system of this size either would not function or would be too broad, in terms of scope, to ever be acceptable.&lt;br /&gt;
&lt;br /&gt;
The justice assumption is that some other system or set of rules will govern when reputation information needs to be updated and exchanged.  Our system will not determine when an exchange of information is required, only what information should be exchanged. Similarly, since each system will likely have its own perspective on what is right and wrong, no assumption will be made that there is a single fixed set of rules governing the operation of the system of justice as a whole. This means that the system should be adaptable to different purposes without compromising the integrity of the internet at large. Two opposing systems of justice issuing opposing reputation information will eventually result in the two segments of the network ignoring each other&#039;s information, leading to an eventual stable, and consistent, state. This is appropriate, given the diversity of the internet at large.&lt;br /&gt;
&lt;br /&gt;
In the attribution assumption it is assumed that all actions are being correctly attributed. This also includes assuming that information being exchanged between two peers can be properly sourced. Originally, a section on public-key infrastructure (PKI) was going to be included, but it was decided that this would be ultimately out of scope for this system.&lt;br /&gt;
&lt;br /&gt;
In order to make sure that a system of this scale is feasible, it is necessary to make a public good assumption. This means that it will be assumed that resources are available on the whole system to maintain the reputation information necessary for the system to function. This assumption is generally valid considering the capacity of the modern internet, and the exponential growth of technology.&lt;br /&gt;
&lt;br /&gt;
Finally, the security-in-the-majority assumption is made. It is assumed that in a sufficiently large system, even if a given number of nodes are acting maliciously at any time, the large number of non-malicious nodes will eventually overwhelm the fraudulent messages, resulting in a generally good outcome. It would be impossible to design a system that did not rely on this assumption, since if a majority of the nodes were acting against the general good of the system, it would fail regardless of any other safeguards. In this context, however, majority takes on a very specific meaning: since each node is only going to trust trustworthy nodes, we rely on the security in the majority of the opinions of trusted nodes. This gives the system its own kind of inertia, helping to safeguard it against gaming in the long term.&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of different forms and representations. We start with a summary of previous attempts at creating a solution for representing reputation. A frequently used form is one that utilizes a numerical scale for reputation. These are known as EigenTrust systems&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. In essence, they store and aggregate data into a numerical form. These values are easy to compare and, because primitive data types can be used, they require very little storage space. Despite these attractive advantages, there are some significant negative aspects of such a system. Firstly, information is typically lost in the abstraction process. Concrete data is acquired and then converted down to a minimal form. Once this conversion is done, there is little one can do to understand the concrete data it was generated from. In other words, this abstraction process is irreversible. Likewise, the process can result in ambiguity: for example, a reputation of 0 might be interpreted as having no reputation history, or as having an average reputation rating of 0. And, of course, as a result of the irreversibility of numerical data, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
&lt;br /&gt;
Another interesting form of reputation is one that was proposed by Shmatikov and Talcott&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They represented an entity&#039;s reputation as its history: a set of time-stamped events. The key difference between EigenTrust and their solution is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making their respective decisions. Clearly, some ethical and privacy issues arise from this; we tackle them more closely in a following section.&lt;br /&gt;
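&lt;br /&gt;
A time-stamped, session-aware event history of the kind just described might be sketched as follows; the field names and the session grouping are our own illustration of this modified representation, not Shmatikov and Talcott&#039;s actual formalism.&lt;br /&gt;

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    timestamp: float
    session_id: str   # groups related actions into one computational session
    action: str

@dataclass
class History:
    events: list = field(default_factory=list)

    def record(self, event):
        # Append-only: events are kept in their concrete form.
        self.events.append(event)

    def session(self, session_id):
        """Recover the ordered, concrete actions of one session --
        something an aggregate numeric score cannot provide."""
        return sorted((e for e in self.events if e.session_id == session_id),
                      key=lambda e: e.timestamp)
```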
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
&lt;br /&gt;
Gathering reputation information in these kinds of systems will generally follow a push model.  When a node receives reputation information deemed important and reliable enough to be disseminated, it will then push the information to its peers or superiors.  This system can be either automated or policy-based.  &lt;br /&gt;
&lt;br /&gt;
In the case where reputation information for a given system is required, the information would be queried as outlined below, then stored and/or disseminated to peers if deemed important enough.  What constitutes &amp;quot;important enough&amp;quot; will vary depending on the specific context, but either way the information would be retrieved, stored until deemed no longer relevant, and then discarded.&lt;br /&gt;
&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation information will be stored at each individual host, giving every system or group of systems its own perspective. This is both appropriate and efficient, given that each system or grouping of systems is likely to have a different objective and context.&lt;br /&gt;
&lt;br /&gt;
Some hosts may also, optionally, act as repositories for this information. These might be elected (in an emergent system) or imposed (in a hierarchy, or publish-subscribe model). These systems will provide a public good, in that they will become query-able repositories of information.&lt;br /&gt;
&lt;br /&gt;
It would be impractical for information to be stored at every node indefinitely, and eventually a given reputation entry must be discarded. When this happens would depend on a variety of factors. First, if a piece of reputation information is requested frequently by other nodes, it would be regarded as highly valuable and kept for future reference. If a piece of reputation information is used very infrequently, it might be removed or labelled for deletion at some future point. Essentially, the more important or relevant a piece of information is, the more likely it is to be stored. This provides good localization and excellent overall reliability of information, while still allowing systems to maintain a level of forgiveness.&lt;br /&gt;
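&lt;br /&gt;
One minimal sketch of such a retention policy (the class name, thresholds, and retention flag are illustrative assumptions, not part of any existing system) keeps entries that are in demand, keeps entries flagged as evidence, and discards the rest once stale:&lt;br /&gt;

```python
import time

class ReputationStore:
    """Local store that keeps frequently requested entries and discards
    stale, rarely used ones. Names and thresholds are illustrative."""

    def __init__(self, min_requests=3, max_age=3600.0):
        self.entries = {}          # entity -> reputation data
        self.requests = {}         # entity -> how often peers asked for it
        self.stored_at = {}        # entity -> when the entry was stored
        self.must_retain = set()   # e.g. evidence of serious incidents
        self.min_requests = min_requests
        self.max_age = max_age

    def put(self, entity, data, retain=False):
        self.entries[entity] = data
        self.requests.setdefault(entity, 0)
        self.stored_at[entity] = time.time()
        if retain:
            self.must_retain.add(entity)

    def get(self, entity):
        if entity in self.entries:
            self.requests[entity] += 1   # demand makes an entry more valuable
        return self.entries.get(entity)

    def sweep(self, now=None):
        """Discard old entries that were rarely requested, unless flagged."""
        now = now if now is not None else time.time()
        for entity in list(self.entries):
            old = (now - self.stored_at[entity]) > self.max_age
            popular = self.requests[entity] >= self.min_requests
            if old and not popular and entity not in self.must_retain:
                del self.entries[entity]

store = ReputationStore(min_requests=1, max_age=10.0)
store.put("nodeA", "unremarkable")
store.put("nodeB", "DDoS evidence", retain=True)
store.put("nodeC", "useful")
store.get("nodeC")                     # nodeC was requested by a peer
store.sweep(now=time.time() + 100)     # long after every entry expired
print(sorted(store.entries))           # ['nodeB', 'nodeC']
```

The never-requested entry is forgiven and forgotten, while the in-demand entry and the retained evidence survive the sweep.&lt;br /&gt;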
&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space. This brings us to the problem of how to maintain reputation history, which in a distributed system is crucial to the scalability and success of the entire system. A solution here is to use the notion of dynamic model-checking, by Havelund and Rosu&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;, who came up with a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce algorithm&amp;lt;ref name=&amp;quot;mapreduce&amp;quot;&amp;gt;Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html [March 3, 2011]&amp;lt;/ref&amp;gt;. We generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. Some data, however, will not be eligible to be &amp;quot;reduced&amp;quot;: significant negative reputation, such as evidence of DDoS attacks, will likely need to be retained indefinitely in case justice systems require proof of a specific incident. This solution works well because we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the processing time for searching through sets of reputation history items is negligible, then we would not need to implement this type of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
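&lt;br /&gt;
The append/reduce idea can be sketched as follows. This is a toy model under our own assumptions: the event categories, the choice of which events are irreducible, and the (entity, action) pair encoding are all illustrative, not taken from Havelund and Rosu.&lt;br /&gt;

```python
from collections import Counter

# Events that must never be merged away (evidence for justice systems).
IRREDUCIBLE = {"ddos_participation", "malware_distribution"}

def reduce_history(events):
    """Aggregate eligible events into (entity, action, count) summaries
    while keeping serious negative events in their concrete form.
    `events` is a list of (entity, action) pairs; a real history would
    also carry timestamps and session ids."""
    kept = [e for e in events if e[1] in IRREDUCIBLE]
    reducible = Counter(e for e in events if e[1] not in IRREDUCIBLE)
    summaries = [(entity, action, count)
                 for (entity, action), count in reducible.items()]
    return kept, summaries

history = [("nodeA", "served_file"), ("nodeA", "served_file"),
           ("nodeA", "ddos_participation"), ("nodeB", "served_file")]
kept, summaries = reduce_history(history)
print(kept)       # [('nodeA', 'ddos_participation')]
print(summaries)  # [('nodeA', 'served_file', 2), ('nodeB', 'served_file', 1)]
```

Routine events collapse into counts, saving space, while the DDoS evidence survives in full for any later dispute.&lt;br /&gt;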
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
&lt;br /&gt;
The dissemination of reputation information is a core concern of reputation systems in general. This vital exchange of information is what allows these systems to function. Ideally, methods of information exchange should provide a given set of features. First, the information needs to be reliable, meaning it must be as immune as possible to gaming and stored securely. Second, there needs to be good localization of the data, to ensure it is where it is needed, when it is needed. Finally, the system needs to be scalable and flexible. While the aforementioned reasons form the technical requirements of the system, there is one additional non-functional requirement that must be considered: the level of trust.&lt;br /&gt;
&lt;br /&gt;
In general, there are three common modes of disseminating information of this type that would need to be supported in order to make a reputation system feasible:  Hierarchy, Publish/Subscribe, and Peer-to-Peer.&lt;br /&gt;
&lt;br /&gt;
In a hierarchy, there are pre-set or elected nodes that are responsible for maintaining an authoritative list. A good example of this technology in practice is the domain name system (DNS). These systems allow a great deal of control over the information in the system, at the expense of scalability and flexibility. They are very common in the corporate world today and align well with organizational structure. It also means that if a flaw is detected in the information, manual intervention is possible. Unfortunately, these systems tend to be rife with single points of failure and scalability issues. In addition, implementing this kind of system at internet scale would mean designating a single authority for all reputation information, which would form a natural bottleneck despite advances in caching. Finally, there would be the issue of trust in such a system. While hierarchies are ideal where an overall system architecture is imposed and trust is mandated, they are much less palatable at internet scale because it would be impossible to establish a single authority that everyone would trust. Also, if there is a single set of authorities, then there is the added issue of security: compromising one system would taint the reputation information across the entire reputation system.&lt;br /&gt;
&lt;br /&gt;
Publish/subscribe is a model of information dissemination that relies on central repositories, which are queried by each client when an update is needed. Common examples in technology include Really Simple Syndication (RSS) feeds and bulletin board systems (BBS). Outside modern technology, analogies can be drawn between the publish/subscribe model and common sources of information like newspapers, magazines, and other periodicals. First the source publishes an update, and then &amp;quot;subscribers&amp;quot; receive updates through either a push from the publisher or a query for updates. This technology has a couple of attractive features, and has been broadly researched over the last 10 years, especially in the area of how it can be applied to wireless networks &amp;lt;ref name=&amp;quot;wifipublishsubscribe&amp;quot;&amp;gt;Gajic, B.; Riihijärvi, J.; Mähönen, P., &amp;quot;Evaluation of publish-subscribe based communication over WiMAX network,&amp;quot; Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2010 International Congress on, pp. 38-43, 18-20 Oct. 2010&amp;lt;/ref&amp;gt;. Being data-centric, it can be a very helpful way of exchanging information. Unfortunately, it requires some kind of fixed infrastructure in most cases, using either fixed reference points (like a base station) or elected coordinating nodes arranged in a distributed hash table (DHT) &amp;lt;ref name=&amp;quot;p2ppublishsubscribe&amp;quot;&amp;gt;Dongcai Shi; Jianwei Yin; Zhaohui Wu; Jinxiang Dong, &amp;quot;A Peer-to-Peer Approach to Large-Scale Content-Based Publish-Subscribe,&amp;quot; Web Intelligence and Intelligent Agent Technology Workshops, 2006. WI-IAT 2006 Workshops. 2006 IEEE/WIC/ACM International Conference on, pp. 172-175, 18-22 Dec. 2006&amp;lt;/ref&amp;gt;. There are, however, some drawbacks to these technologies. Mainly, the model involves some pre-selected or elected nodes that act as authorities. This creates points of failure, and means that some nodes need to trust others with their authority information. While it is entirely possible that there will be publish-subscribe components in a complete reputation system, the information from such repositories must be interpreted within the context of the source node&#039;s reputation. This means that if a given information repository has been a source of unreliable information in the past, its own negative reputation would likely force most other nodes to disregard the information, further diminishing the possible benefits of hosting such a repository. These types of systems also do not provide good localization of data, meaning nodes may have to search longer for relevant information, leading to greater overhead and latency in the system as a whole.&lt;br /&gt;
&lt;br /&gt;
Finally, peer-to-peer is perhaps the newest method of disseminating information. While there are many ways to exchange information in a peer-to-peer fashion, gossiping is the most relevant of these &amp;lt;ref name=&amp;quot;gossipreputation&amp;quot;&amp;gt;Zhou, R.; Hwang, K., &amp;quot;Gossip-based Reputation Aggregation for Unstructured Peer-to-Peer Networks,&amp;quot; Parallel and Distributed Processing Symposium, 2007. IPDPS 2007. IEEE International, pp. 1-10, 26-30 March 2007&amp;lt;/ref&amp;gt;. In a gossip-based system, sets of peers exchange information in a semi-random way. It has been found in practice that this system of information exchange provides not only good localization but also excellent scalability. The major issues surrounding gossip-based systems are that information about &amp;quot;far away&amp;quot; nodes would need to be queried, and that there is the possibility of fraudulent information being exchanged (meaning that the system would have to rely on the safety of the consensus of the majority). A further disadvantage of such a system is that it is unstructured, and if an error is propagated, it can take a while for a corrected, consistent picture to appear across the network.&lt;br /&gt;
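&lt;br /&gt;
A minimal gossip round might look like the following sketch. This is a toy model, not the aggregation protocol of Zhou and Hwang: each node merges opinion maps with one randomly chosen peer, and the fixed random seed is only there to make the example deterministic.&lt;br /&gt;

```python
import random

def gossip_round(nodes, rng=random.Random(0)):
    """One round of gossip: each node exchanges its reputation opinions
    with one randomly chosen peer, and both keep the union.
    `nodes` maps node id to a dict of {entity: opinion}."""
    ids = list(nodes)
    for node in ids:
        peer = rng.choice([i for i in ids if i != node])
        merged = dict(nodes[node])
        merged.update(nodes[peer])
        nodes[node] = dict(merged)
        nodes[peer] = dict(merged)

# Three nodes, each starting with one local observation.
nodes = {"n1": {"x": "good"}, "n2": {"y": "bad"}, "n3": {"z": "good"}}
for _ in range(5):
    gossip_round(nodes)
# After a few rounds, every node holds opinions it did not observe itself.
```

Semi-random pairwise exchange is what gives gossip its scalability; no node ever needs a global view, yet information still spreads through the whole network.&lt;br /&gt;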
&lt;br /&gt;
In application, all of these methods of information dissemination would likely need to be supported in some fashion. Very few governments or organizations would be willing to support a system in which they are required to blindly accept updates from the cloud, and similarly it is very unlikely that such organizations would be willing to publish or otherwise share information with the cloud at large. This means that any dissemination solution would have to be a hybrid, allowing for the definition of fixed, strict hierarchies as well as immensely scalable and dynamic peer-to-peer exchange. Where the line between these two is drawn is not fixed: some organizations may opt to make almost all information public, while others may allow no information to be published externally.&lt;br /&gt;
&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes some systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, you need to have something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system needs to allow any given entity to send out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, there are two primary layouts for a reputation system in this paper: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as the subordinate nodes are concerned, its &#039;views&#039;, or interpretation of the reputation data, will be absolute. In this scheme nothing is lost if a node leaves the network.[A] In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers and analyzed to determine whether to connect or not.&lt;br /&gt;
&lt;br /&gt;
The actual process of querying should be fairly simple. A given entity or node in the system needs to decide whether it should contact another node in the system. First, it must check its local representation of reputation data to see if it already has enough up-to-date information on the node. If it does, it can move toward making a decision, which is discussed later. If, however, the information needed is not already held by the node, it will need to be queried. This would be similar to the XREP system used in some peer-to-peer file-sharing networks, which can &amp;quot;query&amp;quot; and &amp;quot;poll&amp;quot; peers to decide whom to obtain resources from.[A] Another similar concept is a &amp;quot;TrustNet&amp;quot;, wherein an &amp;quot;agent&amp;quot;, after determining that another agent is not already acquainted with it, will query all of its neighbours on the second agent&#039;s trustworthiness.[B]&lt;br /&gt;
&lt;br /&gt;
This brings us back to the two primary types of reputation systems, hierarchical and distributed. In a hierarchical system the process is incredibly simple: ask your superior node, and wait for a response. The superior node might have enough information on hand to decide, or it might ask its peers or superiors. Either way, the response received from the superior node will be used by the original querying node.&lt;br /&gt;
&lt;br /&gt;
Distributed querying is a little more complex. The querying node will need to decide whom to ask, perhaps asking nodes it trusts if it has been operating in the reputation system for a while, or just any nearby node. It might ask for just a quick reputation value, or perhaps a snapshot of relevant historical events. In any case, it will use the evidence collected (if any) to ultimately make a decision. In a way, this node is its own authority node.&lt;br /&gt;
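&lt;br /&gt;
The local-then-peers flow can be sketched as below. The function name, the dict-based stores, and the placeholder freshness check are our own assumptions, loosely in the spirit of the XREP query/poll pattern rather than an implementation of it.&lt;br /&gt;

```python
def query_reputation(node, target, peers, fresh=lambda info: True):
    """Distributed query sketch: use local information if it is present
    and fresh, otherwise poll peers and collect their evidence.
    `node` and each peer are dicts of {entity: info}; `fresh` stands in
    for whatever staleness policy the querying node applies."""
    local = node.get(target)
    if local is not None and fresh(local):
        return [local]                       # decide from local data alone
    evidence = [p[target] for p in peers if target in p]
    return evidence                          # may be empty: nobody knows

node = {"known": "good"}
peers = [{"stranger": "bad"}, {}, {"stranger": "good"}]
print(query_reputation(node, "known", peers))     # ['good']
print(query_reputation(node, "stranger", peers))  # ['bad', 'good']
print(query_reputation(node, "ghost", peers))     # []
```

The empty-list case is the &amp;quot;no reputation information exists&amp;quot; situation discussed above; the node must then fall back on its default policy for strangers.&lt;br /&gt;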
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
Every entity will have its own interpretation of reputation data. There will most likely be a common set of events considered bad for essentially any system, such as one entity participating in a DDoS on another entity, the distribution of malware, and so on. Other things are more abstract and unique to certain groups: distributing unverifiable claims might be considered a negative reputation event by a reputable news source, perfectly acceptable by a tabloid, and irrelevant to the average entity representing a single person&#039;s personal computer. Entities will need to decide what is important to them, most likely via a human defining which events are worth taking note of and which are not. It is entirely possible, and likely, that different entities will not record events that other entities would consider noteworthy. It would therefore be beneficial to have multiple entities using the same rule set, though a unique rule set is not completely useless, as an entity can still record personal instances of these events for its own history store.&lt;br /&gt;
&lt;br /&gt;
Once an entity has obtained this information, either via the regular process of dissemination, via querying, or by witnessing an event firsthand, it needs to make a decision. This is, ultimately, very open-ended and up to each entity. For example, a very simple mechanism would be to only communicate with entities that have no negative reputation events of any kind, and that are viewed only neutrally or positively by other entities. Another would be to ignore other entities&#039; opinions, assign a weight to each type of reputation event, and do a calculation based on the evidence. These are only two options among many; there is no need for a standardized process. In short, the details of actually making the decision are not that important, as long as what is decided upon is something that other entities can understand. That is, an entity uses a collection of stored evidence to form an opinion that other entities can query it on, and decides whether, and under what conditions, to connect to the other entity.&lt;br /&gt;
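&lt;br /&gt;
The second mechanism, weighting each event type, might look like this sketch. The weight table and threshold are purely illustrative: as argued above, which events matter and how much is entirely up to each entity.&lt;br /&gt;

```python
# Illustrative weights; each entity (or the human configuring it)
# chooses its own.
WEIGHTS = {
    "ddos_participation": -10.0,
    "malware_distribution": -10.0,
    "dropped_connection": -1.0,
    "served_file": 1.0,
}

def should_connect(events, threshold=0.0):
    """Score the collected evidence and connect only above a threshold.
    Unknown event types are ignored rather than guessed at."""
    score = sum(WEIGHTS.get(action, 0.0) for action in events)
    return score >= threshold

print(should_connect(["served_file", "served_file"]))         # True
print(should_connect(["served_file", "ddos_participation"]))  # False
```

Only the inputs (events) and the output (a connect/refuse opinion) need to be understandable to other entities; the arithmetic in the middle is arbitrary.&lt;br /&gt;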
&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
It is possible that we can. Where adequate emergent information is not yet available, the system can rely on imposed rules or existing infrastructure, and these can be incrementally replaced as emergent reputation accumulates, eventually yielding a full-fledged emergent reputation system. Phasing this in will rely on companies deciding that it is in their own best interests to run the system locally; individuals will then decide to adopt the gossip-based solution, and eventually a cohesive, emergent system would appear.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[A] Reputation Estimation and Query in Peer-to-Peer Networks. http://www.chennaisunday.com/ieee%202010/Reputation%20Estimation%20and%20Query%20in%20Peer-to-Peer%20Networks.pdf&lt;br /&gt;
&lt;br /&gt;
[B] http://delivery.acm.org.proxy.library.carleton.ca/10.1145/550000/544809/p294-yu.pdf?key1=544809&amp;amp;key2=3913452031&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;ip=134.117.10.200&amp;amp;CFID=17527626&amp;amp;CFTOKEN=24792561&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9448</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9448"/>
		<updated>2011-04-11T23:40:02Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How is reputation disseminated? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of certain behavioural actions that they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions of others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of the individual or group. It is this image we generate that helps us draw conclusions as to whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The global opinion that others have of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for assumptions to be made about the level of trust one can have in a particular person or situation to execute a task to our liking. It is important to note the importance of the word assumption. With the gathered information, we are able to generate an estimate of their actions; it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary quite widely between different observers. Some may have come into contact with the entity in a different context or had a different level of expectation compared to others &amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others have a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used in a distributed environment?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress. Or, worse yet, halt our progress completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity &amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science, University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are - to a fairly high degree - able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There will always be outliers that oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
The idea of enforcing rules and the idea of generating reputations of other entities to use in a decision-making process are both realistic options. This is known as the emerge-versus-impose problem. Do we maintain records based on a fixed set of imposed rules? Or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end users provide reputation information about a certain subject. Sites such as eBay and YouTube use rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system&amp;lt;ref name=&amp;quot;javapolicy&amp;quot;&amp;gt;Default Policy Implementation and Policy File Syntax. Oracle. http://download.oracle.com/javase/1.3/docs/guide/security/PolicyFiles.html [March 7, 2011]&amp;lt;/ref&amp;gt;. For mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious about an application needing access to unnecessary utilities, they can choose not to install the application. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious to a user if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation systems provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, the greater public opinion will converge and polarize to negative opinions, eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide a level of reputation information that is adequate for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient: buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form, a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large distributed systems. Although we can assume we have an adequate level of justice, for a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In the case where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&lt;br /&gt;
In this system, we will make a set of assumptions. Without these, a system of this size either would not function or would be too broad in scope to ever be acceptable.&lt;br /&gt;
&lt;br /&gt;
The justice assumption is that some other system or set of rules will govern when reputation information needs to be updated and exchanged. Our system will not determine when an exchange of information is required, only what information should be exchanged. Similarly, since each system will likely have its own perspective on what is right and wrong, no assumption is made that there is a single fixed set of rules governing the operation of the system of justice as a whole. This means that the system should be adaptable to different purposes without compromising the integrity of the internet at large. Two opposing systems of justice issuing opposing reputation information will eventually result in the two segments of the network ignoring the opposing information, leading to an eventual stable, and consistent, state. This is appropriate, given the diversity of the internet at large.&lt;br /&gt;
&lt;br /&gt;
The attribution assumption is that all actions are being correctly attributed. This includes assuming that information exchanged between two peers can be properly sourced. Originally, a section on public-key infrastructure (PKI) was going to be included, but it was decided that this would ultimately be out of scope for this system.&lt;br /&gt;
&lt;br /&gt;
In order to make sure that a system of this scale is feasible, it is necessary to make a public-good assumption: resources are assumed to be available across the whole system to maintain the reputation information necessary for the system to function. This assumption is generally valid considering the capacity of the modern internet and the exponential growth of technology.&lt;br /&gt;
&lt;br /&gt;
Finally, the security-in-the-majority assumption is made: in a sufficiently large system, even if a given number of nodes are acting maliciously, the large number of non-malicious nodes will eventually overwhelm the fraudulent messages, producing a generally good result. It would be impossible to design a system that did not rely on this assumption, since if a majority of the nodes were acting against the general good of the system, it would fail regardless of the overall safety of the design. In this context, majority takes on a very specific meaning: since, for obvious reasons, each node is only going to trust trustworthy nodes, we rely on the security in the majority of the opinions of trusted nodes. This gives the system its own kind of inertia, helping to safeguard it against gaming in the long term.&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at creating a solution for representing reputation. A frequently used form is one that uses a numerical scale for reputation; these are known as EigenTrust systems&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. In essence, they store and aggregate data in a numerical form. These values are easy to compare, and because primitive data types can be used, they require very little storage space. Despite these attractive advantages, there are some significant negative aspects to such a system. First, information is typically lost in the abstraction process: concrete data is acquired and then converted down to a minimal form, and once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction process is irreversible. Likewise, the process can result in ambiguity among data. For example, a reputation of 0 might be interpreted as having no reputation history or as having an average reputation rating of 0. And, of course, as a result of the irreversibility of numerical data, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
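&lt;br /&gt;
The ambiguity can be shown in a few lines. The mean-based aggregation below is our own illustrative stand-in for a numerical scheme, not the actual EigenTrust computation; the point is only that two very different histories collapse to the same score.&lt;br /&gt;

```python
def aggregate(ratings):
    """Collapse a rating history to one number, as numerical
    reputation schemes do. The mean here is illustrative."""
    return sum(ratings) / len(ratings) if ratings else 0.0

no_history = []                 # never interacted with anyone
mixed_history = [1, -1, 1, -1]  # many interactions, good and bad

print(aggregate(no_history))     # 0.0
print(aggregate(mixed_history))  # 0.0, indistinguishable after abstraction
```

Once only the 0.0 is stored, no amount of later analysis can tell the newcomer apart from the entity with a long, mixed track record.&lt;br /&gt;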
&lt;br /&gt;
Another interesting form of reputation was proposed by Shmatikov and Talcott&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They modelled reputation as an entity&#039;s history, represented as a set of time-stamped events. The key difference from EigenTrust is that data can be stored in its concrete form. Additionally, if we extend their solution with a notion of sessions, we can generate a clear view of the related actions that make up an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making their respective decisions. Clearly, some ethical and privacy issues arise from this; we tackle them more closely in a following section.&lt;br /&gt;
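A minimal sketch of the event-based representation, with a session field added as suggested above (all field names and event labels here are our own invention):&lt;br /&gt;

```python
# Hypothetical sketch: reputation history as time-stamped events,
# grouped into sessions so related actions can be viewed together.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    timestamp: float   # when the action was observed
    session: str       # groups related actions into one computational session
    entity: str        # who acted
    action: str        # what they did

history = [
    Event(100.0, "s1", "nodeA", "contract-fulfilled"),
    Event(101.5, "s1", "nodeA", "payment-sent"),
    Event(200.0, "s2", "nodeA", "ddos-participation"),
]

def session_view(history, session):
    """Concrete, reversible view of one session of related actions."""
    return [e for e in history if e.session == session]

print([e.action for e in session_view(history, "s1")])
# ['contract-fulfilled', 'payment-sent']
```

Unlike a collapsed numeric score, the concrete events survive and can be handed to a querying entity or a justice system as evidence.&lt;br /&gt;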
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
&lt;br /&gt;
Gathering reputation information in these kinds of systems will generally follow a push model. When a node receives reputation information deemed important and reliable enough to be disseminated, it pushes the information to its peers or superiors. This process can be either automated or policy-based.&lt;br /&gt;
&lt;br /&gt;
In the case where reputation information for a given system is required, the information would be queried as outlined below, then stored and/or disseminated to peers if deemed important enough. What constitutes &amp;quot;important enough&amp;quot; will vary with the specific context, but in either case the information would be retrieved, stored until deemed no longer relevant, and then discarded.&lt;br /&gt;
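The push model above can be sketched as follows; the numeric importance threshold stands in for whatever context-dependent policy a real deployment would use, and all names here are hypothetical:&lt;br /&gt;

```python
# Hypothetical sketch: push reputation information to peers only when
# a local policy deems it important enough to disseminate.

def push_if_important(info, peers, importance, threshold=0.5):
    """Deliver info to every peer when importance clears the policy
    threshold; otherwise keep it local. Returns names of recipients."""
    delivered = []
    if importance >= threshold:
        for peer in peers:
            peer.setdefault("inbox", []).append(info)
            delivered.append(peer["name"])
    return delivered

peers = [{"name": "p1"}, {"name": "p2"}]
assert push_if_important("nodeX: ddos", peers, importance=0.9) == ["p1", "p2"]
assert push_if_important("nodeX: slow reply", peers, importance=0.1) == []
```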
&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation information will be stored at each individual host, giving every system or group of systems its own perspective. This is both appropriate and efficient, given that each system or grouping of systems is likely to have a different objective and context.&lt;br /&gt;
&lt;br /&gt;
Some hosts may also, optionally, act as repositories for this information. These might be elected (in an emergent system) or imposed (in a hierarchy, or publish-subscribe model). These systems will provide a public good, in that they will become query-able repositories of information.&lt;br /&gt;
&lt;br /&gt;
It would be impractical for information to be stored at every node indefinitely, so eventually given reputation entries must be discarded. When this happens would depend on a variety of factors. If a piece of reputation information is requested frequently by other nodes, it would be regarded as highly valuable and therefore kept for future reference. If it is used very infrequently, it might be removed or labelled for deletion at some future point. Essentially, the more important or relevant a piece of information is, the more likely it is to be stored. This provides good localization and excellent overall reliability of information, while still allowing systems to maintain a level of forgiveness.&lt;br /&gt;
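A minimal sketch of this retention policy, assuming each entry tracks how often it has been requested; the field names and the special treatment of severe records (which the maintenance section below argues must be kept as evidence) are invented for illustration:&lt;br /&gt;

```python
# Hypothetical sketch: keep frequently requested entries, discard
# rarely used ones (forgiveness), but never discard severe records.

def prune(store, min_hits=2):
    """store maps entity name to a record with a request counter
    (hits) and a severity flag. Returns the surviving entries."""
    return {k: v for k, v in store.items()
            if v["hits"] >= min_hits or v["severe"]}

store = {
    "nodeA": {"hits": 10, "severe": False},  # popular: kept
    "nodeB": {"hits": 0,  "severe": False},  # unused: forgiven, discarded
    "nodeC": {"hits": 0,  "severe": True},   # DDoS record: kept as evidence
}
assert set(prune(store)) == {"nodeA", "nodeC"}
```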
&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space, which brings us to the problem of maintaining reputation history; in a distributed system, this is crucial to the scalability and success of the entire system. One solution is the notion of dynamic model-checking, due to Havelund and Rosu&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;: a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce algorithm&amp;lt;ref name=&amp;quot;mapreduce&amp;quot;&amp;gt;Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html [March 3, 2011]&amp;lt;/ref&amp;gt;. We generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. Some data, however, will not be eligible to be &amp;quot;reduced&amp;quot;: records of significant negative behaviour, such as DDoS attacks, will likely need to be retained indefinitely in case justice systems need proof of a specific incident. This solution works well because we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue and that the time to search through sets of reputation history items is negligible, then we would not need to implement this kind of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
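The append/reduce idea can be sketched as follows; the set of &amp;quot;severe&amp;quot; event types that must never be reduced is an illustrative assumption, as are the event labels:&lt;br /&gt;

```python
# Hypothetical sketch: routine events are merged into compact counts
# (the "reduce" step), while severe negative events are kept verbatim
# as evidence for a justice system.

from collections import Counter

SEVERE = {"ddos-participation", "malware-distribution"}

def reduce_history(events):
    """events: list of (entity, action) pairs.
    Returns (summaries, evidence)."""
    evidence = [e for e in events if e[1] in SEVERE]
    routine = Counter(e for e in events if e[1] not in SEVERE)
    return dict(routine), evidence

events = [
    ("nodeA", "contract-fulfilled"),
    ("nodeA", "contract-fulfilled"),
    ("nodeB", "ddos-participation"),
]
summaries, evidence = reduce_history(events)
assert summaries[("nodeA", "contract-fulfilled")] == 2   # merged
assert evidence == [("nodeB", "ddos-participation")]     # kept verbatim
```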
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
&lt;br /&gt;
The dissemination of reputation information is a core concern of reputation systems in general; this vital exchange of information is what allows these systems to function. Ideally, methods of information exchange should provide a given set of features. First, the information needs to be reliable, meaning it must be as immune as possible to gaming and stored securely. Second, there needs to be good localization of the data, to ensure it is where it is needed, when it is needed. Finally, the system needs to be scalable and flexible. While the aforementioned features form the technical requirements of the system, there is one additional non-functional requirement that must be considered: the level of trust.&lt;br /&gt;
&lt;br /&gt;
In general, there are three common modes of disseminating information of this type that would need to be supported in order to make a reputation system feasible:  Hierarchy, Publish/Subscribe, and Peer-to-Peer.&lt;br /&gt;
&lt;br /&gt;
In a hierarchy, pre-set or elected nodes are responsible for maintaining an authoritative list. A good example of this technology in practice is the domain name system (DNS). These systems allow a great deal of control over the information in the system, at the expense of scalability and flexibility. They are very common in the corporate world today and align well with organizational structure; if a flaw is detected in the information, manual intervention is possible. Unfortunately, these systems tend to be rife with single points of failure and scalability issues. In addition, implementing such a system at internet scale would mean designating a single authority for all reputation information, which would form a natural bottleneck despite advances in caching. Finally, there is the issue of trust in such a system. While hierarchies are ideal where an overall system architecture is imposed and trust is mandated, they are much less palatable at internet scale because it would be impossible to establish a single authority that everyone would trust. Moreover, if there is a single set of authorities, there is the added issue of security: compromising one system would taint reputation information across the entire reputation system.&lt;br /&gt;
&lt;br /&gt;
Publish/subscribe is a model of information dissemination that relies on central repositories, which are queried by each client when an update is needed. Common technological examples include Really Simple Syndication (RSS) feeds and bulletin board systems (BBS). Outside modern technology, analogies can be drawn between the publish/subscribe model and common sources of information like newspapers, magazines, and other periodicals. First the source publishes an update, and then &amp;quot;subscribers&amp;quot; receive updates through either a push from the publisher or a query for updates. This technique has several attractive features and has been broadly researched over the last ten years, especially in the area of wireless networks &amp;lt;ref name=&amp;quot;wifipublishsubscribe&amp;quot;&amp;gt;Gajic, B.; Riihijärvi, J.; Mähönen, P.; , &amp;quot;Evaluation of publish-subscribe based communication over WiMAX network,&amp;quot; Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2010 International Congress on , vol., no., pp.38-43, 18-20 Oct. 2010 &amp;lt;/ref&amp;gt;. Being data-centric, publish/subscribe can be a very helpful way of exchanging information. Unfortunately, it usually requires some kind of fixed infrastructure, using either fixed reference points (like a base station) or elected coordinating nodes arranged in a distributed hash table (DHT) &amp;lt;ref name=&amp;quot;p2ppublishsubscribe&amp;quot;&amp;gt;Dongcai Shi; Jianwei Yin; Zhaohui Wu; Jinxiang Dong; , &amp;quot;A Peer-to-Peer Approach to Large-Scale Content-Based Publish-Subscribe,&amp;quot; Web Intelligence and Intelligent Agent Technology Workshops, 2006. WI-IAT 2006 Workshops. 2006 IEEE/WIC/ACM International Conference on , vol., no., pp.172-175, 18-22 Dec. 2006&amp;lt;/ref&amp;gt;. There are further drawbacks. The model involves pre-selected or elected nodes that act as authorities, which creates points of failure and means some nodes must trust others with authority information. While a complete reputation system may well contain publish-subscribe components, the information from such repositories must be interpreted within the context of the source node&#039;s own reputation. If a given repository has been a source of unreliable information in the past, its negative reputation would likely lead most other nodes to disregard its information, further diminishing the benefits of hosting such a repository. These systems also do not provide good localization of data, meaning nodes may have to search longer for relevant information, leading to greater overhead and latency in the system as a whole.&lt;br /&gt;
&lt;br /&gt;
Finally, peer-to-peer is perhaps the newest method of disseminating information. While there are many ways to exchange information in a peer-to-peer fashion, gossiping is the most relevant here &amp;lt;ref name=&amp;quot;gossipreputation&amp;quot;&amp;gt; Zhou, R.; Hwang, K.; , &amp;quot;Gossip-based Reputation Aggregation for Unstructured Peer-to-Peer Networks,&amp;quot; Parallel and Distributed Processing Symposium, 2007. IPDPS 2007. IEEE International , vol., no., pp.1-10, 26-30 March 2007 &amp;lt;/ref&amp;gt;. In a gossip-based system, sets of peers exchange information in a semi-random way. In practice, this form of information exchange has been found to provide not only good localization but also excellent scalability. The major issues surrounding gossip-based systems are that information about &amp;quot;far away&amp;quot; nodes must be queried, and that fraudulent information may be exchanged (meaning the system must rely on the safety of the consensus of the majority). A further disadvantage is that such a system is unstructured: if an error is propagated, it can take a while for a corrected, consistent picture to appear across the network.&lt;br /&gt;
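A toy sketch of gossip-based exchange, assuming each node holds a simple table of entity scores; a real system would weigh what it learns by the reputation of the source rather than merging naively:&lt;br /&gt;

```python
# Hypothetical sketch: each round, every node exchanges its reputation
# table with one randomly chosen peer and both merge what they learn.

import random

def gossip_round(nodes, rng):
    """nodes maps a node name to its reputation table (entity: score)."""
    for name, table in nodes.items():
        peer = rng.choice([n for n in nodes if n != name])
        merged = {**nodes[peer], **table}  # naive union; a real system
        nodes[peer].update(merged)         # would weigh source trust
        table.update(merged)

rng = random.Random(0)                     # seeded for repeatability
nodes = {"a": {"x": 1.0}, "b": {}, "c": {}}
for _ in range(5):
    gossip_round(nodes, rng)
# after a few rounds, every node has learned the score for entity x
```

Because each exchange is bidirectional, knowledge spreads quickly; the cost is exactly the weakness noted above: an erroneous entry spreads just as easily until corrected.&lt;br /&gt;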
&lt;br /&gt;
In practice, all of these methods of information dissemination would likely need to be supported in some fashion. Very few governments or organizations would be willing to accept updates from the cloud blindly, and it is similarly unlikely that such organizations would be willing to publish or otherwise share information with the cloud at large. This means that any dissemination solution would have to be a hybrid, allowing for the definition of fixed, strict hierarchies as well as immensely scalable and dynamic peer-to-peer exchange. Where the line between the two is drawn is not fixed: some organizations may opt to make almost all information public, while others may publish almost nothing externally.&lt;br /&gt;
&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity, and even more unreasonable to expect an entity to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that some other systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying alone is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and to receive that information quickly and efficiently. Specifically, the system must allow any given entity to send out a request for reputation information, and allow other entities to process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, this paper considers two primary layouts for a reputation system, hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or interpretation of the reputation data, are absolute. In this scheme nothing is lost if a node were to leave the network.[A] In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers and analyzed to determine whether to connect or not.&lt;br /&gt;
&lt;br /&gt;
The actual process of querying should be fairly simple. A given entity or node in the system needs to decide whether it should contact another node. First, it must check its local store of reputation data to see if it already has enough up-to-date information on that node. If it does, it can move toward making a decision, which is discussed later. If the information needed is not already held by the node, it will need to be queried. This would be similar to the XREP system used in some peer-to-peer file-sharing networks, which can “Query” and “Poll” peers to decide whom to obtain resources from.[A] Another similar concept is a “TrustNet”, wherein an “Agent”, after determining that another “Agent” is not already acquainted with it, queries all its neighbours about the second agent&#039;s trustworthiness.[B]&lt;br /&gt;
&lt;br /&gt;
This brings us back to the two primary types of reputation systems, hierarchical and distributed. In a hierarchical system the process is incredibly simple: ask your superior node and wait for a response. The superior node might have enough information on hand to decide, or it might ask its peers or superiors; either way, the response received from the superior node will be used by the original querying node.&lt;br /&gt;
&lt;br /&gt;
Distributed querying is a little more complex. The querying node will need to decide whom to ask, perhaps asking nodes it trusts if it has been operating in the reputation system for a while, or otherwise any nearby node. It might ask for just a quick reputation value, or for a snapshot of relevant historical events. In any case, it will use the evidence collected (if any) to ultimately make a decision. In a way, this node is its own authority node.&lt;br /&gt;
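The two query paths can be sketched side by side; the data layout is invented for illustration, and a real distributed node would weigh each opinion by how much it trusts the peer rather than averaging naively:&lt;br /&gt;

```python
# Hypothetical sketch of the two query paths described above.

def query_hierarchical(node, target):
    """Ask the authority node; its view is treated as absolute."""
    return node["authority"].get(target)          # None means no data

def query_distributed(node, target):
    """Poll trusted peers and average the opinions received."""
    opinions = [p["table"][target] for p in node["peers"]
                if target in p["table"]]
    if not opinions:
        return None                               # nobody knows the target
    return sum(opinions) / len(opinions)

leaf = {"authority": {"nodeX": 0.8}}
peer_node = {"peers": [{"table": {"nodeX": 0.6}}, {"table": {}}]}

assert query_hierarchical(leaf, "nodeX") == 0.8
assert query_distributed(peer_node, "nodeX") == 0.6
```

Both paths return nothing when no information exists, which is exactly the case an entity must be prepared to handle.&lt;br /&gt;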
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
Every entity will have its own interpretation of reputation data. There will most likely be a common set of events considered bad in essentially any system, such as one entity participating in a DDoS attack on another, the distribution of malware, and so on. Other things are more abstract and unique to certain groups: distributing unverifiable claims might be considered a negative reputation event by a reputable news source, perfectly acceptable by a tabloid, and irrelevant to an entity representing the average person&#039;s personal computer. Entities will need to decide what is important to them, most likely via a human defining which events are worth taking note of and which are not. It is entirely possible, and indeed likely, that different entities will not record events that other entities would consider noteworthy. It would therefore be beneficial to have multiple people using the same rule set, though a private rule set is not completely useless, as an entity can still record personal instances of these events in its own history store.&lt;br /&gt;
&lt;br /&gt;
Once an entity has obtained this information, whether via the regular process of dissemination, by querying, or by witnessing an event firsthand, it needs to make a decision. This is ultimately very open-ended and up to each entity. For example, a very simple mechanism would be to communicate only with entities that have no negative reputation events of any kind and that are viewed neutrally or positively by other entities. Another would be to ignore other entities&#039; opinions, assign a weight to each type of reputation event, and compute a score from the evidence. These are only two options among many; there is no need for a standardized process. In short, the process and details of actually making the decision are not that important, as long as what is decided upon is something other entities can understand: that is, using a stored collection of evidence to form an opinion that other entities can query you on, and deciding whether, and under what conditions, to connect to the other entity.&lt;br /&gt;
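The second mechanism, weighting event types, can be sketched as follows; the weights, event labels, and threshold are purely illustrative local policy, and each entity would choose its own:&lt;br /&gt;

```python
# Hypothetical sketch: a local decision policy that weights each
# reputation event type and connects only above a chosen threshold.

WEIGHTS = {                        # local policy; other entities differ
    "ddos-participation": -10.0,
    "malware-distribution": -10.0,
    "contract-fulfilled": 1.0,
    "unverifiable-claim": -0.5,    # a tabloid might weight this 0.0
}

def decide(actions, threshold=0.0):
    """actions: observed event types for the candidate entity.
    Unknown event types are ignored (weight 0)."""
    score = sum(WEIGHTS.get(a, 0.0) for a in actions)
    return score >= threshold

assert decide(["contract-fulfilled", "contract-fulfilled"])      # connect
assert not decide(["contract-fulfilled", "ddos-participation"])  # refuse
```

What goes in (evidence) and what comes out (a queryable opinion and a connect/refuse decision) are standardized; the policy in the middle is arbitrary.&lt;br /&gt;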
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
Incremental adoption is possible. Where adequate emergent information is not yet available, imposed rules or existing infrastructure can be used instead, allowing the system to be updated incrementally until a full-fledged emergent reputation system exists. Phasing this in will rely on companies deciding that it is in their own best interest to run such a system locally; individuals would then have to decide to use the gossip-based solution, and eventually a cohesive system would appear, emergently.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[A] Reputation Estimation and Query in Peer-to-Peer Networks. http://www.chennaisunday.com/ieee%202010/Reputation%20Estimation%20and%20Query%20in%20Peer-to-Peer%20Networks.pdf&lt;br /&gt;
&lt;br /&gt;
[B] http://delivery.acm.org.proxy.library.carleton.ca/10.1145/550000/544809/p294-yu.pdf?key1=544809&amp;amp;key2=3913452031&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;ip=134.117.10.200&amp;amp;CFID=17527626&amp;amp;CFTOKEN=24792561&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9447</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9447"/>
		<updated>2011-04-11T23:35:26Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* Where do we store reputation? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of certain behavioural actions that they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of the individual or group. It is this image that helps us draw conclusions as to whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The global opinion that others have of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for assumptions to be made about the level of trust one can have that a particular person, or situation, will execute a task to our liking. It is important to note the word assumption: with the gathered information we are able to generate an estimate of their actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between different observers. Some may have encountered the entity in a different context, or had a different level of expectation compared to others &amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others have a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used in a distributed environment?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, whether they will hinder our progress, or, worse yet, whether they will halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity &amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are able, to a fairly high degree, to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There are always outliers who will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
The ideas of enforcing rules and of generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the emerge-vs-impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of an application in what is known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system&amp;lt;ref name=&amp;quot;javapolicy&amp;quot;&amp;gt;Default Policy Implementation and Policy File Syntax. Oracle. http://download.oracle.com/javase/1.3/docs/guide/security/PolicyFiles.html [March 7, 2011]&amp;lt;/ref&amp;gt;. For mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious about an application needing access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize toward negative opinions, eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient: buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form, a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large distributed systems. Although we can assume an adequate level of justice, in order for a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In the case where reputation data fails to protect machines and the individuals behind them, we can then fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&lt;br /&gt;
In this system, we will make a set of assumptions. Without these, a system of this size either would not function or would be too broad, in terms of scope, to ever be acceptable.&lt;br /&gt;
&lt;br /&gt;
The justice assumption is that some other system or set of rules will govern when reputation information needs to be updated and exchanged. Our system will not determine when exchange of information is required, only what information should be exchanged. Similarly, since each system will likely have its own perspective on what is right and wrong, no assumption is made that a single fixed set of rules governs the operation of the system of justice as a whole. This means the system should be adaptable to different purposes without compromising the integrity of the internet at large. Two opposing systems of justice issuing opposing reputation information will eventually result in the two segments of the network ignoring each other&#039;s information, leading to an eventual stable, consistent state. This is appropriate, given the diversity of the internet at large.&lt;br /&gt;
&lt;br /&gt;
The attribution assumption is that all actions are being correctly attributed. This includes assuming that information exchanged between two peers can be properly sourced. Originally, a section on public-key infrastructure (PKI) was going to be included, but it was decided that this would ultimately be out of scope for this system.&lt;br /&gt;
&lt;br /&gt;
In order to ensure that a system of this scale is feasible, it is necessary to make a public good assumption: it will be assumed that resources are available across the system as a whole to maintain the reputation information necessary for it to function. This assumption is generally valid considering the capacity of the modern internet and the continuing exponential growth of technology.&lt;br /&gt;
&lt;br /&gt;
Finally, the security-in-the-majority assumption is made: in a sufficiently large system, even if a given number of nodes are acting maliciously, the large number of non-malicious nodes will eventually overwhelm the fraudulent messages, resulting in a generally good result. It would be impossible to design a system that did not rely on this assumption, since if a majority of the nodes were acting against the general good of the system, it would fail regardless of the overall safety of the system. In this context, majority takes on a very specific meaning: since, for obvious reasons, each node is only going to trust trustworthy nodes, we rely on the security in the majority of the opinions of trusted nodes. This gives the system its own kind of inertia, helping to safeguard it against gaming in the long term.&lt;br /&gt;
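A minimal sketch, with hypothetical names, of how a node might apply the security-in-the-majority assumption: only the opinions of trusted peers are tallied, and the majority verdict wins.&lt;br /&gt;

```python
from collections import Counter

def majority_verdict(opinions, trusted):
    """Tally the verdicts ('good'/'bad') reported by trusted peers only.

    opinions: dict mapping peer id -> verdict about some target entity.
    trusted:  set of peer ids this node currently trusts.
    Returns the majority verdict among trusted peers, or None on a tie
    or when no trusted peer has an opinion.
    """
    votes = Counter(v for peer, v in opinions.items() if peer in trusted)
    if not votes:
        return None
    ranked = votes.most_common()
    top, n = ranked[0]
    if len(ranked) > 1 and ranked[1][1] == n:  # tie between leading verdicts
        return None
    return top
```

Untrusted peers are simply ignored, so a flood of fraudulent votes from unknown nodes cannot sway the result; this is the inertia described above.&lt;br /&gt;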
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is one that utilizes a numerical scale for reputation; these are known as EigenTrust systems&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. In essence, they store and aggregate data in numerical form. These values are easy to compare, and because primitive data types can be used, they require very little storage space. Despite these attractive advantages, such a system has significant drawbacks. First, information is typically lost in the abstraction process: concrete data is acquired and then reduced to a minimal form, and once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction is irreversible. The process can also produce ambiguity: a reputation of 0 might be interpreted as having no reputation history, or as having an average reputation rating of 0. And because the abstraction is irreversible, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
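The ambiguity described above is easy to demonstrate. In the sketch below (names are our own), two very different histories collapse to the same numerical score, and nothing about the original events can be recovered from that score alone.&lt;br /&gt;

```python
def aggregate(ratings):
    """Collapse a concrete history of +1/-1 ratings into one number.

    The abstraction is irreversible: the score alone cannot tell us
    which events produced it, or even whether there were any events.
    """
    return sum(ratings)

no_history = aggregate([])                   # never interacted with anyone
mixed_history = aggregate([+1, -1, +1, -1])  # equal praise and complaints

# Both read as "reputation 0", yet they mean very different things.
assert no_history == mixed_history == 0
```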
&lt;br /&gt;
Another interesting form of reputation was proposed by Shmatikov and Talcott&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They represented an entity&#039;s reputation as its history: a set of time-stamped events. The key difference between EigenTrust and their solution is that data can be stored in its concrete form. Additionally, if we extend their solution with the notion of sessions, we can generate a clear view of the related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making their respective decisions. Clearly, some ethical and privacy issues arise from this; we tackle them more closely in a following section.&lt;br /&gt;
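A history-based store along these lines might look like the following sketch. The field names are our own, and the session grouping is the modification described above, not part of Shmatikov and Talcott&#039;s original proposal.&lt;br /&gt;

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    timestamp: float   # when the action was observed
    session: str       # computational session the action belongs to
    action: str        # concrete description, e.g. "completed-transfer"

@dataclass
class History:
    events: list = field(default_factory=list)

    def record(self, event):
        self.events.append(event)

    def session_view(self, session):
        """All related actions for one session, in time order."""
        return sorted((e for e in self.events if e.session == session),
                      key=lambda e: e.timestamp)
```

Unlike a single number, the concrete events remain available, so a querying entity or a justice system can inspect the reasons behind a reputation.&lt;br /&gt;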
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
&lt;br /&gt;
Gathering reputation information in these kinds of systems will generally follow a push model. When a node receives reputation information deemed important and reliable enough to be disseminated, it will push the information to its peers or superiors. This system can be either automated or policy-based.&lt;br /&gt;
&lt;br /&gt;
In the case where reputation information for a given system is required, the information would be queried as outlined below, then stored and/or disseminated to peers if deemed important enough. What constitutes &amp;quot;important enough&amp;quot; will vary with the specific context, but either way the information would be retrieved, stored until deemed no longer relevant, and then discarded.&lt;br /&gt;
&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation information will be stored at each individual host, giving every system or group of systems its own perspective. This is both appropriate and efficient, given that each system or grouping of systems is likely to have a different objective and context.&lt;br /&gt;
&lt;br /&gt;
Some hosts may also, optionally, act as repositories for this information. These might be elected (in an emergent system) or imposed (in a hierarchy or publish-subscribe model). Such systems provide a public good, in that they become queryable repositories of information.&lt;br /&gt;
&lt;br /&gt;
It would be impractical for information to be stored at every node indefinitely, and eventually given reputation entries must be discarded. When this happens would depend on a variety of factors. If a piece of reputation information is requested frequently by other nodes, it is regarded as highly valuable and kept for future reference; if it is used very infrequently, it might be removed or labelled for deletion at some future point. Essentially, the more important or relevant a piece of information is, the more likely it is to be stored. This provides good localization and excellent overall reliability of information, while still allowing systems to maintain a level of forgiveness.&lt;br /&gt;
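One way to realize this retention policy, sketched below with hypothetical names, is to track how often each entry is requested and evict the least-requested entry when space runs out.&lt;br /&gt;

```python
class ReputationCache:
    """Keeps at most `capacity` entries, discarding the least-requested."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}   # entity id -> reputation data
        self.requests = {}  # entity id -> times other nodes asked for it

    def lookup(self, entity):
        self.requests[entity] = self.requests.get(entity, 0) + 1
        return self.entries.get(entity)

    def store(self, entity, data):
        if entity not in self.entries and len(self.entries) >= self.capacity:
            # Evict the entry requested least often: the less relevant a
            # piece of information, the sooner it is forgotten.
            victim = min(self.entries, key=lambda e: self.requests.get(e, 0))
            del self.entries[victim]
        self.entries[entity] = data
```

Forgetting infrequently used entries is also what provides the level of forgiveness mentioned above: old, unreferenced reputation eventually disappears on its own.&lt;br /&gt;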
&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space, which brings us to the problem of how to maintain reputation history; in a distributed system this is crucial to the scalability and success of the entire system. One solution is the notion of dynamic model-checking, by Havelund and Rosu&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;, who devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce algorithm&amp;lt;ref name=&amp;quot;mapreduce&amp;quot;&amp;gt;Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html [March 3, 2011]&amp;lt;/ref&amp;gt;. We generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We recognize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;: records of significant negative behaviour, such as DDoS attacks, will likely need to be retained indefinitely in case justice systems require proof of a specific incident. This solution works well: we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the time to search through sets of reputation history items is negligible, then we would not need to implement this type of &amp;quot;reduce&amp;quot; mechanism at all.&lt;br /&gt;
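The append/reduce idea can be sketched as follows; the event-type labels and the set of &amp;quot;serious&amp;quot; events are illustrative only. Routine events are merged into per-type counts to save space, while serious incidents such as DDoS participation are retained verbatim as evidence for a justice system.&lt;br /&gt;

```python
# Hypothetical event types that must never be summarized away.
SERIOUS = {"ddos-participation", "malware-distribution"}

def reduce_history(events):
    """Merge eligible events into (type, count) summaries.

    events: list of (event_type, detail) tuples, as appended over time.
    Serious events are kept in full so that a justice system can later
    be given proof of the specific incident.
    """
    counts = {}
    retained = []
    for etype, detail in events:
        if etype in SERIOUS:
            retained.append((etype, detail))
        else:
            counts[etype] = counts.get(etype, 0) + 1
    summaries = [(etype, {"count": n}) for etype, n in counts.items()]
    return retained + summaries
```

Running the reducer periodically keeps the history bounded while preserving exactly the concrete records that cannot be regenerated later.&lt;br /&gt;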
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
&lt;br /&gt;
The dissemination of reputation information is a core concern of reputation systems in general; this vital exchange of information is what allows these systems to function. Ideally, methods of information exchange should provide a given set of features. First, the information needs to be reliable, meaning it must be as immune as possible to gaming and stored securely. Second, there needs to be good localization of the data, to ensure it is where it is needed, when it is needed. Finally, the system needs to be scalable and flexible. While the aforementioned reasons form the technical requirements of the system, there is one additional non-functional requirement to consider: the level of trust.&lt;br /&gt;
&lt;br /&gt;
In general, there are three common modes of disseminating information of this type that would need to be supported in order to make a reputation system feasible:  Hierarchy, Publish/Subscribe, and Peer-to-Peer.&lt;br /&gt;
&lt;br /&gt;
In a hierarchy, there are pre-set or elected nodes responsible for maintaining an authoritative list. A good example of this technology in practice is the domain name system (DNS). These systems allow a great deal of control over the information in the system, at the expense of scalability and flexibility. They are very common in the corporate world today and align well with organizational structure; if a flaw is detected in the information, manual intervention is possible. Unfortunately, these systems tend to be rife with single points of failure and scalability issues. In addition, implementing this kind of system on an internet scale would mean designating a single authority for all reputation information, which would form a natural bottleneck despite advances in caching. Finally, there would be the issue of trust in such a system. While hierarchies are ideal where an overall system architecture is imposed and trust is mandated, they are much less palatable on the internet scale because it would be impossible to establish a single authority that everyone would trust. And with a single set of authorities comes the added issue of security: compromising one system would taint the reputation information across the entire reputation system.&lt;br /&gt;
&lt;br /&gt;
Publish/subscribe is a model of information dissemination that relies on central repositories, which are queried by each client when an update is needed. Common technological examples include Really Simple Syndication (RSS) feeds and bulletin board systems (BBS); outside modern technology, analogies can be drawn with newspapers, magazines, and other periodicals. First the source publishes an update, and then &amp;quot;subscribers&amp;quot; receive it through either a push from the publisher or a query for updates. This technology has several attractive features and has been broadly researched over the last ten years, especially in how it can be applied to wireless networks&amp;lt;ref name=&amp;quot;wifipublishsubscribe&amp;quot;&amp;gt;Gajic, B.; Riihijärvi, J.; Mähönen, P., &amp;quot;Evaluation of publish-subscribe based communication over WiMAX network,&amp;quot; Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2010 International Congress on, pp. 38-43, 18-20 Oct. 2010&amp;lt;/ref&amp;gt;. Being data-centric, these systems can be a very helpful way of exchanging information. Unfortunately, they require some kind of fixed infrastructure in most cases, using either fixed reference points (like a base station) or elected coordinating nodes arranged in a distributed hash table (DHT)&amp;lt;ref name=&amp;quot;p2ppublishsubscribe&amp;quot;&amp;gt;Dongcai Shi; Jianwei Yin; Zhaohui Wu; Jinxiang Dong, &amp;quot;A Peer-to-Peer Approach to Large-Scale Content-Based Publish-Subscribe,&amp;quot; Web Intelligence and Intelligent Agent Technology Workshops, 2006. WI-IAT 2006 Workshops. 2006 IEEE/WIC/ACM International Conference on, pp. 172-175, 18-22 Dec. 2006&amp;lt;/ref&amp;gt;. There are further drawbacks: these systems involve pre-selected or elected nodes acting as authorities, which creates points of failure and means some nodes must trust others with their authority information. While it is entirely possible that a complete reputation system will contain publish-subscribe components, the information from such repositories must be interpreted within the context of the source node&#039;s reputation. If a given repository has been a source of unreliable information in the past, its own negative reputation would likely lead most other nodes to disregard its information, diminishing the benefits of hosting such a repository. These systems also do not provide good localization of data, meaning nodes may have to search longer for relevant information, leading to greater overhead and latency in the system as a whole.&lt;br /&gt;
&lt;br /&gt;
Finally, peer-to-peer is perhaps the newest method of disseminating information. While there are many ways to exchange information in a peer-to-peer fashion, gossiping is the most relevant here&amp;lt;ref name=&amp;quot;gossipreputation&amp;quot;&amp;gt;Zhou, R.; Hwang, K., &amp;quot;Gossip-based Reputation Aggregation for Unstructured Peer-to-Peer Networks,&amp;quot; Parallel and Distributed Processing Symposium, 2007. IPDPS 2007. IEEE International, pp. 1-10, 26-30 March 2007&amp;lt;/ref&amp;gt;. In a gossip-based system, sets of peers exchange information in a semi-random way. In practice, this style of information exchange has been found to provide not only good localization but also excellent scalability. The major issues are that information about &amp;quot;far away&amp;quot; nodes must still be queried, and that fraudulent information may be exchanged (meaning the system must rely on the safety of the consensus of the majority). A further disadvantage is that such a system is unstructured: if an error propagates, it can take a while for a corrected, consistent picture to appear across the network.&lt;br /&gt;
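A toy round of gossiping (the data structures are our own) shows how a fact pushed to a few semi-randomly chosen peers each round eventually reaches the whole network; the same mechanism, of course, also propagates errors until they are corrected.&lt;br /&gt;

```python
import random

def gossip_round(knowledge, fanout=2, rng=random):
    """One gossip round: every node pushes everything it knew at the
    start of the round to a few randomly chosen peers, who merge it
    into their own stores.

    knowledge: dict mapping node id -> set of known reputation facts.
    """
    updates = {node: set(facts) for node, facts in knowledge.items()}
    nodes = list(knowledge)
    for node in nodes:
        peers = rng.sample([n for n in nodes if n != node],
                           min(fanout, len(nodes) - 1))
        for peer in peers:
            updates[peer] |= knowledge[node]
    return updates

def spread(knowledge, rounds=20):
    for _ in range(rounds):
        knowledge = gossip_round(knowledge)
    return knowledge
```

Because every node that holds a fact keeps re-sharing it, the number of informed nodes grows quickly, which is the scalability property noted above.&lt;br /&gt;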
&lt;br /&gt;
In practice, all of these methods of information dissemination would likely need to be supported in some fashion. Few governments or organizations would be willing to accept updates from the cloud blindly, and it is similarly unlikely that such organizations would be willing to publish or otherwise share all of their information with the cloud at large. This means any dissemination solution would have to be a hybrid, allowing for the definition of fixed, strict hierarchies as well as immensely scalable and dynamic peer-to-peer exchange. Where the line between the two is drawn is not fixed: some organizations may opt to make almost all information public, while others may allow no information to be published externally.&lt;br /&gt;
&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data about another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity, and even more unreasonable to expect an entity to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes some systems already exist. There needs to be a means of authenticating messages, to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying alone is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system needs to allow any given entity to send out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, there are two primary layouts for a reputation system in this paper, hierarchical and distributed, and the two will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system defers to an authority, known as its authority node. Most, if not all, reputation information goes through this node, and as far as subordinate nodes are concerned, its views, or interpretation of the reputation data, are absolute. In this scheme nothing is lost if a node were to leave the network [A]. In a distributed, peer-to-peer system, reputation information is acquired from trusted peers and analyzed to determine whether to connect or not.&lt;br /&gt;
&lt;br /&gt;
The actual process of querying should be fairly simple. A given entity or node in the system needs to decide whether it should contact another node. First, it checks its local representation of reputation data to see if it already holds enough up-to-date information on that node. If it does, it can move toward making a decision, which is discussed later. If, however, the needed information is not already held by the node, it must be queried for. This would be similar to the XREP system used in some peer-to-peer file sharing networks, which can “query” and “poll” peers to decide whom to obtain resources from [A]. Another similar concept is a “TrustNet”, wherein an agent, after determining that another agent is not already acquainted with it, queries all of its neighbours about the second agent&#039;s trustworthiness [B].&lt;br /&gt;
&lt;br /&gt;
This brings us back to the two primary types of reputation systems, hierarchical and distributed. In a hierarchical system the process is incredibly simple: ask your superior node and wait for a response. The superior node might have enough information on hand to decide, or it might ask its own peers or superiors. Either way, the response received from the superior node is used by the original querying node.&lt;br /&gt;
&lt;br /&gt;
Distributed querying is a little more complex. The querying node needs to decide whom to ask: perhaps nodes it trusts, if it has been operating in the reputation system for a while, or simply any nearby node. It might ask for just a quick reputation value, or for a snapshot of relevant historical events. In any case, it will use the evidence collected (if any) to ultimately make a decision. In a way, this node is its own authority node.&lt;br /&gt;
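The query flow just described, consult the local store first and then fall back to asking peers, can be sketched as follows; the peer interface here is hypothetical, not the XREP or TrustNet protocol itself.&lt;br /&gt;

```python
def gather_evidence(target, local_store, peers, min_reports=1):
    """Resolve reputation evidence for `target`, preferring local data.

    local_store: dict mapping entity -> list of known reputation events.
    peers: objects exposing a query(target) method returning a
           (possibly empty) list of reputation events they hold.
    Returns the collected evidence, which may be empty when nobody
    has ever heard of the target.
    """
    evidence = list(local_store.get(target, []))
    if evidence:
        return evidence          # enough information already on hand
    for peer in peers:           # distributed fallback: ask around
        evidence.extend(peer.query(target))
        if len(evidence) >= min_reports:
            break
    return evidence
```

An empty result is a legitimate outcome that the caller must handle, matching the requirement above that an entity cope with the likely absence of any reputation information.&lt;br /&gt;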
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
Every entity will have its own interpretation of reputation data. There will most likely be a common set of events considered bad for essentially any system, such as one entity participating in a DDoS on another, or the distribution of malware. Other things are more abstract and unique to certain groups: distributing unverifiable claims might be considered a negative reputation event by a reputable news source, perfectly acceptable by a tabloid, and irrelevant to the average entity representing a single person&#039;s personal computer. Entities will need to decide what is important to them, most likely via a human defining which events are worth taking note of and which are not. It is entirely possible, and likely, that different entities will not record events that other entities would consider noteworthy. It would therefore be beneficial to have multiple people using the same rule set (though a unique rule set is not completely useless, as an entity can still record personal instances of these events for its own history store).&lt;br /&gt;
&lt;br /&gt;
Once an entity has obtained this information, whether via the regular process of dissemination, querying, or witnessing an event firsthand, it needs to make a decision. This is ultimately very open-ended and up to each entity. For example, a very simple mechanism would be to communicate only with entities that have no negative reputation events of any kind and that are viewed neutrally or positively by other entities. Another would be to ignore other entities&#039; opinions, assign a weight to each type of reputation event, and perform a calculation based on the evidence. These are only two options among many; there is no need for a standardized process. In short, the process and details of actually making the decision are not that important, as long as what is decided upon is something other entities can understand: that is, using a stored collection of evidence to form an opinion that other entities can query you on, and deciding whether, and under what conditions, to connect to the other entity.&lt;br /&gt;
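The second mechanism mentioned, per-event-type weights plus a threshold, might be sketched like this. The weights, event names, and threshold are purely illustrative; each entity would choose its own.&lt;br /&gt;

```python
# Each entity assigns its own weight to the event types it cares about;
# event types it does not recognize simply carry no weight.
WEIGHTS = {
    "ddos-participation": -10.0,
    "malware-distribution": -10.0,
    "unverifiable-claim": -1.0,
    "completed-transaction": +1.0,
}

def should_connect(events, threshold=0.0):
    """Score the evidence and connect only if it clears the threshold."""
    score = sum(WEIGHTS.get(etype, 0.0) for etype in events)
    return score >= threshold
```

Note that only the inputs (event types) and the output (connect or not) need to be understood by other entities; the calculation in between is entirely local, which is the point made above.&lt;br /&gt;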
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
It is possible that we can. Where adequate emergent information is not yet available, imposed rules or existing infrastructure can be used in its place; in this way the system can be updated incrementally until a full-fledged emergent reputation system exists. Phasing this in would rely on organizations deciding it is in their own best interests to run such a system locally. Individuals would then have to opt in to the gossip-based solution, and eventually, emergently, a cohesive system would appear.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[A] http://www.chennaisunday.com/ieee%202010/Reputation%20Estimation%20and%20Query%20in%20Peer-to-Peer%20Networks.pdf&lt;br /&gt;
&lt;br /&gt;
[B] http://delivery.acm.org.proxy.library.carleton.ca/10.1145/550000/544809/p294-yu.pdf?key1=544809&amp;amp;key2=3913452031&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;ip=134.117.10.200&amp;amp;CFID=17527626&amp;amp;CFTOKEN=24792561&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9446</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9446"/>
		<updated>2011-04-11T23:29:01Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* Our assumptions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of certain behavioural actions that we make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of the individual or group. It is this image that helps us make conclusions as to whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The global opinion that others have of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for assumptions to be made about the level of trust one can place in a particular person or situation to execute a task to our liking. It is important to note the word assumption: with the gathered information, we are able to generate an estimate of their actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between different observers: some may have encountered the entity in a different context, or had a different level of expectation than others&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used in a distributed environment?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, or whether they will hinder our progress; or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity&amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science, University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are, to a fairly high degree, able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing: there are always outliers who will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
The ideas of enforcing rules and of generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the emerge-vs-impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of an application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system&amp;lt;ref name=&amp;quot;javapolicy&amp;quot;&amp;gt;Default Policy Implementation and Policy File Syntax. Oracle. http://download.oracle.com/javase/1.3/docs/guide/security/PolicyFiles.html [March 7, 2011]&amp;lt;/ref&amp;gt;. For mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and the internet. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize toward the negative, eventually leaving the application a non-threat to potential buyers; for trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide a level of reputation information adequate for their purpose. For closed, centralized systems such as eBay, this level of sophistication is sufficient: buyers can favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form, a numerical scale. This abstraction, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers can bid with a fair degree of certainty and trust; if one party is unsatisfied with a transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large, distributed systems. For a reputation system to be plausible at that scale, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity interacting in the system. Where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and supply them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&lt;br /&gt;
In this system, we make a set of assumptions. Without them, a system of this size either would not function or would be too broad in scope to ever be acceptable.&lt;br /&gt;
&lt;br /&gt;
The justice assumption holds that some other system or set of rules will govern when reputation information needs to be updated and exchanged. Our system will not determine when an exchange of information is required, only what information should be exchanged. Similarly, since each system will likely have its own perspective on what is right and wrong, we make no assumption that a single fixed set of rules governs the system of justice as a whole. This means the system should be adaptable to different purposes without compromising the integrity of the internet at large. Two opposing systems of justice issuing opposing reputation information will eventually result in the two segments of the network ignoring each other&#039;s information, leading to an eventually stable and consistent state. This is appropriate, given the diversity of the internet at large.&lt;br /&gt;
&lt;br /&gt;
The attribution assumption is that all actions are correctly attributed, and that information exchanged between two peers can be properly sourced. Originally, a section on public-key infrastructure (PKI) was to be included, but it was decided that this would ultimately be out of scope for this system.&lt;br /&gt;
&lt;br /&gt;
To ensure that a system of this scale is feasible, we must make a public good assumption: resources are assumed to be available across the whole system to maintain the reputation information necessary for it to function. This assumption is generally valid given the capacity of the modern internet and the exponential growth of technology.&lt;br /&gt;
&lt;br /&gt;
Finally, we make the security-in-the-majority assumption: in a sufficiently large system, even if some nodes are acting maliciously, the larger number of non-malicious nodes will eventually overwhelm the fraudulent messages, yielding a generally good result. It would be impossible to design a system that did not rely on this assumption, since if a majority of the nodes were acting against the general good of the system, it would fail regardless of its overall safety. In this context, majority takes on a very specific meaning. Since, for obvious reasons, each node will only trust trustworthy nodes, we rely on the security in the majority of the opinions of trusted nodes. This gives the system its own kind of inertia, helping to safeguard it against gaming in the long term.&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form utilizes a numerical scale for reputation; EigenTrust-style systems&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt; are of this kind. In essence, they store and aggregate data into a numerical form. These values are easy to compare, and because primitive data types can be used, they require very little storage space. Despite these attractive advantages, such a system has significant drawbacks. First, information is typically lost in the abstraction process: concrete data is acquired and then converted down to a minimal form, and once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction is irreversible. The process can also produce ambiguity in the data. For example, a reputation of 0 might be interpreted as having no reputation history or as having an average reputation rating of 0. And, because the numerical data is irreversible, we cannot return it to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
&lt;br /&gt;
Another interesting form of reputation was proposed by Shmatikov and Talcott&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They represent an entity&#039;s reputation through its history, stored as a set of time-stamped events. The key difference from EigenTrust-style systems is that data can be kept in its concrete form. Additionally, if we extend their solution with the notion of sessions, we can generate a clear view of the related actions that make up an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, some ethical and privacy issues arise from this; we tackle them more closely in a following section.&lt;br /&gt;
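As a concrete illustration of this event-based form, the following sketch stores time-stamped events and groups them by session. The class and field names are our own illustrative assumptions, not the formalism of Shmatikov and Talcott.&lt;br /&gt;

```python
# Sketch of an event-based reputation record (illustrative names only).
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class ReputationEvent:
    entity: str       # who acted
    session: str      # groups related actions into one computational session
    timestamp: float  # when the action occurred
    action: str       # concrete description of what happened

class EventHistory:
    def __init__(self):
        self._events = defaultdict(list)  # entity -> list of events

    def record(self, event):
        self._events[event.entity].append(event)

    def sessions(self, entity):
        # Group an entity's history by session for a querying entity
        by_session = defaultdict(list)
        for ev in self._events[entity]:
            by_session[ev.session].append(ev)
        return dict(by_session)
```

A querying entity could then inspect the grouped sessions to see the concrete actions behind a reputation, rather than a single opaque number.&lt;br /&gt;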
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
&lt;br /&gt;
Gathering reputation information in these kinds of systems will generally follow a push model. When a node receives reputation information deemed important and reliable enough to be disseminated, it pushes the information to its peers or superiors. This process can be either automated or policy-based.&lt;br /&gt;
&lt;br /&gt;
When reputation information about a given system is required, it is queried as outlined below, then stored and/or disseminated to peers if deemed important enough. What constitutes &amp;quot;important enough&amp;quot; will vary with the specific context, but in either case the information is retrieved, stored until deemed no longer relevant, and then discarded.&lt;br /&gt;
&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation information will be stored at each individual host, giving every system or group of systems its own perspective. This is both appropriate and efficient, since each system or grouping of systems is likely to have a different objective and context.&lt;br /&gt;
&lt;br /&gt;
Some hosts may also, optionally, act as repositories for this information. These might be elected (in an emergent system) or imposed (in a hierarchy or publish-subscribe model). Such hosts provide a public good, in that they become queryable repositories of information.&lt;br /&gt;
&lt;br /&gt;
It would be impractical for information to be stored at every node indefinitely, and eventually a given reputation entry must be discarded. When this occurs would depend on a variety of factors: whether a piece of reputation information was requested often by other nodes, or whether it indicated an extreme state. Essentially, the more important or relevant a piece of information is, the more likely it is to be stored. This provides good localization and excellent overall reliability of information, while still allowing a given system to be forgiven.&lt;br /&gt;
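The retention idea above can be sketched as a simple scoring rule: entries requested often, or recording an extreme state, are kept, and the rest are eventually discarded. The field names, scoring rule, and threshold are illustrative assumptions only.&lt;br /&gt;

```python
# Hedged sketch of importance-based retention (assumed scoring rule).
def retention_score(entry):
    score = entry["times_requested"]   # frequently requested entries rank higher
    if entry["extreme"]:               # e.g. evidence of a serious attack
        score += 100
    return score

def prune(store, keep_threshold=3):
    # Discard entries no longer relevant enough to justify their storage
    return [e for e in store if retention_score(e) >= keep_threshold]
```

Ordinary, rarely requested entries fall below the threshold and are forgotten, which is what allows a given system to be forgiven over time.&lt;br /&gt;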
&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities, and this &amp;quot;adequate&amp;quot; level can be quite large in actual storage space. This raises the problem of how to maintain reputation history, which in a distributed system is crucial to the scalability and success of the entire system. One solution is the notion of dynamic model-checking, by Havelund and Rosu&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;, who developed a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce algorithm&amp;lt;ref name=&amp;quot;mapreduce&amp;quot;&amp;gt;Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html [March 3, 2011]&amp;lt;/ref&amp;gt;: we generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We recognize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Evidence of significant negative events, such as DDoS attacks, will likely need to be retained indefinitely in case justice systems require proof of a specific incident. This solution works well: we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the time to search through sets of reputation history items is negligible, we would clearly not need to implement this type of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
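A minimal sketch of this append/reduce approach follows. It is our reading of the idea, not the dynamic model-checking algorithm itself: benign events of the same kind are merged into counts, while severe events (e.g. DDoS evidence) are retained verbatim for justice systems.&lt;br /&gt;

```python
# Sketch of a "reduce" pass over an entity's event history.
# The event fields and the set of severe kinds are assumptions.
def reduce_history(events, severe_kinds=("ddos",)):
    reduced, counts = [], {}
    for ev in events:
        if ev["kind"] in severe_kinds:
            reduced.append(ev)  # retained indefinitely as concrete evidence
        else:
            counts[ev["kind"]] = counts.get(ev["kind"], 0) + 1
    for kind, n in counts.items():
        reduced.append({"kind": kind, "count": n})  # aggregated summary
    return reduced
```

Running this periodically keeps storage bounded while the severe entries stay available as proof of specific incidents.&lt;br /&gt;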
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
&lt;br /&gt;
The dissemination of reputation information is a core concern of reputation systems in general; this vital exchange of information is what allows these systems to function. Ideally, methods of information exchange should provide a given set of features. First, the information needs to be reliable, meaning it must be as immune as possible to gaming and stored securely. Second, there needs to be good localization of the data, ensuring it is where it is needed, when it is needed. Finally, the system needs to be scalable and flexible. While the aforementioned reasons form the technical requirements of the system, there is one additional non-functional requirement to consider: the level of trust.&lt;br /&gt;
&lt;br /&gt;
In general, there are three common modes of disseminating information of this type that would need to be supported in order to make a reputation system feasible:  Hierarchy, Publish/Subscribe, and Peer-to-Peer.&lt;br /&gt;
&lt;br /&gt;
In a hierarchy, pre-set or elected nodes are responsible for maintaining an authoritative list. A good example of this technology in practice is the domain name system (DNS). These systems allow a great deal of control over the information in the system, at the expense of scalability and flexibility. They are very common in the corporate world today and align well with organizational structure. A hierarchy also means that if a flaw is detected in the information, manual intervention is possible. Unfortunately, these systems tend to be rife with single points of failure and scalability issues. In addition, implementing this kind of system at internet scale would mean designating a single authority for all reputation information, forming a natural bottleneck despite advances in caching. Finally, there is the issue of trust in such a system. While hierarchies are ideal where an overall system architecture is imposed and trust is mandated, they are much less palatable at internet scale because it would be impossible to establish a single authority that everyone would trust. A single set of authorities also adds a security issue: compromising one system would taint the reputation information across the entire reputation system.&lt;br /&gt;
&lt;br /&gt;
Publish/subscribe is a model of information dissemination that relies on central repositories, which are then queried by each client when an update is needed. Common technological examples include Really Simple Syndication (RSS) feeds and bulletin board systems (BBS). Outside modern technology, analogies can be drawn between the publish/subscribe model and common sources of information like newspapers, magazines, and other periodicals. First the source publishes an update, and then &amp;quot;subscribers&amp;quot; receive updates through either a push from the publisher or a query for updates. This technology has several attractive features and has been broadly researched over the last ten years, especially in how it can be applied to wireless networks&amp;lt;ref name=&amp;quot;wifipublishsubscribe&amp;quot;&amp;gt;Gajic, B.; Riihijärvi, J.; Mähönen, P., &amp;quot;Evaluation of publish-subscribe based communication over WiMAX network,&amp;quot; Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2010 International Congress on, pp. 38-43, 18-20 Oct. 2010&amp;lt;/ref&amp;gt;. Being data-centric, it can be a very helpful way of exchanging information. In most cases, however, it requires some kind of fixed infrastructure, using either fixed reference points (like a base station) or elected coordinating nodes arranged in a distributed hash table (DHT)&amp;lt;ref name=&amp;quot;p2ppublishsubscribe&amp;quot;&amp;gt;Dongcai Shi; Jianwei Yin; Zhaohui Wu; Jinxiang Dong, &amp;quot;A Peer-to-Peer Approach to Large-Scale Content-Based Publish-Subscribe,&amp;quot; Web Intelligence and Intelligent Agent Technology Workshops, 2006. WI-IAT 2006 Workshops. 2006 IEEE/WIC/ACM International Conference on, pp. 172-175, 18-22 Dec. 2006&amp;lt;/ref&amp;gt;. There are further drawbacks to these technologies. Mainly, they involve pre-selected or elected nodes that act as authorities, creating points of failure and requiring some nodes to trust others with their authority information. While it is entirely possible that a complete reputation system will have publish-subscribe components, the information from such repositories must be interpreted within the context of the source node&#039;s reputation. This means that if a given information repository has been a source of unreliable information in the past, its own negative reputation would likely lead most other nodes to disregard the information, further diminishing the possible benefits of hosting such a repository. These systems also do not provide good localization of data, meaning nodes may have to search longer for relevant information, leading to greater overhead and latency in the system as a whole.&lt;br /&gt;
&lt;br /&gt;
Finally, peer-to-peer is perhaps the newest method of disseminating information. While there are many ways to exchange information in a peer-to-peer fashion, gossiping is the most relevant&amp;lt;ref name=&amp;quot;gossipreputation&amp;quot;&amp;gt;Zhou, R.; Hwang, K., &amp;quot;Gossip-based Reputation Aggregation for Unstructured Peer-to-Peer Networks,&amp;quot; Parallel and Distributed Processing Symposium, 2007. IPDPS 2007. IEEE International, pp. 1-10, 26-30 March 2007&amp;lt;/ref&amp;gt;. In a gossip-based system, sets of peers exchange information in a semi-random way. In practice, this form of information exchange has been found to provide not only good localization but also excellent scalability. The major issues are that information about &amp;quot;far away&amp;quot; nodes must still be queried, and that fraudulent information may be exchanged (meaning the system must rely on the safety of the consensus of the majority). A further disadvantage is that such a system is unstructured, so if an error is propagated, it can take a while for a corrected, consistent picture to appear across the network.&lt;br /&gt;
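A toy model of one gossip round might look as follows; the peer-selection and merge rules are assumptions made for illustration, not taken from the cited work.&lt;br /&gt;

```python
# Minimal gossip round: each node shares its view of entities with a few
# randomly chosen peers, who keep whatever opinions they lack.
import random

def gossip_round(views, fanout=2, rng=random):
    # views: node -> {entity: opinion}
    nodes = list(views)
    for node in nodes:
        others = [n for n in nodes if n != node]
        peers = rng.sample(others, min(fanout, len(others)))
        for peer in peers:
            for entity, opinion in views[node].items():
                views[peer].setdefault(entity, opinion)  # do not overwrite
    return views
```

With a small fanout, information spreads in a semi-random way and still reaches the whole network after a few rounds, which is the scalability property described above.&lt;br /&gt;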
&lt;br /&gt;
In application, all of these methods of information dissemination would likely need to be supported in some fashion. Very few governments or organizations would be willing to accept updates blindly from the cloud, and it is similarly unlikely that such organizations would be willing to publish or otherwise share their information with the cloud at large. This means any dissemination solution would have to be a hybrid, allowing for the definition of fixed, strict hierarchies as well as immensely scalable and dynamic peer-to-peer exchange. Where the line between the two is drawn is not fixed: some organizations may opt to make almost all information public, while others may allow no information to be published externally.&lt;br /&gt;
&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data it does not already have about another entity. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity, and even more unreasonable to expect an entity to store all of this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that certain other systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. And there needs to be a means of dissemination, as querying alone is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and to receive that information quickly and efficiently. Specifically, the system must allow any given entity to send out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, there are two primary layouts for a reputation system in this paper, hierarchical and distributed, and the two will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system defers to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as subordinate nodes are concerned, its &#039;views&#039;, or interpretation of the reputation data, are absolute. In this scheme, nothing is lost if a node leaves the network.[A] In a distributed, peer-to-peer system, reputation information is acquired from trusted peers and analyzed to determine whether to connect or not.&lt;br /&gt;
&lt;br /&gt;
The actual process of querying should be fairly simple. A given entity or node in the system needs to decide whether it should contact another node. First, it checks its local store of reputation data to see if it already has enough up-to-date information on that node. If it does, it can move toward making a decision, which is discussed later. If, however, the needed information is not already held by the node, it must be queried. This is similar to the XREP system used in some peer-to-peer file-sharing networks, which can &amp;quot;query&amp;quot; and &amp;quot;poll&amp;quot; peers to decide whom to obtain resources from.[A] Another similar concept is a &amp;quot;TrustNet&amp;quot;, wherein an &amp;quot;agent&amp;quot;, after determining that another agent is not already acquainted with it, queries all of its neighbours on the second agent&#039;s trustworthiness.[B]&lt;br /&gt;
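The local-check-then-query flow described above can be sketched as follows. This is the shape of the idea only, not the actual XREP or TrustNet protocol; the cache layout and peer interface are assumptions.&lt;br /&gt;

```python
# Sketch: use local evidence if present, otherwise poll peers.
def query_reputation(local_cache, entity, peers):
    # 1. Use local evidence when it is already on hand
    if entity in local_cache:
        return local_cache[entity]
    # 2. Otherwise poll peers (each peer is a callable returning
    #    evidence about the entity, or None if it knows nothing)
    evidence = [reply for reply in (peer(entity) for peer in peers) if reply]
    # 3. Cache anything learned; an empty list means "no information"
    if evidence:
        local_cache[entity] = evidence
    return evidence
```

The caller then interprets the returned evidence (or its absence) according to its own decision rules, discussed below.&lt;br /&gt;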
&lt;br /&gt;
This brings us back to the two primary types of reputation systems, hierarchical and distributed. In a hierarchical system, the process is incredibly simple: ask your superior node and wait for a response. The superior node might have enough information on hand to decide, or it might ask its own peers or superiors. Either way, the response received from the superior node is used by the original querying node.&lt;br /&gt;
&lt;br /&gt;
Distributed querying is a little more complex. The querying node needs to decide whom to ask: perhaps nodes it trusts, if it has been operating in the reputation system for a while, or simply any nearby node. It might ask for just a quick reputation value, or for a snapshot of relevant historical events. In any case, it uses the evidence collected (if any) to ultimately make a decision. In a way, this node is its own authority node.&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
Every entity will have its own interpretation of reputation data. There will most likely be a common set of events considered bad by essentially any system, such as one entity participating in a DDoS attack on another, the distribution of malware, and so on. Other things are more abstract and unique to certain groups: distributing unverifiable claims might be considered a negative reputation event by a reputable news source, perfectly acceptable by a tabloid, and irrelevant to the average entity representing a single person&#039;s personal computer. Entities will need to decide what is important to them, most likely via a human defining which events are worth noting and which are not. It is entirely possible, and likely, that different entities will not record events that other entities would consider noteworthy. It would therefore be beneficial to have multiple people using the same rule set (though a unique rule set is not completely useless, as one can still record personal instances of these events in one&#039;s own history store).&lt;br /&gt;
&lt;br /&gt;
Once an entity has obtained this information, whether via the regular process of dissemination, querying, or witnessing an event firsthand, it needs to make a decision. This is ultimately very open-ended and up to each entity. For example, a very simple mechanism would be to communicate only with entities that have no negative reputation events of any kind and that are viewed neutrally or positively by other entities. Another would be to ignore other entities&#039; opinions, assign a weight to each type of reputation event, and perform a calculation based on the evidence. These are only two options among many; there is no need for a standardized process. In short, the details of actually making the decision are not that important, as long as what is decided upon is something other entities can understand: using a stored collection of evidence to form an opinion that other entities can query you on, and deciding whether, and under what conditions, to connect to the other entity.&lt;br /&gt;
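The second mechanism, weighting each event type and computing a score, can be sketched as follows; the weights and threshold are illustrative assumptions, and each entity would choose its own.&lt;br /&gt;

```python
# Sketch of a weighted decision rule over observed event types.
def decide(events, weights, threshold=0.0):
    # events: list of event-type strings observed for the other entity;
    # unknown event types contribute nothing to the score
    score = sum(weights.get(kind, 0) for kind in events)
    return score >= threshold  # True: connect, False: refuse
```

A single severe event with a large negative weight can outweigh many small positive ones, matching the intuition that serious incidents should dominate the decision.&lt;br /&gt;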
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
It is possible. We can use imposed rules or existing infrastructure where we do not yet have adequate emergent information, incrementally updating the system until a full-fledged emergent reputation system exists. Phasing this in will rely on companies deciding it is in their own best interest to run such a system locally; individuals will then have to decide to adopt the gossip-based solution, and eventually a cohesive system would emerge.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[A] http://www.chennaisunday.com/ieee%202010/Reputation%20Estimation%20and%20Query%20in%20Peer-to-Peer%20Networks.pdf&lt;br /&gt;
&lt;br /&gt;
[B] http://delivery.acm.org.proxy.library.carleton.ca/10.1145/550000/544809/p294-yu.pdf?key1=544809&amp;amp;key2=3913452031&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;ip=134.117.10.200&amp;amp;CFID=17527626&amp;amp;CFTOKEN=24792561&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9445</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9445"/>
		<updated>2011-04-11T23:28:36Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* Our assumptions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of certain behavioural actions they take. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image we have of that individual or group. It is this image that helps us draw conclusions as to whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The global opinion that others have of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to execute a task to our liking. The word assumption is important here: with the gathered information, we can generate an estimate of their actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers; some may have interacted with the entity in a different context or held a different level of expectation than others&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used in a distributed environment?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, or whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity&amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science, University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms expected to be followed in certain situations. Looking back to the analogy with humans, we are, to a fairly high degree, able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think we can prevent all wrongdoing; there are always outliers who will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the emerge-versus-impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. No existing distributed system has an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube use rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks, including Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system&amp;lt;ref name=&amp;quot;javapolicy&amp;quot;&amp;gt;Default Policy Implementation and Policy File Syntax. Oracle. http://download.oracle.com/javase/1.3/docs/guide/security/PolicyFiles.html [March 7, 2011]&amp;lt;/ref&amp;gt;. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application that requested access to contact information and internet access would appear extremely suspicious. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize toward negative reviews, eventually leaving the application a non-threat to potential buyers; for trustworthy applications, the result would be the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient: buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form, a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large distributed systems. For a reputation system to be plausible at such scale, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity interacting in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&lt;br /&gt;
In this system, we will make a set of assumptions. Without them, a system of this size either would not function or would be too broad in scope to ever be acceptable.&lt;br /&gt;
&lt;br /&gt;
The justice assumption holds that some other system or set of rules will govern when reputation information needs to be updated and exchanged. Our system will not determine when an exchange of information is required, only what information should be exchanged. Similarly, since each system will likely have its own perspective on what is right and wrong, we make no assumption that a single fixed set of rules governs the operation of the system of justice as a whole. This means that the system should be adaptable to different purposes without compromising the integrity of the internet at large. Two opposing systems of justice issuing conflicting reputation information will eventually result in the two segments of the network ignoring each other&#039;s information, leading to an eventual stable and consistent state. This is appropriate, given the diversity of the internet at large.&lt;br /&gt;
&lt;br /&gt;
The attribution assumption holds that all actions are correctly attributed, and that information exchanged between two peers can be properly sourced. Originally, a section on public-key infrastructure (PKI) was going to be included, but it was decided that this would ultimately be out of scope for this system.&lt;br /&gt;
&lt;br /&gt;
To ensure that a system of this scale is feasible, it is necessary to make a public good assumption: we assume that resources are available across the system to maintain the reputation information necessary for it to function. This assumption is generally valid given the capacity of the modern internet and the exponential growth of technology.&lt;br /&gt;
&lt;br /&gt;
Finally, we make the security-in-the-majority assumption: in a sufficiently large system, even if some number of nodes are acting maliciously, the large number of non-malicious nodes will eventually overwhelm the fraudulent messages. It would be impossible to design a system that did not rely on this assumption, since if a majority of nodes were acting against the general good, the system would fail regardless of its overall safety. In this context, majority takes on a very specific meaning: since each node is only going to trust trustworthy nodes, we rely on the majority of the opinions of trusted nodes. This gives the system its own kind of inertia, helping to safeguard it against gaming in the long term.&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is a numerical scale; EigenTrust-style systems take this approach&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. In essence, they store and aggregate data into numerical form. These values are easy to compare and, because primitive data types can be used, require very little storage space. Despite these attractive advantages, such systems have significant drawbacks. First, information is typically lost in the abstraction process: concrete data is acquired and then reduced to a minimal form, and once this conversion is done, little can be learned about the concrete data it was generated from. In other words, the abstraction is irreversible. The process can also introduce ambiguity: a reputation of 0 might mean no reputation history at all, or an average reputation rating of 0. And because numerical data is irreversible, we cannot return it to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
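&lt;br /&gt;
The information loss can be illustrated with a small sketch. This is purely illustrative - the mean-based aggregation function and the sample data are our own assumptions, not the actual EigenTrust algorithm:&lt;br /&gt;

```python
# Illustrative only: scalar aggregation discards the concrete history.

def aggregate(ratings):
    """Collapse a rating history into a single scalar (here, the mean)."""
    return sum(ratings) / len(ratings) if ratings else 0.0

# A newcomer with no history and a peer whose good and bad interactions
# cancel out both aggregate to 0 -- the ambiguity noted above -- and
# neither history can be recovered from the scalar.
newcomer = []
mixed = [1, -1, 1, -1]
print(aggregate(newcomer))  # 0.0
print(aggregate(mixed))     # 0.0
```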
&lt;br /&gt;
Another interesting representation was proposed by Shmatikov and Talcott&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They represent an entity&#039;s reputation through its history: a set of time-stamped events. The key difference from EigenTrust-style systems is that data is stored in its concrete form. Additionally, if we extend their solution with the notion of sessions, we can generate a clear view of the related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, some ethical and privacy issues arise from this; we tackle them more closely in a following section.&lt;br /&gt;
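&lt;br /&gt;
A minimal sketch of this event-history representation, extended with the session notion discussed above. All field and function names here are our own assumptions, not taken from Shmatikov and Talcott:&lt;br /&gt;

```python
# Hypothetical event-history store: concrete, time-stamped, session-aware.
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class Event:
    entity: str      # who performed the action
    session: str     # computational session the action belongs to
    timestamp: float
    action: str      # concrete description, never abstracted away

def by_session(events):
    """Group an entity's history into per-session views for querying."""
    sessions = defaultdict(list)
    for e in sorted(events, key=lambda e: e.timestamp):
        sessions[e.session].append(e)
    return dict(sessions)

history = [
    Event("host-a", "s1", 10.0, "opened connection"),
    Event("host-a", "s2", 12.0, "sent malformed packet"),
    Event("host-a", "s1", 11.0, "completed transfer"),
]
print([e.action for e in by_session(history)["s1"]])
```

Because the events stay concrete, a querying entity can reconstruct exactly what happened in a session, which a single scalar cannot offer.&lt;br /&gt;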
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
&lt;br /&gt;
Gathering reputation information in these kinds of systems will generally follow a push model. When a node receives reputation information deemed important and reliable enough to be disseminated, it pushes the information to its peers or superiors. This process can be either automated or policy-based.&lt;br /&gt;
&lt;br /&gt;
When reputation information about a given system is required, it is queried as outlined below, then stored and/or disseminated to peers if deemed important enough. What constitutes &amp;quot;important enough&amp;quot; will vary with the specific context; either way, the information is retrieved, stored until deemed no longer relevant, and then discarded.&lt;br /&gt;
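&lt;br /&gt;
The push model can be sketched as follows. The item structure, the severity threshold, and the send() callback are illustrative assumptions standing in for whatever the local policy defines:&lt;br /&gt;

```python
# Hedged sketch of the push model: a node forwards reputation items
# it deems important enough to its peers.

def should_disseminate(item, threshold=0.5):
    """Policy hook: 'important enough' is context-dependent."""
    return item["severity"] >= threshold

def push(item, peers, send):
    """Push an important item to all peers via a caller-supplied send()."""
    if should_disseminate(item):
        for peer in peers:
            send(peer, item)

sent = []
push({"subject": "host-x", "severity": 0.9},
     ["peer-1", "peer-2"],
     lambda peer, item: sent.append(peer))
print(sent)  # ['peer-1', 'peer-2']
```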
&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation information will be stored at each individual host, giving every system or group of systems its own perspective. This is both appropriate and efficient, given that each system or grouping of systems is likely to have a different objective and context.&lt;br /&gt;
&lt;br /&gt;
Some hosts may also, optionally, act as repositories for this information. These might be elected (in an emergent system) or imposed (in a hierarchy or publish-subscribe model). Such systems provide a public good, in that they become queryable repositories of information.&lt;br /&gt;
&lt;br /&gt;
It would be impractical for information to be stored at every node indefinitely, so eventually reputation entries must be discarded. When this occurs depends on a variety of factors: information that is requested often by other nodes, or that indicates an extreme state, is more likely to be retained. Essentially, the more important or relevant a piece of information is, the more likely it is to be stored. This provides good localization and excellent overall reliability of information, while still allowing given systems to be forgiven.&lt;br /&gt;
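&lt;br /&gt;
One way to sketch such a retention policy. The scoring weights and entry fields are assumptions for illustration, not a tested design:&lt;br /&gt;

```python
# Illustrative retention policy: keep entries that are frequently
# requested or extreme, discard the rest (allowing forgiveness).

def retention_score(entry):
    # request_count: how often peers asked for this entry;
    # extremity: magnitude of the reputation event, 0..1
    return entry["request_count"] + 10 * entry["extremity"]

def prune(store, keep=2):
    """Discard all but the `keep` most important entries."""
    return sorted(store, key=retention_score, reverse=True)[:keep]

store = [
    {"id": "minor-slight", "request_count": 1, "extremity": 0.1},
    {"id": "ddos-report", "request_count": 3, "extremity": 1.0},
    {"id": "popular-query", "request_count": 9, "extremity": 0.2},
]
print([e["id"] for e in prune(store)])  # ['ddos-report', 'popular-query']
```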
&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space, which brings us to the problem of how to maintain reputation history; in a distributed system this is crucial to the scalability and success of the entire system. One solution is to use the notion of dynamic model-checking, by Havelund and Rosu&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;, who devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce&amp;lt;ref name=&amp;quot;mapreduce&amp;quot;&amp;gt;Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html [March 3, 2011]&amp;lt;/ref&amp;gt;. We generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;: significant negative reputation, such as participation in DDoS attacks, will likely need to be retained indefinitely in case justice systems need proof of a specific incident. This solution works well: we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the processing time for searching through sets of reputation history items is negligible, we would not need to implement this kind of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
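&lt;br /&gt;
The append/reduce idea might look like the following sketch, where routine events are merged into counts while severe events are kept verbatim. The event shape and the severity rule are our own assumptions:&lt;br /&gt;

```python
# Sketch of the reduce step: merge routine events, retain severe ones.
from collections import Counter

SEVERE = {"ddos"}   # event kinds that must be kept as concrete proof

def reduce_history(events):
    kept = [e for e in events if e["kind"] in SEVERE]       # retained verbatim
    routine = Counter(e["kind"] for e in events
                      if e["kind"] not in SEVERE)           # merged to counts
    return kept, dict(routine)

history = [
    {"kind": "late-reply"}, {"kind": "late-reply"},
    {"kind": "ddos", "target": "host-b", "when": 1300000000},
]
kept, summary = reduce_history(history)
print(len(kept), summary)  # 1 {'late-reply': 2}
```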
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
&lt;br /&gt;
The dissemination of reputation information is a core concern of reputation systems in general; this vital exchange of information is what allows these systems to function. Ideally, methods of information exchange should provide a given set of features. First, the information needs to be reliable, meaning it must be as immune as possible to gaming and stored securely. Second, there needs to be good localization of the data, to ensure it is where it is needed, when it is needed. Finally, the system needs to be scalable and flexible. While the aforementioned features form the technical requirements of the system, there is one additional non-functional requirement that must be considered: the level of trust.&lt;br /&gt;
&lt;br /&gt;
In general, there are three common modes of disseminating information of this type that would need to be supported in order to make a reputation system feasible:  Hierarchy, Publish/Subscribe, and Peer-to-Peer.&lt;br /&gt;
&lt;br /&gt;
In a hierarchy, pre-set or elected nodes are responsible for maintaining an authoritative list. A good example of this technology in practice is the domain name system (DNS). These systems allow a great deal of control over the information in the system, at the expense of scalability and flexibility. They are very common in the corporate world today and align well with organizational structure; if a flaw is detected in the information, manual intervention is possible. Unfortunately, these systems tend to be rife with single points of failure and scalability issues. In addition, implementing this kind of system at internet scale would mean designating a single authority for all reputation information, which would form a natural bottleneck despite advances in caching. Finally, there is the issue of trust in such a system. While hierarchies are ideal where an overall system architecture is imposed and trust is mandated, they are much less palatable at internet scale because it would be impossible to establish a single authority that everyone would trust. And if there were a single set of authorities, there would be the added issue of security: compromising one system would taint the reputation information across the entire reputation system.&lt;br /&gt;
&lt;br /&gt;
Publish/subscribe is a model of information dissemination that relies on central repositories, which are then queried by each client when an update is needed. Common examples in technology include Really Simple Syndication (RSS) feeds and bulletin board systems (BBSs). Outside modern technology, analogies can be drawn between the publish/subscribe model and common sources of information like newspapers, magazines, and other periodicals. First the source publishes an update, and then &amp;quot;subscribers&amp;quot; receive updates either through a push from the publisher or by querying for updates. This technology has several attractive features and has been broadly researched over the last 10 years, especially in how it can be applied to wireless networks&amp;lt;ref name=&amp;quot;wifipublishsubscribe&amp;quot;&amp;gt;Gajic, B.; Riihijärvi, J.; Mähönen, P., &amp;quot;Evaluation of publish-subscribe based communication over WiMAX network,&amp;quot; Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2010 International Congress on, pp.38-43, 18-20 Oct. 2010&amp;lt;/ref&amp;gt;. Being data-centric, these systems can be a very helpful way of exchanging information. Unfortunately, they require some kind of fixed infrastructure in most cases, using either fixed reference points (like a base station) or elected coordinating nodes arranged in a distributed hash table (DHT)&amp;lt;ref name=&amp;quot;p2ppublishsubscribe&amp;quot;&amp;gt;Dongcai Shi; Jianwei Yin; Zhaohui Wu; Jinxiang Dong, &amp;quot;A Peer-to-Peer Approach to Large-Scale Content-Based Publish-Subscribe,&amp;quot; Web Intelligence and Intelligent Agent Technology Workshops, 2006. WI-IAT 2006 Workshops. 2006 IEEE/WIC/ACM International Conference on, pp.172-175, 18-22 Dec. 2006&amp;lt;/ref&amp;gt;. The main drawback is that such systems involve pre-selected or elected nodes acting as authorities. This creates points of failure and means that some nodes must trust others with authoritative information. While a complete reputation system may well contain publish-subscribe components, the information from such repositories must be interpreted within the context of the source node&#039;s own reputation: if a given repository has been a source of unreliable information in the past, its negative reputation would likely lead most other nodes to disregard its information, diminishing the benefits of hosting such a repository. These systems also do not provide good localization of data, meaning nodes may have to search longer for relevant information, leading to greater overhead and latency in the system as a whole.&lt;br /&gt;
&lt;br /&gt;
Finally, peer-to-peer is perhaps the newest method of disseminating information. While there are many ways to exchange information in a peer-to-peer fashion, gossiping is the most relevant here&amp;lt;ref name=&amp;quot;gossipreputation&amp;quot;&amp;gt;Zhou, R.; Hwang, K., &amp;quot;Gossip-based Reputation Aggregation for Unstructured Peer-to-Peer Networks,&amp;quot; Parallel and Distributed Processing Symposium, 2007. IPDPS 2007. IEEE International, pp.1-10, 26-30 March 2007&amp;lt;/ref&amp;gt;. In a gossip-based system, sets of peers exchange information in a semi-random way. In practice, this form of information exchange provides not only good localization but also excellent scalability. The major issues are that information about &amp;quot;far away&amp;quot; nodes must be queried, and that fraudulent information may be exchanged (meaning the system must rely on the safety of the consensus of the majority). A further disadvantage is that such a system is unstructured, so if an error is propagated, it can take a while for a corrected, consistent picture to appear across the network.&lt;br /&gt;
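&lt;br /&gt;
A toy gossip round for such a scheme might look like this, merging views by most-recent timestamp. The view structure and merge rule are our own assumptions, not taken from the cited work:&lt;br /&gt;

```python
# Toy gossip round: each node exchanges its view with one random peer.
import random

def gossip_round(views, rng):
    """views: node -> {subject: (timestamp, opinion)}."""
    nodes = list(views)
    for node in nodes:
        peer = rng.choice([n for n in nodes if n != node])
        merged = dict(views[node])
        for subject, entry in views[peer].items():
            if subject not in merged or entry[0] > merged[subject][0]:
                merged[subject] = entry   # newer information wins
        views[node] = views[peer] = merged
    return views

views = {"n1": {"host-x": (5, "bad")}, "n2": {}, "n3": {}}
views = gossip_round(views, random.Random(0))
# after one round, at least one peer shares n1's entry about host-x
print(sum("host-x" in v for v in views.values()))
```

Repeated rounds spread each entry epidemically, which is where the good scalability of gossip protocols comes from; the flip side, as noted above, is that a propagated error takes just as many rounds to correct.&lt;br /&gt;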
&lt;br /&gt;
In practice, all of these methods of information dissemination would likely need to be supported in some fashion. Very few governments or organizations would be willing to accept updates from the cloud blindly, and it is similarly unlikely that such organizations would be willing to publish or otherwise share information with the cloud at large. This means that any dissemination solution would have to be a hybrid, allowing for the definition of fixed, strict hierarchies as well as immensely scalable and dynamic peer-to-peer exchange. Where the line between the two is drawn is not fixed: some organizations may opt to make almost all information public, while others may allow no information to be published externally.&lt;br /&gt;
&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data about another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that a given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity, and even more unreasonable to expect an entity to store all of this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that certain other mechanisms already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. And there needs to be a means of dissemination, as querying alone is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a querying system need to address? It must be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system must allow any given entity to send out a request for reputation information, and have other entities process that request and return a response. There must be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, there are two primary layouts for a reputation system in this paper: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system defers to an authority, known as its authority node. Most, if not all, reputation information goes through this node, and as far as subordinate nodes are concerned, its &#039;views&#039;, or interpretation of the reputation data, are absolute. In this scheme, nothing is lost if a node were to leave the network [A]. In a distributed, peer-to-peer system, reputation information is acquired from trusted peers and analyzed to determine whether to connect or not. &lt;br /&gt;
&lt;br /&gt;
The actual process of querying should be fairly simple. A given entity or node in the system needs to decide whether it should contact another node. First, it checks its local store of reputation data to see whether it already has enough up-to-date information on that node. If it does, it can move toward making a decision, discussed later. If the needed information is not already held by the node, it must be queried. This is similar to the XREP system used in some peer-to-peer file-sharing networks, which can &amp;quot;Query&amp;quot; and &amp;quot;Poll&amp;quot; peers to decide whom to obtain resources from [A]. Another similar concept is a &amp;quot;TrustNet&amp;quot;, wherein an agent, after determining that another agent is not already acquainted with it, queries all of its neighbours about the second agent&#039;s trustworthiness [B]. &lt;br /&gt;
&lt;br /&gt;
This brings us back to the two primary types of reputation systems, hierarchical and distributed. In a hierarchical system the process is very simple: ask your superior node and wait for a response. The superior node might have enough information on hand to decide, or it might ask its own peers or superiors. Either way, the response received from the superior node is used by the original querying node.&lt;br /&gt;
&lt;br /&gt;
Distributed querying is a little more complex. The querying node must decide whom to ask, perhaps asking nodes it trusts if it has been operating in the reputation system for a while, or simply any nearby node. It may ask for just a quick reputation value, or for a snapshot of relevant historical events. In any case, it uses the evidence collected (if any) to ultimately make a decision. In a way, this node is its own authority node. &lt;br /&gt;
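&lt;br /&gt;
The check-local-then-query flow common to both layouts can be sketched as follows. The function names and evidence format are illustrative assumptions; ask_peers stands in for either peer queries (distributed) or a request to a superior node (hierarchical):&lt;br /&gt;

```python
# Sketch of the query flow: local store first, then remote fallback.

def query_reputation(subject, local, ask_peers):
    """Return (evidence, source) for `subject`."""
    if subject in local:                 # enough, up-to-date local data
        return local[subject], "local"
    responses = ask_peers(subject)       # XREP-style Query/Poll stand-in
    if responses:
        return responses, "peers"
    return None, "unknown"               # no information anywhere

local = {"host-a": ["completed 3 transfers"]}
peers = {"host-b": ["participated in DDoS"]}
print(query_reputation("host-a", local, lambda s: peers.get(s)))
print(query_reputation("host-b", local, lambda s: peers.get(s)))
print(query_reputation("host-c", local, lambda s: peers.get(s)))
```

The "unknown" case is the one highlighted earlier: an entity must be prepared for the likely event that no reputation information exists at all.&lt;br /&gt;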
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
Every entity will have its own interpretation of reputation data. There will most likely be a common set of events considered bad by essentially any system, such as one entity participating in a DDoS on another, the distribution of malware, and so on. Other things are more abstract and unique to certain groups: distributing unverifiable claims might be considered a negative reputation event by a reputable news source, perfectly acceptable by a tabloid, and irrelevant to the average entity representing a single person&#039;s personal computer. Entities will need to decide what is important to them, most likely via a human defining which events are worth taking note of and which are not. It is entirely possible, and likely, that different entities will not record events that other entities would consider noteworthy. It would therefore be beneficial to have multiple entities using the same rule set, though differing rule sets are not completely useless, as an entity can still record personal instances of these events in its own history store.&lt;br /&gt;
&lt;br /&gt;
Once an entity has obtained this information, whether through the regular process of dissemination, through querying, or by witnessing an event firsthand, it needs to make a decision. This is, ultimately, very open-ended and up to each entity. For example, a very simple mechanism would be to only communicate with entities that have no negative reputation events of any kind, and that are viewed only neutrally or positively by other entities. Another would be to ignore other entities&#039; opinions, assign a weight to each type of reputation event, and compute a score based on the evidence. These are only two options among many; there is no need for a standardized process. In short, the process and details of actually making the decision are not that important, as long as what is decided upon is something other entities can understand: an entity uses its stored collection of evidence to form an opinion that other entities can query it on, and decides whether, and under what conditions, to connect to the other entity. &lt;br /&gt;
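&lt;br /&gt;
The second mechanism mentioned above, weighting each event type, might be sketched like this. The weights and threshold are arbitrary assumptions chosen for illustration:&lt;br /&gt;

```python
# One possible local decision rule: weighted sum of observed event kinds.

WEIGHTS = {"ddos": -100, "malware": -100, "spam": -5, "good-transfer": 1}

def decide(events, threshold=0):
    """True: connect; False: refuse. Unknown event kinds count as 0."""
    score = sum(WEIGHTS.get(kind, 0) for kind in events)
    return score >= threshold

print(decide(["good-transfer"] * 10))             # True
print(decide(["good-transfer"] * 10 + ["ddos"]))  # False
```

Note that a single severe event (here, a DDoS) outweighs a long history of good behaviour, which matches the earlier point that significant negative reputation should be retained and taken seriously.&lt;br /&gt;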
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
Quite possibly. Where adequate emergent information is not yet available, imposed rules and existing infrastructure can stand in for it; the system can then be updated incrementally until a full-fledged emergent reputation system exists. Phasing this in will rely on companies deciding that it is in their own best interests to run such a system locally; individuals will then have to decide to adopt the gossip-based solution, and eventually an emergent, cohesive system would appear.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[A] Reputation Estimation and Query in Peer-to-Peer Networks. http://www.chennaisunday.com/ieee%202010/Reputation%20Estimation%20and%20Query%20in%20Peer-to-Peer%20Networks.pdf&lt;br /&gt;
&lt;br /&gt;
[B] http://delivery.acm.org/10.1145/550000/544809/p294-yu.pdf&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9197</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9197"/>
		<updated>2011-04-10T16:00:02Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How do we make decisions based on reputation? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices that we make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions about others in society. When we listen to a politician on the news, or interact with friends, we are updating the image that we have of the individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, and whether we can relate to the individual. The global opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to execute a task to our liking. The word assumption is important here: with the gathered information we can generate an estimate of their actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between different observers; some may have interacted with the entity in a different context, or had a different level of expectation, compared to others &amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others have a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used in a distributed environment?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, or whether they will hinder our progress or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity &amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science, University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. Looking back at the human analogy, we are - to a fairly high degree - able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There are always outliers who will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour among computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem. Do we maintain records based on a fixed set of imposed rules? Or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system&amp;lt;ref name=&amp;quot;javapolicy&amp;quot;&amp;gt;Default Policy Implementation and Policy File Syntax. Oracle. http://download.oracle.com/javase/1.3/docs/guide/security/PolicyFiles.html [March 7, 2011]&amp;lt;/ref&amp;gt;. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;. Otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergence-based reputation system. They provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize toward negative reviews - eventually leaving the application a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large distributed systems. For a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is one that utilizes a numerical scale; such systems are known as EigenTrust systems&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. In essence, they store and aggregate data in numerical form. These values are easy to compare and, because primitive data types can be used, they require very little storage space. Despite these attractive advantages, such a system has significant drawbacks. Firstly, information is typically lost in the abstraction process: concrete data is acquired and then reduced to a minimal form, and once this conversion is done there is little one can do to recover the concrete data it was generated from. In other words, the abstraction process is irreversible. Likewise, the process can introduce ambiguity. For example, a reputation of 0 might be interpreted as having no reputation history, or as having an average reputation rating of 0. And, because numerical abstraction is irreversible, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
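The ambiguity described above can be made concrete with a short sketch (the names and event encoding are our own illustration, not from any cited system): two entities with very different histories collapse to the same numerical score.&lt;br /&gt;

```python
# Sketch: aggregating rating events (+1 / -1) into a single number
# loses the underlying history. Names and encoding are illustrative.

def aggregate(events):
    """Collapse a list of +1/-1 rating events into one score."""
    return sum(events)

newcomer = []                    # no history at all
controversial = [1, -1, 1, -1]   # active, but evenly split

# Both collapse to 0: "no reputation" and "average reputation"
# become indistinguishable after abstraction.
print(aggregate(newcomer))        # 0
print(aggregate(controversial))   # 0
```

Once the lists are gone, nothing in the two zeros distinguishes the newcomer from the controversial entity - the irreversibility discussed above.&lt;br /&gt;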
&lt;br /&gt;
Another interesting form of reputation was proposed by Shmatikov and Talcott&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They represented reputation as the history of an entity&#039;s behaviour: a set of time-stamped events. The key difference between EigenTrust and their solution is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making their respective decisions. Clearly, some ethical and privacy issues arise from this; we tackle this issue more closely in a following section.&lt;br /&gt;
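A minimal sketch of this event-history representation, with our proposed session extension (the field names are our own assumptions, not Shmatikov and Talcott&#039;s notation):&lt;br /&gt;

```python
# Sketch: reputation as time-stamped events, grouped by the
# computational session they belong to. Field names are illustrative.
from collections import defaultdict

events = [
    {"entity": "A", "session": 1, "time": 100, "action": "served_file"},
    {"entity": "A", "session": 1, "time": 105, "action": "dropped_connection"},
    {"entity": "A", "session": 2, "time": 300, "action": "served_file"},
]

def by_session(events, entity):
    """Group an entity's concrete history by computational session."""
    sessions = defaultdict(list)
    for e in events:
        if e["entity"] == entity:
            sessions[e["session"]].append(e)
    return dict(sessions)

history = by_session(events, "A")
print(sorted(history))   # [1, 2]
```

Because the events stay concrete, a querying entity (or a justice system) can inspect exactly which actions occurred in which session, rather than a single opaque score.&lt;br /&gt;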
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space. This brings us to the problem of how to maintain reputation history, which in a distributed system is crucial to the scalability and success of the entire system. One solution is to use the notion of Dynamic Model-Checking, by Havelund and Rosu&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s Map/Reduce algorithm&amp;lt;ref name=&amp;quot;mapreduce&amp;quot;&amp;gt;Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters. http://labs.google.com/papers/mapreduce.html [March 3, 2011]&amp;lt;/ref&amp;gt;. We generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Significant negative reputation events, such as DDoS attacks, will likely need to be retained indefinitely in case justice systems need sufficient proof of a specific incident. This solution works well: we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the processing time for searching through sets of reputation history items is negligible, then we would not need to implement this type of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
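The append/&amp;quot;reduce&amp;quot; idea can be sketched as follows (the event shapes and the severity rule are our own assumptions, not from the cited work): routine events are merged into counts, while severe events are retained in concrete form.&lt;br /&gt;

```python
# Sketch: append events as they occur; periodically "reduce" the store
# by collapsing routine events into counts, while retaining severe
# events (e.g. DDoS participation) indefinitely for justice systems.

SEVERE = {"ddos", "malware"}   # illustrative severity rule

def append(store, event):
    store.append(event)

def reduce_history(store):
    """Collapse non-severe events into one count per action type."""
    kept, counts = [], {}
    for e in store:
        if e["action"] in SEVERE:
            kept.append(e)   # retained in concrete form
        else:
            counts[e["action"]] = counts.get(e["action"], 0) + 1
    kept.extend({"action": a, "count": n} for a, n in counts.items())
    return kept

store = []
for a in ["served_file", "served_file", "ddos", "served_file"]:
    append(store, {"action": a})
print(reduce_history(store))
```

Three routine events shrink to a single counted record, while the DDoS event survives untouched - the space/evidence trade-off described above.&lt;br /&gt;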
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The dissemination of reputation information is a core concern of reputation systems in general.  &lt;br /&gt;
&lt;br /&gt;
In general, there are three common modes of disseminating information of this type that would need to be supported in order to make a reputation system feasible:  Hierarchy, Publish/Subscribe, and Peer-to-Peer.&lt;br /&gt;
&lt;br /&gt;
In a hierarchy, there are pre-set or elected nodes that are responsible for maintaining an authoritative list. A good example of this in practice is the domain name system (or DNS, for short). &lt;br /&gt;
&lt;br /&gt;
Publish/subscribe is a model of dissemination of information that relies on central repositories, which are then queried by each client when an update is needed.&lt;br /&gt;
&lt;br /&gt;
Finally, peer-to-peer is, perhaps, the newest method of disseminating information. &lt;br /&gt;
&lt;br /&gt;
In application, all of these methods of information dissemination would likely need to be supported in some fashion.&lt;br /&gt;
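Of the three modes, publish/subscribe as described above is the simplest to sketch: a central repository holds the latest data and clients poll it when they need an update. The interfaces below are our own illustration, not a specific protocol.&lt;br /&gt;

```python
# Minimal publish/subscribe sketch: a central repository is updated by
# publishers, and each client fetches from it when an update is needed.

class Repository:
    def __init__(self):
        self._data = {}

    def publish(self, entity, record):
        """Store the latest reputation record for an entity."""
        self._data[entity] = record

    def fetch(self, entity):
        """Client-side poll; None means no record is held."""
        return self._data.get(entity)

repo = Repository()
repo.publish("A", {"score": "trusted"})
print(repo.fetch("A"))   # {'score': 'trusted'}
print(repo.fetch("B"))   # None
```

Hierarchy and peer-to-peer would replace the single repository with a tree of authority nodes or with direct exchange between peers, respectively.&lt;br /&gt;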
&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes some systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying. &lt;br /&gt;
&lt;br /&gt;
But what does a querying system need to address? It needs to be able to request information on demand, and to receive that information quickly and efficiently. Specifically, the system needs to allow any given entity to send out a request for reputation information, and to have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, this paper considers two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as subordinate nodes are concerned, its &#039;views&#039;, or interpretation of the reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers and analyzed to determine whether to connect or not. &lt;br /&gt;
&lt;br /&gt;
The actual process of querying should be fairly simple. A given entity or node in the system needs to decide if it should contact another node in the system. First, it must check its local store of reputation data to see if it already has enough up-to-date information on that node. If it does, it can move toward making a decision, which is discussed later. If, however, the information needed is not already held by the node, it will need to be queried. &lt;br /&gt;
&lt;br /&gt;
This brings us back to the two primary types of reputation systems, hierarchical and distributed. In a hierarchical system the process is incredibly simple: ask your superior node, and wait for a response. The superior node might have enough information on hand to decide, or it might ask its peers or superiors. Either way, the response received from the superior node will be used by the original querying node.&lt;br /&gt;
&lt;br /&gt;
Distributed querying is a little more complex. The querying node will need to decide whom to ask, perhaps asking nodes it trusts if it has been operating in the reputation system for a while, or simply any nearby node. It might ask for just a quick reputation value, or for a snapshot of relevant historical events. In any case, it will use the evidence collected (if any) to ultimately make a decision. In a way, this node is its own authority node. &lt;br /&gt;
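The query flow described in the last few paragraphs - check locally, then defer to a superior node or fall back to peers - can be sketched as follows (node interfaces are hypothetical):&lt;br /&gt;

```python
# Sketch of the querying flow: local store first, then the superior
# node (hierarchical case), then trusted peers (distributed case).
# The stores are plain dicts here purely for illustration.

def query(entity, local, superior=None, peers=()):
    """Return reputation evidence for `entity`, or None if unknown."""
    if entity in local:            # enough up-to-date local data
        return local[entity]
    if superior is not None:       # hierarchical: defer upward
        return superior.get(entity)
    for peer in peers:             # distributed: ask trusted peers
        evidence = peer.get(entity)
        if evidence is not None:
            return evidence
    return None                    # no information anywhere

local = {"B": ["good_transaction"]}
authority = {"C": ["ddos_participant"]}
print(query("B", local, superior=authority))   # ['good_transaction']
print(query("C", local, superior=authority))   # ['ddos_participant']
```

Note how the hierarchical branch treats the superior&#039;s answer as absolute, while the distributed branch keeps asking peers until some evidence turns up.&lt;br /&gt;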
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
Every entity will have its own interpretation of reputation data. There will most likely be a common set of events considered bad in essentially any system, such as one entity participating in a DDoS attack on another, the distribution of malware, and so on. Other events are more abstract and unique to certain groups. Distributing unverifiable claims, for example, might be considered a negative reputation event by a reputable news source, perfectly acceptable by a tabloid, and irrelevant to the average entity representing a single person&#039;s personal computer. Entities will need to decide what is important to them, most likely via a human defining which events are worth taking note of and which are not. It is entirely possible, and likely, that different entities will not record events that other entities would consider noteworthy. It would therefore be beneficial to have multiple people using the same rule set (though a unique rule set is not completely useless, as one can still record personal instances of these events in one&#039;s own history store).&lt;br /&gt;
&lt;br /&gt;
Once an entity has obtained this information, whether via the regular process of dissemination, querying, or witnessing an event firsthand, it needs to make a decision. This is, ultimately, very open-ended and up to each entity. For example, a very simple mechanism would be to only communicate with entities that have no negative reputation events of any kind, and that are viewed only neutrally or positively by other entities. Another would be to ignore other entities&#039; opinions, assign a weight to each type of reputation event, and do a calculation based on the evidence. However, these are only two options among many; there is no need for a standardized process. &lt;br /&gt;
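The second mechanism above - weighting event types and computing a score - can be sketched in a few lines (the weights and threshold are purely illustrative):&lt;br /&gt;

```python
# Sketch: assign a weight to each reputation event type and accept a
# connection only if the weighted sum clears a threshold. Weights are
# illustrative; each entity would choose its own.

WEIGHTS = {"good_transaction": 1.0, "spam": -2.0, "ddos": -100.0}

def decide(events, threshold=0.0):
    """Accept iff the weighted sum of observed events meets the threshold."""
    score = sum(WEIGHTS.get(e, 0.0) for e in events)
    return score >= threshold

print(decide(["good_transaction", "good_transaction", "spam"]))  # True
print(decide(["good_transaction", "ddos"]))                      # False
```

The heavy negative weight on DDoS participation makes such an event effectively disqualifying, matching the intuition that some events should outweigh any amount of good behaviour.&lt;br /&gt;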
&lt;br /&gt;
&amp;lt;Is that the idea?&amp;gt; looks good - maybe wrap up the idea at the end. we&#039;ll see what Trevor has to say.&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;possible we can... we can use imposed rules or existing infrastructure if we don&#039;t have adequate emergent information. This way we can incrementally update the system and eventually we will have a full-fledged emergent reputation system. Hope this helps someone... I don&#039;t quite know enough to write about this.&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=DELETE=&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9180</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9180"/>
		<updated>2011-04-10T00:32:20Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How is reputation queried? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image that we have of the individual or group. It is this image that helps us conclude whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The collective opinion that others hold about us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. The word assumption is important here: with the gathered information, we can only estimate an entity&#039;s future actions; the estimate is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have dealt with the entity in a different context or held a different level of expectation than others &amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used in a distributed environment?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, or whether they will hinder our progress or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity &amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science, University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. Looking back at the human analogy, we are - to a fairly high degree - able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There are always outliers who will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour among computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem. Do we maintain records based on a fixed set of imposed rules? Or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system&amp;lt;ref name=&amp;quot;javapolicy&amp;quot;&amp;gt;Default Policy Implementation and Policy File Syntax. Oracle. http://download.oracle.com/javase/1.3/docs/guide/security/PolicyFiles.html [March 7, 2011]&amp;lt;/ref&amp;gt;. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;. Otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergence-based reputation system. They provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize toward negative reviews - eventually leaving the application a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large distributed systems. For a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is one that utilizes a numerical scale; such systems are known as EigenTrust systems&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. In essence, they store and aggregate data in numerical form. These values are easy to compare and, because primitive data types can be used, they require very little storage space. Despite these attractive advantages, such a system has significant drawbacks. Firstly, information is typically lost in the abstraction process: concrete data is acquired and then reduced to a minimal form, and once this conversion is done there is little one can do to recover the concrete data it was generated from. In other words, the abstraction process is irreversible. Likewise, the process can introduce ambiguity. For example, a reputation of 0 might be interpreted as having no reputation history, or as having an average reputation rating of 0. And, because numerical abstraction is irreversible, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
&lt;br /&gt;
Another interesting form of reputation was proposed by Shmatikov and Talcott&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They represented reputation as the history of an entity&#039;s behaviour: a set of time-stamped events. The key difference between EigenTrust and their solution is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making their respective decisions. Clearly, some ethical and privacy issues arise from this; we tackle this issue more closely in a following section.&lt;br /&gt;
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space. This brings us to the problem of how to maintain reputation history, which in a distributed system is crucial to the scalability and success of the entire system. One solution is to use the notion of Dynamic Model-Checking, by Havelund and Rosu&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s Map/Reduce algorithm&amp;lt;ref name=&amp;quot;mapreduce&amp;quot;&amp;gt;Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters. http://labs.google.com/papers/mapreduce.html [March 3, 2011]&amp;lt;/ref&amp;gt;. We generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Significant negative reputation events, such as DDoS attacks, will likely need to be retained indefinitely in case justice systems need sufficient proof of a specific incident. This solution works well: we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the processing time for searching through sets of reputation history items is negligible, then we would not need to implement this type of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
&lt;br /&gt;
The dissemination of reputation information is a core concern of reputation systems in general.&lt;br /&gt;
&lt;br /&gt;
There are three common modes of disseminating information of this type that would need to be supported in order to make a reputation system feasible: hierarchy, publish/subscribe, and peer-to-peer.&lt;br /&gt;
&lt;br /&gt;
In a hierarchy, pre-set or elected nodes are responsible for maintaining an authoritative record. A good example of this in practice is the domain name system (DNS).&lt;br /&gt;
&lt;br /&gt;
Publish/subscribe is a model of dissemination in which clients register their interest with central repositories and are then sent updates as relevant information is published.&lt;br /&gt;
&lt;br /&gt;
Finally, peer-to-peer is perhaps the newest method of disseminating information: nodes exchange reputation data directly with the peers they interact with.&lt;br /&gt;
&lt;br /&gt;
In practice, all of these methods of information dissemination would likely need to be supported in some fashion.&lt;br /&gt;
&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There must be an established way of requesting, receiving, and finally analyzing the reputation data in order to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system the size of the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all of this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that certain other systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and to receive that information quickly and efficiently. Specifically, the system must allow any given entity to send out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned in this paper, there are two primary layouts for a reputation system: hierarchical and distributed. Both will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system defers to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or interpretation of the reputation data, are absolute. In a distributed, peer-to-peer system, reputation information is acquired from trusted peers and analyzed to determine whether or not to connect.&lt;br /&gt;
&lt;br /&gt;
The actual process of querying should be fairly simple. A given entity or node in the system needs to decide whether it should contact another node in the system. First, it must check its local representation of reputation data to see if it already has enough up-to-date information on that node. If it does, it can move toward making a decision, which is discussed later. If, however, the needed information is not already held by the node, it will have to be queried.&lt;br /&gt;
&lt;br /&gt;
This brings us back to the two primary types of reputation systems, hierarchical and distributed. In a hierarchical system the process is simple: ask your superior node and wait for a response. The superior node might have enough information on hand to decide, or it might ask its peers or superiors. Either way, the response received from the superior node will be used by the original querying node.&lt;br /&gt;
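&lt;br /&gt;
The querying steps just described (check local data for freshness, otherwise defer to the superior node) can be sketched as follows. The cache layout, the freshness window, and the ask_superior callback are all illustrative assumptions, not part of any concrete protocol.&lt;br /&gt;
&lt;br /&gt;
```python
# Hedged sketch of hierarchical querying: serve from fresh local data if
# possible, otherwise defer to the superior (authority) node and cache
# its answer. `ask_superior` stands in for the real transport mechanism.
import time

MAX_AGE = 3600.0   # assumed freshness window, in seconds

def query_reputation(target, local_cache, ask_superior, now=None):
    now = time.time() if now is None else now
    cached = local_cache.get(target)
    if cached is not None:
        score, fetched_at = cached
        if MAX_AGE >= now - fetched_at:   # enough up-to-date local information
            return score
    score = ask_superior(target)          # defer to the authority node
    if score is not None:
        local_cache[target] = (score, now)  # remember the authoritative answer
    return score                          # may be None: no reputation known
```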
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
Possibly. Where adequate emergent information is unavailable, we can fall back on imposed rules or existing infrastructure. In this way the system can be updated incrementally, and eventually we will have a full-fledged emergent reputation system.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9179</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9179"/>
		<updated>2011-04-10T00:31:26Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How is reputation queried? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of certain behavioural actions that they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions on others in society. When we listen to a politician on the news, or interact with a friend, we update the image that we have of the individual or group. It is this image that helps us reach conclusions as to whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The global set of opinions that others hold about us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for assumptions to be made about the level of trust one can have that a particular person, or party in a particular situation, will execute a task to our liking. It is important to note the word assumption. With the gathered information, we are able to generate an estimate of their actions; it is by no means exact. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary widely between different observers. Some may have interacted with the entity in a different context or had a different level of expectation compared to others&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used in a distributed environment?=&lt;br /&gt;
&lt;br /&gt;
Reputation is useful for acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, whether they will hinder our progress, or, worse yet, whether they will halt our progress completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity&amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science, University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are able - to a fairly high degree - to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There are always outliers who will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour among computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the emerge-versus-impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system&amp;lt;ref name=&amp;quot;javapolicy&amp;quot;&amp;gt;Default Policy Implementation and Policy File Syntax. Oracle. http://download.oracle.com/javase/1.3/docs/guide/security/PolicyFiles.html [March 7, 2011]&amp;lt;/ref&amp;gt;. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation systems on eBay. The mentality is that if an application is untrustworthy or of poor quality, the greater public opinion will converge and polarize toward negative opinions, eventually leaving the application a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient: buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large distributed systems. Even if we assume an adequate level of justice, in order for a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In the case where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
We originally intended to include a section on public key infrastructure (PKI), but chose to omit it because it veered too far from our core problem of reputation.&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form utilizes a numerical scale for reputation; such systems are known as EigenTrust systems&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. In essence, they aggregate and store data in a numerical form. These values are easy to compare and, because primitive data types can be used, require very little storage space. Despite these attractive advantages, such a system has some significant drawbacks. Firstly, information is typically lost in the abstraction process: concrete data is acquired and then converted down to a minimal form, and once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction process is irreversible. Likewise, the process can result in ambiguity. For example, a reputation of 0 might be interpreted either as having no reputation history or as having an average reputation rating of 0, and because the abstraction is irreversible, we cannot return to the original concrete data to resolve the ambiguity or better understand the reasons behind the reputation.&lt;br /&gt;
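&lt;br /&gt;
The ambiguity described above can be illustrated with a toy aggregation function. This is not EigenTrust itself, merely a sketch of the information loss inherent in any numeric collapse of a concrete history.&lt;br /&gt;
&lt;br /&gt;
```python
# Toy illustration (not EigenTrust) of why numeric aggregation is ambiguous
# and irreversible: two very different histories collapse to the same scalar,
# and "no history" is indistinguishable from "average rating of 0".
def aggregate(ratings):
    """Collapse a concrete rating history (values in [-1, +1]) to one number."""
    return sum(ratings) / len(ratings) if ratings else 0.0

mixed = [+1, -1, +1, -1]   # volatile history: praise and complaints
silent = []                # never interacted with anyone at all
print(aggregate(mixed), aggregate(silent))   # 0.0 0.0: the ambiguity
```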
&lt;br /&gt;
Another interesting representation of reputation was proposed by Shmatikov and Talcott&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They model an entity&#039;s reputation as its history: a set of time-stamped events. The key difference between EigenTrust and their solution is that data is stored in its concrete form. Additionally, if we extend their solution with the notion of sessions, we can generate a clear view of the related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, some ethical and privacy issues arise from this; we tackle them more closely in a following section.&lt;br /&gt;
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space. This brings us to the problem of how to maintain reputation history, which in a distributed system is crucial to the scalability and success of the entire system. A solution here is the notion of dynamic model-checking, by Havelund and Rosu&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce algorithm&amp;lt;ref name=&amp;quot;mapreduce&amp;quot;&amp;gt;Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html [March 3, 2011]&amp;lt;/ref&amp;gt;. We generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Evidence of significant negative behaviour, such as DDoS attacks, will likely need to be retained indefinitely in case justice systems need sufficient proof of a specific incident. This solution works well: we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue and that the processing time for searching through sets of reputation history items is negligible, then we would not need to implement this type of &amp;quot;reduce&amp;quot; mechanism at all.&lt;br /&gt;
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
&lt;br /&gt;
The dissemination of reputation information is a core concern of reputation systems in general.&lt;br /&gt;
&lt;br /&gt;
There are three common modes of disseminating information of this type that would need to be supported in order to make a reputation system feasible: hierarchy, publish/subscribe, and peer-to-peer.&lt;br /&gt;
&lt;br /&gt;
In a hierarchy, pre-set or elected nodes are responsible for maintaining an authoritative record. A good example of this in practice is the domain name system (DNS).&lt;br /&gt;
&lt;br /&gt;
Publish/subscribe is a model of dissemination in which clients register their interest with central repositories and are then sent updates as relevant information is published.&lt;br /&gt;
&lt;br /&gt;
Finally, peer-to-peer is perhaps the newest method of disseminating information: nodes exchange reputation data directly with the peers they interact with.&lt;br /&gt;
&lt;br /&gt;
In practice, all of these methods of information dissemination would likely need to be supported in some fashion.&lt;br /&gt;
&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There must be an established way of requesting, receiving, and finally analyzing the reputation data in order to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system the size of the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all of this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that certain other systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and to receive that information quickly and efficiently. Specifically, the system must allow any given entity to send out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned in this paper, there are two primary layouts for a reputation system: hierarchical and distributed. Both will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system defers to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or interpretation of the reputation data, are absolute. In a distributed, peer-to-peer system, reputation information is acquired from trusted peers and analyzed to determine whether or not to connect.&lt;br /&gt;
&lt;br /&gt;
The actual process of querying should be fairly simple. A given entity or node in the system needs to decide whether it should contact another node in the system. First, it must check its local representation of reputation data to see if it already has enough up-to-date information on that node. If it does, it can move toward making a decision, which is discussed later. If, however, the needed information is not already held by the node, it will have to be queried.&lt;br /&gt;
&lt;br /&gt;
This brings us back to the two primary types of reputation systems, hierarchical and distributed. In a hierarchical system the process is simple: ask your superior node and wait for a response. The superior node might have enough information on hand to decide, or it might ask its peers or superiors. Either way, the response received from the superior node will be used by the original querying node.&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
Possibly. Where adequate emergent information is unavailable, we can fall back on imposed rules or existing infrastructure. In this way the system can be updated incrementally, and eventually we will have a full-fledged emergent reputation system.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9176</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9176"/>
		<updated>2011-04-09T21:05:48Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How can reputation be used? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of certain behavioural actions that they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions on others in society. When we listen to a politician on the news, or interact with a friend, we update the image that we have of the individual or group. It is this image that helps us reach conclusions as to whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The global set of opinions that others hold about us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for assumptions to be made about the level of trust one can have that a particular person, or party in a particular situation, will execute a task to our liking. It is important to note the word assumption. With the gathered information, we are able to generate an estimate of their actions; it is by no means exact. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary widely between different observers. Some may have interacted with the entity in a different context or had a different level of expectation compared to others&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used in a distributed environment?=&lt;br /&gt;
&lt;br /&gt;
Reputation is useful for acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, whether they will hinder our progress, or, worse yet, whether they will halt our progress completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity&amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science, University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are able - to a fairly high degree - to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There are always outliers who will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour among computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the emerge-versus-impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system&amp;lt;ref name=&amp;quot;javapolicy&amp;quot;&amp;gt;Default Policy Implementation and Policy File Syntax. Oracle. http://download.oracle.com/javase/1.3/docs/guide/security/PolicyFiles.html [March 7, 2011]&amp;lt;/ref&amp;gt;. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation systems on eBay. The mentality is that if an application is untrustworthy or of poor quality, the greater public opinion will converge and polarize toward negative opinions, eventually leaving the application a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed, centralized systems such as eBay, this level of sophistication is sufficient: buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large, distributed systems. Even assuming an adequate level of justice exists, for a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity interacting in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is a numerical scale; systems of this kind are known as EigenTrust systems&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. In essence, they store and aggregate data into a numerical form. These values are easy to compare, and because primitive data types can be used, they require very little storage space. Despite these attractive advantages, such a system has significant drawbacks. First, information is typically lost in the abstraction process: concrete data is acquired and then converted down to a minimal form, and once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction is irreversible. The process can also introduce ambiguity: a reputation of 0, for example, might mean either no reputation history at all or an average reputation rating of 0, and because the abstraction is irreversible, we cannot return to the original concrete data to resolve the difference.&lt;br /&gt;
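A minimal sketch of this kind of numeric aggregation, assuming a plain average as the aggregate, makes the ambiguity of a reputation of 0 concrete:

```python
# Minimal sketch of numeric, EigenTrust-style aggregation, showing both the
# information loss and the ambiguity of 0 described above. The plain-average
# aggregate is an assumption for illustration.
def aggregate(ratings):
    """Collapse a rating history into one number; the history is unrecoverable."""
    if not ratings:
        return 0                      # no history at all
    return sum(ratings) / len(ratings)

print(aggregate([]))                  # 0: an entity with no history
print(aggregate([5, -5, 5, -5]))      # 0.0: a mixed but well-documented history
```

Both calls yield 0, yet the two entities have entirely different histories, and nothing in the stored value lets us tell them apart.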
&lt;br /&gt;
Another interesting representation of reputation was proposed by Shmatikov and Talcott&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They represent an entity&#039;s reputation as its history: a set of time-stamped events. The key difference from EigenTrust is that data is stored in its concrete form. Additionally, if we extend their solution with a notion of sessions, we can generate a clear view of the related actions that make up an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, some ethical and privacy issues arise from this; we tackle this issue more closely in a following section.&lt;br /&gt;
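A time-stamped, session-aware history of this kind can be sketched as follows; the record layout and sample events are assumptions for illustration, not Shmatikov and Talcott's actual formalism.

```python
from collections import defaultdict, namedtuple

# Hypothetical concrete record for a time-stamped, session-aware history;
# the field layout is an assumption for illustration.
Event = namedtuple("Event", "entity session time action")

def sessions_of(history, entity):
    """Group one entity's events by computational session, keeping the
    concrete data rather than collapsing it to a number."""
    by_session = defaultdict(list)
    for ev in history:
        if ev.entity == entity:
            by_session[ev.session].append((ev.time, ev.action))
    return dict(by_session)

history = [
    Event("node-a", "s1", 1, "connect"),
    Event("node-a", "s1", 2, "upload"),
    Event("node-a", "s2", 9, "connect"),
]
print(sessions_of(history, "node-a"))
```

Unlike the numeric form, a querying entity or justice system can inspect exactly which actions occurred, when, and in which session.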
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities, and this &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space. This brings us to the problem of how to maintain reputation history, which in a distributed system is crucial to the scalability and success of the entire system. One solution is the notion of dynamic model-checking, by Havelund and Rosu&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;, a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce algorithm&amp;lt;ref name=&amp;quot;mapreduce&amp;quot;&amp;gt;Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html [March 3, 2011]&amp;lt;/ref&amp;gt;: we generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. Some data, however, will not be eligible to be &amp;quot;reduced&amp;quot;: records of significant negative events, such as DDoS attacks, will likely need to be retained indefinitely in case justice systems require proof of a specific incident. This solution works well because we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If space were never an issue and the processing time for searching through reputation histories were negligible, we would not need such a &amp;quot;reduce&amp;quot; mechanism at all.&lt;br /&gt;
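The append/&quot;reduce&quot; idea can be sketched as follows, assuming a simple event encoding and an invented set of must-retain event kinds; this is an illustration of the compaction idea, not of Havelund and Rosu's algorithm itself.

```python
# Sketch of the append/"reduce" idea: routine events are merged into counts,
# while serious incidents are kept in concrete form for justice systems.
# The event encoding and the RETAIN set are assumptions for illustration.
RETAIN = {"ddos"}  # kinds of events that must never be reduced away

def reduce_history(events):
    summary, kept = {}, []
    for kind in events:
        if kind in RETAIN:
            kept.append(kind)                          # retain concrete evidence
        else:
            summary[kind] = summary.get(kind, 0) + 1   # merge into a count
    return summary, kept

print(reduce_history(["upload", "upload", "ddos", "upload"]))
```

Routine uploads collapse into a single count while the DDoS record survives intact, which is exactly the space-versus-evidence trade-off described above.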
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with. In a system like the Internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity, and even more unreasonable to expect an entity to store all of this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that several other subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, you need to have something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and to receive that information quickly and efficiently. Specifically, the system must allow any given entity to send out a request for reputation information and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
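The requirements above can be sketched as a minimal query round-trip with the no-information case handled; the store layout and the fallback wording are assumptions, not part of any specified design.

```python
# Sketch of a reputation query round-trip; the store layout and the
# "unknown entity" fallback policy are assumptions for illustration.
STORE = {"node-b": ["kept contract", "kept contract"]}

def handle_query(entity):
    """A responder returns whatever history it holds, or None."""
    return STORE.get(entity)

def decide(entity):
    """The querier must cope with the likely case of no information."""
    history = handle_query(entity)
    if history is None:
        return "no data: treat as unknown"
    return "interpret %d recorded events" % len(history)

print(decide("node-b"))
print(decide("node-z"))
```

The key point is the explicit None branch: in a large system, &quot;no reputation data&quot; is the common case and must produce a usable decision, not an error.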
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system, hierarchical and distributed, and the two will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another: any given node defers to an authority, known as its authority node. Most, if not all, reputation information goes through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, are absolute. In a distributed, peer-to-peer system, reputation information is acquired from trusted peers.&lt;br /&gt;
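The contrast between the two layouts can be shown in a small sketch; the node names and topology tables are invented for illustration.

```python
# Illustrative contrast of the two query layouts; the topology is invented.
AUTHORITY = {"leaf-1": "hub"}             # hierarchical: each node's authority node
PEERS = {"leaf-1": ["leaf-2", "leaf-3"]}  # distributed: each node's trusted peers

def query_targets(node, layout):
    """Who a node asks for reputation data under each layout."""
    if layout == "hierarchical":
        return [AUTHORITY[node]]  # defer to the authority; its view is absolute
    return PEERS[node]            # gather views from trusted peers

print(query_targets("leaf-1", "hierarchical"))
print(query_targets("leaf-1", "distributed"))
```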
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;possible we can... we can use imposed rules or existing infrastructure if we don&#039;t have adequate emergent information. This way we can incrementally update the system and eventually we will have a full-fledged emergent reputation system. Hope this helps someone... I don&#039;t quite know enough to write about this.&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=DELETE=&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9174</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9174"/>
		<updated>2011-04-09T21:01:36Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How do we represent reputation? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural actions they take. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image we have of that individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, and whether we can relate to the individual. The global set of opinions that others hold about us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to execute a task to our liking. It is important to note the word assumption: with the gathered information, we can generate an estimate of their actions, but it is by no means exact. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have encountered the entity in a different context or had a different level of expectation than others&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation is useful for understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the product of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity&amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science, University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. Returning to the analogy with humans, we are able - to a fairly high degree - to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing: there are always outliers who oppose the greater society, but eventually the greater community overcomes those outliers and prevents them from being detrimental to society. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the emerge-versus-impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a given subject. Sites such as eBay and YouTube use rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and the internet. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize to negative ratings, eventually leaving the application a non-threat to potential buyers. For trustworthy applications, the result is quite the opposite.&lt;br /&gt;
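As a sketch of the reasoning a user (or a tool on their behalf) might apply to a policy file, the following compares an application&#039;s declared permissions against what its category plausibly needs. The categories and permission names here are invented for illustration and are not Android&#039;s actual permission model.&lt;br /&gt;

```python
# Hypothetical sketch: flag declared permissions that exceed what an
# application category plausibly needs. Names are illustrative only.

# Permissions each (invented) category is expected to request.
EXPECTED = {
    "stopwatch": set(),
    "navigation": {"gps", "internet"},
}

def suspicious_permissions(category, declared):
    """Return the declared permissions not justified by the category."""
    return set(declared) - EXPECTED.get(category, set())
```

Under this sketch, a stop-watch application declaring contact and internet access is flagged on both counts, while a navigation application declaring GPS and internet access is not flagged at all.&lt;br /&gt;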
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For a closed and centralized system such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on the feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in a large distributed system. For a reputation system to be plausible at such a scale, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity interacting in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form utilizes a numerical scale for reputation; these are known as EigenTrust systems&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. In essence, they store and aggregate data in numerical form. The values are easy to compare and, because primitive data types can be used, require very little storage space. Despite these attractive advantages, such a system has significant drawbacks. Firstly, information is typically lost in the abstraction process: concrete data is acquired and then reduced to a minimal form, and once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction is irreversible. The process can also introduce ambiguity: a reputation of 0 might be interpreted as having no reputation history or as having an average reputation rating of 0, and because the abstraction is irreversible, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
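A minimal sketch of this numerical aggregation, and of the ambiguity of a score of 0, assuming ratings of +1/-1 per interaction (our own simplification, not the actual EigenTrust algorithm):&lt;br /&gt;

```python
# Sketch of a numerical (EigenTrust-style) reputation score: ratings are
# collapsed to a single average, which saves space but is irreversible.
# An entity with no history and one whose ratings cancel out both score 0.

def numeric_reputation(ratings):
    """Aggregate ratings (here, +1/-1 per interaction) into one number."""
    if not ratings:
        return 0  # ambiguous: indistinguishable from a true average of 0
    return sum(ratings) / len(ratings)
```

Here numeric_reputation([]) and numeric_reputation([1, -1]) both yield 0, which is exactly the ambiguity described above.&lt;br /&gt;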
&lt;br /&gt;
Another interesting form of reputation was proposed by Shmatikov and Talcott&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They represented reputation as the history of an entity: a set of time-stamped events. The key difference from EigenTrust is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of the related actions that make up an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, some ethical and privacy issues arise from this; we tackle them more closely in a following section.&lt;br /&gt;
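The session-extended event history described above might be sketched as follows; the field names and session identifiers are our own illustrative choices, not part of the cited proposal:&lt;br /&gt;

```python
# Sketch of concrete, event-based reputation history in the spirit of
# Shmatikov and Talcott, extended with a session identifier so that
# related actions can be grouped and inspected.

from collections import namedtuple

Event = namedtuple("Event", ["timestamp", "entity", "session", "action"])

def session_view(history, entity, session):
    """All events an entity produced within one computational session."""
    return [e for e in history if e.entity == entity and e.session == session]
```

A querying entity or justice system can then reconstruct exactly what an entity did in a given session, rather than receiving a single opaque score.&lt;br /&gt;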
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space, which brings us to the problem of how to maintain reputation history; in a distributed system this is crucial to the scalability and success of the entire system. One solution is the notion of Dynamic Model-Checking, by Havelund and Rosu&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;, who devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s Map/Reduce algorithm&amp;lt;ref name=&amp;quot;mapreduce&amp;quot;&amp;gt;Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html [March 3, 2011]&amp;lt;/ref&amp;gt;. We generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Significantly negative reputation events, such as DDoS attacks, will likely need to be retained indefinitely in case justice systems need sufficient proof of a specific incident. This solution works well because we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the processing time for searching through sets of reputation history items is negligible, then we would not need to implement this type of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
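The append/reduce scheme above can be sketched as follows, assuming an illustrative event shape of (entity, action) pairs and an invented set of &amp;quot;severe&amp;quot; actions that are never reduced:&lt;br /&gt;

```python
# Sketch of the append/"reduce" maintenance scheme described above:
# ordinary events are merged into per-entity counts to save space, while
# severe events (e.g. a DDoS incident) are retained verbatim as evidence
# for justice systems. The event shape and severity set are assumptions.

SEVERE = {"ddos", "fraud"}  # illustrative; never reduced

def reduce_history(events):
    """Collapse reducible events into counts; keep severe ones intact."""
    counts = {}    # maps (entity, action) to number of merged occurrences
    retained = []  # concrete records of severe incidents
    for entity, action in events:
        if action in SEVERE:
            retained.append((entity, action))
        else:
            counts[(entity, action)] = counts.get((entity, action), 0) + 1
    return counts, retained
```

Routine events thus cost one counter each no matter how often they recur, while the concrete record of a DDoS incident survives indefinitely.&lt;br /&gt;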
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that certain subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It must be able to request information on demand and receive that information quickly and efficiently. Specifically, the system needs to allow any given entity to send out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that no reputation information exists on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned in this paper, there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to each other. Any given node in the system defers to an authority, known as its authority node. Most, if not all, reputation information goes through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, are absolute. In a distributed, peer-to-peer system, reputation information is acquired from trusted peers.&lt;br /&gt;
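The two query paths might be sketched as follows, with plain dictionaries standing in for the nodes&#039; reputation stores (an illustrative simplification of whatever storage and transport a real system would use):&lt;br /&gt;

```python
# Sketch of query resolution in the two layouts above: a node in the
# hierarchical layout defers to its authority node, whose view is
# absolute; a peer-based node instead polls its trusted peers.

def query_hierarchical(authority_store, target):
    """Return the authority node's view of target, or None if unknown."""
    return authority_store.get(target)  # None models "no information"

def query_peers(trusted_peers, target):
    """Collect whatever reputation data trusted peers hold on target."""
    return [peer[target] for peer in trusted_peers if target in peer]
```

Note that both paths must handle the likely case, discussed above, that no information exists: the authority returns nothing, and the peer poll comes back empty.&lt;br /&gt;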
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;possible we can... we can use imposed rules or existing infrastructure if we don&#039;t have adequate emergent information. This way we can incrementally update the system and eventually we will have a full-fledged emergent reputation system. Hope this helps someone... I don&#039;t quite know enough to write about this.&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=DELETE=&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9173</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9173"/>
		<updated>2011-04-09T20:59:58Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How do we represent reputation? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural actions they take. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of the individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can have that a particular person, or a particular situation, will execute a task to our liking. It is important to note the word assumption: with the gathered information, we are able to generate an estimate of their actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between different observers. Some may have encountered the entity in a different context or had a different level of expectation than others&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity&amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science, University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that entities are expected to follow in certain situations. If we look back to the analogy with humans, we are able - to a fairly high degree - to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing; there will always be outliers who oppose the greater society, but eventually the community will overcome those outliers and prevent them from being detrimental. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. No existing distributed system has an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube use rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and the internet. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize to negative ratings, eventually leaving the application a non-threat to potential buyers. For trustworthy applications, the result is quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For a closed and centralized system such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on the feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in a large distributed system. For a reputation system to be plausible at such a scale, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity interacting in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form utilizes a numerical scale for reputation; these are known as EigenTrust systems&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. In essence, they store and aggregate data in numerical form. The values are easy to compare and, because primitive data types can be used, require very little storage space. Despite these attractive advantages, such a system has significant drawbacks. Firstly, information is typically lost in the abstraction process: concrete data is acquired and then reduced to a minimal form, and once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction is irreversible. The process can also introduce ambiguity: a reputation of 0 might be interpreted as having no reputation history or as having an average reputation rating of 0, and because the abstraction is irreversible, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
&lt;br /&gt;
Another interesting form of reputation was proposed by Shmatikov and Talcott&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. They represented reputation as the history of an entity: a set of time-stamped events. The key difference from EigenTrust is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of the related actions that make up an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, some ethical and privacy issues arise from this; we tackle them more closely in a following section.&lt;br /&gt;
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space, which brings us to the problem of how to maintain reputation history; in a distributed system this is crucial to the scalability and success of the entire system. One solution is the notion of Dynamic Model-Checking, by Havelund and Rosu&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;, who devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s Map/Reduce algorithm&amp;lt;ref name=&amp;quot;mapreduce&amp;quot;&amp;gt;Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html [March 3, 2011]&amp;lt;/ref&amp;gt;. We generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Significantly negative reputation events, such as DDoS attacks, will likely need to be retained indefinitely in case justice systems need sufficient proof of a specific incident. This solution works well because we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the processing time for searching through sets of reputation history items is negligible, then we would not need to implement this type of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that certain subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It must be able to request information on demand and receive that information quickly and efficiently. Specifically, the system needs to allow any given entity to send out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that no reputation information exists on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned in this paper, there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to each other. Any given node in the system defers to an authority, known as its authority node. Most, if not all, reputation information goes through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, are absolute. In a distributed, peer-to-peer system, reputation information is acquired from trusted peers.&lt;br /&gt;
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;possible we can... we can use imposed rules or existing infrastructure if we don&#039;t have adequate emergent information. This way we can incrementally update the system and eventually we will have a full-fledged emergent reputation system. Hope this helps someone... I don&#039;t quite know enough to write about this.&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=DELETE=&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9172</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9172"/>
		<updated>2011-04-09T20:58:56Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How do we maintain reputation? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural actions they take. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of the individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can have that a particular person, or a particular situation, will execute a task to our liking. It is important to note the word assumption: with the gathered information, we are able to generate an estimate of their actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between different observers. Some may have encountered the entity in a different context or had a different level of expectation than others&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity&amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science, University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that entities are expected to follow in certain situations. If we look back to the analogy with humans, we are able - to a fairly high degree - to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing; there will always be outliers who oppose the greater society, but eventually the community will overcome those outliers and prevent them from being detrimental. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities to use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem. Do we maintain records based on a fixed set of imposed rules? Or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube use rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. For mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious about an application needing access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and Internet access. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will merge and polarize toward negative opinions - eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
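As a concrete illustration, a minimal sketch of such a policy file is given below. This is an Android manifest fragment; the package name is hypothetical, while the permission identifiers are the ones Android actually uses for contacts and network access.

```xml
<!-- Hypothetical AndroidManifest.xml fragment: the application must
     declare up front which protected facilities it intends to use. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.stopwatch">
    <!-- A stop-watch requesting these would look suspicious to a user: -->
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <uses-permission android:name="android.permission.INTERNET" />
</manifest>
```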
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large distributed systems. For a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In the case where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is a numerical scale; a well-known example is EigenTrust. In essence, such systems store and aggregate data in numerical form. These values are easy to compare, and because primitive data types can be used, they require very little storage space. Despite these attractive advantages, such systems have significant drawbacks. Firstly, information is typically lost in the abstraction process: concrete data is acquired and reduced to a minimal form, and once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction is irreversible. Likewise, the process can introduce ambiguity. For example, a reputation of 0 might be interpreted as having no reputation history or as having an average reputation rating of 0. And, of course, because the conversion is irreversible, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
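The ambiguity and irreversibility above can be shown in a few lines. This is a hypothetical sketch, not any real EigenTrust implementation; it assumes ratings range from -1 (negative) to +1 (positive) and that an empty history scores 0.

```python
# Hypothetical sketch: numeric (EigenTrust-style) aggregation is lossy.
def aggregate(ratings):
    """Collapse a concrete rating history into a single score."""
    return sum(ratings) / len(ratings) if ratings else 0.0

no_history = aggregate([])                 # no interactions at all
mixed_history = aggregate([1, -1, 1, -1])  # many interactions, average 0

# Both entities present the identical score of 0.0, and neither score
# can be expanded back into the concrete events it was derived from.
assert no_history == mixed_history
```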
&lt;br /&gt;
Another interesting representation of reputation was proposed by Shmatikov and Talcott, who modelled an entity&#039;s reputation as its history: a set of time-stamped events. The key difference between EigenTrust and their solution is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of the related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making their respective decisions. Clearly, some ethical and privacy issues arise from this; we tackle them more closely in a following section.&lt;br /&gt;
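A minimal sketch of such a session-extended event history might look as follows. The record fields and entity/session names are illustrative assumptions, not Shmatikov and Talcott's actual formalism.

```python
# Hypothetical sketch of a time-stamped event history, extended (as
# proposed above) with a session identifier grouping related actions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    entity: str       # who acted
    session: str      # computational session the action belongs to
    timestamp: float  # when it happened (seconds since epoch)
    action: str       # what happened, in concrete form

history = [
    Event("node-17", "s1", 1302300000.0, "contract-fulfilled"),
    Event("node-17", "s1", 1302300042.0, "payment-confirmed"),
    Event("node-17", "s2", 1302391000.0, "connection-refused"),
]

# A querier or justice system can reconstruct exactly what happened
# in one session, because the data was never abstracted away:
session_view = [e.action for e in history if e.session == "s1"]
```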
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space. This brings us to the problem of how to maintain reputation history, which in a distributed system is crucial to the scalability and success of the entire system. One solution is the notion of dynamic model-checking, by Havelund and Rosu&amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;, who devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce algorithm&amp;lt;ref name=&amp;quot;mapreduce&amp;quot;&amp;gt;Dean J. et al. MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html [March 3, 2011]&amp;lt;/ref&amp;gt;. We generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Significant negative reputation, such as a record of DDoS attacks, will likely need to be retained indefinitely in case justice systems need sufficient proof of a specific incident. This solution works well: we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the processing time for searching through sets of reputation history items is negligible, then we would not have to worry about implementing this type of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
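The append/reduce idea above can be sketched as follows. The event names and the set of &amp;quot;non-reducible&amp;quot; event types are hypothetical; the point is only that benign events collapse into counts while severe evidence is kept verbatim.

```python
# Hypothetical append/reduce sketch: benign events are merged into
# counts to save space, while severe events (e.g. a DDoS report) are
# retained in concrete form for possible use by a justice system.
from collections import Counter

SEVERE = {"ddos-attack"}  # assumed set of non-reducible event types

def reduce_history(events):
    """events: list of (timestamp, event_type) tuples."""
    kept, counts = [], Counter()
    for ts, kind in events:
        if kind in SEVERE:
            kept.append((ts, kind))   # retain concrete evidence
        else:
            counts[kind] += 1         # merge eligible data
    return kept, counts

history = [(1.0, "contract-fulfilled"), (2.0, "contract-fulfilled"),
           (3.0, "ddos-attack"), (4.0, "late-response")]
kept, counts = reduce_history(history)
```

After the reduce step, four stored events shrink to one retained event plus two counters, at the cost of losing the timestamps of the benign interactions.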
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the Internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that certain supporting systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, you need to have something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to request information on demand and receive that information quickly and efficiently. Specifically, the system must allow any given entity to send out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
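The request/response cycle and the no-information case can be sketched as below. The in-memory peer store, the node names, and the 2:1 decision threshold are all illustrative assumptions rather than a proposed protocol.

```python
# Hypothetical sketch of the query step: a node asks a peer for
# reputation data on a target and must cope with the (likely) case
# that the peer has no information at all.
peer_store = {
    "node-17": {"fulfilled": 12, "defaulted": 1},
}

def query(target, store=peer_store):
    """Request reputation on `target`; None means 'no information'."""
    return store.get(target)

def decide(target):
    info = query(target)
    if info is None:
        return "no-history"  # fall back to imposed rules / caution
    good, bad = info["fulfilled"], info["defaulted"]
    return "connect" if good > 2 * bad else "refuse"
```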
&lt;br /&gt;
As previously mentioned in this paper, there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers.&lt;br /&gt;
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;possible we can... we can use imposed rules or existing infrastructure if we don&#039;t have adequate emergent information. This way we can incrementally update the system and eventually we will have a full-fledged emergent reputation system. Hope this helps someone... I don&#039;t quite know enough to write about this.&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=DELETE=&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9170</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9170"/>
		<updated>2011-04-09T20:55:50Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How can reputation be used? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of the individual or group. It is this image that helps us conclude whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The global opinion that others have of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means of making assumptions about the level of trust one can place in a particular person or situation to execute a task to our liking. The word assumption is important: with the gathered information, we can only estimate an entity&#039;s future actions, and that estimate is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. An individual&#039;s reputation can vary considerably between observers, some of whom may have encountered the entity in a different context or held a different level of expectation than others &amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, whether they will hinder our progress, or, worse yet, whether they will halt our progress completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity &amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;Krukow K. et al. A Logical Framework for Reputation Systems and History-based Access Control. School of Electronics and Computer Science University of Southampton, UK [March 3, 2011]&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. If we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are - to a fairly high degree - able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There are always outliers who will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem. Do we maintain records based on a fixed set of imposed rules? Or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks. Two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. For mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;. Otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious about an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious to a user if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system. They provide a means to rate and review applications similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize to negative views - eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as the example provided, eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. However, to make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large distributed systems. For a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In the case where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is one that utilizes a numerical scale for reputation; EigenTrust is a well-known example of such a system. In essence, these systems store and aggregate data in a numerical form. The values are easy to compare, and because primitive data types can be used, they require very little storage space. Despite these attractive advantages, such a system has significant drawbacks. Firstly, information is typically lost in the abstraction process. Concrete data is acquired and then converted down to a minimal form; once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction process is irreversible. Likewise, the process can result in ambiguity. For example, a reputation of 0 might be interpreted as having no reputation history or as having an average reputation rating of 0. And because the numerical data is irreversible, we cannot return it to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
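&lt;br /&gt;
To make the numerical approach concrete, the following is a minimal sketch of an EigenTrust-style aggregation, assuming pre-normalized local trust values and a simple fixed-point iteration; the matrix values and function names are our own illustration, not taken from the EigenTrust paper.&lt;br /&gt;

```python
# Minimal sketch of EigenTrust-style numeric aggregation (illustrative).
# local[i][j] is peer i's normalized local trust in peer j; rows sum to 1.
local = [
    [0.0, 0.7, 0.3],
    [0.5, 0.0, 0.5],
    [0.9, 0.1, 0.0],
]

def aggregate(local, rounds=20):
    n = len(local)
    t = [1.0 / n] * n  # start from a uniform prior over peers
    for _ in range(rounds):
        # each peer's new score is the trust others place in it,
        # weighted by the trusters' own current scores
        t = [sum(local[i][j] * t[i] for i in range(n)) for j in range(n)]
        total = sum(t)
        t = [x / total for x in t]  # renormalize to a probability vector
    return t

scores = aggregate(local)
```

Note how, by the time scores is computed, the concrete interactions behind each local value are already gone - exactly the irreversibility described above.&lt;br /&gt;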
&lt;br /&gt;
Another interesting form of reputation is one proposed by Shmatikov and Talcott. They represented an entity&#039;s reputation as its history: a set of time-stamped events. The key difference between EigenTrust and their solution is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, some ethical and privacy issues arise from this; we tackle this issue more closely in a following section.&lt;br /&gt;
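&lt;br /&gt;
A rough sketch of what such a time-stamped, session-aware event history might look like follows; the class and event names are our own illustration, not Shmatikov and Talcott&#039;s notation.&lt;br /&gt;

```python
import time

# Sketch of a concrete event history in the spirit of Shmatikov and
# Talcott, extended with sessions. All names are illustrative.
class EventHistory:
    def __init__(self):
        self.sessions = {}  # session id mapped to a list of events

    def record(self, session_id, event_name):
        # store the event in concrete, time-stamped form
        events = self.sessions.setdefault(session_id, [])
        events.append((time.time(), event_name))

    def session_events(self, session_id):
        # all related actions from one computational session
        return list(self.sessions.get(session_id, ()))

h = EventHistory()
h.record(1, "contract_fulfilled")
h.record(1, "payment_on_time")
h.record(2, "contract_broken")
```

Unlike a single numeric score, nothing here is abstracted away: a querying entity can inspect exactly which events, in which session, produced an impression.&lt;br /&gt;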
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space. This brings us to the problem of how to maintain reputation history, which in a distributed system is crucial to the scalability and success of the entire system. One solution is to use the notion of Dynamic Model-Checking, by Havelund and Rosu, who devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s Map/Reduce algorithm. We generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Significant negative reputation, such as evidence of DDoS attacks, will likely need to be retained indefinitely in case justice systems need sufficient proof of a specific incident. This solution works well because we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the processing time for searching through sets of reputation history items is negligible, then we would not have to worry about implementing this type of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
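&lt;br /&gt;
The append/reduce idea can be sketched as follows; the severity flag and the cutoff are hypothetical knobs of our own, and a real eligibility policy would be far more involved.&lt;br /&gt;

```python
# Sketch of a maintenance pass over an event history: benign events
# older than a cutoff are collapsed into per-type counts (the reduce
# step), while severe events are retained verbatim as evidence.
# An event is (timestamp, event_type, severe); names are illustrative.
def reduce_history(events, cutoff):
    kept = []
    counts = {}
    for ts, event_type, severe in events:
        if severe or ts >= cutoff:
            kept.append((ts, event_type, severe))  # keep concrete form
        else:
            counts[event_type] = counts.get(event_type, 0) + 1
    return kept, counts

events = [
    (100, "upload_ok", False),
    (120, "upload_ok", False),
    (150, "ddos_attack", True),   # must survive any reduction
    (900, "upload_ok", False),    # recent, so not yet reduced
]
kept, counts = reduce_history(events, cutoff=500)
```

The counts preserve an aggregate of the reduced events, while severe and recent events keep their full concrete form for any later justice process.&lt;br /&gt;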
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it&#039;s highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that certain other systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, you need to have something worth querying. &lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system needs to be able to handle any given entity sending out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
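&lt;br /&gt;
One minimal way to handle the request/response cycle, including the no-information case, might look like the following; the message shape, threshold, and node names are our assumptions, not a defined protocol.&lt;br /&gt;

```python
# Sketch of reputation query handling. A node answers a query from its
# local store, and the querier falls back to an explicit unknown result
# when no information exists. All names here are illustrative.
NO_DATA = None

def handle_query(store, subject):
    # respond with whatever reputation data we hold, or NO_DATA
    return store.get(subject, NO_DATA)

def decide(response, threshold=0.5):
    if response is NO_DATA:
        return "unknown"   # the querier must handle missing information
    if response >= threshold:
        return "connect"
    return "reject"

store = {"nodeA": 0.9, "nodeB": 0.2}
```

The point of the explicit unknown outcome is that absence of reputation is a distinct case the querying entity must decide on, rather than a score of zero.&lt;br /&gt;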
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical-centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers.&lt;br /&gt;
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;possible we can... we can use imposed rules or existing infrastructure if we don&#039;t have adequate emergent information. This way we can incrementally update the system and eventually we will have a full-fledged emergent reputation system. Hope this helps someone... I don&#039;t quite know enough to write about this.&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=DELETE=&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9169</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9169"/>
		<updated>2011-04-09T20:53:12Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* What systems are currently in place? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image that we have of that individual or group. It is this image that helps us conclude whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The aggregate opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to execute a task to our liking. It is important to note the word assumption. With the gathered information, we are able to generate an estimate of their future actions; it is by no means exact. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between different observers. Some may have interacted with the entity in a different context or held a different level of expectation compared to others &amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity &amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;E. Miller, The Sun, (New York: Academic Press, 2005), 23-5.&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. If we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are - to a fairly high degree - able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There are always outliers who will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem. Do we maintain records based on a fixed set of imposed rules? Or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011]&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks. Two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. For mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;. Otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious about an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious to a user if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system. They provide a means to rate and review applications similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize to negative views - eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as the example provided, eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. However, to make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large distributed systems. For a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In the case where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is one that utilizes a numerical scale for reputation; EigenTrust is a well-known example of such a system. In essence, these systems store and aggregate data in a numerical form. The values are easy to compare, and because primitive data types can be used, they require very little storage space. Despite these attractive advantages, such a system has significant drawbacks. Firstly, information is typically lost in the abstraction process. Concrete data is acquired and then converted down to a minimal form; once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction process is irreversible. Likewise, the process can result in ambiguity. For example, a reputation of 0 might be interpreted as having no reputation history or as having an average reputation rating of 0. And because the numerical data is irreversible, we cannot return it to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
&lt;br /&gt;
Another interesting form of reputation is one proposed by Shmatikov and Talcott. They represented an entity&#039;s reputation as its history: a set of time-stamped events. The key difference between EigenTrust and their solution is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, some ethical and privacy issues arise from this; we tackle this issue more closely in a following section.&lt;br /&gt;
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space. This brings us to the problem of how to maintain reputation history, which in a distributed system is crucial to the scalability and success of the entire system. One solution is to use the notion of Dynamic Model-Checking, by Havelund and Rosu, who devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s Map/Reduce algorithm. We generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Significant negative reputation, such as evidence of DDoS attacks, will likely need to be retained indefinitely in case justice systems need sufficient proof of a specific incident. This solution works well because we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the processing time for searching through sets of reputation history items is negligible, then we would not have to worry about implementing this type of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it&#039;s highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that certain other systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, you need to have something worth querying. &lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system needs to be able to handle any given entity sending out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical-centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers.&lt;br /&gt;
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;possible we can... we can use imposed rules or existing infrastructure if we don&#039;t have adequate emergent information. This way we can incrementally update the system and eventually we will have a full-fledged emergent reputation system. Hope this helps someone... I don&#039;t quite know enough to write about this.&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=DELETE=&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9168</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9168"/>
		<updated>2011-04-09T20:52:43Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* What systems are currently in place? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image that we have of that individual or group. It is this image that helps us conclude whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The aggregate opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to execute a task to our liking. It is important to note the word assumption. With the gathered information, we are able to generate an estimate of their future actions; it is by no means exact. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between different observers. Some may have interacted with the entity in a different context or held a different level of expectation compared to others &amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress - or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity &amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;E. Miller, The Sun, (New York: Academic Press, 2005), 23-5.&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are - to a fairly high degree - able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There are always outliers that will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in decision-making are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube use rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot;&amp;gt;Reputation Management. Wikipedia. http://en.wikipedia.org/wiki/Reputation_management [March 28, 2011].&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks. Two examples include Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. For mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;. Otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious about an application needing access to unnecessary utilities, they can choose not to install the application. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and Internet access. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system. They provide a means to rate and review applications similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will polarize toward negative reviews - eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. However, to make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order&amp;lt;ref name=&amp;quot;ebayreputation&amp;quot; /&amp;gt;. This level of justice is not easily attainable in large distributed systems. In order for a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In the case where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is one that utilizes a numerical scale for reputation; these are known as EigenTrust systems. In essence, they store and aggregate data into a numerical form. These values are easy to compare, and because primitive data types can be used, they require very little storage space. Despite these attractive advantages, such a system has some significant drawbacks. Firstly, information is typically lost in the abstraction process. Concrete data is acquired and then converted down to a minimal form. Once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction process is irreversible. Likewise, the process can result in ambiguity. For example, a reputation of 0 might be interpreted as having no reputation history or as having an average reputation rating of 0. And, as a result of this irreversibility, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
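The lossiness described above can be sketched in a few lines of Python (the function and variable names are our own hypothetical illustration of a numerical scale, not the EigenTrust algorithm itself):

```python
# Illustrative sketch of a purely numerical reputation scale, in the
# spirit of EigenTrust-style systems (names are hypothetical).
# Reputation is the mean of all ratings, with 0 meaning "no history".

def numeric_reputation(ratings):
    """Collapse a rating history into a single comparable number."""
    if not ratings:
        return 0                          # no history at all...
    return sum(ratings) / len(ratings)    # ...or an average that is 0

newcomer = []                 # never interacted with anyone
polarized = [5, -5, 3, -3]    # long, mixed history

# Both histories collapse to the same ambiguous score of 0, and
# neither score can be reversed to recover the events behind it.
```

Here two very different histories become indistinguishable once abstracted, which is exactly the ambiguity of a reputation of 0 noted above.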
&lt;br /&gt;
Another interesting form of reputation is one proposed by Shmatikov and Talcott. They represented reputation as the history of an entity: a set of time-stamped events. The key difference from EigenTrust is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its decision. Clearly, some ethical and privacy issues arise from this; we tackle this issue more closely in a following section.&lt;br /&gt;
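A minimal sketch of such an event-based history, extended with a session identifier as suggested above (the field names and actions are our own assumptions, not Shmatikov and Talcott's notation):

```python
# Sketch of an event-based reputation history: a set of time-stamped
# events, each tagged with a session id so that related actions can
# be grouped.  Field names and actions are illustrative only.
from collections import namedtuple

Event = namedtuple("Event", ["timestamp", "session", "action"])

def events_by_session(history):
    """Group an entity's events, in time order, by computational session."""
    sessions = {}
    for ev in sorted(history, key=lambda e: e.timestamp):
        sessions.setdefault(ev.session, []).append(ev.action)
    return sessions

history = [
    Event(3, "s1", "upload"),
    Event(1, "s1", "connect"),
    Event(2, "s2", "connect"),
]
# The concrete actions survive intact, per session, in time order -
# nothing is collapsed into a single irreversible number.
```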
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space, which brings us to the problem of how to maintain reputation history; in a distributed system this is crucial to the scalability and success of the entire system. A solution here is to use the notion of dynamic model-checking, by Havelund and Rosu. They devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce algorithm: we generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Significant negative reputation, for instance evidence of DDoS attacks, will likely need to be retained indefinitely in case justice systems need sufficient proof of a specific incident. This solution works well because we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the time to search through sets of reputation history items is negligible, then we would not have to implement this type of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
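The append/reduce idea can be sketched as follows (this is our own illustration, not Havelund and Rosu's actual algorithm; the severity policy and event shapes are hypothetical):

```python
# Sketch of history compaction: routine events are merged into counts
# (the "reduce" step), while events flagged as severe are retained
# verbatim as evidence for any later justice-system query.

SEVERE = {"ddos", "fraud"}   # hypothetical severity policy

def reduce_history(events):
    """Compact a list of (kind, detail) events, keeping severe ones whole."""
    summary = {}
    retained = []
    for kind, detail in events:
        if kind in SEVERE:
            retained.append((kind, detail))        # never reduced
        else:
            summary[kind] = summary.get(kind, 0) + 1
    return summary, retained

events = [("login", "t1"), ("login", "t2"), ("ddos", "attack on host A")]
# Two logins shrink to a count of 2; the DDoS record is kept intact.
```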
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the Internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that some supporting systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a querying system need to address? It needs to be able to request information on demand, and to receive that information quickly and efficiently. Specifically, the system needs to handle any given entity sending out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
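These requirements, including the "no information" case, can be sketched as a pair of functions (the message shapes, names, and threshold policy are assumptions for illustration, not a protocol specification):

```python
# Minimal query sketch: a node asks a peer for reputation data on a
# target and must handle the likely case that the peer knows nothing.

def handle_query(local_store, target):
    """Peer side: answer a reputation query, or report 'unknown'."""
    if target in local_store:
        return {"target": target, "events": local_store[target]}
    return {"target": target, "events": None}   # explicit 'no information'

def decide(response, threshold=0):
    """Querier side: connect only if known reputation clears a threshold."""
    events = response["events"]
    if events is None:
        return "unknown"                        # caller picks a fallback policy
    return "connect" if sum(events) >= threshold else "refuse"

store = {"alice": [2, 3]}                       # hypothetical local history
# decide(handle_query(store, "alice")) connects; an unknown target
# yields "unknown" rather than silently counting as reputation 0.
```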
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical-centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers.&lt;br /&gt;
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;possible we can... we can use imposed rules or existing infrastructure if we don&#039;t have adequate emergent information. This way we can incrementally update the system and eventually we will have a full-fledged emergent reputation system. Hope this helps someone... I don&#039;t quite know enough to write about this.&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=DELETE=&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9167</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9167"/>
		<updated>2011-04-09T20:48:35Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* What systems are currently in place? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image that we have of the individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. The word assumption is important: with the gathered information we can only estimate an entity&#039;s future actions, and that estimate is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have encountered the entity in a different context or held a different level of expectation than others &amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress - or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity &amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;E. Miller, The Sun, (New York: Academic Press, 2005), 23-5.&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are - to a fairly high degree - able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There are always outliers that will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in decision-making are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube use rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers.&lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks. Two examples include Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. For mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file&amp;lt;ref name=&amp;quot;android&amp;quot;&amp;gt;Android. Google. http://developer.android.com/index.html [March 28, 2011]&amp;lt;/ref&amp;gt;. Otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious about an application needing access to unnecessary utilities, they can choose not to install the application. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and Internet access. Interestingly, Android and other mobile application frameworks such as iOS&amp;lt;ref name=&amp;quot;ios&amp;quot;&amp;gt;iOS Developer Guide. Apple. http://developer.apple.com/devcenter/ios/index.action [March 28, 2011]&amp;lt;/ref&amp;gt; also use an emergent reputation system. They provide a means to rate and review applications similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will polarize toward negative reviews - eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. However, to make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order. This level of justice is not easily attainable in large distributed systems. In order for a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In the case where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is one that utilizes a numerical scale for reputation; these are known as EigenTrust systems. In essence, they store and aggregate data into a numerical form. These values are easy to compare, and because primitive data types can be used, they require very little storage space. Despite these attractive advantages, such a system has some significant drawbacks. Firstly, information is typically lost in the abstraction process. Concrete data is acquired and then converted down to a minimal form. Once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction process is irreversible. Likewise, the process can result in ambiguity. For example, a reputation of 0 might be interpreted as having no reputation history or as having an average reputation rating of 0. And, as a result of this irreversibility, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
&lt;br /&gt;
Another interesting form of reputation is one proposed by Shmatikov and Talcott. They represented reputation as the history of an entity: a set of time-stamped events. The key difference from EigenTrust is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its decision. Clearly, some ethical and privacy issues arise from this; we tackle this issue more closely in a following section.&lt;br /&gt;
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space, which brings us to the problem of how to maintain reputation history; in a distributed system this is crucial to the scalability and success of the entire system. A solution here is to use the notion of dynamic model-checking, by Havelund and Rosu. They devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce algorithm: we generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Significant negative reputation, for instance evidence of DDoS attacks, will likely need to be retained indefinitely in case justice systems need sufficient proof of a specific incident. This solution works well because we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the time to search through sets of reputation history items is negligible, then we would not have to implement this type of &amp;quot;reduce&amp;quot; mechanism.&lt;br /&gt;
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the Internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that some supporting systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a querying system need to address? It needs to be able to request information on demand, and to receive that information quickly and efficiently. Specifically, the system needs to handle any given entity sending out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical-centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers.&lt;br /&gt;
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;possible we can... we can use imposed rules or existing infrastructure if we don&#039;t have adequate emergent information. This way we can incrementally update the system and eventually we will have a full-fledged emergent reputation system. Hope this helps someone... I don&#039;t quite know enough to write about this.&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=DELETE=&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9166</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9166"/>
		<updated>2011-04-09T20:44:44Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* What is reputation? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image that we have of the individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. The word assumption is important: with the gathered information we can only estimate an entity&#039;s future actions, and that estimate is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have encountered the entity in a different context or held a different level of expectation than others &amp;lt;ref name=&amp;quot;krukow&amp;quot; /&amp;gt;. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical, distributed view, reputation is the product of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity &amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;E. Miller, The Sun, (New York: Academic Press, 2005), 23-5.&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we need a fixed set of rules or norms that entities are expected to follow in certain situations. Returning to the human analogy, we are able, to a fairly high degree, to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing; there will always be outliers who oppose the greater society, but eventually the community overcomes them and prevents them from being detrimental. There is no perfect solution for maintaining social order in reality, and likewise there is no perfect solution for maintaining good behaviour among computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputation of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing system: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube use rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers.&lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of an application in what is known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. On mobile devices, if an application needs to read the GPS location or read/write contact information, this must be stated in the policy file; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary facilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and Internet access. Interestingly, Android and other mobile application frameworks such as iOS also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge on a negative rating, eventually leaving the application a non-threat to potential buyers. For trustworthy applications, the result is quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed, centralized systems such as eBay, this level of sophistication is sufficient: buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form, a numerical scale. This abstraction, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers can still bid with a fair degree of certainty and trust because, if one party is unsatisfied with a transaction, eBay will step in to provide order. This level of justice is not easily attainable in large distributed systems. For a reputation system to be plausible at such a scale, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. Where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is a numerical scale for reputation; EigenTrust-style systems, in essence, store and aggregate data into a numerical form. These values are easy to compare, and because primitive data types can be used, they require very little storage space. Despite these attractive advantages, such a system has significant drawbacks. Firstly, information is typically lost in the abstraction process: concrete data is acquired and then reduced to a minimal form, and once this conversion is done, little can be learned about the concrete data it was generated from. In other words, the abstraction is irreversible. The process can also make the data ambiguous. For example, a reputation of 0 might mean either no reputation history at all or an average rating of 0, and, because the numerical abstraction is irreversible, we cannot return the data to its original concrete form to distinguish the two or to understand the reasons behind the reputation.&lt;br /&gt;
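The ambiguity described above can be illustrated with a few lines of Python. This is a hypothetical sketch of numeric aggregation in the EigenTrust style, not code from any cited system: two entities end up with the same score of 0 even though their histories differ entirely.&lt;br /&gt;

```python
# Hypothetical sketch: naive numeric aggregation collapses a concrete
# rating history into one number, and that number cannot be reversed.

def aggregate(ratings):
    # Average the ratings; an empty history also yields 0.
    if not ratings:
        return 0
    return sum(ratings) / len(ratings)

newcomer = []            # no reputation history at all
mixed = [1, -1, 1, -1]   # long history that averages out to zero

# Both entities look identical after abstraction, and neither score
# can be converted back into the concrete history it came from.
assert aggregate(newcomer) == aggregate(mixed) == 0
```

The point is not the arithmetic but the information loss: once only the scalar is stored, the two cases above are indistinguishable.&lt;br /&gt;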
&lt;br /&gt;
Another interesting form of reputation was proposed by Shmatikov and Talcott. They model reputation as the history of an entity: a set of time-stamped events. The key difference from EigenTrust is that data is stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of the related actions that make up an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, some ethical and privacy issues arise from this; we tackle them more closely in a following section.&lt;br /&gt;
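A minimal sketch of such an event-based representation might look as follows. The record layout and function names are our own invention for illustration, following the idea of time-stamped events extended with a session identifier; they are not drawn from Shmatikov and Talcott&#039;s formalism.&lt;br /&gt;

```python
# Hypothetical sketch of reputation as a set of time-stamped events,
# grouped by session so that related actions can be viewed together.
from collections import defaultdict

def record(history, entity, session, timestamp, action):
    # Append a concrete event; nothing is abstracted away.
    history[entity].append({"session": session, "ts": timestamp, "action": action})

def session_view(history, entity, session):
    # All events of one computational session, in time order.
    events = [e for e in history[entity] if e["session"] == session]
    return sorted(events, key=lambda e: e["ts"])

history = defaultdict(list)
record(history, "node-a", 1, 100, "open-connection")
record(history, "node-a", 1, 105, "transfer-file")
record(history, "node-a", 2, 200, "open-connection")
```

Unlike a numeric score, this history can always be re-examined in full by a querying entity or a justice system.&lt;br /&gt;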
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities, and this &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space. This brings us to the problem of how to maintain reputation history, which in a distributed system is crucial to the scalability and success of the entire system. One solution is the notion of dynamic model checking, by Havelund and Rosu, who devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce: we generate and store sets of events related to particular entities (an append function) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Records of significant negative behaviour, such as DDoS attacks, will likely need to be retained indefinitely in case justice systems require proof of a specific incident. This approach works well because we maintain a sufficient amount of useful concrete information yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue and that the time to search through sets of reputation history items is negligible, we would clearly not need such a &amp;quot;reduce&amp;quot; mechanism at all.&lt;br /&gt;
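The append/reduce idea can be sketched as follows. This is a rough illustration under our own assumptions about the event format; the set of &amp;quot;severe&amp;quot; event types is invented for the example and is not part of Havelund and Rosu&#039;s work.&lt;br /&gt;

```python
# Hypothetical sketch of the append/reduce maintenance scheme: benign
# events are merged into counts to save space, while severe incidents
# (e.g. a recorded DDoS attack) are retained verbatim as proof.

SEVERE = {"ddos-attack"}   # event types that must never be reduced

def reduce_history(events):
    kept, counts = [], {}
    for e in events:
        if e["action"] in SEVERE:
            kept.append(e)               # retain concrete proof indefinitely
        else:
            counts[e["action"]] = counts.get(e["action"], 0) + 1
    # Benign events collapse into aggregate summary records.
    summary = [{"action": a, "count": n} for a, n in sorted(counts.items())]
    return kept + summary

events = [
    {"action": "file-served", "ts": 1},
    {"action": "file-served", "ts": 2},
    {"action": "ddos-attack", "ts": 3},
]
```

Running the reducer over these three events yields two records: the untouched attack event and a single summary for the repeated benign action.&lt;br /&gt;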
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data about another entity that it does not already have. There must be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with. In a system like the Internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity; it is even more unreasonable to expect an entity to store all of this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that some subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying alone is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a querying system need to address? It must be able to request information on demand and receive that information quickly and efficiently. Specifically, the system must allow any given entity to send out a request for reputation information, and have other entities process that request and return a response. There must be a way for an entity to handle the likely event that no reputation information exists about another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, this paper considers two primary layouts for a reputation system, hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system there is a hierarchy of nodes that defer to one another: any given node defers to an authority, known as its authority node. Most, if not all, reputation information goes through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, are absolute. In a distributed, peer-to-peer system, reputation information is acquired from trusted peers.&lt;br /&gt;
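The request/response loop described above, including the likely &amp;quot;no information&amp;quot; case, might be sketched as follows. All names here are hypothetical, and peers are modelled simply as lookup tables rather than network endpoints.&lt;br /&gt;

```python
# Hypothetical sketch of a reputation query: a node asks its peers (or,
# in a hierarchical layout, its authority node) about an unknown entity
# and must cope with receiving no information at all.

def query_reputation(peers, target):
    responses = []
    for peer in peers:
        data = peer.get(target)        # each peer is a dict of known histories
        if data is not None:
            responses.append(data)
    if not responses:
        return None                    # likely case: nobody knows the target
    # Trivial interpretation step: pool whatever the peers returned.
    return [event for r in responses for event in r]

peer_a = {"node-x": [{"action": "file-served"}]}
peer_b = {}                            # this peer has never met node-x
```

A real system would authenticate each response and weight it by trust in the responding peer; the sketch only shows the request, the no-data fallback, and the pooling step.&lt;br /&gt;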
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;possible we can... we can use imposed rules or existing infrastructure if we don&#039;t have adequate emergent information. This way we can incrementally update the system and eventually we will have a full-fledged emergent reputation system. Hope this helps someone... I don&#039;t quite know enough to write about this.&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=DELETE=&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9164</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9164"/>
		<updated>2011-04-09T20:42:51Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How can reputation be used? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behaviour they exhibit. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image that we have of that individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. The word assumption is important: with the gathered information we can only estimate future actions, and the estimate is by no means guaranteed to be accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have encountered the entity in a different context or held a different level of expectation than others. Likewise, some individuals might be persuaded to conform to the opinions of large and powerful groups, whereas others hold crystallized, hard-to-change opinions.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical, distributed view, reputation is the product of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity &amp;lt;ref name=&amp;quot;krukow&amp;quot;&amp;gt;E. Miller, The Sun, (New York: Academic Press, 2005), 23-5.&amp;lt;/ref&amp;gt;. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we need a fixed set of rules or norms that entities are expected to follow in certain situations. Returning to the human analogy, we are able, to a fairly high degree, to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing; there will always be outliers who oppose the greater society, but eventually the community overcomes them and prevents them from being detrimental. There is no perfect solution for maintaining social order in reality, and likewise there is no perfect solution for maintaining good behaviour among computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities to use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers.&lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of an application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation systems provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize toward negative reviews - eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
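As a concrete illustration, the gatekeeping a policy file provides can be sketched as a toy model (the permission names and format here are our own invention, not the real Android or Java syntax):&lt;br /&gt;

```python
# Hypothetical sketch (not the real Android or Java policy format):
# a declared policy checked at deploy time and again at run time.
policy = {"stop-watch": ["vibrate"]}  # permissions each app declares up front

def can_deploy(app):
    return app in policy  # no declared policy, no deployment

def check(app, permission):
    # Any access outside the declared intentions is refused.
    return permission in policy.get(app, [])

print(can_deploy("stop-watch"))        # True
print(check("stop-watch", "vibrate"))  # True
print(check("stop-watch", "contacts")) # False -- never declared, so refused
```

The same declared list that gates access is what gets shown to the user before installation.&lt;br /&gt;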
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient: buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form, a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with a transaction, eBay will step in to provide order. This level of justice is not easily attainable in large distributed systems. For a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In the case where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form utilizes a numerical scale for reputation; EigenTrust-style systems are an example. In essence, they store and aggregate data into a numerical form. These values are easy to compare, and because primitive data types can be used, they require very little storage space. Despite these attractive advantages, such a system has significant drawbacks. Firstly, information is typically lost in the abstraction process: concrete data is acquired and then reduced to a minimal form, and once this conversion is done there is little one can do to recover the concrete data it was generated from. In other words, the abstraction is irreversible. Likewise, the process can result in ambiguity. For example, a reputation of 0 might be interpreted as having no reputation history, or as having an average reputation rating of 0. And, as a consequence of this irreversibility, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
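The ambiguity of a numerical scale can be made concrete with a small sketch (the store and function names are our own, in the spirit of EigenTrust-style aggregation on a -1 to +1 scale):&lt;br /&gt;

```python
# Hypothetical sketch: a numeric reputation store where a score of 0 is
# ambiguous between "no history" and "history averaging to 0".
ratings = {}  # entity id mapped to a list of numeric ratings

def rate(entity, value):
    ratings.setdefault(entity, []).append(value)

def reputation(entity):
    history = ratings.get(entity, [])
    if len(history) == 0:
        return 0.0  # no history at all...
    return sum(history) / len(history)  # ...or an average that happens to be 0

rate("alice", 1.0)
rate("alice", -1.0)           # one good and one bad interaction average out
print(reputation("alice"))    # 0.0
print(reputation("mallory"))  # 0.0 -- a complete stranger looks identical
```

Once the lists are discarded and only the scores kept, the two cases above can no longer be told apart.&lt;br /&gt;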
&lt;br /&gt;
Another interesting form of reputation was proposed by Shmatikov and Talcott. They represented reputation through the history of an entity, stored as a set of time-stamped events. The key difference from EigenTrust-style systems is that data is kept in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of the related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, some ethical and privacy issues arise from this; we tackle them more closely in a following section.&lt;br /&gt;
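A minimal sketch of what such an event-based history might look like (the record fields and names are our own, loosely following Shmatikov and Talcott&#039;s time-stamped events, with the session notion added):&lt;br /&gt;

```python
# Hypothetical sketch: reputation kept as concrete, time-stamped events
# grouped by session rather than collapsed into a single number.
import time

history = []  # list of event records, newest last

def record(entity, session, action, outcome):
    history.append({
        "entity": entity,
        "session": session,
        "time": time.time(),
        "action": action,
        "outcome": outcome,
    })

def events_for(entity, session=None):
    # Unlike a single score, the concrete events can still be inspected,
    # either across all sessions or within one.
    return [e for e in history
            if e["entity"] == entity
            and (session is None or e["session"] == session)]

record("bob", "s1", "file-transfer", "completed")
record("bob", "s2", "file-transfer", "aborted")
print(len(events_for("bob")))        # 2
print(len(events_for("bob", "s2")))  # 1
```

A querying entity can aggregate these however it likes; a justice system can read them verbatim.&lt;br /&gt;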
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space, which brings us to the problem of how to maintain reputation history - in a distributed system this is crucial to the scalability and success of the entire system. One solution is to use the notion of dynamic model checking, from Havelund and Rosu, who devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce: we generate and store sets of events related to particular entities (an append operation) and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be reduced. Records of significant negative behaviour, such as DDoS attacks, will likely need to be retained indefinitely in case justice systems need sufficient proof of a specific incident. This solution works well because we maintain a sufficient amount of useful concrete information, yet still save space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the processing time for searching through sets of reputation history items is negligible, then we would not need this type of reduce mechanism.&lt;br /&gt;
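The append/reduce idea can be sketched as a toy model (the severity labels and names are our own, and this is only loosely in the spirit of the MapReduce-style maintenance described above):&lt;br /&gt;

```python
# Hypothetical sketch: an append-only log with a reduce pass that collapses
# routine events into counts while retaining serious incidents verbatim.
log = []  # (entity, event, severity) tuples; severity "routine" or "serious"

def append(entity, event, severity="routine"):
    log.append((entity, event, severity))

def reduce_log():
    kept, counts = [], {}
    for entity, event, severity in log:
        if severity == "serious":
            kept.append((entity, event, severity))  # keep the full record
        else:
            key = (entity, event)
            counts[key] = counts.get(key, 0) + 1    # collapse to a count
    return kept, counts

append("carol", "ping")
append("carol", "ping")
append("carol", "ddos", "serious")
kept, counts = reduce_log()
print(len(kept))                  # 1 -- the DDoS record survives intact
print(counts[("carol", "ping")])  # 2 -- routine events become a count
```

The reduce pass trades a little concreteness for space, but never on the records a justice system might need.&lt;br /&gt;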
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes some subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying alone is not suited for the gradual distribution of information. In short, for there to be querying of reputation, you need to have something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a querying system need to address? It needs to be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system needs to allow any given entity to send out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that no reputation information exists on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
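These requirements can be sketched as a simple request/response pair, including the no-information case (the names and message shapes are our own invention):&lt;br /&gt;

```python
# Hypothetical sketch: the request/response shape a querying entity might
# use, pooling answers from trusted peers.
def handle_query(local_store, subject):
    # A peer answers a query from whatever it knows locally.
    if subject in local_store:
        return {"subject": subject, "known": True,
                "events": local_store[subject]}
    return {"subject": subject, "known": False, "events": []}

def query_peers(peers, subject):
    # Ask each trusted peer in turn and pool whatever comes back.
    pooled = []
    for store in peers:
        reply = handle_query(store, subject)
        if reply["known"]:
            pooled.extend(reply["events"])
    return pooled  # may be empty: the querier must handle the unknown case

peers = [{"dave": ["paid-on-time"]}, {}]
print(query_peers(peers, "dave"))  # ['paid-on-time']
print(query_peers(peers, "eve"))   # [] -- nobody has information on eve
```

An empty result is itself information: the querier must decide whether to treat an unknown entity as neutral or as untrusted.&lt;br /&gt;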
&lt;br /&gt;
As previously mentioned in this paper, there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another: any given node defers to an authority, known as its authority node. Most, if not all, reputation information goes through this node, and as far as subordinate nodes are concerned, its &#039;views&#039;, or reputation data, are absolute. In a distributed, peer-to-peer system, reputation information is acquired from trusted peers.&lt;br /&gt;
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;possible we can... we can use imposed rules or existing infrastructure if we don&#039;t have adequate emergent information. This way we can incrementally update the system and eventually we will have a full-fledged emergent reputation system. Hope this helps someone... I don&#039;t quite know enough to write about this.&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9162</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9162"/>
		<updated>2011-04-09T20:34:35Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* Can we achieve this through incremental updates? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image we have of that individual or group. It is this image that helps us draw conclusions as to whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. The word assumption is important here: with the gathered information, we can only generate an estimate of an entity&#039;s future actions; it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have interacted with the entity in a different context, or had a different level of expectation, compared to others. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, or whether they will hinder our progress or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are - to a fairly high degree - able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing; there will always be outliers who oppose the greater society, but eventually the community will overcome those outliers and prevent them from being detrimental. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities to use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers.&lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and the internet. Interestingly, Android and other mobile application frameworks such as iOS also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize toward negative reviews - eventually leaving the application a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
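As a concrete illustration of the policy-file mechanism described above, here is a minimal sketch in Python. The permission names and helper functions are our own invention, not the actual Android or Java APIs; the point is only that undeclared capabilities block deployment, while declared-but-unrelated ones look suspicious to the user.

```python
# Hypothetical sketch of a policy-based check, in the spirit of a
# mobile policy file: an app declares its intended permissions up
# front, and deployment is refused if a requested capability is
# missing from the declared policy.

SENSITIVE = {"GPS_LOCATION", "READ_CONTACTS", "WRITE_CONTACTS", "INTERNET"}

def can_deploy(declared_policy, requested_capabilities):
    """Deployment is allowed only if every requested capability
    was stated in the policy file."""
    return set(requested_capabilities) <= set(declared_policy)

def suspicious(app_purpose_capabilities, declared_policy):
    """Permissions declared but unrelated to the app's stated purpose,
    which a user might flag before installing (e.g. a stop-watch
    asking for contacts)."""
    return (set(declared_policy) - set(app_purpose_capabilities)) & SENSITIVE

policy = ["VIBRATE", "READ_CONTACTS", "INTERNET"]  # stop-watch app's policy file
print(can_deploy(policy, ["READ_CONTACTS"]))       # True: declared up front
print(can_deploy(["VIBRATE"], ["GPS_LOCATION"]))   # False: undeclared, refuse
print(sorted(suspicious(["VIBRATE"], policy)))     # ['INTERNET', 'READ_CONTACTS']
```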
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide reputation information that is adequately accurate for their purpose. For closed, centralized systems such as the example provided, eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with a transaction, eBay will step in to provide order. This level of justice is not easily attainable in large distributed systems. Even if we assume an adequate level of justice, for a reputation system to be plausible in such a large system and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is a numerical scale; a well-known example is the EigenTrust system. In essence, such systems store and aggregate data into numerical form. These values are easy to compare, and because primitive data types can be used, they require very little storage space. Despite these lucrative advantages, there are significant drawbacks to such a system. Firstly, information is typically lost in the abstraction process. Concrete data is acquired and then reduced to a minimal form; once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction is irreversible. Likewise, the process can introduce ambiguity. For example, a reputation of 0 might be interpreted either as having no reputation history or as having an average reputation rating of 0. And, as a consequence of this irreversibility, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
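The ambiguity of a purely numerical representation can be shown in a few lines. This is a toy aggregator, not the real EigenTrust computation; it merely demonstrates that a score of 0 conflates an empty history with a polarizing one, and that the original ratings cannot be recovered.

```python
# Minimal sketch (not the actual EigenTrust algorithm) of a numeric
# reputation scheme, illustrating the ambiguity described above:
# a score of 0 is indistinguishable between "no history" and
# "ratings averaging to 0", and the ratings cannot be recovered.

def numeric_reputation(ratings):
    """Aggregate a list of ratings in [-1, +1] into one number."""
    return sum(ratings) / len(ratings) if ratings else 0.0

newcomer = numeric_reputation([])                # no history at all
mixed = numeric_reputation([+1, -1, +1, -1])     # polarizing history
print(newcomer, mixed)  # both 0.0 - the abstraction is irreversible
```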
&lt;br /&gt;
Another interesting representation of reputation was proposed by Shmatikov and Talcott. They treat reputation as encompassing an entity&#039;s history, stored as a set of time-stamped events. The key difference between EigenTrust and their solution is that the data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of related actions corresponding to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, there are some ethical and privacy issues that arise from this; we tackle this issue more closely in a following section.&lt;br /&gt;
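A hedged sketch of the event-based representation, extended with the session notion proposed above. The field and action names here are illustrative; Shmatikov and Talcott do not prescribe these exact structures.

```python
# Event-based reputation history in the spirit of Shmatikov and
# Talcott's proposal, with our added "session" field grouping related
# actions into one computational session.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    timestamp: float   # when the behaviour was observed
    session: str       # groups related actions into one session
    action: str        # e.g. "contract_fulfilled", "contract_violated"

@dataclass
class History:
    entity: str
    events: list = field(default_factory=list)

    def record(self, ts, session, action):
        self.events.append(Event(ts, session, action))

    def session_view(self, session):
        """Concrete, ordered view of one session - the information a
        querying entity or justice system would examine."""
        return sorted((e for e in self.events if e.session == session),
                      key=lambda e: e.timestamp)

h = History("node-42")
h.record(100.0, "s1", "contract_fulfilled")
h.record(105.0, "s1", "contract_violated")
print([e.action for e in h.session_view("s1")])
# ['contract_fulfilled', 'contract_violated']
```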
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space, which brings us to the problem of how to maintain reputation history - crucial in a distributed system to the scalability and success of the whole. One solution is to use the notion of dynamic model-checking, by Havelund and Rosu, who devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce: we generate and store sets of events related to particular entities and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Significant negative reputation, such as evidence of DDoS attacks, will likely need to be retained indefinitely in case justice systems require proof of a specific incident. This solution works well because we retain a sufficient amount of useful concrete information while still saving space by merging and combining certain types of data. If we could assume that space will never be an issue, or that the time to search through sets of reputation history items is negligible, then we would not need such a &amp;quot;reduce&amp;quot; mechanism at all.&lt;br /&gt;
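The reduce mechanism can be sketched as follows. The event labels and the rule that DDoS evidence is kept verbatim are our illustrative assumptions, not the actual Havelund and Rosu algorithm.

```python
# Sketch of the "reduce" step described above: routine events in an
# entity's history are merged into aggregate counts to save space,
# while serious incidents (our illustrative "ddos" marker) are kept
# verbatim as evidence for a justice system.
from collections import Counter

def reduce_history(events, keep_verbatim=frozenset({"ddos"})):
    """events: list of (session, action) tuples.
    Returns (aggregates, retained): counts of reducible actions,
    plus the untouched serious events."""
    retained = [e for e in events if e[1] in keep_verbatim]
    reducible = Counter(a for _, a in events if a not in keep_verbatim)
    return dict(reducible), retained

events = [("s1", "ok"), ("s2", "ok"), ("s3", "ddos"), ("s4", "ok")]
aggregates, retained = reduce_history(events)
print(aggregates)  # {'ok': 3} - three events collapsed into one record
print(retained)    # [('s3', 'ddos')] - kept indefinitely as evidence
```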
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data it does not already have on another entity in the system. There will need to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether or not a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that certain subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, you need to have something worth querying. &lt;br /&gt;
&lt;br /&gt;
But what does a querying system need to address? It needs to be able to request information on demand, and to receive that information quickly and efficiently. Specifically, the system needs to allow any given entity to send out a request for reputation information, and to have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
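A minimal sketch of the request-and-response flow just described, including the case where no peer holds any information. All names are illustrative, not part of the design in this paper.

```python
# Sketch of the query flow: an entity requests reputation on demand,
# peers answer from their local stores, and the requester must handle
# the likely case that nobody has any data on the target.

def query_reputation(target, peers):
    """Ask each peer for its stored events about `target`; peers with
    nothing to say are simply skipped."""
    responses = {}
    for name, store in peers.items():
        events = store.get(target)          # peer processes the request
        if events:                          # only non-empty answers count
            responses[name] = events
    return responses or None                # None = no information anywhere

peers = {
    "peer-a": {"node-1": ["contract_fulfilled"]},
    "peer-b": {"node-2": ["contract_violated"]},
}
print(query_reputation("node-1", peers))  # {'peer-a': ['contract_fulfilled']}
print(query_reputation("node-9", peers))  # None: caller must fall back
```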
&lt;br /&gt;
As previously mentioned in this paper, there are two primary layouts for a reputation system, hierarchical and distributed, and the two will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system defers to an authority, known as its authority node. Most, if not all, reputation information goes through this node, and as far as subordinate nodes are concerned, its &#039;views&#039;, or reputation data, are absolute. In a distributed, peer-to-peer system, reputation information is acquired from trusted peers.&lt;br /&gt;
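The two layouts can be contrasted in a few lines, under our own naming: a hierarchical node defers to its authority node, whose answer is absolute, while a peer-to-peer node gathers and combines the views of its trusted peers itself.

```python
# Illustrative contrast of the two layouts described above.

def lookup_hierarchical(target, authority):
    # the authority node's view is absolute for its subordinates
    return authority.get(target)

def lookup_p2p(target, trusted_peers):
    # collect every trusted peer's view; the node combines them itself
    return [p[target] for p in trusted_peers if target in p]

authority = {"node-1": "good"}
peers = [{"node-1": "good"}, {"node-1": "bad"}, {"node-2": "good"}]
print(lookup_hierarchical("node-1", authority))  # 'good' (absolute)
print(lookup_p2p("node-1", peers))               # ['good', 'bad'] (node decides)
```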
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;possible we can... we can use imposed rules or existing infrastructure if we don&#039;t have adequate emergent information. This way we can incrementally update the system and eventually we will have a full-fledged emergent reputation system. Hope this helps someone... I don&#039;t quite know enough to write about this.&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9160</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9160"/>
		<updated>2011-04-09T20:26:04Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How do we maintain reputation? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image we hold of that individual or group. It is this image that helps us draw conclusions as to whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to execute a task to our liking. It is important to note the word assumption: with the gathered information, we are able to generate an estimate of their actions; it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have encountered the entity in a different context, or had a different level of expectation, compared to others. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized, hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, or whether they will hinder our progress - or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the product of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and difficult to verify. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are able - to a fairly high degree - to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There will always be outliers who oppose the greater society, but eventually the community will overcome those outliers and prevent them from being detrimental to it. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour among computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem. Do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, no existing distributed system has an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers. &lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and the internet. Interestingly, Android and other mobile application frameworks such as iOS also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize toward negative reviews - eventually leaving the application a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with a transaction, eBay will step in to provide order. This level of justice is not easily attainable in large distributed systems. In order for a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is a numerical scale; EigenTrust is a well-known example of this approach. In essence, such systems store and aggregate data into a numerical form. These values are easy to compare and, because primitive data types can be used, they require very little storage space. Despite these attractive advantages, there are significant drawbacks. First, information is typically lost in the abstraction process: concrete data is acquired and then reduced to a minimal form, and once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction process is irreversible. Likewise, the process can produce ambiguity. For example, a reputation of 0 might be interpreted as having no reputation history or as having an average reputation rating of 0. And, because numerical aggregation is irreversible, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
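The ambiguity of a numerical score can be made concrete with a minimal sketch. The aggregation function below is invented for illustration (it is not EigenTrust&#039;s actual computation, which is more involved); it shows how two very different histories collapse to the same value.&lt;br /&gt;

```python
def aggregate(ratings):
    """Collapse a rating history (-1, 0, +1 events) to a single average."""
    return sum(ratings) / len(ratings) if ratings else 0.0

# Two very different histories produce the same score of 0.0:
no_history = aggregate([])                  # entity never interacted at all
mixed_history = aggregate([1, -1, 1, -1])   # equal praise and complaints
assert no_history == mixed_history == 0.0
# The abstraction is irreversible: from 0.0 alone we cannot tell which
# of the two cases produced it.
```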
&lt;br /&gt;
Another interesting representation of reputation was proposed by Shmatikov and Talcott. They modelled an entity&#039;s reputation as its history: a set of time-stamped events. The key difference between EigenTrust and their solution is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, some ethical and privacy issues arise from this; we tackle these more closely in a following section.&lt;br /&gt;
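An event-based history with our proposed session extension might be represented as follows. The field names and events are hypothetical, not Shmatikov and Talcott&#039;s actual formalism; the point is that the concrete events survive and can be grouped per session.&lt;br /&gt;

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """A single time-stamped event attributed to an entity and a session."""
    entity: str
    session: str
    timestamp: float
    action: str

history = [
    Event("node-42", "s1", 100.0, "opened connection"),
    Event("node-42", "s1", 101.5, "completed transfer"),
    Event("node-42", "s2", 250.0, "refused contract"),
]

def session_view(history, entity, session):
    """All actions one entity performed within one computational session."""
    return [e.action for e in sorted(history, key=lambda e: e.timestamp)
            if e.entity == entity and e.session == session]

# The concrete reasons behind a reputation remain recoverable:
assert session_view(history, "node-42", "s1") == ["opened connection",
                                                  "completed transfer"]
```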
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space, which brings us to the problem of how to maintain reputation history; in a distributed system this is crucial to the scalability and success of the entire system. A solution here is the notion of dynamic model checking, by Havelund and Rosu. They devised a way to re-evaluate stored reputation history and efficiently aggregate and combine eligible data. This can be thought of as a &amp;quot;reduce&amp;quot; function in the sense of Google&#039;s MapReduce: we generate and store sets of events related to particular entities and use a reduce function to minimize the storage space required. We realize, however, that some data will not be eligible to be &amp;quot;reduced&amp;quot;. Significant negative reputation, such as evidence of DDoS attacks, will likely need to be retained indefinitely in case justice systems require proof of a specific incident.&lt;br /&gt;
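The compaction step can be sketched as a reduce pass over the event log: routine events collapse into counts, while severe events are kept verbatim as evidence. The event kinds and the severity set are invented for the example; this is a sketch of the idea, not Havelund and Rosu&#039;s algorithm.&lt;br /&gt;

```python
SEVERE = {"ddos"}  # event kinds that must be retained verbatim as evidence

def reduce_history(events):
    """Compact eligible events into counts; keep severe events unchanged."""
    summary = {}
    retained = []
    for kind, detail in events:
        if kind in SEVERE:
            retained.append((kind, detail))      # evidence for justice systems
        else:
            summary[kind] = summary.get(kind, 0) + 1  # lossy but compact
    return summary, retained

events = [("good_trade", "t1"), ("good_trade", "t2"),
          ("ddos", "attack log #7")]
summary, retained = reduce_history(events)
assert summary == {"good_trade": 2}
assert retained == [("ddos", "attack log #7")]
```

Routine positive interactions lose their individual detail, which is acceptable; the severe incident keeps the concrete record a justice system would need.&lt;br /&gt;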
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the Internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes some systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying alone is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there needs to be something worth querying. &lt;br /&gt;
&lt;br /&gt;
But what does a querying system need to address? It needs to be able to request information on demand and receive that information quickly and efficiently. Specifically, the system needs to allow any given entity to send out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that no reputation information exists on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
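A minimal responder for such a query might look like the following sketch. The message shape is invented for illustration; the key point is that &amp;quot;no information&amp;quot; is an explicit, expected answer rather than an error.&lt;br /&gt;

```python
def handle_query(store, subject):
    """Answer a reputation request; signal explicitly when nothing is known."""
    if subject not in store:
        # The likely case: we have never heard of this entity.
        return {"subject": subject, "known": False, "events": []}
    return {"subject": subject, "known": True, "events": store[subject]}

store = {"node-7": ["completed contract", "completed contract"]}
assert handle_query(store, "node-7")["known"] is True
assert handle_query(store, "node-99") == {"subject": "node-99",
                                          "known": False, "events": []}
```

The querying entity then interprets the response itself: an empty answer might mean falling back to a default level of distrust, or asking further peers.&lt;br /&gt;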
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers.&lt;br /&gt;
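The two layouts imply two different query-resolution strategies, sketched below under assumed data structures (the node names and record layout are hypothetical): a hierarchical node walks up its chain of authorities and accepts the first answer as absolute, while a peer-to-peer node collects whatever its trusted peers report.&lt;br /&gt;

```python
def query_hierarchical(node, authority_of, records, subject):
    """Walk up the chain of authority nodes; the first answer is absolute."""
    while node is not None:
        if subject in records.get(node, {}):
            return records[node][subject]
        node = authority_of.get(node)  # defer to this node's authority
    return None  # no node in the chain knows the subject

def query_peers(peers, records, subject):
    """Ask each trusted peer in turn and collect every answer received."""
    return [records[p][subject] for p in peers
            if subject in records.get(p, {})]

records = {"leaf": {}, "root": {"node-9": "trusted"}}
authority_of = {"leaf": "root"}
assert query_hierarchical("leaf", authority_of, records, "node-9") == "trusted"
assert query_peers(["leaf", "root"], records, "node-9") == ["trusted"]
```

In the hierarchical case a single answer suffices; in the peer-to-peer case the querier must still weigh possibly conflicting answers from multiple peers.&lt;br /&gt;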
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9159</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9159"/>
		<updated>2011-04-09T20:14:24Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How do we represent reputation? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of the individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can have that a particular person or system will execute a task to our liking. It is important to note the word assumption: with the gathered information, we are able to generate an estimate of future actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers: some may have encountered the entity in a different context or had a different level of expectation compared to others. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, whether they will hinder our progress, or, worse yet, halt our progress completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are able, to a fairly high degree, to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing; there are always outliers who will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, no existing distributed system has an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube use rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers. &lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and Internet access. Interestingly, Android and other mobile application frameworks such as iOS also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize to negative ratings, eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with a transaction, eBay will step in to provide order. This level of justice is not easily attainable in large distributed systems. In order for a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is a numerical scale; EigenTrust is a well-known example of this approach. In essence, such systems store and aggregate data into a numerical form. These values are easy to compare and, because primitive data types can be used, they require very little storage space. Despite these attractive advantages, there are significant drawbacks. First, information is typically lost in the abstraction process: concrete data is acquired and then reduced to a minimal form, and once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction process is irreversible. Likewise, the process can produce ambiguity. For example, a reputation of 0 might be interpreted as having no reputation history or as having an average reputation rating of 0. And, because numerical aggregation is irreversible, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
&lt;br /&gt;
Another interesting representation of reputation was proposed by Shmatikov and Talcott. They modelled an entity&#039;s reputation as its history: a set of time-stamped events. The key difference between EigenTrust and their solution is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, some ethical and privacy issues arise from this; we tackle these more closely in a following section.&lt;br /&gt;
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can be quite large in terms of actual storage space. This brings us to the problem of how to maintain reputation history, since in a distributed system this is crucial to the scalability and success of the entire system.&lt;br /&gt;
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the Internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes some systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying alone is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there needs to be something worth querying. &lt;br /&gt;
&lt;br /&gt;
But what does a querying system need to address? It needs to be able to request information on demand and receive that information quickly and efficiently. Specifically, the system needs to allow any given entity to send out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that no reputation information exists on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers.&lt;br /&gt;
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9158</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9158"/>
		<updated>2011-04-09T20:11:34Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How do we represent reputation? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of the individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can have that a particular person or system will execute a task to our liking. It is important to note the word assumption: with the gathered information, we are able to generate an estimate of future actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers: some may have encountered the entity in a different context or had a different level of expectation compared to others. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, whether they will hinder our progress, or, worse yet, halt our progress completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are able, to a fairly high degree, to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing; there are always outliers who will oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, no existing distributed system has an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube use rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers. &lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and Internet access. Interestingly, Android and other mobile application frameworks such as iOS also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize to negative ratings, eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with a transaction, eBay will step in to provide order. This level of justice is not easily attainable in large distributed systems. In order for a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
&lt;br /&gt;
Reputation data can be stored in a variety of forms and representations. We start with a summary of previous attempts at representing reputation. A frequently used form is a numerical scale; EigenTrust is a well-known example of this approach. In essence, such systems store and aggregate data into a numerical form. These values are easy to compare and, because primitive data types can be used, they require very little storage space. Despite these attractive advantages, there are significant drawbacks. First, information is typically lost in the abstraction process: concrete data is acquired and then reduced to a minimal form, and once this conversion is done, there is little one can do to recover the concrete data it was generated from. In other words, the abstraction process is irreversible. Likewise, the process can produce ambiguity. For example, a reputation of 0 might be interpreted as having no reputation history or as having an average reputation rating of 0. And, because numerical aggregation is irreversible, we cannot return the data to its original concrete form to better understand the reasons behind the reputation.&lt;br /&gt;
&lt;br /&gt;
Another interesting representation of reputation was proposed by Shmatikov and Talcott. They modelled an entity&#039;s reputation as its history: a set of time-stamped events. The key difference between EigenTrust and their solution is that data can be stored in its concrete form. Additionally, if we modify their solution to allow for the notion of sessions, we can generate a clear view of related actions that correspond to an entity&#039;s computational session. This provides a querying entity or a justice system with crucial information for making its respective decision. Clearly, some ethical and privacy issues arise from this; we tackle these more closely in a following section.&lt;br /&gt;
&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can translate into a large amount of actual storage space. This brings us to the problem of how to maintain reputation history, which in a distributed system is crucial to the scalability and success of the entire system.&lt;br /&gt;
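One possible way to bound this growth is to periodically fold old concrete events into a compact summary while keeping recent events intact. This is a sketch of one such policy under our own assumptions, not a prescription from any of the systems discussed:

```python
def compact_history(events, cutoff):
    """Fold (timestamp, rating) events at or before `cutoff` into a
    compact (count, mean) summary; keep newer events in concrete form.
    Note the trade-off: the folded events lose their concrete detail."""
    old = [r for t, r in events if t <= cutoff]
    recent = [(t, r) for t, r in events if t > cutoff]
    summary = (len(old), sum(old) / len(old)) if old else (0, 0.0)
    return summary, recent

# Three old events are summarized; the recent one survives untouched.
events = [(1, 1.0), (2, -1.0), (3, 1.0), (10, 1.0)]
summary, recent = compact_history(events, cutoff=5)
print(summary, recent)
```

The scheme deliberately reintroduces the abstraction problem for old data only, trading history fidelity for bounded storage.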
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data that it does not already have on another entity in the system. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that some subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying alone is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a querying system need to address? It needs to be able to request information on demand, and to receive that information quickly and efficiently. Specifically, the system needs to allow any given entity to send out a request for reputation information, and to have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that no reputation information exists on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
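The request-handling side of these requirements can be sketched as follows (a minimal illustration under our assumptions; the `handle_query` function and response fields are hypothetical):

```python
def handle_query(store, requester, subject):
    """Sketch of one node answering a reputation query.
    `store` maps a subject id to its list of reputation events;
    `requester` is carried so a real node could authenticate it."""
    events = store.get(subject)
    if events is None:
        # The likely case for an unknown entity: say explicitly that
        # we have no information, rather than inventing a default score.
        return {"subject": subject, "known": False, "events": []}
    return {"subject": subject, "known": True, "events": list(events)}

store = {"alice": ["paid_on_time", "paid_on_time"]}
print(handle_query(store, "bob", "alice"))
print(handle_query(store, "bob", "carol"))
```

Distinguishing &amp;quot;no information&amp;quot; from a neutral score keeps the querying entity free to apply its own policy for strangers.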
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers.&lt;br /&gt;
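The two lookup styles can be contrasted in a short sketch (our own toy model; node and peer structures are assumptions, not part of the paper&#039;s design):

```python
def lookup_hierarchical(node, subject):
    """Hierarchical layout: a node defers to its authority node,
    walking up the chain until some authority has an answer."""
    while node is not None:
        if subject in node["data"]:
            return node["data"][subject]  # the authority's view is absolute
        node = node["parent"]
    return None

def lookup_p2p(peers, subject):
    """Peer-to-peer layout: ask trusted peers and combine their answers."""
    answers = [p["data"][subject] for p in peers if subject in p["data"]]
    return sum(answers) / len(answers) if answers else None

root = {"parent": None, "data": {"alice": 0.9}}
leaf = {"parent": root, "data": {}}
print(lookup_hierarchical(leaf, "alice"))  # the root authority's view
peers = [{"data": {"alice": 0.8}}, {"data": {}}, {"data": {"alice": 0.6}}]
print(lookup_p2p(peers, "alice"))          # mean of the two peer answers
```

In the hierarchical case a single authority&#039;s value is returned as-is; in the peer case the querying node must aggregate possibly conflicting views itself.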
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9157</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9157"/>
		<updated>2011-04-09T19:46:10Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* Generating reputation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image we have of that individual or group. It is this image that helps us draw conclusions about whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The aggregate opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. The word assumption is important here: with the gathered information, we can generate an estimate of an entity&#039;s actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have encountered the entity in a different context, or held a different level of expectation, compared to others. Likewise, some individuals might be falsely persuaded to conform to the opinions of large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation management is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are - to a fairly high degree - able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There will always be outliers who oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem. Do we maintain records based on a fixed set of imposed rules? Or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers.&lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of an application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS also use an emergent reputation system. They provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize to negative reviews - eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
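In the Android flavour of this scheme, the policy file is the application manifest. A minimal sketch (the package name is hypothetical; the permission names are real Android permissions, and the suspicious stop-watch example above would carry the last two):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.stopwatch">
    <!-- Declared intentions: surfaced to the user before installation -->
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <uses-permission android:name="android.permission.INTERNET" />
</manifest>
```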
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with a transaction, eBay will step in to provide order. This level of justice is not easily attainable in large, distributed systems. Even assuming an adequate level of justice exists, for a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we represent reputation?==&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can translate into a large amount of actual storage space. This brings us to the problem of how to maintain reputation history, which in a distributed system is crucial to the scalability and success of the entire system.&lt;br /&gt;
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data that it does not already have on another entity in the system. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that some subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying alone is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a querying system need to address? It needs to be able to request information on demand, and to receive that information quickly and efficiently. Specifically, the system needs to allow any given entity to send out a request for reputation information, and to have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that no reputation information exists on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers.&lt;br /&gt;
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9156</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9156"/>
		<updated>2011-04-09T19:42:33Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How do we maintain reputation? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image we have of that individual or group. It is this image that helps us draw conclusions about whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The aggregate opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. The word assumption is important here: with the gathered information, we can generate an estimate of an entity&#039;s actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have encountered the entity in a different context, or held a different level of expectation, compared to others. Likewise, some individuals might be falsely persuaded to conform to the opinions of large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation management is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are - to a fairly high degree - able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There will always be outliers who oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem. Do we maintain records based on a fixed set of imposed rules? Or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube utilize rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers.&lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of an application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if a user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS also use an emergent reputation system. They provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge and polarize to negative reviews - eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction process, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with a transaction, eBay will step in to provide order. This level of justice is not easily attainable in large, distributed systems. Even assuming an adequate level of justice exists, for a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
As stated earlier, we need to store an adequate level of information about interactions between entities. This &amp;quot;adequate&amp;quot; level can translate into a large amount of actual storage space. This brings us to the problem of how to maintain reputation history, which in a distributed system is crucial to the scalability and success of the entire system.&lt;br /&gt;
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data that it does not already have on another entity in the system. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that some subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying alone is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a querying system need to address? It needs to be able to request information on demand, and to receive that information quickly and efficiently. Specifically, the system needs to allow any given entity to send out a request for reputation information, and to have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that no reputation information exists on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers.&lt;br /&gt;
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9155</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9155"/>
		<updated>2011-04-09T19:18:52Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* What systems are currently in place? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image we have of that individual or group. It is this image that helps us draw conclusions about whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The aggregate opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. The word assumption is important here: with the gathered information, we can generate an estimate of an entity&#039;s actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have encountered the entity in a different context, or held a different level of expectation, compared to others. Likewise, some individuals might be falsely persuaded to conform to the opinions of large and powerful groups, whereas others hold a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in acquiring an understanding of how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a desired task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation management is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are - to a fairly high degree - able to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There will always be outliers who oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to society. There is no perfect solution for maintaining social order in reality, and likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem. Do we maintain records based on a fixed set of imposed rules? Or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based. Peer-based systems rely on emergent reputation, while policy-based systems rely on imposed rules.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube use rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers.&lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if the user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and internet access. Interestingly, Android and other mobile application frameworks such as iOS also use an emergent reputation system: they provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge on negative opinions, eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
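&lt;br /&gt;
The installer-side check described above can be sketched as follows. This is a minimal illustration only; the categories, capability names, and function are hypothetical stand-ins, not the API of any real framework:&lt;br /&gt;

```python
# Sketch of the policy check described above: compare an application's
# declared capabilities against what its category plausibly needs, and
# surface anything suspicious to the user before installation.
# Hypothetical names throughout; real frameworks (e.g. Android) use
# manifest files and system-enforced permission checks instead.

EXPECTED = {
    "stop-watch": set(),
    "navigation": {"gps", "internet"},
}

def suspicious_requests(category, declared):
    """Return declared capabilities the category does not plausibly need."""
    return sorted(set(declared) - EXPECTED.get(category, set()))

# A stop-watch asking for contacts and internet access looks suspicious:
print(suspicious_requests("stop-watch", ["contacts", "internet"]))
```
&lt;br /&gt;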
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order. This level of justice is not easily attainable in large distributed systems. For a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
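&lt;br /&gt;
The abstraction step described above can be made concrete with a small sketch. The log layout and scoring function here are hypothetical; eBay&#039;s actual formula is more involved:&lt;br /&gt;

```python
# Sketch: collapsing an event-based feedback history into a single
# comparable number. The event log keeps the reasons behind each
# rating; the numerical score throws them away.

feedback_log = [
    {"rater": "buyer42", "value": 1, "comment": "fast shipping"},
    {"rater": "buyer77", "value": 1, "comment": "item as described"},
    {"rater": "buyer13", "value": -1, "comment": "never arrived"},
]

def positive_percentage(log):
    """Percentage of non-neutral ratings that are positive."""
    rated = [e for e in log if e["value"] != 0]
    if not rated:
        return None
    positives = sum(1 for e in rated if e["value"] > 0)
    return round(100.0 * positives / len(rated), 1)

print(positive_percentage(feedback_log))  # 66.7 -- the reasons are gone
```
&lt;br /&gt;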
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There will need to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all of this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that some subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited for the gradual distribution of information. In short, for there to be querying of reputation, you need to have something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system needs to be able to handle any given entity sending out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
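&lt;br /&gt;
The request/response cycle described above can be sketched as follows. The structures and names are hypothetical, and a real system would also need to authenticate each message:&lt;br /&gt;

```python
# Sketch of the query cycle: request reputation data on demand, handle
# the likely event that no information exists, and interpret the
# response. Peers are modelled as plain dicts mapping subject to score.

def query_reputation(peers, subject):
    """Ask each peer in turn; return the first answer, or None if nobody knows."""
    for peer in peers:
        answer = peer.get(subject)
        if answer is not None:
            return answer
    return None  # no reputation information on this subject anywhere

peer_a = {"node9": 0.8}
peer_b = {"node3": 0.2}

print(query_reputation([peer_a, peer_b], "node3"))   # 0.2
print(query_reputation([peer_a, peer_b], "node99"))  # None
```
&lt;br /&gt;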
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers.&lt;br /&gt;
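&lt;br /&gt;
The two layouts can be sketched side by side. The data structures and the averaging rule for peers are hypothetical choices, not a prescription:&lt;br /&gt;

```python
# Hierarchical: a node defers to its authority node, whose view is
# absolute for its subordinates. Distributed: a node polls its trusted
# peers and combines their opinions (here, a simple average).

def resolve_hierarchical(authority_view, subject):
    """The authority node's view is taken as-is."""
    return authority_view.get(subject)

def resolve_p2p(trusted_peers, subject):
    """Average the opinions of trusted peers that have one."""
    opinions = [p[subject] for p in trusted_peers if subject in p]
    if not opinions:
        return None
    return sum(opinions) / len(opinions)

authority = {"node7": 0.9}
peers = [{"node7": 1.0}, {"node7": 0.5}, {}]

print(resolve_hierarchical(authority, "node7"))  # 0.9
print(resolve_p2p(peers, "node7"))               # 0.75
```
&lt;br /&gt;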
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9154</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9154"/>
		<updated>2011-04-09T19:13:44Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* What systems are currently in place? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of certain behavioural choices that they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of the individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The overall opinion that others have of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. It is important to note the word assumption. With the gathered information, we are able to generate an estimate of their actions; it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between different observers. Some may have interacted with the entity in a different context or had a different level of expectation compared to others. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others have a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, building reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are able, to a fairly high degree, to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrong-doing. There are always outliers that will oppose the greater society, but eventually the community will overcome those outliers and prevent them from being detrimental. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputation of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Currently, existing distributed systems do not have an ideal reputation system in place. We will discuss two forms of existing systems: peer-based and policy-based.&lt;br /&gt;
&lt;br /&gt;
Peer-based systems are ones in which end-users provide reputation information about a certain subject. Sites such as eBay and YouTube use rating and comment systems. In particular, eBay uses an interaction-based form of reputation to provide information about buyers and sellers.&lt;br /&gt;
&lt;br /&gt;
Policy-based systems can be found in a variety of application frameworks; two examples are Java and Android. These systems require a developer to state the intentions of the application in what&#039;s known as a policy file. The stated intentions are required as a security measure for access to crucial parts of the system. On mobile devices, if an application needs to acquire the GPS location or read/write contact information, this must be stated in the policy file; otherwise, the application cannot be deployed. Furthermore, the items in this policy file are presented to the user, and if the user is suspicious of an application requesting access to unnecessary utilities, they can choose not to install it. For example, a &amp;quot;stop-watch&amp;quot; application might appear extremely suspicious if it requested access to contact information and internet access. In addition, Android and other mobile application frameworks such as iOS provide a means to rate and review applications, similar to the buyer-seller reputation system provided by eBay. The mentality is that if an application is untrustworthy or of poor quality, public opinion will converge on negative ratings, eventually leaving the application as a non-threat to potential buyers. For trustworthy applications, the result would be quite the opposite.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order. This level of justice is not easily attainable in large distributed systems. For a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There will need to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all of this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that some subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited for the gradual distribution of information. In short, for there to be querying of reputation, you need to have something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system needs to be able to handle any given entity sending out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers.&lt;br /&gt;
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9153</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9153"/>
		<updated>2011-04-09T18:54:35Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How can we improve on existing systems? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of certain behavioural choices that they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of the individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The overall opinion that others have of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. It is important to note the word assumption. With the gathered information, we are able to generate an estimate of their actions; it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between different observers. Some may have interacted with the entity in a different context or had a different level of expectation compared to others. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others have a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, building reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are able, to a fairly high degree, to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrong-doing. There are always outliers that will oppose the greater society, but eventually the community will overcome those outliers and prevent them from being detrimental. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputation of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Systems such as eBay use an interaction-based form of reputation to provide information about buyers and sellers. Currently, existing distributed systems do not have an ideal reputation system in place.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction, however, prevents one from truly understanding the reasons behind the values. Buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order. This level of justice is not easily attainable in large distributed systems. For a reputation system to be plausible in such a large system, and for justice systems to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In cases where reputation data fails to protect machines and the individuals behind them, we can fall back on justice systems and provide them with accurate information.&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There will need to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all of this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that some subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited for the gradual distribution of information. In short, for there to be querying of reputation, you need to have something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system needs to be able to handle any given entity sending out a request for reputation information, and have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
&lt;br /&gt;
As previously mentioned, in this paper there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers.&lt;br /&gt;
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9152</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9152"/>
		<updated>2011-04-09T18:50:55Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* What systems are currently in place? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of certain behavioural choices that they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of the individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The overall opinion that others have of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. It is important to note the word assumption. With the gathered information, we are able to generate an estimate of their actions; it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between different observers. Some may have interacted with the entity in a different context or had a different level of expectation compared to others. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others have a crystallized and hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, building reputation is the process of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we will need a fixed set of rules or norms that are expected to be followed in certain situations. If we look back to the analogy with humans, we are able, to a fairly high degree, to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrong-doing. There are always outliers that will oppose the greater society, but eventually the community will overcome those outliers and prevent them from being detrimental. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputation of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Reputation systems are used in a wide array of projects and applications, from e-commerce sites to the web as a whole. Systems such as eBay use an interaction-based form of reputation to provide information about buyers and sellers. However, no existing distributed system has an ideal reputation system in place.&lt;br /&gt;
&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
Existing systems provide an adequate level of accurate reputation information for their purpose. For closed and centralized systems such as eBay, this level of sophistication is sufficient. Buyers are able to favour certain sellers over others based on feedback and ratings left by previous buyers. To make this decision easier, these sites convert the data into a more readable and comparable form: a numerical scale. This abstraction, however, prevents one from truly understanding the reasons behind the values. In the case of eBay, buyers and sellers are able to bid with a fair degree of certainty and trust; if one party is unsatisfied with the transaction, eBay will step in to provide order. This level of justice is not easily attainable in large distributed systems. For a reputation system to be plausible in such a system, and for justice mechanisms to work, we need to store sets of event-based histories that can be attributed to each entity that interacts in the system. In cases where reputation data has failed to protect machines and the individuals behind them, we can then fall back on those justice mechanisms.&lt;br /&gt;
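As a toy illustration of this abstraction step, consider collapsing an event-based history into a single numerical score, the way feedback sites do. The event names and weights below are hypothetical, a minimal sketch and not any real site&#039;s scheme; the point is that two very different histories can compress into similar-looking numbers, losing the reasons behind them.&lt;br /&gt;

```python
# Hypothetical event-based history for two sellers. Event names and
# weights are illustrative only, not any real site's rating formula.
WEIGHTS = {"on_time": 1.0, "late": -0.5, "no_ship": -2.0}

def score(history):
    """Collapse an event history into one number between 0 and 1."""
    if not history:
        return 0.5  # no information: fall back to a neutral prior
    raw = sum(WEIGHTS[e] for e in history) / len(history)
    # squash the raw average (range -2 to 1) into the 0-to-1 scale
    return (raw + 2.0) / 3.0

a = ["on_time"] * 8 + ["no_ship"]   # mostly good, one severe failure
b = ["on_time"] * 5 + ["late"] * 4  # never failed badly, often late
print(round(score(a), 2), round(score(b), 2))
```

Both sellers come out with comparable scores even though their failure modes differ completely, which is exactly the context the numerical abstraction discards.&lt;br /&gt;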
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There will need to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the Internet, it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information.&lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that certain other systems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, there must be something worth querying.&lt;br /&gt;
&lt;br /&gt;
But what does a querying system need to address? It needs to be able to request information on demand and receive that information quickly and efficiently. Specifically, the system needs to allow any given entity to send out a request for reputation information, and to have other entities process that request and return a response. There needs to be a way for an entity to handle the likely event that there is no reputation information on another entity. Finally, the entity needs a way to process and interpret the information it receives.&lt;br /&gt;
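The request/response cycle just described can be sketched as follows. The message fields and the handling of the no-information case are our own assumptions for illustration, not an established protocol.&lt;br /&gt;

```python
# Minimal sketch of a reputation query exchange. The field names
# ("subject", "known", "score") are hypothetical, not a fixed wire format.

def handle_query(store, subject):
    """A peer answers a query from its local reputation store."""
    if subject in store:
        return {"subject": subject, "known": True, "score": store[subject]}
    # The likely case in a large system: no information on this entity.
    return {"subject": subject, "known": False, "score": None}

def interpret(responses):
    """The requester pools responses; unknowns carry no weight."""
    scores = [r["score"] for r in responses if r["known"]]
    if not scores:
        return None  # caller must decide how to treat a total unknown
    return sum(scores) / len(scores)

peers = [{"mallory": 0.2}, {"alice": 0.9}, {"mallory": 0.4}]
replies = [handle_query(p, "mallory") for p in peers]
print(interpret(replies))  # averages the two known reports
```

Note that the requester must explicitly plan for an all-unknown reply set; in a large system that outcome is the norm rather than the exception.&lt;br /&gt;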
&lt;br /&gt;
As previously mentioned in this paper, there are two primary layouts for a reputation system: hierarchical and distributed, both of which will need to interact with each other. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to one another. Any given node in the system will defer to an authority, known as its authority node. Most, if not all, reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers.&lt;br /&gt;
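The two layouts imply two different lookup paths, sketched below. The node structure and the chain to an authority node are our own rendering of the description above, a sketch under stated assumptions rather than a concrete design.&lt;br /&gt;

```python
# Two hypothetical lookup strategies for the layouts described above.

def hierarchical_lookup(node, subject):
    """Climb the chain of authority nodes until one has an answer."""
    while node is not None:
        if subject in node["store"]:
            return node["store"][subject]  # the authority's view is absolute
        node = node["authority"]  # defer upward
    return None  # even the root authority knows nothing

def p2p_lookup(trusted_peers, subject):
    """Ask each trusted peer directly and pool whatever comes back."""
    return [p["store"][subject] for p in trusted_peers if subject in p["store"]]

root = {"store": {"bob": 0.8}, "authority": None}
leaf = {"store": {}, "authority": root}
print(hierarchical_lookup(leaf, "bob"))  # leaf defers to root's view
print(p2p_lookup([{"store": {"bob": 0.7}}, {"store": {}}], "bob"))
```

The hierarchical path yields a single authoritative answer, while the peer-to-peer path yields a set of opinions that the requester must still weigh itself.&lt;br /&gt;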
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9146</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9146"/>
		<updated>2011-04-09T18:04:08Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How is reputation queried? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of certain behavioural actions that they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgment values and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of the individual or group. It is this image we generate that helps us reach conclusions as to whether we like the individual, whether we trust the individual, or whether we can relate to the individual. The collective opinion that others have of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. It is important to note the word assumption: with the gathered information, we can generate an estimate of an entity&#039;s likely actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have encountered the entity in a different context or held a different level of expectation than others. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold a crystallized, hard-to-change opinion.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the product of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system we will need a fixed set of rules or norms that are expected to be followed in certain situations. Looking back to the analogy with humans, we are able - to a fairly high degree - to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There will always be outliers who oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to it. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour among computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Distributed systems such as the web do not have an ideal reputation system in place.&lt;br /&gt;
&lt;br /&gt;
=What is the leading trend?=&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the Internet it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information. &lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that some subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, you need to have something worth querying. &lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system needs to be able to handle any given entity sending out a request for reputation information, and have other entities process that request and send out a response. There needs to be a way for an entity to handle the likely event that there is no data on another entity, and finally the entity needs a way to process the information it receives in return. &lt;br /&gt;
&lt;br /&gt;
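The request-and-response flow described above can be sketched as follows. This is a minimal illustration only; the function names, the dictionary message format, and the score threshold are all our own assumptions, not part of any existing system.&lt;br /&gt;

```python
# Hypothetical sketch of on-demand reputation querying.
# A node asks a peer for reputation data it lacks; the peer answers from
# its local store, and the requester explicitly handles the likely case
# that the peer has no data on the subject.

def handle_query(store, subject):
    """Peer side: answer a reputation query from the local store."""
    record = store.get(subject)
    if record is None:
        return {"subject": subject, "known": False}
    return {"subject": subject, "known": True, "score": record}

def decide(response, threshold=0.5):
    """Requester side: connect only if the reported score clears a
    threshold. Unknown entities are treated cautiously and rejected."""
    if not response["known"]:
        return False
    return response["score"] >= threshold
```

Note that the "no data" case is part of the protocol rather than an error, since in a large system it is the common outcome.&lt;br /&gt;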
As previously mentioned in this paper, there are two primary layouts for a reputation system: hierarchical and distributed. Both will need to interact. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to each other. Any given node in the system will defer to an authority, its authority node. Most if not all reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers, &lt;br /&gt;
&lt;br /&gt;
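The contrast between the two layouts can be sketched in a few lines. This is illustrative only: the aggregation rule for the peer-to-peer case - a plain average over trusted peers - is one assumed choice among many possible ones.&lt;br /&gt;

```python
# Illustrative contrast between the two layouts (assumed names).

def hierarchical_view(authority_data, subject):
    """Hierarchical: the authority node's view is taken as absolute.
    Returns None if even the authority has no data."""
    return authority_data.get(subject)

def distributed_view(peer_views, subject):
    """Peer-to-peer: aggregate the scores reported by trusted peers,
    here by plain averaging. Returns None if no peer knows the subject."""
    scores = [view[subject] for view in peer_views if subject in view]
    if not scores:
        return None
    return sum(scores) / len(scores)
```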
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9145</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9145"/>
		<updated>2011-04-09T18:00:09Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How is reputation queried? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of that individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, and whether we can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. The word assumption is important: with the gathered information we can generate an estimate of future actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have encountered the entity in a different context or held a different level of expectation than others. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold crystallized, hard-to-change opinions.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the product of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system we will need a fixed set of rules or norms that are expected to be followed in certain situations. Looking back to the analogy with humans, we are able - to a fairly high degree - to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There will always be outliers who oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to it. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour among computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Distributed systems such as the web do not have an ideal reputation system in place.&lt;br /&gt;
&lt;br /&gt;
=What is the leading trend?=&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the Internet it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information. &lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that some subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, you need to have something worth querying. &lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system needs to be able to handle any given entity sending out a request for reputation information, and have other entities process that request and send out a response. There needs to be a way for an entity to handle the likely event that there is no data on another entity, and finally the entity needs a way to process the information it receives in return. &lt;br /&gt;
&lt;br /&gt;
As previously mentioned in this paper, there are two primary layouts for a reputation system: hierarchical and distributed. Both will need to interact. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to each other. Any given node in the system will defer to an authority, its authority node. Most if not all reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers, &lt;br /&gt;
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9144</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9144"/>
		<updated>2011-04-09T17:58:27Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How is reputation queried? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of that individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, and whether we can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. The word assumption is important: with the gathered information we can generate an estimate of future actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have encountered the entity in a different context or held a different level of expectation than others. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold crystallized, hard-to-change opinions.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the product of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system we will need a fixed set of rules or norms that are expected to be followed in certain situations. Looking back to the analogy with humans, we are able - to a fairly high degree - to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There will always be outliers who oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to it. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour among computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Distributed systems such as the web do not have an ideal reputation system in place.&lt;br /&gt;
&lt;br /&gt;
=What is the leading trend?=&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
Querying reputation is the problem of how one entity in a reputation system acquires reputation data on another entity that it does not already have. There needs to be an established way of requesting, receiving, and finally analyzing the reputation data to decide whether a connection should be made. This is necessary because, depending on the size of the system, it is highly unlikely that any given entity will know about another entity it has never communicated with before. In a system like the Internet it is unreasonable to expect the regular process of information dissemination to provide every entity with information on every other entity. It is even more unreasonable to expect an entity in the system to be able to store all this information. &lt;br /&gt;
&lt;br /&gt;
In the greater scheme of a reputation system, querying assumes that some subsystems already exist. There needs to be a means of authenticating messages, so as to limit the spread of false information and guarantee the integrity of the system. There needs to be a way of maintaining the history of the system, so that reputation events can be recorded and accessed. There needs to be a means of dissemination, as querying in this sense is not suited to the gradual distribution of information. In short, for there to be querying of reputation, you need to have something worth querying. &lt;br /&gt;
&lt;br /&gt;
But what does a system for querying need to address? It needs to be able to request information on demand, and receive that information quickly and efficiently. Specifically, the system needs to be able to handle any given entity sending out a request for reputation information, and have other entities process that request and send out a response. There needs to be a way for an entity to handle the likely event that there is no data on another entity, and finally the entity needs a way to process the information it receives in return. &lt;br /&gt;
&lt;br /&gt;
As previously mentioned in this paper, there are two primary layouts for a reputation system: hierarchical and distributed. Both will need to interact. In a hierarchical, centralized system, there is a hierarchy of nodes that defer to each other. Any given node in the system will defer to an authority, its authority node. Most if not all reputation information will go through this node, and as far as its subordinate nodes are concerned, its &#039;views&#039;, or reputation data, will be absolute. In a distributed, peer-to-peer system, reputation information will be acquired from trusted peers, &lt;br /&gt;
&lt;br /&gt;
--more coming&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9138</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9138"/>
		<updated>2011-04-09T00:28:20Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* Generating reputation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of that individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, and whether we can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. The word assumption is important: with the gathered information we can generate an estimate of future actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have encountered the entity in a different context or held a different level of expectation than others. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold crystallized, hard-to-change opinions.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the product of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system we will need a fixed set of rules or norms that are expected to be followed in certain situations. Looking back to the analogy with humans, we are able - to a fairly high degree - to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There will always be outliers who oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to it. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour among computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Distributed systems such as the web do not have an ideal reputation system in place.&lt;br /&gt;
&lt;br /&gt;
=What is the leading trend?=&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
==How is reputation queried?==&lt;br /&gt;
&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9137</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9137"/>
		<updated>2011-04-09T00:25:17Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How do we maintain reputation? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we are updating the image that we have of that individual or group. It is this image that helps us decide whether we like the individual, whether we trust the individual, and whether we can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means for making assumptions about the level of trust one can place in a particular person or situation to carry out a task to our liking. The word assumption is important: with the gathered information we can generate an estimate of future actions, but it is by no means accurate. Furthermore, reputation is not a globally accepted view of an entity. In some cases, an individual&#039;s reputation can vary considerably between observers. Some may have encountered the entity in a different context or held a different level of expectation than others. Likewise, some individuals might be falsely persuaded to conform to specific opinions by large and powerful groups, whereas others hold crystallized, hard-to-change opinions.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation can be useful in understanding how congruent one&#039;s own goals are with another&#039;s. If we are to accomplish a task that requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit or whether they will hinder our progress, or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
In a more technical and distributed view, reputation is the product of recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. Reputation might be based on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system we will need a fixed set of rules or norms that are expected to be followed in certain situations. Looking back to the analogy with humans, we are able - to a fairly high degree - to maintain order in some parts of the world by enforcing rules. It is unreasonable to think that we can prevent all wrongdoing. There will always be outliers who oppose the greater society, but eventually the greater community will overcome those outliers and prevent them from being detrimental to it. There is no perfect solution to maintaining social order in reality, and likewise, there is no perfect solution for maintaining good behaviour among computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in a decision-making process are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Distributed systems such as the web do not have an ideal reputation system in place.&lt;br /&gt;
&lt;br /&gt;
=What is the leading trend?=&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
==How do we maintain reputation?==&lt;br /&gt;
&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9136</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9136"/>
		<updated>2011-04-09T00:24:05Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image we hold of that individual or group. It is this image that helps us draw conclusions about whether we like, trust, or can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means of estimating how much trust one can place in a particular person or situation to carry out a task to our liking. The word estimate is important here: from the gathered information we can only approximate an entity&#039;s future actions, never predict them with certainty. Furthermore, reputation is not a globally accepted view of an entity. An individual&#039;s reputation can vary considerably between observers: some may have encountered the entity in a different context, or held different expectations, compared to others. Likewise, some individuals may be persuaded to conform to the opinions of large and powerful groups, whereas others hold crystallized, hard-to-change opinions.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation is useful for understanding how congruent one&#039;s own goals are with another&#039;s. When a desired task requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, or whether they will hinder our progress or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
From a more technical, distributed-systems view, reputation is built by recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. It might be based, for example, on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we need a fixed set of rules or norms that entities are expected to follow in certain situations. Returning to the human analogy, we are able, to a fairly high degree, to maintain order in parts of the world by enforcing rules. It is unreasonable to think we can prevent all wrongdoing: there will always be outliers who oppose the greater society, but eventually the community overcomes those outliers and prevents them from being detrimental. Just as there is no perfect solution for maintaining social order in reality, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in decision-making are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Distributed systems such as the web do not have an ideal reputation system in place.&lt;br /&gt;
&lt;br /&gt;
=What is the leading trend?=&lt;br /&gt;
==How can we improve on existing systems?==&lt;br /&gt;
&lt;br /&gt;
=Our assumptions=&lt;br /&gt;
&amp;lt;Here we can talk about how we originally wanted to have a section on PKI, but changed our minds because it was veering too far from our core problem of reputation&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Generating reputation=&lt;br /&gt;
==How do we gather reputation?==&lt;br /&gt;
==Where do we store reputation?==&lt;br /&gt;
===How do we maintain reputation?===&lt;br /&gt;
==How is reputation disseminated?==&lt;br /&gt;
=Making decisions=&lt;br /&gt;
==How do we make decisions based on reputation?==&lt;br /&gt;
&lt;br /&gt;
=Implementation=&lt;br /&gt;
==Can we achieve this through incremental updates?==&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Why PKI should be omitted&lt;br /&gt;
reputation must be trusted = we get this trust through interactions. We BELIEVE this trust because we assume we have attribution!&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9135</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9135"/>
		<updated>2011-04-09T00:15:48Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How can reputation be used? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image we hold of that individual or group. It is this image that helps us draw conclusions about whether we like, trust, or can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means of estimating how much trust one can place in a particular person or situation to carry out a task to our liking. The word estimate is important here: from the gathered information we can only approximate an entity&#039;s future actions, never predict them with certainty. Furthermore, reputation is not a globally accepted view of an entity. An individual&#039;s reputation can vary considerably between observers: some may have encountered the entity in a different context, or held different expectations, compared to others. Likewise, some individuals may be persuaded to conform to the opinions of large and powerful groups, whereas others hold crystallized, hard-to-change opinions.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation is useful for understanding how congruent one&#039;s own goals are with another&#039;s. When a desired task requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, or whether they will hinder our progress or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
From a more technical, distributed-systems view, reputation is built by recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. It might be based, for example, on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we need a fixed set of rules or norms that entities are expected to follow in certain situations. Returning to the human analogy, we are able, to a fairly high degree, to maintain order in parts of the world by enforcing rules. It is unreasonable to think we can prevent all wrongdoing: there will always be outliers who oppose the greater society, but eventually the community overcomes those outliers and prevents them from being detrimental. Just as there is no perfect solution for maintaining social order in reality, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
Enforcing rules and generating reputations of other entities for use in decision-making are both realistic options. This is known as the Emerge vs. Impose problem: do we maintain records based on a fixed set of imposed rules, or do we build rules as the system emerges and reputations are formed? In our opinion, the answer is both.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Distributed systems such as the web do not have an ideal reputation system in place.&lt;br /&gt;
&lt;br /&gt;
=Introduction to Reputation Systems=&lt;br /&gt;
=Guaranteeing Authenticity=&lt;br /&gt;
=Dissemination=&lt;br /&gt;
=Maintaining History=&lt;br /&gt;
=Querying Reputation=&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
=External Links=&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9134</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9134"/>
		<updated>2011-04-09T00:10:11Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How can reputation be used? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image we hold of that individual or group. It is this image that helps us draw conclusions about whether we like, trust, or can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means of estimating how much trust one can place in a particular person or situation to carry out a task to our liking. The word estimate is important here: from the gathered information we can only approximate an entity&#039;s future actions, never predict them with certainty. Furthermore, reputation is not a globally accepted view of an entity. An individual&#039;s reputation can vary considerably between observers: some may have encountered the entity in a different context, or held different expectations, compared to others. Likewise, some individuals may be persuaded to conform to the opinions of large and powerful groups, whereas others hold crystallized, hard-to-change opinions.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation is useful for understanding how congruent one&#039;s own goals are with another&#039;s. When a desired task requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, or whether they will hinder our progress or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
From a more technical, distributed-systems view, reputation is built by recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. It might be based, for example, on the entity&#039;s past ability to adhere to a mutual contract with another entity. As stated above, the validity of acquired reputation is largely subjective and unknown. Clearly, if we are to achieve an optimal reputation system, we need a fixed set of rules or norms that entities are expected to follow in certain situations. Returning to the human analogy, we are able, to a fairly high degree, to maintain order in parts of the world by enforcing rules. It is unreasonable to think we can prevent all wrongdoing: there will always be outliers who oppose the greater society, but eventually the community overcomes those outliers and prevents them from being detrimental. Just as there is no perfect solution for maintaining social order in reality, there is no perfect solution for maintaining the good behaviour of computational entities.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Distributed systems such as the web do not have an ideal reputation system in place.&lt;br /&gt;
&lt;br /&gt;
=Introduction to Reputation Systems=&lt;br /&gt;
=Guaranteeing Authenticity=&lt;br /&gt;
=Dissemination=&lt;br /&gt;
=Maintaining History=&lt;br /&gt;
=Querying Reputation=&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
=External Links=&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9133</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9133"/>
		<updated>2011-04-08T23:59:08Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How can reputation be used? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image we hold of that individual or group. It is this image that helps us draw conclusions about whether we like, trust, or can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means of estimating how much trust one can place in a particular person or situation to carry out a task to our liking. The word estimate is important here: from the gathered information we can only approximate an entity&#039;s future actions, never predict them with certainty. Furthermore, reputation is not a globally accepted view of an entity. An individual&#039;s reputation can vary considerably between observers: some may have encountered the entity in a different context, or held different expectations, compared to others. Likewise, some individuals may be persuaded to conform to the opinions of large and powerful groups, whereas others hold crystallized, hard-to-change opinions.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation is useful for understanding how congruent one&#039;s own goals are with another&#039;s. When a desired task requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, or whether they will hinder our progress or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
From a more technical, distributed-systems view, reputation is built by recording, aggregating, and distributing information about an entity&#039;s behaviour in distributed applications. It might be based, for example, on the entity&#039;s past ability to adhere to a mutual contract with another entity.&lt;br /&gt;
&lt;br /&gt;
=What systems are currently in place?=&lt;br /&gt;
&lt;br /&gt;
Distributed systems such as the web do not have an ideal reputation system in place.&lt;br /&gt;
&lt;br /&gt;
=Introduction to Reputation Systems=&lt;br /&gt;
=Guaranteeing Authenticity=&lt;br /&gt;
=Dissemination=&lt;br /&gt;
=Maintaining History=&lt;br /&gt;
=Querying Reputation=&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
=External Links=&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9132</id>
		<title>Distributed OS: Winter 2011 Reputation Systems Paper</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011_Reputation_Systems_Paper&amp;diff=9132"/>
		<updated>2011-04-08T23:53:15Z</updated>

		<summary type="html">&lt;p&gt;Mdpless2: /* How can reputation be used? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=What is reputation?=&lt;br /&gt;
&lt;br /&gt;
In the real world, people are generally quite conscious of the behavioural choices they make. These actions are expected to fall within social norms and are scrutinized continuously by the people around us. On a daily basis, individuals build a personal set of judgments and opinions about others in society. When we listen to a politician on the news, or interact with a friend, we update the image we hold of that individual or group. It is this image that helps us draw conclusions about whether we like, trust, or can relate to the individual. The collective opinion that others hold of us is known as reputation.&lt;br /&gt;
&lt;br /&gt;
A reputation system&#039;s main purpose is to provide a means of estimating how much trust one can place in a particular person or situation to carry out a task to our liking. The word estimate is important here: from the gathered information we can only approximate an entity&#039;s future actions, never predict them with certainty. Furthermore, reputation is not a globally accepted view of an entity. An individual&#039;s reputation can vary considerably between observers: some may have encountered the entity in a different context, or held different expectations, compared to others. Likewise, some individuals may be persuaded to conform to the opinions of large and powerful groups, whereas others hold crystallized, hard-to-change opinions.&lt;br /&gt;
&lt;br /&gt;
=How can reputation be used?=&lt;br /&gt;
&lt;br /&gt;
Reputation is useful for understanding how congruent one&#039;s own goals are with another&#039;s. When a desired task requires the cooperation of others, we carefully analyze whether the individuals we choose will be a good fit, or whether they will hinder our progress or, worse yet, halt it completely.&lt;br /&gt;
&lt;br /&gt;
=Introduction to Reputation Systems=&lt;br /&gt;
=Guaranteeing Authenticity=&lt;br /&gt;
=Dissemination=&lt;br /&gt;
=Maintaining History=&lt;br /&gt;
=Querying Reputation=&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
=References=&lt;br /&gt;
=External Links=&lt;/div&gt;</summary>
		<author><name>Mdpless2</name></author>
	</entry>
</feed>