
Managing trust in peer-to-peer networks.

ABSTRACT. The notion of trust is fundamental in open networks for enabling peers to share resources and services. Since trust is related to subjective observers (peers), it is necessary to consider the fuzzy nature of trust when representing, estimating and updating trustworthiness in peer-to-peer networks. In this paper, we present a fuzzy theory-based trust model for trust evaluation, recommendation and reasoning in peer-to-peer networks. Fuzzy theory is an appropriate formal method for handling the fuzziness that constantly arises in trust between peers, and fuzzy logic provides a useful and flexible tool, utilizing fuzzy IF-THEN rules to model the knowledge, experience and criteria of trust reasoning that people use every day.

Categories and Subject Descriptors

C.2.1 [Network Architecture and Design]:

General Terms

Network Architecture, Network Design

Keywords: Peer-to-peer network, Fuzzy theory

1. Introduction

Peer-to-Peer (P2P) systems are currently receiving considerable interest in the world of Internet technologies. Popular P2P communities include file sharing systems (Gnutella, etc.), person-to-person online auction sites (eBay, etc.), and many business-to-business (B2B) services (supply-chain-management networks, etc.). Peer-to-peer networks are networks in which all peers cooperate with each other to perform a critical function in a decentralized manner [1]. Usually there is no centralized control authority or hierarchical organization in P2P networks; all peers are both consumers and providers of resources and share information and services directly without intermediary peers. Compared with centralized systems, P2P networks provide an easy way to aggregate large amounts of resources residing on the edge of the Internet or in ad-hoc networks, with a low cost of system maintenance [2]. In P2P networks, peers are heterogeneous in providing resources and services and do not have the same competence and reliability, so it is necessary to estimate whether a peer is sufficiently trustworthy for sharing resources and services. A peer might be someone we already know, but it might also be someone totally unknown, with whom we have never interacted before. Therefore, how to establish and manage trust in independent peers is becoming a fundamental problem. This paper presents a trust management mechanism that allows peers to represent, estimate and update the trustworthiness of other peers in order to choose proper partners to interact with.

The rest of this paper is organized as follows: section 2 discusses the definition of trust and its fuzzy nature. Section 3 briefly outlines the framework and functionality of our trust model. Sections 4, 5 and 6 introduce our approach to developing a fuzzy theory-based trust model. The simulation experiment design and results are presented in section 7. Finally, we offer some concluding remarks and directions for future work in section 8.

2. Trust in P2P networks

The notion of trust has been discussed in various open systems [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. However, there is no universal agreement on the definition of trust. In this paper, we adopt the following definition [12]:

Trust is a particular level of subjective estimation on particular characteristics or actions of other peers.

In P2P networks, a peer can be regarded as the combination of a human (subjective component) and other objects (rational components, such as host, software, etc.). Trust in peers is related to subjective observers (peers), changes dynamically according to peers' experiences, and cannot be properly analyzed and processed by precise methods.

Trust in entities (peers) is based on faith because the subjective agents are difficult to control [13]. The fact that an entity A trusts another entity B in some respect informally means that A believes that B will behave in a certain way: perform (or not perform) some action or actions in certain specific circumstances [14]. As a complicated concept connected with human or subjective peers, trust has some remarkable characteristics: subjectivity, uncertainty and fuzziness. First, trust is in nature a subjective estimation which to a large extent depends on the observer. Second, the condition of ignorance or uncertainty about other peers' characteristics and behaviors is important in the notion of trust. It is related to the limits of our capability ever to achieve full knowledge of others, including their motives and their responses to changes. Trust is a tentative and intrinsically fragile response to our ignorance, a way of coping with 'the limits of our foresight' [12]. Furthermore, since the fuzziness caused by subjectivity and uncertainty is usually central to human knowledge, trust, as a sort of complicated human knowledge, is fuzzy too. For example, people may not simply answer yes or no to the question of whether to trust a peer, but rather consider to what degree they should trust the peer. In many circumstances, people cannot even be sure whether they should "trust" or "partly trust" a peer. This situation leads us to consider trust as a fuzzy concept and to apply fuzzy theory in trust management to present a formal computational model of trust.

Although several approaches [2, 3, 4, 5, 6, 7, 8, 9, 10, 11] have been proposed to model trust formally, these models are all based on precise methods or logics, and some important issues therefore remain open. First, the fuzzy nature of trust cannot be neglected in P2P systems. Second, a proper formal trust management model should be intuitive, comprehensible and usable for users, as well as suitable for automatic processing.

3. Trust model

In our trust model, a P2P network consists of peers. Peers have the computational power to compute the trustworthiness of other peers and to perform the necessary encryption, decryption, signing and verification of messages to protect sensitive trust-related information.

As in [2], we use a P2P file sharing application as an example to demonstrate our trust model; however, the method is general and can easily be applied to other P2P applications, such as online auctions, e-commerce, or P2P distributed computing.


In P2P file sharing applications, file providers are heterogeneous: some truthfully provide files with the fast download speed and good file quality they declare, but others may be malicious and offer bad service, wasting other peers' time and resources. To distinguish benevolent file providers from malicious ones, we utilize the trust management mechanism shown in Figure 1. In this paper, when a peer provides file sharing services we call it a file provider; otherwise, we simply call it a peer. In P2P networks, a file request starts a search, which usually results in a list of providers for the requested file. The peer then chooses the file provider with the highest trust in the list. If that file provider is trustworthy according to the peer's previous experience, the peer will interact with it (download files); if not, the peer will select another file provider. If the peer has not interacted with a file provider before and lacks the corresponding trustworthiness information, it has to ask other peers (which we call "recommenders") to make recommendations. Before taking recommendations from recommenders, the peer should consider the trustworthiness or reliability of each recommender and make a trust deduction based on all trust-related information. Furthermore, if more than one recommender provides recommendations on the same file provider, the peer needs to combine the several trusts into a single one. After each interaction, the peer evaluates the interaction and updates or revises its trust in the file provider. If the decision to interact was based on other peers' recommendations, the peer also needs to update its trust in each of the recommenders: if a recommender's recommendation is consistent with the peer's evaluation of the interaction, the peer increases its trust in that recommender; otherwise, it decreases it.

Since heterogeneous peers have different preferences and will judge issues according to their own criteria, it is important that peers develop trust in a decentralized way, and each peer should maintain its own trust information database.

In file sharing P2P applications, a peer needs to manage two kinds of trust in another peer: direct trust in a file provider's capability to provide file sharing services, and recommendation trust in a recommender's reliability in providing recommendations about file providers. There are three different types of trust reasoning methods:

* Trust revision,

* Trust deduction, and

* Trust combination

In this paper, we only consider "pure" P2P systems in which all peers are equal and there is no central authority. In a centralized P2P architecture, our trust model can be even more efficient, since all peers always trust the super peer as a recommender and need not re-evaluate their trust in it.

4. Trust vector

Fuzzy theory, introduced by Zadeh, is a superset of conventional (Boolean) set theory and an appropriate formal method for dealing with the fuzziness that constantly arises in trust between subjective peers in P2P networks.

Definition 1: Let U be a nonempty set. A fuzzy subset (or fuzzy set) A in U is characterized by its membership function

μ_A : U → [0, 1],

where μ_A(x) is interpreted as the degree of membership of element x in fuzzy set A for each x ∈ U. Fuzzy set A is a set of ordered pairs consisting of each element x and its degree of membership, and A is

expressed as {μ_A(x)/x | x ∈ U}.
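To make Definition 1 concrete, here is a minimal Python sketch of a fuzzy set; the universe of download speeds and the membership curve are hypothetical illustrations, not values from the paper:

```python
# A fuzzy set A over a universe U is just a membership function mu_A: U -> [0, 1].
# Hypothetical universe: download speeds in KB/s.
U = [10, 40, 70, 100]

def mu_fast(x):
    """Degree to which speed x belongs to the hypothetical fuzzy set 'fast'."""
    return max(0.0, min(1.0, (x - 10) / 60.0))

# The fuzzy set as ordered pairs, i.e. {mu_A(x)/x | x in U}.
A = {x: mu_fast(x) for x in U}
print(A)  # {10: 0.0, 40: 0.5, 70: 1.0, 100: 1.0}
```

A conventional (Boolean) set is the special case in which every membership degree is exactly 0 or 1.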

In [15] L.A. Zadeh introduced the concept of the linguistic variable, which is very useful when the issues discussed are too complex for analysis by conventional quantitative techniques, or when the available sources of information are interpreted qualitatively, inexactly, or uncertainly. A linguistic variable is characterized by a quintuple (x, S(x), U_x, G_x, M_x) in which x is the name of the variable; S(x) is the term set of x, that is, the set of linguistic terms of x, with each term being a fuzzy set defined on U_x; U_x is the universe of discourse of x; G_x is the syntactic rule for generating the linguistic terms of x; and M_x is the semantic rule associating each linguistic term with its meaning.

In this paper, we define linguistic variable Trust or T as shown in Figure 2.


Here U_T = {1, 2, 3} denotes the set of different grades of trust, the term set S(T) = {distrust, partly trust, trust}, G_T is the syntactic rule which connects the qualifiers "dis" and "partly" with "trust", and M_T is the definition of the membership functions of the linguistic terms in S(T).

As the fuzzification interface of our trust model, the linguistic variable Trust performs a measure and scale mapping that transforms an observed input space (P2P networks) into fuzzy sets defined on the discrete set U_T, and converts trustworthiness into suitable linguistic terms. For M_T, all linguistic terms are defined as fuzzy singletons, which are precise values; no fuzziness is introduced by fuzzification, and singletons are easy to implement. Therefore, the linguistic terms of Trust can be represented as vectors:

T_1 = M_T("distrust") = (1, 0, 0)
T_2 = M_T("partly trust") = (0, 1, 0)
T_3 = M_T("trust") = (0, 0, 1)

Since trust in peers is subjective and fuzzy, it cannot be simply characterized as trust or distrust, and the degree of membership of a peer in the linguistic terms (sets) of Trust takes more than the two values {0, 1}. Therefore, the degree of trust is defined as a membership function on U_T and represented as a vector, which we call a trust vector:

V = (v_1, v_2, v_3)

where v_k (k = 1, 2, 3) denotes the degree of membership of the peer in the k-th linguistic term of Trust.

5. Trust class and trust evaluation

5.1 Trust class

In [14], it is pointed out that there is no need to trust an entity completely if one expects it only to perform a limited task. A trust class is the definition of the composition and context of trust. In file sharing P2P applications, file providers' capabilities are heterogeneous. For example, some file providers can offer high-speed file downloads whereas others can only provide slow service, and they may share different types of files (e.g. movie, music, image, etc.) with different quality. At the same time, peers themselves may have different preferences and judgment criteria in various situations. Some peers may favor high-quality movies and care nothing about download speed, while others may be impatient about waiting long to download files. If two peers A and B have different evaluation criteria, A cannot trust B's recommendations even when B is truthful. Only if two peers are similar in their evaluation criteria can peer A consider whether to trust peer B's recommendations. Although a file provider's capability can be presented in various aspects, only file type, download speed and file quality are considered in this paper as an example. However, the trust class definition mechanism given in this paper is general, and can be applied in more complex situations, such as considering the behavior of a peer, wrong content distribution, etc.

Since a trust class can in nature be regarded as the aggregation of many relevant attributes, we utilize a tree structure to define trust classes in this paper. Direct trust concerns the capability of file providers, and its trust class is defined as follows.


A trust class consists of a root node and several attribute nodes. The root node is a triple (class name, type, threshold), and the threshold t is used to determine whether two trust classes are similar. For direct trust, the class name is FileDownload, and the type is FileType, a set of file types such as movie, music, etc. w_i is the weight of an attribute node and satisfies Σ w_i = 1. Attribute nodes are sub-elements of the trust class, denoted as 2-tuples (attribute name, linguistic variable). In Figure 3, only two attributes, Download Speed and File Quality, are defined, and their linguistic variables F_DS and F_FQ are defined by the peer. Figure 4 gives an example demonstrating how a peer utilizes linguistic variables to represent its preferences and judgment criteria on download speed and file quality. In general, the larger a file is, the better its quality; therefore we simply use the size of a file to represent its quality in Figure 4. The peer uses linguistic terms such as "slow", "medium" and "fast" to denote its criteria on download speed and file quality, and defines the mapping between linguistic terms and actual download speed and file quality (numeric values).


Since recommendation trust is based on the consistency between a recommendation and the peer's own evaluation of the interaction, the trust class of recommendation trust, concerning the reliability of recommenders, is defined as follows.


Here w_1 and w_2 are the weights of speed consistency and quality consistency, and satisfy w_1 + w_2 = 1. The linguistic variables C_DS and C_FQ are likewise defined by each peer.


Figure 6 gives examples demonstrating the consistency between a peer's own experience and another peer's recommendation.

If the download speed provided by the recommender is rs, and the actual speed is s according to the peer's own experience, then the speed difference is |rs - s|. If the file quality provided by the recommender is rq, and the actual quality is q, then the quality difference is |rq - q|.
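The peer-defined consistency curves themselves are not reproduced here, so the following Python sketch fuzzifies the speed difference |rs - s| with hypothetical triangular terms; the term names and breakpoints (20, 30, 50, 60 KB/s) are illustrative assumptions only:

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def speed_consistency(rs, s):
    """Fuzzify the speed difference |rs - s| into hypothetical consistency terms."""
    d = abs(rs - s)
    return {
        "consistent": tri(d, -20, 0, 20),          # small differences
        "partly consistent": tri(d, 10, 30, 50),   # moderate differences
        "inconsistent": tri(d, 40, 60, 10**9),     # large differences
    }

# Recommender claimed 80 KB/s, the peer actually observed 70 KB/s.
print(speed_consistency(80, 70))  # {'consistent': 0.5, 'partly consistent': 0.0, 'inconsistent': 0.0}
```

The quality difference |rq - q| would be fuzzified the same way with its own peer-defined curves.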

5.2 Trust evaluation

After each interaction, peers need to evaluate the capability of the file provider or the reliability of the recommenders. Since a trust class defines the peer's evaluation criteria and combination manner, it can be directly applied in this evaluation process. The evaluation process, called trust evaluation, is based on fuzzy mapping.

Definition 2: Let A and B be fuzzy sets in X and Y respectively, and R be a fuzzy set in X × Y. The fuzzy mapping '∘' is defined as

R : X → Y, A ↦ B = A ∘ R.

The trust evaluation involves four components:

(1) Factor set F = {f_1, f_2, ..., f_n};

(2) Evaluation set E = {e_1, e_2, ..., e_m};

(3) Factor evaluation matrix R = (r_ij)_{n×m};

(4) Weights of factors W = {w_1, w_2, ..., w_n}.

F contains all attributes of a trust class. E is the evaluation (linguistic term) set defined in the corresponding linguistic variables, such as "slow", "medium" and "fast" defined in F_DS. The fuzzy relationship R is a fuzzy mapping from attributes to evaluations and denotes the degree of each evaluation that the peer gives on each factor; e.g., r_ij denotes the degree of evaluation e_j that the peer gives on factor f_i. As long as the linguistic variable of an attribute is clearly defined, r_ij can be directly obtained. W contains the importance of the attributes in F, i.e., the weights of the attribute nodes defined in the trust class.

As a practical way to obtain a trust vector from an interaction, trust evaluation is the fuzzy mapping

V = (v_1, v_2, ..., v_m) = (w_1, w_2, ..., w_n) ∘ (r_ij)_{n×m}

Here the symbol '∘' is a fuzzy mapping. In general, the fuzzy mapping '∘' is implemented with the Zadeh operators ∨ and ∧ (maximum and minimum). Although the Zadeh operators are simple to implement, their flaws are obvious: only the maximal and minimal values are considered, and too much information is dropped. In this paper, we choose (⊕, ·) to implement the fuzzy mapping, where a ⊕ b = min{1, a + b} and '·' is multiplication. The trust vector V for a given interaction is then computed by

v_j = min{1, Σ_{i=1..n} w_i · r_ij}, j = 1, ..., m.
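As a sketch of this (⊕, ·) composition in Python; the weight vector and evaluation matrix below are hypothetical, not the values of the worked example that follows:

```python
def fuzzy_map(W, R):
    """Compose weights W (length n) with evaluation matrix R (n x m) using
    (bounded sum, product): v_j = min(1, sum_i w_i * r_ij)."""
    m = len(R[0])
    return [min(1.0, sum(w * row[j] for w, row in zip(W, R))) for j in range(m)]

# Hypothetical: two factors (download speed, file quality), three trust grades.
W = [0.4, 0.6]
R = [[0.0, 0.8, 0.2],   # evaluations of download speed
     [0.1, 0.6, 0.3]]   # evaluations of file quality
V = fuzzy_map(W, R)
print(V)  # trust vector (v_1, v_2, v_3); the largest component here is v_2
```

Unlike the max-min composition, every factor contributes to every v_j, with the bounded sum capping the result at 1.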


Here we give an example to demonstrate the trust evaluation process. Peer A defines its evaluation criteria via the trust classes shown in Figure 3 and Figure 4, and file provider B offers an image file sharing service (FileType = {image}). After the interaction (downloading images), A observes an average download speed of 70 KB/s and an average file quality of 600 KB. According to Figure 4, the factor evaluation matrix R is


If the weights of download speed and file quality are w_1 = 0.4 and w_2 = 0.6, then the evaluation of file provider B is


V_B indicates that peer A is inclined to partly trust B's capability in providing image files.

If the recommendation provided by recommender C was that B's download speed is 80 KB/s and average file quality is 750 KB, then

speed difference = |rs - s| = |80 - 70| = 10 KB/s
quality difference = |rq - q| = |750 - 600| = 150 KB

If the weights of speed consistency and quality consistency are w_1 = 0.4 and w_2 = 0.6, the evaluation of recommender C is


V_C indicates that peer A is inclined to trust C's recommendations. After trust evaluation, the evaluated trust vector can be used to update the peer's trust in file providers or recommenders.

6. Trust reasoning

In P2P networks, an applicable trust model should provide a formal trust reasoning mechanism to manage dynamic revision according to new experiences, propagation caused by recommendation, and the combination of several trusts in the same peer. Since fuzzy logic is much closer in spirit to human thinking and natural language than conventional logical systems, and provides an effective means of capturing the approximate, inexact nature of the real world, we apply fuzzy logic to trust management and present a trust reasoning mechanism in this section.

6.1 Trust revision

Since trust in peers changes dynamically over time, it is necessary to update (or revise) trust according to new experiences. Suppose peer A has two trust vectors V_n and V_o for the same file provider or recommender B, where V_n is the trust evaluated from A's current experience and V_o is the former trust in B. Trust revision implements the dynamic update of trust, i.e., it derives a new trust vector V' from V_n and V_o.

In trust revision, peer A compares the current experience V_n with the past experience V_o, then derives the new trust V' according to its own revision criteria. Drawing on the common knowledge and experience of trust reasoning that humans use every day, we present 9 fuzzy IF-THEN rules as a general example to demonstrate the reasoning process of trust revision.

R_1: IF V_o is "trust" (T_3) AND V_n is "trust" (T_3) THEN V' is "trust" (T_3),

also R_2: IF V_o is "trust" (T_3) AND V_n is "partly trust" (T_2) THEN V' is "partly trust" (T_2),

also R_3: IF V_o is "trust" (T_3) AND V_n is "distrust" (T_1) THEN V' is "partly trust" (T_2),

also R_4: IF V_o is "partly trust" (T_2) AND V_n is "trust" (T_3) THEN V' is "trust" (T_3),

also R_5: IF V_o is "partly trust" (T_2) AND V_n is "partly trust" (T_2) THEN V' is "partly trust" (T_2),

also R_6: IF V_o is "partly trust" (T_2) AND V_n is "distrust" (T_1) THEN V' is "distrust" (T_1),

also R_7: IF V_o is "distrust" (T_1) AND V_n is "trust" (T_3) THEN V' is "partly trust" (T_2),

also R_8: IF V_o is "distrust" (T_1) AND V_n is "partly trust" (T_2) THEN V' is "distrust" (T_1),

also R_9: IF V_o is "distrust" (T_1) AND V_n is "distrust" (T_1) THEN V' is "distrust" (T_1). (1)

Here, V_n, V_o and V' are linguistic variables on U_T. The logical structure of rule set (1) facilitates the understanding and analysis of trust reasoning in a semi-qualitative manner, close to the way humans reason about the real world. Rule set (1) can be expressed in natural language as "if A trusted B before, and current experience still supports trusting B, then A trusts B", etc. This helps peers (users) understand the formal trust reasoning mechanism and define their own trust revision rules. Note that the rule set for trust revision given here is a general example; peers can define other rules according to their own criteria.

The number of rules in (1) is determined by the number of grades of trust: if trust has n grades, the trust revision rule set will include n^2 IF-THEN rules. Therefore we have to find a tradeoff between the complexity and precision of the trust model according to the actual environment. We can also utilize the negation operator "not" to reduce the number of IF-THEN rules.

Fuzzy reasoning has different implementations. In this paper we choose Zadeh's max-min rule R_m to implement fuzzy implication, since it has a well-defined logic structure [16]; the product operator to implement the sentence connective "AND"; intersection as the combination operator for the IF-THEN rule set connected by the sentence connective "also"; and sup-product as the compositional operation. The fuzzy implication relationship of IF-THEN rule set (1) is then given by

R_i = T_j(i) × T_k(i) × T_l(i)

Here 1 ≤ i ≤ 9, and 1 ≤ j(i), k(i), l(i) ≤ 3 are the subscripts of the corresponding linguistic terms of Trust in rule R_i; e.g., when i = 1, j(i) = k(i) = l(i) = 3 and R_1 = T_3 × T_3 × T_3. For y_1, y_2, z ∈ U_T, the membership function of R_i is given by the fuzzy implication R_m. The fuzzy relationship R is given by the intersection of the 9 individual rules R_i.


With the trust vectors V_n and V_o as the premise of GMP (generalized modus ponens) [16], the new trust V' is given by


By calculating μ_V'(z) for each z ∈ U_T, we obtain the new trust V'.
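Because the terms T_1, T_2, T_3 are singletons, rule set (1) can be approximated compactly in Python. The sketch below fires each rule with product for AND and aggregates with max; this simplifies the paper's R_m implication and sup-product composition, so it illustrates the reasoning pattern rather than reproducing the exact method:

```python
# Rule table for rule set (1): (grade of V_o, grade of V_n) -> grade of V'.
# Indices 0, 1, 2 stand for T_1 (distrust), T_2 (partly trust), T_3 (trust).
RULES = {
    (2, 2): 2, (2, 1): 1, (2, 0): 1,   # R_1 .. R_3
    (1, 2): 2, (1, 1): 1, (1, 0): 0,   # R_4 .. R_6
    (0, 2): 1, (0, 1): 0, (0, 0): 0,   # R_7 .. R_9
}

def revise(v_old, v_new):
    """Derive the revised trust vector: product for AND, max for aggregation."""
    out = [0.0, 0.0, 0.0]
    for (j, k), l in RULES.items():
        out[l] = max(out[l], v_old[j] * v_new[k])
    return out

# A partly trusted B before; the current experience leans toward trust.
print(revise([0.0, 1.0, 0.0], [0.1, 0.3, 0.6]))  # [0.1, 0.3, 0.6]
```

With a crisp V_o the output simply reflects V_n through the matching row of the rule table; with a fuzzy V_o, the rows blend according to the rule firing strengths.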

6.2 Trust deduction

If peer A trusts the recommendations of B to degree V_1, and B recommends that the trustworthiness of file provider C is V_2, then A should not simply trust C exactly as B recommends, but should estimate its acceptance of B's recommendations according to the trustworthiness or reliability of B, and then decide how to accept the V_2 provided by B. Trust deduction therefore involves the linguistic variable Accept (see Figure 7).


Here the universe of discourse U_A = [0, 1] is a continuous interval that denotes the acceptance of trust vectors provided by recommender B; e.g., 0 denotes complete rejection and 1 denotes complete acceptance. It is necessary to discretize U_A to form a discrete universe; discretization of a universe of discourse is frequently referred to as quantization [16]. In this paper, U_A is discretized into U_A' = {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1}, and all linguistic terms of Accept are represented as vectors:

A_1 = M_A("un-accept") = (1, 0.75, 0.5, 0.25, 0, 0, 0, 0, 0, 0, 0),
A_2 = M_A("partly accept") = (0, 0.25, 0.5, 0.75, 1, 1, 1, 0.75, 0.5, 0.25, 0),
A_3 = M_A("accept") = (0, 0, 0, 0, 0, 0, 0, 0.25, 0.5, 0.75, 1).

At the same time, the acceptance of recommendations provided by B is characterized as a membership function on U_A and represented as an acceptance vector on U_A':

u = (u_0, u_1, ..., u_10).

Since the acceptance of B's recommendations depends only on B's trustworthiness or reliability, the acceptance vector u is directly derived from the recommendation trust V_1. We present three IF-THEN rules as a general example of the reasoning mechanism of trust deduction:

R_1: IF V_1 is "distrust" (T_1) THEN u is "un-accept" (A_1), also R_2: IF V_1 is "partly trust" (T_2) THEN u is "partly accept" (A_2), also R_3: IF V_1 is "trust" (T_3) THEN u is "accept" (A_3). (2)

Here, V_1 and u are linguistic variables on U_T and U_A', respectively. The fuzzy implication relationship of rule set (2) is given by R_i = T_i × A_i,


where x ∈ U_T and y ∈ U_A'. The fuzzy relationship R is given by the intersection of the three individual rules R_i.


With the trust V_1 of recommender B as the premise of GMP, the acceptance vector u for B is given by


By calculating μ_u(y) for each y ∈ U_A', we obtain the acceptance vector u, which is the acceptance of the trust vector V_2 provided by B. We define the reasoning process of trust deduction discussed above as the function T-A():

u = T-A(V_1)

Since u is a fuzzy set, it is necessary to defuzzify u into a crisp value for operating on V_2. In this paper, the Mean of Maximum method (MOM) [16] is used for defuzzification.

Definition 3: Let u = (u_0, ..., u_10) be an acceptance vector on U_A'. The defuzzification function Df() of u is given by

d = Df(u) = (Σ_j m_j) / l

where the m_j ∈ U_A' are the support values at which u reaches its maximum, and l is the number of such support values. Therefore, the new trust V' that A has in file provider C is given by

V' = Df(T-A(V_1)) · V_2 = d · V_2
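Putting Definition 3 and the deduction step together: the sketch below applies MOM defuzzification to the "partly accept" vector A_2 from above, then scales a hypothetical recommended trust V_2 by the resulting crisp acceptance d:

```python
# Discretized universe U_A' = {0, 0.1, ..., 1}.
UA = [round(0.1 * i, 1) for i in range(11)]

def mom_defuzzify(u):
    """Mean of Maximum: average the support values where u reaches its peak."""
    peak = max(u)
    supports = [UA[i] for i, ui in enumerate(u) if ui == peak]
    return sum(supports) / len(supports)

# The "partly accept" term A_2 used as the acceptance vector u.
u = [0, 0.25, 0.5, 0.75, 1, 1, 1, 0.75, 0.5, 0.25, 0]
d = mom_defuzzify(u)          # maxima at 0.4, 0.5, 0.6 -> d = 0.5
V2 = [0.0, 0.2, 0.8]          # hypothetical trust in C recommended by B
V_new = [d * v for v in V2]   # deduced trust V' = d * V2
print(d, V_new)
```

A fully trusted recommender yields d close to 1 (V_2 accepted nearly as-is), while a distrusted one yields d close to 0.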

6.3 Trust combination

If peer A trusts recommenders B and D to degrees rV_1 and rV_2 respectively, and B and D recommend that the trustworthiness of file provider C is dV_1 and dV_2 respectively, then V_1 = Df(T-A(rV_1)) · dV_1 and V_2 = Df(T-A(rV_2)) · dV_2 are the trusts that A has in C. Since several trusts in the same file provider are not suitable for trust-related decision-making, trust combination combines V_1 and V_2 into a single new trust V'.

Since the importance of V_1 and V_2 depends on the reliability of the recommenders (rV_1 and rV_2), trust combination compares rV_1 and rV_2, then decides how to combine V_1 and V_2 according to their importance. Trust combination therefore involves another linguistic variable: Importance (see Figure 8).


The reasoning of trust combination can be represented by rules as follows:

R_1: IF rV_1 is "distrust" (T_1) AND rV_2 is "distrust" (T_1) THEN W is "equally important" (I_3); ... also R_9: IF rV_1 is "trust" (T_3) AND rV_2 is "trust" (T_3) THEN W is "equally important" (I_3). (3)

Here W is the importance of the trust V_1 recommended by B. The reasoning process of trust combination is similar to trust revision and trust deduction: utilizing the fuzzy implication inference method, we can obtain W. We define the trust combination function T-I():

W = T-I(rV_1, rV_2)

The MOM function w = Df(W) is then used to defuzzify the fuzzy set W into the corresponding crisp value w, which is the scalar weight (combination coefficient) of V_1.

Since rV_1 and rV_2 are compared symmetrically in rule set (3), the combination coefficient of V_2 is (1 - w). Therefore, the new combined trust V' is given by

V' = w · V_1 + (1 - w) · V_2, where w = Df(T-I(rV_1, rV_2)).
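The combination step then reduces to a weighted average; in this sketch the two deduced trust vectors and the weight w = 0.7 are hypothetical stand-ins for the defuzzified output of T-I(rV_1, rV_2):

```python
def combine(V1, V2, w):
    """Combine two trust vectors about the same provider: V' = w*V1 + (1-w)*V2."""
    return [w * a + (1 - w) * b for a, b in zip(V1, V2)]

V1 = [0.0, 0.3, 0.7]   # trust in C deduced from B's recommendation
V2 = [0.2, 0.6, 0.2]   # trust in C deduced from D's recommendation
print(combine(V1, V2, 0.7))  # B's view dominates since B is the more reliable recommender
```

Because the rules compare rV_1 and rV_2 symmetrically, the two coefficients always sum to 1, so the combined vector stays a convex mixture of the inputs.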

7. Experiments

In order to evaluate our trust model, we developed a simulation of a file sharing system in a P2P network. Our simulation system is based on NeuroGrid [17, 18] which is efficient for P2P network simulation.

7.1 Experimental Setup

At the beginning of the simulation, every peer knows only the several neighbor peers (file providers or recommenders) directly connected to it. The initial (direct or recommendation) trust between neighbor peers is (0, 1, 0), i.e., peers "partly trust" their neighbors. In each querying cycle, a peer requests a file from other peers. Before downloading the file, the peer estimates the trustworthiness (capability) of the file providers, possibly asking its neighbors to provide recommendations, and then chooses a file provider to interact with. After the interaction, the peer evaluates the capability of the chosen file provider and the reliability of the recommenders, and updates its trust in the file provider and recommenders.

Every peer has two "intrinsic" trust vectors (direct trust and recommendation trust) which indicate its actual file providing capability and recommendation reliability respectively; they determine the behavior mode of the peer, i.e. how (benevolently or maliciously) it provides files or recommendations. Other peers cannot directly access a peer's "intrinsic" trust vectors, but can only estimate them from the quality of file services or the reliability of recommendations the peer provides during interactions.

Our experiments involve 100 peers, and malicious peers provide both bad file downloading service and false recommendations. We run 5 sets of simulations with different numbers of malicious peers (from 10 to 50) to test the robustness of our trust model. During interactions, peers exchange, evaluate and reason about trust with other peers based on our trust model. The total number of interactions is 10,000. We run each configuration 10 times and use the means as the evaluation criteria.

7.2 Results

The goal of the simulation experiment is to see whether our fuzzy theory-based trust model helps peers select trustworthy file providers, identify malicious peers and quarantine them from compromising other peers. We therefore measure system performance as the percentage of successful interactions during an interval of the simulation. A successful interaction is one in which a peer correctly chooses a benevolent file provider (not a malicious one) to interact with and is satisfied with the interaction. For ease of processing, we use a fixed interval of 100 interactions in analyzing the simulation results.

Without the support of a trust management system, a peer must choose file providers randomly, and the percentage of successful interactions is proportional to the fraction of benevolent peers. In our simulation this holds only at the beginning, when peers do not yet have enough information. As Fig. 9 shows, after the initial phase (fewer than 600 interactions) peers acquire more and more trust information about other peers and choose file providers increasingly accurately; the percentage of successful interactions rises continually, while malicious peers are gradually identified and quarantined. After 6300 interactions, the percentage of successful interactions exceeds 90% under all configurations (even in the simulation where half the peers are malicious), which indicates that our trust model is effective and robust in P2P networks.
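The random-selection baseline is easy to verify numerically: picking providers uniformly at random yields a success rate close to the benevolent fraction, well below the over-90% reached with the trust model. The sketch below uses the worst configuration (50 malicious peers out of 100) as an assumed setup.

```python
import random

random.seed(3)

N_PEERS, N_MALICIOUS, N_TRIALS = 100, 50, 10_000

# True marks a benevolent peer; choosing uniformly at random succeeds
# exactly when a benevolent peer is drawn.
peers = [True] * (N_PEERS - N_MALICIOUS) + [False] * N_MALICIOUS
hits = sum(random.choice(peers) for _ in range(N_TRIALS))
rate = 100.0 * hits / N_TRIALS   # converges to the benevolent fraction (50%)
```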


8. Conclusion

In P2P networks, it is important to decide whether a peer is sufficiently trustworthy for sharing resources and services. As a cognitive phenomenon related to subjective observers (peers), trust is fuzzy in nature and cannot be properly analyzed and processed by precise methods. To estimate the trustworthiness of peers in P2P networks, it is necessary to model trust as a fuzzy concept. In this paper, we utilize fuzzy theory to model trust, trust classes and evaluation, and trust reasoning in trust management, and present a flexible method to manage trust for various P2P network applications. To evaluate our approach, we developed a simulation of a file-sharing system in a P2P network. Our experiments show that our trust management system is effective and robust in supporting the choice of trustworthy peers to interact with in a P2P file-sharing application.

Future work includes running more simulations to test further aspects of our trust model, examining additional features that influence the system, such as collusion and clustering among peers, and utilizing user profiles, defined from peers' experiences, to identify different types of peers and minimize trust recommendation. Applying this approach to P2P systems for distributed services is particularly promising.

Received 30 Oct. 2004; Reviewed and accepted 30 Jan. 2005

9. References

[1] Milojicic, D.S., Kalogeraki, V., & Lukose, R. (Eds.). (2002). Peer-to-Peer Computing. Tech. Report HPL-2002-57, HP Laboratories.

[2] Wang, Y., & Vassileva, J. (2003). Trust and Reputation Model in Peer-to-Peer Networks. Proceedings of the Third International Conference on Peer-to-Peer Computing (P2P 2003), 150-157.

[3] Abdul-Rahman, A., & Hailes, S. (2000). Supporting Trust in Virtual Communities. Proceedings of the 33rd Hawaii International Conference on System Sciences (HICSS-33), vol. 6, 6007-6017.

[4] Richardson, M., Agrawal, R., & Domingos, P. (2003). Trust Management for the Semantic Web. Proceedings of the 2nd International Semantic Web Conference (ISWC 2003), 351-368.

[5] Mui, L., Mohtashemi, M., & Halberstadt, A. (2002). A Computational Model of Trust and Reputation. Proceedings of the 35th Hawaii International Conference on System Sciences (HICSS-35), vol. 7, 188-197.

[6] Beth, T., Borcherding, M., & Klein, B. (1994). Valuation of Trust in Open Networks. Proceedings of the European Symposium on Research in Computer Security (ESORICS 1994), 3-18.

[7] Josang, A. (2001). A Logic for Uncertain Probabilities. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 9(3), 279-311.

[8] Kamvar, S.D., Schlosser, M.T., & Garcia-Molina, H. (2003). The EigenTrust Algorithm for Reputation Management in P2P Networks. Proceedings of the 12th International Conference on World Wide Web (WWW 2003), 640-651.

[9] Cornelli, F., Damiani, E., De Capitani di Vimercati, S., Paraboschi, S., & Samarati, P. (2002). Choosing Reputable Servents in a P2P Network. Proceedings of the 11th International Conference on World Wide Web (WWW 2002), 376-386.

[10] Damiani, E., De Capitani di Vimercati, S., Paraboschi, S., Samarati, P., & Violante, F. (2002). A Reputation-Based Approach for Choosing Reliable Resources in Peer-to-Peer Networks. Proceedings of the 9th ACM Conference on Computer and Communications Security (CCS 2002), 207-216.

[11] Aberer, K., & Despotovic, Z. (2001). Managing Trust in a Peer-to-Peer Information System. Proceedings of the 10th ACM Conference on Information and Knowledge Management (CIKM 2001), 310-317.

[12] Gambetta, D. (2000). Can We Trust Trust? In Trust: Making and Breaking Cooperative Relations (pp. 213-237). Electronic edition, Department of Sociology, University of Oxford.

[13] Josang, A. (1997). Prospectives for Modelling Trust in Information Security. Proceedings of the Second Australasian Conference on Information Security and Privacy (ACISP 1997), 2-13.

[14] Yahalom, R., Klein, B., & Beth, T. (1993). Trust Relationships in Secure Systems: A Distributed Authentication Perspective. Proceedings of the 1993 IEEE Symposium on Research in Security and Privacy, 150-164.

[15] Zadeh, L.A. (1975). The Concept of a Linguistic Variable and Its Application to Approximate Reasoning, Parts I-III. Information Sciences, 8, 199-251; 8, 301-357; 9, 43-80.

[16] Lee, C.C. (1990). Fuzzy Logic in Control Systems: Fuzzy Logic Controller, Parts I and II. IEEE Transactions on Systems, Man, and Cybernetics, 20(2), 404-435.

Wen Tang, Yun-Xiao Ma, Zhong Chen

School of Electronics Engineering and Computer Science, Peking University

100871, Beijing, P.R. China

{tangwen, mayx,
Publication: Journal of Digital Information Management
Date: Jun 1, 2005