
An autonomous accelerated coordination of stochastic moving multi-agents under variations and resource utilization.

1 INTRODUCTION

This paper reports a new, counter-intuitive autonomous multi-agent behavior that arises from time delay and moving speed. Each agent moves stochastically, in synchronization, over the cells of a finite resource arranged along a straight line, according to a transition probability that depends on the other agents. An agent's movement is restricted by the number of agents on the destination cells within a window of fixed range, so that coordination arises among the agents. We analyze two aspects of the multi-agent system independently: stability and resource allocation. The first analysis and its experiments show that the variance of the number of agents per cell becomes lowest when all agents share an appropriate average moving speed, and the most stable moving multi-agent configuration is demonstrated; following the minimum entropy production principle, the agents' moving speed is accelerated autonomously. The second result concerns resource allocation. Ideally every cell is always occupied by some agent, since this is the most efficient use of the resource, so the expected number of unoccupied cells should be as small as possible. We show that the resource utilization of a moving multi-agent system with an average moving speed is higher than that of one without, and that utilization can be accelerated by giving the agents a moving speed. Both results are established by theory and by experiments, and we present the system-specific optimal moving speed for the acceleration.

In the real world there are many unusual phenomena that lie beyond human understanding. Multi-agent behavior is one of them, and it is quite difficult to analyze in general theoretical frameworks because of the interactions and coordination among agents.

In statistical mechanics, the Ising model is based on a uniform structure with interactions among cells within a specific range [4]. It has been studied extensively as a model of interacting substances, and a large body of work exists. A model that looks quite different at first glance, yet may be similar to the Ising model, is the fluctuation analysis of images in computer graphics [11]. Another elegant example is the bootstrap in statistics [6]. These classic and excellent works describe strange behaviors of probabilistic models and can be reviewed from the viewpoint of multi-agent models.

We need a simple, lean model of moving multi-agents. Fortunately, Sen et al. [21] proposed a basic simple model of moving multi-agents, and Rustogi et al. [20] presented fundamental results for it. These models are related to the Bak, Tang and Wiesenfeld (BTW) model [1] in the sense of multi-agents. Ishiduka et al. [10] further introduced a time lag and showed the relationship between time lag and stability in moving multi-agents. All of these models aim to clarify how fast a moving multi-agent system falls into a completely stable state, i.e. a hole state of an absorbing Markov chain [8]; the goal is thus to design a coordinative system that falls into a stable hole in the shortest possible passage time.

On the other hand, Toyabe et al. [24] experimentally demonstrated that information-to-energy conversion is possible with a single autonomous stochastic moving agent. The idea is that whenever the agent climbs a spiral staircase during its stochastic movement, a stopper is set on the stairs so that the agent cannot come down again. This approach needs explicit control to keep the agent from descending the stairs, and it involves a single agent rather than multi-agents; our ultimate goal is to extract energy from stochastically moving multi-agents.

Our model, Multi-Agent behavior with Time delay and Moving Speed (MATMS), is based on Sen et al. [21] and on the time-delay extension proposed by Rustogi and Singh [20]. Note, however, that our purpose differs from that of [21, 20, 10], which try to clarify the relationships between time delay and stability in multi-agent systems; in other words, those papers look for multi-agent configurations that achieve autonomous uniform resource allocation in the shortest passage time. In contrast, our multi-agents start from a most stable state, and each agent keeps moving stochastically over the cells, much like the atoms of a liquid; the agents are always moving over the resources and never stop. In addition, we extend the earlier models with an average moving speed. Our model satisfies the Markov property, so in the limit the configurations do not depend on the initial configuration, and our problem is to find the more stable multi-agent configurations that accompany the agents' movements, just as a molecule carries an energy that keeps it moving in a manner depending on the substance. In this paper, we show theoretically and experimentally that a stochastic moving multi-agent system whose agents move slowly on average as a whole is more stable than one with no average moving speed.

One may ask the following question: how is the moving speed of the multi-agents determined? The moving speed arises naturally from the stochastic multi-agent behavior, provided that individual objects in nature are governed by the minimum entropy production principle (see also [23]). We note that, for a given multi-agent system, the speed that makes the most efficient use of the resource does not exactly match the speed that gives the most stable behavior, as a comparison of the two results of this paper shows.

This paper is organized as follows. Section 2 briefly reviews the earlier models. Section 3 presents our model. Section 4 shows, both theoretically and experimentally, that a new and distinct behavior exists. Section 5 discusses the resource utilization of our model. Section 6 discusses related work, and Section 7 concludes the paper.

2 A MULTI-AGENT MODEL WITH TIME-DELAY

We suppose a multi-agent system consisting of k agents, k ≥ 2. The agents are arranged over a finite resource consisting of cells S(i), i = 1, ..., n, on a circle [1, n] as in [21, 20], and move synchronously over the resource in a stochastic manner according to the following transition probability p_{i,j}. We first define a weight function f_{i,j}, i, j = 1, ..., n, as

f_{i,j} = 1 - 1/(1 + γ exp((r_i - r_j - α)/β))  for i ≠ j,    f_{i,i} = 1,    (2.1)

where r_i is the number of agents on the i-th cell, and α, β and γ are constants; α is called the "inertia," the tendency of an agent to stay on its current cell [20]. The moving transition probability p_{i,j} from a cell S(i) to a destination cell S(j) is obtained by normalizing f_{i,j} to total probability 1:

p_{i,j} = f_{i,j} / Σ_k f_{i,k},    i, j = 1, ..., n.    (2.2)

Rustogi et al. [20] introduced a window win(i) of fixed size for analyzing the behavior of multi-agent systems with time delay. With the window, the moving transition probability p_{i,j} from the current cell S(i) to a destination cell S(j) becomes

p_{i,j} = f_{i,j} / Σ_{k ∈ win(i)} f_{i,k}  if j ∈ win(i),    p_{i,j} = 0  otherwise,    (2.3)

where w is the window size and win(i) is the set [i - w, i + w]. The time delay, which is a local property, is proportional to the window size w (see [20, 10]).

There is no constraint on the movement of agents such as a fixed upper limit on the number of agents a cell can hold; the only constraint in the model is that the moving transition probability p_{i,j} is 0 if the number of agents on the cell S(i) is less than the number of agents on the destination cell S(j).
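The weight and transition-probability definitions above can be sketched in a few lines of Python. This is our own illustrative sketch, not the authors' code: the 0-based cell indexing, the staying weight of 1 (as suggested by the denominators in Appendix A), and the parameter values α = 5, β = 2, γ = 1 borrowed from Section 4 are all assumptions of the sketch.

```python
import math

ALPHA, BETA, GAMMA = 5.0, 2.0, 1.0   # Section 4 values, assumed here

def weight(x):
    """Logistic weight of a move whose crowd difference is x."""
    return 1.0 - 1.0 / (1.0 + GAMMA * math.exp((x - ALPHA) / BETA))

def transition_probs(r, i, w):
    """Transition probabilities p_{i,j} from cell i over the window
    win(i) = [i-w, i+w] clipped to [0, n-1]; r[j] is the number of
    agents on cell j.  A move to a busier cell gets weight 0."""
    n = len(r)
    f = {}
    for j in range(max(0, i - w), min(n - 1, i + w) + 1):
        if j == i:
            f[j] = 1.0                    # staying weight
        elif r[i] < r[j]:
            f[j] = 0.0                    # forbidden move
        else:
            f[j] = weight(r[i] - r[j])
    total = sum(f.values())
    return {j: fij / total for j, fij in f.items()}

# Three agents on cell 0 of a 3-cell resource, window size 1.
probs = transition_probs([3, 0, 0], 0, 1)
assert abs(sum(probs.values()) - 1.0) < 1e-9
```

With a large inertia α, the staying weight 1 dominates the moving weight, so agents tend to remain on their current cells.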

[FIGURE 2.1 OMITTED]

In the following, we simply write i for the resource cell S(i).

3 PROPOSED MODEL: MATMS, A MULTI-AGENT MODEL WITH TIME-DELAY AND MOVING SPEED

Our proposed model, Multi-Agent behavior with Time delay and Moving Speed (MATMS), is similar to the models of [20, 10], with the following differences.

[FIGURE 3.1 OMITTED]

The resources in MATMS are arranged over a straight line [1, n] as in [10], and the window function win(i) is the set [i - w, i + w] ∩ [1, n], where w is the window size. The weight function f_{i,j} of Section 2 (2.1) is replaced by the following:

f_{i,j} = 1 - 1/(1 + γ exp((move(r_i - r_j, i, j) - α)/β))  if i ≠ j and r_i ≥ r_j,    f_{i,j} = 0  if i ≠ j and r_i < r_j,    f_{i,i} = 1,    (3.1)

where move is an acceleration function giving the average moving speed to the left or right, defined by (3.2) and (3.3) below. Note that the difference from (2.1) is the condition "r_i < r_j". One can easily check that the model of Rustogi et al. [20] does not satisfy the Markov property, whereas our model does, under the condition that the moving directions of the agents are not restricted, and the resulting chain is irreducible. We discuss this later.

The last difference is that our model has an average moving speed. Every agent moves with a fixed average speed s (s ≥ 1) in either the left or the right direction over the cells of the resource, arranged along the straight line, according to the probability p_{i,j} in a stochastic manner. For a rightward average moving speed s, the function move(x, i, j), which describes the ratio of imbalance from a cell i to a destination cell j with difference x (= r_i - r_j) in the numbers of agents on i and j, is defined by

move(x, i, j) = s·x  if j > i,    move(x, i, j) = x  if j < i,    (3.2)

where s is the average moving speed. On the other hand, for a leftward average movement, move is defined by

move(x, i, j) = s·x  if j < i,    move(x, i, j) = x  if j > i.    (3.3)

The function move(x, i, j) is not symmetric with respect to x, and s represents the ratio of imbalance in move. In the special case s = 1, move(x, i, j) becomes symmetric with respect to x and the agents do not move on average. When an agent arrives at the leftmost or rightmost cell, i.e. a boundary cell, its average moving direction is inverted while keeping the same average speed. All cells have the same average moving speed, and each agent independently moves towards the left or the right on average. Thus, apart from the effects of the left and right boundaries, all agents choose their moving directions at random.

Note that there are two choices for the average moving direction, left or right. Suppose an agent moved towards the left on average at the previous step; which direction does it take at the next step? Excluding the cases where agents sit on the boundaries, there are two exclusive cases (or model protocols), applied to each agent independently: (1) the agent inherits its direction from the previous step, i.e. left on average in the example above, or (2) the agent selects its direction at random at every step, left or right with probability 1/2. Protocol (2) is suited to the Markov property; protocol (1) does not satisfy the Markov property, so under it the system depends on the initial configuration.
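The speed scaling (3.2)/(3.3) and direction protocol (2) can be summarized in a short sketch. This is our reading of the appendix weights, in which a hop that matches the agent's average direction has its crowd difference x multiplied by s while an opposite hop is left unscaled; the function names are hypothetical.

```python
import random

def move(x, i, j, s, direction):
    """Scaled crowd difference for a hop i -> j.
    direction is 'r' or 'l'; x = r_i - r_j."""
    matches = (j > i) if direction == 'r' else (j < i)
    return s * x if matches else x

def draw_direction(rng=random):
    """Protocol (2): a fresh uniform direction at every step,
    which keeps the multi-agent system a Markov chain."""
    return rng.choice(('r', 'l'))

assert move(3, 0, 1, 2.0, 'r') == 6.0   # rightward hop, right drift: scaled
assert move(3, 1, 0, 2.0, 'r') == 3     # leftward hop, right drift: unscaled
assert move(1, 1, 2, 5.0, 'r') == move(1, 2, 1, 5.0, 'l')  # mirror symmetry
```

At s = 1 the two branches coincide, which recovers the symmetric, drift-free case described above.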

A state of an agent is a tuple of an agent name and the name of the cell where the agent stays. For example, (a_1, 1) means that agent a_1 stays on cell 1. A multi-agent state is a set of agent states; for example, [(a_1, 1), (a_2, 1), (a_3, 2)] represents that the agents a_1, a_2 and a_3 stay on cells 1, 1 and 2, respectively. An agent moving configuration is a multi-agent state together with the average moving directions; for example, [(a_1, r, 1), (a_2, r, 1), (a_3, l, 2)] is the above multi-agent state whose corresponding agent directions are right, right and left, respectively.

4 THEORETICAL AND EXPERIMENTAL STABILITY ANALYSIS OF THE 3 x 3 MODEL

In this section, we examine a concrete moving multi-agent system and show that a multi-agent with an appropriate average speed is more stable than one with no average moving speed, i.e. one that stays on its current cell on average.

Consider the multi-agent system in which the numbers of cells and agents are both 3. This is a minimal model in which to examine coordination among agents. We first use the parameter values α = 5, β = 2 and γ = 1, and fix the window size w to 1.

Let 1, 2 and 3 be the cell names (Figure 4.1). For simplicity we do not distinguish the agents by name and write each simply as a. Suppose all agents have the same average moving speed s (s ≥ 1); the moving direction of each agent is selected at random, left l or right r with probability 1/2, at every step. A multi-agent state is a set of three agent states; [(a, 1), (a, 2), (a, 2)] is an example, where a denotes an agent and 1, 2 and 3 are the cells.

An agent moving configuration is written (a, r, 1) if the agent a on cell 1 moves towards the right with average moving speed s. A multi-agent moving configuration consists of three agent configurations in this minimal model; for example, [(a, r, 1), (a, r, 1), (a, r, 1)] is a multi-agent moving configuration.

In our minimal model, the directions of the stochastically moving agents are chosen at random at every step, so the multi-agent system is a Markov chain. In this setting there are 10 multi-agent states, shown in Figure 4.1, and 136 probabilistic transition rules, listed in Appendix A. That is, the numbers of states (Figure 4.1), state transition rules (Appendix A) and multi-agent moving configurations (the top items of Appendix A) are 10, 136 and 20, respectively. The transition rules (d-1) of Appendix A are illustrated in Figure 4.2, and the details of (g-2), consisting of 27 rules, are illustrated in Figure 4.3.

[FIGURE 4.1 OMITTED]

This simple model satisfies the Markov property and is irreducible, so we can easily compute the eigenvectors of the 10 x 10 state transition matrix obtained from Appendix A, and compute the limiting probabilities of all states as the moving speed s varies. The theoretical existence probabilities of all states are shown in Table 4.4.
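The limiting probabilities are obtained as the left eigenvector of the transition matrix for eigenvalue 1. The paper's 10 x 10 matrix is assembled from Appendix A; the sketch below instead uses a small hypothetical 3-state row-stochastic matrix, purely to illustrate the computation.

```python
import numpy as np

# Hypothetical 3-state row-stochastic matrix, a stand-in for the
# 10 x 10 matrix assembled from Appendix A.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# Stationary distribution pi with pi P = pi: the eigenvector of P^T
# for the eigenvalue closest to 1, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

assert np.allclose(pi @ P, pi)   # invariance in the limit
```

Because the chain is irreducible (and aperiodic), this stationary vector is unique and is the limiting distribution regardless of the initial configuration.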

[FIGURE 4.2 OMITTED]

[FIGURE 4.3 OMITTED]

The expected average m_1 and variance v_1 of the number of agents on cell 1 are given by the following:

m_1 = 3p_1 + 2(p_2 + p_3) + p_4 + p_5 + p_6,    (4.1)

v_1 = 9p_1 + 4(p_2 + p_3) + p_4 + p_5 + p_6 - m_1^2,    (4.2)

where p_i is the probability of the corresponding state i shown in Table 4.4.

In a similar way, we can compute the expected averages (m_2 and m_3) and variances (v_2 and v_3) of the numbers of agents on cells 2 and 3:

m_2 = 3p_7 + 2(p_4 + p_8) + p_2 + p_5 + p_9,    (4.3)

v_2 = 9p_7 + 4(p_4 + p_8) + p_2 + p_5 + p_9 - m_2^2,    (4.4)

m_3 = 3p_10 + 2(p_6 + p_9) + p_3 + p_5 + p_8,    (4.5)

v_3 = 9p_10 + 4(p_6 + p_9) + p_3 + p_5 + p_8 - m_3^2.    (4.6)

The expected variance v_a of the number of agents over the whole resource is computed as

v_a = (v_1 + v_2 + v_3)/3.    (4.7)
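The moment equations (4.2)-(4.7) above only need the occupation numbers of the 10 states of Figure 4.1, so they can be evaluated mechanically. A sketch follows; the `counts` table encodes the states (3,0,0), (2,1,0), ..., (0,0,3) of Figure 4.1 in order, and reading (4.7) as the average of v_1, v_2, v_3 is our own interpretation.

```python
def moments(p):
    """Per-cell expected means and the averaged variance v_a of the
    agent counts; p[i] is the limiting probability of state i+1."""
    counts = [
        [3, 2, 2, 1, 1, 1, 0, 0, 0, 0],   # agents on cell 1 per state
        [0, 1, 0, 2, 1, 0, 3, 2, 1, 0],   # agents on cell 2 per state
        [0, 0, 1, 0, 1, 2, 0, 1, 2, 3],   # agents on cell 3 per state
    ]
    means, variances = [], []
    for row in counts:
        m = sum(c * q for c, q in zip(row, p))
        means.append(m)
        variances.append(sum(c * c * q for c, q in zip(row, p)) - m * m)
    return means, sum(variances) / 3.0

means, va = moments([0.1] * 10)           # uniform distribution as a check
assert abs(sum(means) - 3.0) < 1e-9       # the three agents are conserved
```

Each column of `counts` sums to 3, so the per-cell means always add up to the total number of agents, which is a convenient sanity check on any computed stationary distribution.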

The theoretical expected means and variances of the agents at s = 1 and s = 3 are shown in Table 4.5. To compare the theoretical analysis with experiments, we initially configure the multi-agent system in the most stable state, i.e. state (5) in Figure 4.1, and observe it for 5,000 steps. We compute the variance of the number of agents between steps 1,001 and 5,000, repeat this 5 times, and take the mean values. The results are shown in Table 4.6. Estimating the means and variances for a very small number of agents, three, is severely difficult [15]; hence there are gaps between the theoretical and experimental results. For the same reason, the larger theoretical variances near the boundaries reflect this estimation difficulty rather than a boundary effect.
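The experiment just described can be reproduced in outline as follows. This is our own minimal simulator, not the authors' code: it combines the auxiliary weight of Appendix A, the speed scaling of (3.2)/(3.3) as we read it, and direction protocol (2), starting from state (5) with one agent per cell.

```python
import math
import random

ALPHA, BETA, GAMMA, W = 5.0, 2.0, 1.0, 1   # Section 4 parameters

def f(x):
    # Auxiliary weight function from Appendix A.
    return 1.0 - 1.0 / (1.0 + GAMMA * math.exp((x - ALPHA) / BETA))

def simulate(steps, s, n=3, seed=0):
    """Synchronous simulation of the 3 x 3 model; returns the
    per-step agent counts [r_1, ..., r_n]."""
    rng = random.Random(seed)
    pos = [0, 1, 2]                        # initial state (5), 0-based cells
    history = []
    for _ in range(steps):
        r = [pos.count(c) for c in range(n)]
        nxt = []
        for i in pos:
            d = rng.choice((1, -1))        # protocol (2): redraw direction
            cand, wts = [], []
            for j in range(max(0, i - W), min(n - 1, i + W) + 1):
                if j == i:
                    w = 1.0                # staying weight
                elif r[i] < r[j]:
                    w = 0.0                # destination more crowded
                else:
                    x = r[i] - r[j]
                    w = f(s * x if (j - i) * d > 0 else x)
                cand.append(j)
                wts.append(w)
            nxt.append(rng.choices(cand, weights=wts)[0])
        pos = nxt
        history.append([pos.count(c) for c in range(n)])
    return history

hist = simulate(5000, 3.0)
tail = hist[1000:]                         # steps 1,001 .. 5,000
mean = [sum(r[c] for r in tail) / len(tail) for c in range(3)]
```

Averaging the sample variance of `tail` over several seeds mirrors the 5-run experimental protocol of the paper.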

Figure 4.7 shows the theoretical and experimental variances of our minimal model as the moving speed varies. The variances become larger at both lower and higher speeds. As a result, there is a most stable multi-agent configuration around the speed s = 3.0, in both the theoretical calculations and the experimental results. The last theoretical calculation finds the speed s that minimizes the variance v_a (4.7) as α varies; the results are plotted in Figure 4.8.

[FIGURE 4.7 OMITTED]

We have discussed a small multi-agent model on cells along a straight line bounded by a closed region [10]; this model reflects our real world better than the circle model [20]. Most earlier models discussed how to coordinate or control multi-agent systems within the fewest steps. However, real systems, e.g. animals and atoms, are always moving dynamically and interacting with each other. In the real world all agents move all the time, so our model is natural.

[FIGURE 4.8 OMITTED]

The left-or-right speed in this paper is similar to a spin in statistical mechanics. The variance of the number of agents corresponds to the precision of cooperation [20].

5 THEORETICAL AND EXPERIMENTAL ANALYSIS OF THE 3 x 3 MODEL FOR RESOURCE UTILIZATION

In this section, we consider the same concrete moving multi-agent system as in Section 4 and look for an appropriate average speed that yields higher resource utilization than a multi-agent with no average moving speed. We now use the parameter values α = 12, β = 2 and γ = 1, and fix the window size w to 1. The theoretical existence probabilities of all states at the speeds s = 1 and s = 8 are shown in Table 5.1.

The expected occupancy m_1 of cell 1, i.e. the probability that cell 1 is occupied by at least one agent, is given by the following:

m_1 = p_1 + p_2 + p_3 + p_4 + p_5 + p_6,    (5.1)

where p_i is the probability of the corresponding state i shown in Table 5.1.

In a similar way, we can compute the expected occupancies m_2 and m_3 of cells 2 and 3, respectively:

m_2 = p_2 + p_4 + p_5 + p_7 + p_8 + p_9,    (5.2)

m_3 = p_3 + p_5 + p_6 + p_8 + p_9 + p_10.    (5.3)

The whole expected number v_m of cells occupied by agents, i.e. the resource utilization, is given as

v_m = m_1 + m_2 + m_3.    (5.4)
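Equations (5.1)-(5.4) read the occupancy of each cell off the 10 states of Figure 4.1, so they too can be evaluated mechanically. A short sketch follows; taking v_m as the plain sum m_1 + m_2 + m_3 is our own reading of (5.4).

```python
def utilization(p):
    """Expected number of occupied cells; p[i] is the limiting
    probability of state i+1 in Figure 4.1."""
    occupied = [
        [1, 2, 3, 4, 5, 6],    # states in which cell 1 is occupied
        [2, 4, 5, 7, 8, 9],    # states in which cell 2 is occupied
        [3, 5, 6, 8, 9, 10],   # states in which cell 3 is occupied
    ]
    m = [sum(p[i - 1] for i in states) for states in occupied]
    return sum(m)              # v_m = m_1 + m_2 + m_3, at most 3

assert abs(utilization([0.1] * 10) - 1.8) < 1e-9
assert utilization([0, 0, 0, 0, 1, 0, 0, 0, 0, 0]) == 3   # state (5)
```

State (5), one agent per cell, attains the maximum v_m = 3, which is why it serves as the initial configuration of the utilization experiments below.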

The theoretical expected numbers of occupied cells at s = 1 and s = 8 are shown in Table 5.2. To compare the theoretical analysis with experiments, we initially configure the multi-agent system so that every cell is occupied by an agent, i.e. state (5) in Figure 4.1, and observe it for 5,000 steps. We compute the resource utilization with respect to the number of occupied cells between steps 1,001 and 5,000, repeat this 5 times, and take the mean values. The results are shown in Table 5.3.

Figure 5.4 shows the theoretical and experimental expected numbers of cells occupied by agents in our minimal model as the moving speed varies. The utilization of the resources becomes lower at both lower and higher speeds. Our result is that resource utilization is highest around the speed s = 8.0, in both the theoretical calculations and the experimental results, i.e. both curves take their maximum values around s = 8. Note that the parameters α, β and γ are fixed to 12, 2 and 1, respectively.

[FIGURE 5.4 OMITTED]

The last theoretical calculation finds the speed s that maximizes the resource utilization v_m (5.4) as α varies; the results are plotted in Figure 5.5.

6 RELATED WORKS

Sen et al. [21] presented our basic model, and Rustogi et al. [20] proposed its extension with time delay and presented excellent results. Ishiduka et al. [10] introduced an explicit time lag for the propagation speed in addition to a window, and showed the relationships between stability and time lags. We note that the Sen and Rustogi models place the resources on circles, whereas the resources of the Ishiduka model lie on a straight line. A straight line of resources is more realistic and natural than a circle: what is the boundary effect, and what is the circular effect?

[FIGURE 5.5 OMITTED]

There are many discussions of the stability of multi-agents. Chli et al. [5] regard a multi-agent system, modeled as a discrete-time Markov chain, as stable when its state has converged to an equilibrium distribution. Bracciali et al. [3] present an abstract declarative semantics for multi-agent systems based on the idea of stable sets. Moreau [17] discusses a compelling model of networks of agents interacting via time-dependent communication links. Finke and Passino [7] discuss the behavior of a group of agents and their interactions in a shared environment. Lee et al. [12] consider kinematics-based and dynamics-based flocking models on graphs, where the behavior of the model is unstable, and propose a stable information framework for multi-agents. Mohanarajah and Hayakawa [16] discuss formation control of multi-agent dynamical systems when the number of communication channels is limited. Hirayama et al. [9] introduced the distributed Lagrangian protocol for finding solutions of distributed systems. These papers aim to control multi-agent systems into cooperative stable states; our model, in contrast, is a natural model that achieves coordination without control and without communication among agents.

From the viewpoint of multi-agent coordination, consensus and agreement, Lesser et al. [14] proposed a domain-independent generic agent architecture for the next generation of multi-agent systems. Shehory et al. [22] address the implementation of distributed coalition formation algorithms within a real-world multi-agent system. Jadbabaie et al. [13] showed that all agents eventually move with the same heading, provided that the agents are linked together periodically. Robertson et al. [19] describe a novel style of multi-agent system specification and deployment, in which familiar methods from computational logic are re-interpreted in a new context. Beard and Atkins [18] provide a survey of consensus problems in multi-agent cooperative control with the goal of promoting research in this area.

7 CONCLUSIONS

In this paper, we considered a stochastic moving multi-agent model, MATMS (Multi-Agent behavior with Time delay and Moving Speed), and showed that the model with an appropriate average moving speed is more stable than one with no average moving speed. This shows that each agent needs the moving acceleration to reach more stable states; the acceleration takes the form of a speed in this paper.

We also discussed the resource utilization of the multi-agents, and showed that the model with an appropriate average moving speed yields higher resource utilization than one with no average moving speed. This shows that each agent needs the moving acceleration to keep the usage of cells high.

In our model, we showed that for each inertia α there is an appropriate speed that achieves the most stable or the most utilizable configuration.

Since individual objects in nature are governed by minimum entropy production, objects that move randomly with interactions over resources naturally cause a flow. Each object may then independently take its own moving direction rather than being coordinated, i.e. there is no need to control the agents. We may be able to extract energy from multi-agents that move randomly in a closed region, thinking of them just like atoms; this kind of work has been done by Toyabe et al. [24] for a single agent.

A THE STATE TRANSITION RULES AND THEIR PROBABILITIES OF 3 x 3 MODEL

First, let us define the auxiliary function f(x) as

f(x) = 1 - 1/(1 + γ exp((x - α)/β)),

where the number of cells is 3, the number of agents is 3, and w = 1.

(a) The multi-agent moving configuration [(a, r, 1), (a, r, 1), (a, r, 1)] with the conditional probability [cp.sub.a] = 1 moves to one of the following states in accordance with the transition probabilities.

(1) [(a, 1), (a, 1), (a, 1)], 1/[(1 + f(3s)).sup.3]

(2) [(a, 1), (a, 1), (a, 2)], 3 f(3s)/[(1 + f(3s)).sup.3]

(4) [(a, 1), (a, 2), (a, 2)], 3 f[(3s).sup.2]/[(1 + f(3s)).sup.3]

(7) [(a, 2), (a, 2), (a, 2)], f[(3s).sup.3]/[(1 + f(3s)).sup.3]

(b-1) [(a, r, 1), (a, r, 1), (a, l, 2)], [cp.sub.b1] = 1/2

(2) [(a, 1), (a, 1), (a, 2)], [1/[(1 + f(s)).sup.2](1 + f(1))]

(3) [(a, 1), (a, 1), (a, 3)], [f(1)/[(1 + f(s)).sup.2](1 + f(1))]

(4) [(a, 1), (a, 2), (a, 2)], 2 [f(s)/[(1 + f(s)).sup.2](1 + f(1))]

(5) [(a, 1), (a, 2), (a, 3)], 2 [f(s)f(1)/[(1 + f(s)).sup.2](1 + f(1))]

(7) [(a, 2), (a, 2), (a, 2)], [f[(s).sup.2]/[(1 + f(s)).sup.2](1 + f(1))]

(8) [(a, 2), (a, 2), (a, 3)], [f[(s).sup.2]f(1)/[(1 + f(s)).sup.2](1 + f(1))]

(b-2) [(a, r, 1), (a, r, 1), (a, r, 2)], [cp.sub.b2] = 1/2

(2) [(a, 1), (a, 1), (a, 2)], 1/[(1 + f(s)).sup.3]

(3) [(a, 1), (a, 1), (a, 3)], f(s)/[(1 + f(s)).sup.3]

(4) [(a, 1), (a, 2), (a, 2)], 2 [f(s)/[(1 + f(s)).sup.3]]

(5) [(a, 1), (a, 2), (a, 3)], 2 [f[(s).sup.2]/[(1 + f(s)).sup.3]]

(7) [(a, 2), (a, 2), (a, 2)], [f[(s).sup.2]/[(1 + f(s)).sup.3]]

(8) [(a, 2), (a, 2), (a, 3)], [f[(s).sup.3]/[(1 + f(s)).sup.3]]

(c) [(a, r, 1), (a, r, 1), (a, l, 3)], [cp.sub.c] = 1

(2) [(a, 1), (a, 1), (a, 2)], [f(s)/[(1 + f(2s)).sup.2](1 + f(s))]

(3) [(a, 1), (a, 1), (a, 3)], [1/[(1 + f(2s)).sup.2](1 + f(s))]

(4) [(a, 1), (a, 2), (a, 2)], 2 [f(2s)f(s)/[(1 + f(2s)).sup.2](1 + f(s))]

(5) [(a, 1), (a, 2), (a, 3)], 2 [f(2s)/[(1 + f(2s)).sup.2](1 + f(s))]

(7) [(a, 2), (a, 2), (a, 2)], [f[(2s).sup.2]f(s)/[(1 + f(2s)).sup.2](1 + f(s))]

(8) [(a, 2), (a, 2), (a, 3)], [f[(2s).sup.2]/[(1 + f(2s)).sup.2](1 + f(s))]

(d-1) [(a, r, 1), (a, r, 2), (a, r, 2)], [cp.sub.d1] = 1/4

(1) [(a, 1), (a, 1), (a, 1)], f[(1).sup.2]/[(1 + f(2s) + f(1)).sup.2]

(2) [(a, 1), (a, 1), (a, 2)], 2 [f(1)/[(1 + f(2s) + f(1)).sup.2]]

(3) [(a, 1), (a, 1), (a, 3)], 2 [f(2s)f(1)/[(1 + f(2s) + f(1)).sup.2]]

(4) [(a, 1), (a, 2), (a, 2)], 1/[(1 + f(2s) + f(1)).sup.2]

(5) [(a, 1), (a, 2), (a, 3)], 2 [f(2s)/[(1 + f(2s) + f(1)).sup.2]]

(6) [(a, 1), (a, 3), (a, 3)], [f[(2s).sup.2]/[(1 + f(2s) + f(1)).sup.2]]

(d-2) [(a, r, 1), (a, l, 2), (a, l, 2)], [cp.sub.d2] = 1/4

(1) [(a, 1), (a, 1), (a, 1)], [f[(s).sup.2]/[(1 + f(s) + f(2)).sup.2]]]

(2) [(a, 1), (a, 1), (a, 2)], 2 [f(s)/[(1 + f(s) + f(2)).sup.2]]]

(3) [(a, 1), (a, 1), (a, 3)], 2 [f(s)f(2)/[(1 + f(s) + f(2)).sup.2]]]

(4) [(a, 1), (a, 2), (a, 2)], 1/[(1 + f(s) + f(2)).sup.2]]

(5) [(a, 1), (a, 2), (a, 3)], 2 [f(2)/[(1 + f(s) + f(2)).sup.2]]]

(6) [(a, 1), (a, 3), (a, 3)], [f[(2).sup.2]/[(1 + f(s) + f(2)).sup.2]]]

(d-3) [(a, r, 1), (a, l, 2), (a, r, 2)], [cp.sub.d3] = 1/2

(1) [(a, 1), (a, 1), (a, 1)], [f(s)f(1)/(1 + f(s) + f(2))(1 + f(1) + f(2s))]

(2) [(a, 1), (a, 1), (a, 2)], [f(s) + f(1)/(1 + f(s) + f(2))(1 + f(1) + f(2s))]

(3) [(a, 1), (a, 1), (a, 3)], [f(s)f(2s) + f(1)f(2)/(1 + f(s) + f(2))(1 + f(1) + f(2s))]

(4) [(a, 1), (a, 2), (a, 2)], [1/(1 + f(s) + f(2))(1 + f(1) + f(2s))]

(5) [(a, 1), (a, 2), (a, 3)], [f(2s)+f(2)/(1 + f(s) + f(2))(1 + f(1) + f(2s))]

(6) [(a, 1), (a, 3), (a, 3)], [f(2s)f(2)/(1 + f(s) + f(2))(1 + f(1) + f(2s))]

(e-1) [(a, r, 1), (a, l, 2), (a, l, 3)], [cp.sub.e1] = 1/2

(2) [(a, 1), (a, 1), (a, 2)], [f[(0).sup.2]/[(1 + f(0)).sup.2](1 + 2f(0))]

(3) [(a, 1), (a, 1), (a, 3)], [f(0)/[(1 + f(0)).sup.2](1 + 2f(0))]

(4) [(a, 1), (a, 2), (a, 2)], [f(0) + f[(0).sup.3]/[(1 + f(0)).sup.2](1 + 2f(0))]

(5) [(a, 1), (a, 2), (a, 3)], [1 + 2f[(0).sup.2]/[(1 + f(0)).sup.2](1 + 2f(0))]

(6) [(a, 1), (a, 3), (a, 3)], [f(0)/[(1 + f(0)).sup.2](1 + 2f(0))]

(7) [(a, 2), (a, 2), (a, 2)], [f[(0).sup.2]/[(1 + f(0)).sup.2](1 + 2f(0))]

(8) [(a, 2), (a, 2), (a, 3)], [f(0) + f[(0).sup.3]/[(1 + f(0)).sup.2](1 + 2f(0))]

(9) [(a, 2), (a, 3), (a, 3)], [f[(0).sup.2]/[(1 + f(0)).sup.2](1 + 2f(0))]

(e-2) [(a, r, 1), (a, r, 2), (a, l, 3)], [cp.sub.e2] = 1/2. This case is identical to (e-1).

(f) [(a, r, 1), (a, l, 3), (a, l, 3)], [cp.sub.f] = 1

(4) [(a, 1), (a, 2), (a, 2)], [f[(2s).sup.2]/(1 + f(s))[(1 + f(2s)).sup.2]]

(5) [(a, 1), (a, 2), (a, 3)], 2 [f(2s)/(1 + f(s))[(1 + f(2s)).sup.2]]

(6) [(a, 1), (a, 3), (a, 3)], [1/(1 + f(s))[(1 + f(2s)).sup.2]]

(7) [(a, 2), (a, 2), (a, 2)], [f(s)f[(2s).sup.2]/(1 + f(s))[(1 + f(2s)).sup.2]]

(8) [(a, 2), (a, 2), (a, 3)], 2 [f(s)f(2s)/(1 + f(s))[(1 + f(2s)).sup.2]]

(9) [(a, 2), (a, 3), (a, 3)], [f(s)/(1 + f(s))[(1 + f(2s)).sup.2]]

(g-1) [(a, l, 2), (a, l, 2), (a, l, 2)], [cp.sub.g1] = 1/4

(1) [(a, 1), (a, 1), (a, 1)], f[(3s).sup.3]/[(1 + f(3s) + f(3)).sup.3]

(2) [(a, 1), (a, 1), (a, 2)], 3 [f[(3s).sup.2]/[(1 + f(3s) + f(3)).sup.3]]

(3) [(a, 1), (a, 1), (a, 3)], 3 [f[(3s).sup.2]f(3)/[(1 + f(3s) + f(3)).sup.3]]

(4) [(a, 1), (a, 2), (a, 2)], 3 [f(3s)/[(1 + f(3s) + f(3)).sup.3]]

(5) [(a, 1), (a, 2), (a, 3)], 6 [f(3s)f(3)/[(1 + f(3s) + f(3)).sup.3]]

(6) [(a, 1), (a, 3), (a, 3)], 3 [f(3s)f[(3).sup.2]/[(1 + f(3s) + f(3)).sup.3]]

(7) [(a, 2), (a, 2), (a, 2)], 1/[(1 + f(3s) + f(3)).sup.3]

(8) [(a, 2), (a, 2), (a, 3)], 3 [f(3)/[(1 + f(3s) + f(3)).sup.3]]

(9) [(a, 2), (a, 3), (a, 3)], 3 [f[(3).sup.2]/[(1 + f(3s) + f(3)).sup.3]]

(10) [(a, 3), (a, 3), (a, 3)], [f[(3).sup.3]/[(1 + f(3s) + f(3)).sup.3]]

(g-2) [(a, l, 2), (a, r, 2), (a, r, 2)], [cp.sub.g2] = 1/4

(1) [(a, 1), (a, 1), (a, 1)], [f(3s)f[(3).sup.2]/[(1 + f(3s) + f(3)).sup.3]]

(2) [(a, 1), (a, 1), (a, 2)], [f[(3).sup.2] + 2f(3s)f(3)/[(1 + f(3s) + f(3)).sup.3]]

(3) [(a, 1), (a, 1), (a, 3)], [f[(3).sup.3] + 2f[(3s).sup.2]f(3)/[(1 + f(3s) + f(3)).sup.3]]

(4) [(a, 1), (a, 2), (a, 2)], [f(3s) + 2f(3)/[(1 + f(3s) + f(3)).sup.3]]

(5) [(a, 1), (a, 2), (a, 3)], [2f(3s)f(3) + 2f[(3).sup.2] + 2f[(3s).sup.2]/[(1 + f(3s) + f(3)).sup.3]]

(6) [(a, 1), (a, 3), (a, 3)], [f[(3s).sup.3] + 2f[(3).sup.2]f(3s)/[(1 + f(3s) + f(3)).sup.3]]

(7) [(a, 2), (a, 2), (a, 2)], [1/[(1 + f(3s) + f(3)).sup.3]]

(8) [(a, 2), (a, 2), (a, 3)], [f(3) + 2f(3s)/[(1 + f(3s) + f(3)).sup.3]]

(9) [(a, 2), (a, 3), (a, 3)], [f[(3s).sup.2] + 2f(3s)f(3)/[(1 + f(3s) + f(3)).sup.3]]

(10) [(a, 3), (a, 3), (a, 3)], [f(3)f[(3s).sup.2]/[(1 + f(3s) + f(3)).sup.3]]

(g-3) [(a, l, 2), (a, r, 2), (a, l, 2)], [cp.sub.g3] = 1/4

(1) [(a, 1), (a, 1), (a, 1)], [f[(3s).sup.2]f(3)/[(1 + f(3s) + f(3)).sup.3]]

(2) [(a, 1), (a, 1), (a, 2)], [f[(3).sup.2] + 2f(3s)f(3)/[(1 + f(3s) + f(3)).sup.3]]

(3) [(a, 1), (a, 1), (a, 3)], [f[(3).sup.3] + 2f[(3s).sup.2]f(3)/[(1 + f(3s) + f(3)).sup.3]]

(4) [(a, 1), (a, 2), (a, 2)], [f(3s) + 2f(3)/[(1 + f(3s) + f(3)).sup.3]]

(5) [(a, 1), (a, 2), (a, 3)], [2f(3s)f(3) + 2f[(3).sup.2] + 2f[(3s).sup.2]/[(1 + f(3s) + f(3)).sup.3]]

(6) [(a, 1), (a, 3), (a, 3)], [f[(3s).sup.3] + 2f[(3).sup.2]f(3s)/[(1 + f(3s) + f(3)).sup.3]]

(7) [(a, 2), (a, 2), (a, 2)], 1/[(1 + f(3s) + f(3)).sup.3]

(8) [(a, 2), (a, 2), (a, 3)], [f(3) + 2f(3s)/[(1 + f(3s) + f(3)).sup.3]]

(9) [(a, 2), (a, 3), (a, 3)], [f[(3s).sup.2] + 2f(3s)f(3)/[(1 + f(3s) + f(3)).sup.3]]

(10) [(a, 3), (a, 3), (a, 3)], [f(3s)f[(3).sup.2]/[(1 + f(3s) + f(3)).sup.3]]

(g-4) [(a, r, 2), (a, r, 2), (a, r, 2)], [cp.sub.g4] = 1/4

(1) [(a, 1), (a, 1), (a, 1)], [f[(3).sup.3]/[(1 + f(3s) + f(3)).sup.3]]

(2) [(a, 1), (a, 1), (a, 2)], 3 [f[(3).sup.2]/[(1 + f(3s) + f(3)).sup.3]]

(3) [(a, 1), (a, 1), (a, 3)], 3 [f(3s)f[(3).sup.2]/[(1 + f(3s) + f(3)).sup.3]]

(4) [(a, 1), (a, 2), (a, 2)], 3 [f(3)/[(1 + f(3s) + f(3)).sup.3]]

(5) [(a, 1), (a, 2), (a, 3)], 6 [f(3s)f(3)/[(1 + f(3s) + f(3)).sup.3]]

(6) [(a, 1), (a, 3), (a, 3)], 3 [f[(3s).sup.2]f(3)/[(1 + f(3s) + f(3)).sup.3]]

(7) [(a, 2), (a, 2), (a, 2)], 1/[(1 + f(3s) + f(3)).sup.3]

(8) [(a, 2), (a, 2), (a, 3)], 3 [f(3s)/[(1 + f(3s) + f(3)).sup.3]]

(9) [(a, 2), (a, 3), (a, 3)], 3 [f[(3s).sup.2]/[(1 + f(3s) + f(3)).sup.3]]

(10) [(a, 3), (a, 3), (a, 3)], [f[(3s).sup.3]/[(1 + f(3s) + f(3)).sup.3]]

(h-1) [(a, l, 2), (a, l, 2), (a, l, 3)], [cp.sub.hl] = 1/4

(3) [(a, 1), (a, 1), (a, 3)], [f[(2s).sup.2]/[(1 + f(2s) + f(1)).sup.2]]

(5) [(a, 1), (a, 2), (a, 3)], 2 [f(2s)/[(1 + f(2s) + f(1)).sup.2]]

(6) [(a, 1), (a, 3), (a, 3)], 2 [f(2s)f(1)/[(1 + f(2s) + f(1)).sup.2]]

(8) [(a, 2), (a, 2), (a, 3)], 1/[(1 + f(2s) + f(1)).sup.2]

(9) [(a, 2), (a, 3), (a, 3)], 2 [f(1)/[(1 + f(2s) + f(1)).sup.2]]

(10) [(a, 3), (a, 3), (a, 3)], [f[(1).sup.2]/[(1 + f(2s) + f(1)).sup.2]]

(h-2) [(a, l, 2), (a, r, 2), (a, l, 3)], [cp.sub.h2] = 1/2

(3) [(a, 1), (a, 1), (a, 3)], [f(2s)f(2)/(1 + f(2s) + f(1))(1 + f(s) + f(2))]

(5) [(a, 1), (a, 2), (a, 3)], [f(2s) + f(2)/(1 + f(2s) + f(1))(1 + f(s) + f(2))]

(6) [(a, 1), (a, 3), (a, 3)], [f(2s)f(s) + f(1)f(2)/(1 + f(2s) + f(1))(1 + f(s) + f(2))]

(8) [(a, 2), (a, 2), (a, 3)], [1/(1 + f(2s) + f(1))(1 + f(s) + f(2))]

(9) [(a, 2), (a, 3), (a, 3)], [f(1) + f(s)/(1 + f(2s) + f(1))(1 + f(s) + f(2))]

(10) [(a, 3), (a, 3), (a, 3)], [f(s)f(1)/ (1 + f(2s) + f(1))(1 + f(s) + f(2))]

(h-3) [(a, r, 2), (a, r, 2), (a, l, 3)], [cp.sub.h3] = 1/4

(3) [(a, 1), (a, 1), (a, 3)], [f[(2).sup.2]/[(1 + f(s) + f(2)).sup.2]]

(5) [(a, 1), (a, 2), (a, 3)], 2 [f(2)/[(1 + f(s) + f(2)).sup.2]]

(6) [(a, 1), (a, 3), (a, 3)], 2 [f(s)f(2)/[(1 + f(s) + f(2)).sup.2]]

(8) [(a, 2), (a, 3), (a, 3)], 1/[(1 + f(s) + f(2)).sup.2]

(9) [(a, 2), (a, 3), (a, 3)], 2 [f(s)/[(1 + f(s) + f(2)).sup.2]]

(10) [(a, 3), (a, 3), (a, 3)], [f[(s).sup.2]/[(1 + f(s) + f(2)).sup.2]]

(i-1) [(a, l, 2), (a, l, 3), (a, l, 3)], [cp.sub.il] = 1/2

(4) [(a, 1), (a, 2), (a, 2)], f[(s).sup.3]/[(1 + f(s)).sup.3]

(5) [(a, 1), (a, 2), (a, 3)], 2 [f[(s).sup.2]/[(1 + f(s)).sup.3]]

(6) [(a, 1), (a, 3), (a, 3)], [f(s)/[(1 + f(s)).sup.3]]

(7) [(a, 2), (a, 2), (a, 2)], [f[(s).sup.2]/[(1 + f(s)).sup.3]]

(8) [(a, 2), (a, 2), (a, 3)], 2 [f(s)/[(1 + f(s)).sup.3]]

(9) [(a, 2), (a, 3), (a, 3)], 1/[(1 + f(s)).sup.3]

(i-2) [(a, r, 2), (a, l, 3), (a, l, 3)], [cp.sub.i2] = 1/2

(4) [(a, 1), (a, 2), (a, 2)], [f[(s).sup.2]f(1)/[(1 + f(s)).sup.2](1 + f(1))]

(5) [(a, 1), (a, 2), (a, 3)], 2 [f(s)f(1)/[(1 + f(s)).sup.2](1 + f(1))]

(6) [(a, 1), (a, 3), (a, 3)], [f(1)/[(1 + f(s)).sup.2](1 + f(1))]

(7) [(a, 2), (a, 2), (a, 2)], [f[(s).sup.2]/[(1 + f(s)).sup.2](1 + f(1))]

(8) [(a, 2), (a, 2), (a, 3)], 2 [f(s)/[(1 + f(s)).sup.2](1 + f(1))]

(9) [(a, 2), (a, 3), (a, 3)], [1/[(1 + f(s)).sup.2](1 + f(1))]

(j) [(a, l, 3), (a, l, 3), (a, l, 3)], [cp.sub.j] = 1

(7) [(a, 2), (a, 2), (a, 2)], [f[(3s).sup.3]/[(1 + f(3s)).sup.3]]

(8) [(a, 2), (a, 2), (a, 3)], 3 [f[(3s).sup.2]/[(1 + f(3s)).sup.3]]

(9) [(a, 2), (a, 3), (a, 3)], 3 [f(3s)/[(1 + f(3s)).sup.3]]

(10) [(a, 3), (a, 3), (a, 3)], 1/[(1 + f(3s)).sup.3]
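Within each configuration block above, the listed state probabilities form a complete distribution, so the numerators must expand to the normalizing product in the denominator and the block must sum to 1. A minimal Python check for blocks (h-3), (i-1) and (j); the numeric values of f(s), f(2) and f(3s) below are arbitrary positive placeholders, not the model's actual transition weights:

```python
# Check that each block's state probabilities sum to 1.
# f-values are arbitrary positive placeholders, not the paper's weights.
fs, f2, f3s = 0.7, 1.3, 2.1

# Block (h-3): denominator (1 + f(s) + f(2))^2.
d = (1 + fs + f2) ** 2
h3 = [f2**2 / d, 2*f2 / d, 2*fs*f2 / d, 1 / d, 2*fs / d, fs**2 / d]

# Block (i-1): denominator (1 + f(s))^3.
d = (1 + fs) ** 3
i1 = [fs**3 / d, 2*fs**2 / d, fs / d, fs**2 / d, 2*fs / d, 1 / d]

# Block (j): numerators are the binomial expansion of (1 + f(3s))^3.
d = (1 + f3s) ** 3
j = [f3s**3 / d, 3*f3s**2 / d, 3*f3s / d, 1 / d]

for name, block in [("h-3", h3), ("i-1", i1), ("j", j)]:
    total = sum(block)
    print(name, total)
    assert abs(total - 1) < 1e-9
```

The same check goes through for the (g-*), (h-1), (h-2) and (i-2) blocks with their respective denominators.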

REFERENCES

[1] P. Bak, C. Tang and K. Wiesenfeld, Self-organized criticality: An explanation of the 1/f noise, Phys. Rev. Lett., 59, 381-384 (1987).

[2] A. Bracciali, P. Mancarella and K. Stathis, Stable Multi-agent Systems, LNAI 3451, 322-334 (2005).

[3] A. Bracciali, P. Mancarella, K. Stathis and F. Toni, Stable Multi-agent Systems, Lecture Notes in Artificial Intelligence 3451, 322-334, Springer (2004).

[4] S.G. Brush, History of the Lenz-Ising Model, Reviews of Modern Physics, 39, 4 (1967).

[5] M. Chli, P.D. Wilde, J. Goossenaerts, V. Abramov, N. Szirbik, L. Correia, P. Mariano and R. Ribeiro, Stability of Multi-Agent Systems, IEEE International Conference on Systems, Man and Cybernetics, 551-556 (2003).

[6] B. Efron, Bootstrap Methods: Another Look at the Jackknife, The Annals of Statistics, 7, 1, 1-26 (1979).

[7] J. Finke and K.M. Passino, Stable Cooperative Multiagent Spatial Distributions, IEEE Conference on Decision and Control, (2005).

[8] C.M. Grinstead and J. L. Snell, Introduction to Probability, American Mathematical Society (1997).

[9] K. Hirayama, T. Matsui and M. Yokoo, Adaptive Price Update in Distributed Lagrangian Relaxation Protocol, Autonomous Agents and Multiagent Systems (2009).

[10] Y. Ishiduka and K. Iwanuma, A Relationship between Time Delay and the Amount of Knowledge in Distributed Cooperation of Multi-Agent Systems (in Japanese), IEICE, D-I, J86-I, 2, 117-120 (2003).

[11] L.D. Landau and E.M. Lifshitz, Statistical Physics (3rd ed.), Pergamon Press (1985).

[12] D. Lee and M.W. Spong, Stable Flocking of Multiple Inertial Agents on Balanced Graphs, American Control Conference (2006).

[13] S. Li and H. Wang, Multi-agent coordination using nearest-neighbor rules: revisiting the Vicsek model, arXiv:cs/0407021 (2004).

[14] V.R. Lesser, Reflections on the Nature of Multi-Agent Coordination and Its Implications for an Agent Architecture, Autonomous Agents and Multi-Agent Systems, 1, 89-111, Kluwer Academic Publishers (1998).

[15] J.S. Maritz and R.G. Jarrett, A Note on Estimating the Variance of the Sample Median, Journal of the American Statistical Association, 73, 361 (1978).

[16] G. Mohanarajah and T. Hayakawa, Formation Stability of Multi-Agent Systems with Limited Information, American Control Conference (2008).

[17] L. Moreau, Stability of Multiagent Systems with Time-Dependent Communication Links, IEEE Transactions on Automatic Control, 50, 2 (2005).

[18] W. Ren, R.W. Beard and E.M. Atkins, A Survey of Consensus Problems in Multiagent Coordination, American Control Conference (2005).

[19] D. Robertson, Multi-agent Coordination as Distributed Logic Programming, ICLP, LNCS 3132, 416-430 (2004).

[20] S.K. Rustogi and M.P. Singh, Be Patient and Tolerate Imprecision: How Autonomous Agents can Coordinate Effectively, 16th IJCAI (1999).

[21] S. Sen, S. Roychowdhury and N. Arora, Effects of local information on group behavior, Proceedings of the Second International Conference on Multiagent Systems, AAAI (1996).

[22] O.M. Shehory, K. Sycara and S. Jha, Multi-Agent Coordination through Coalition Formation, Lecture Notes in Artificial Intelligence 1365, 143-154. Springer (1997).

[23] I. Shioya and T. Miura, An Accelerated Coordination of Stochastic Multi-Agents by Moving Speed, The Second International Conference on Digital Information Processing and Communications, IEEE, July 10-12, 155-160 (2012).

[24] S. Toyabe, T. Sagawa, M. Ueda, E. Muneyuki and M. Sano, Experimental demonstration of information-to-energy conversion and validation of the generalized Jarzynski equality, Nature Physics, 6, 988-992 (2010).

Isamu Shioya

Hosei University, Kajino-cho 3-7-2, Koganei-shi, Tokyo 184-8584

shioyai@hosei.ac.jp

Takao Miura

Hosei University, Kajino-cho 3-7-2, Koganei-shi, Tokyo 184-8584

miurat@hosei.ac.jp

Table 4.4: The probabilities of staying in each state
for cells = 3, agents = 3 and w = 1, based on
theoretical computation.

state   s = 1         s = 3

(1)     0.00183795    0.002380631
(2)     0.088910183   0.056893048
(3)     0.097921592   0.069179073
(4)     0.108320591   0.106450769
(5)     0.400416751   0.517587349
(6)     0.097921592   0.069179073
(7)     0.005602617   0.012605608
(8)     0.108320591   0.106450769
(9)     0.088910183   0.056893048
(10)    0.00183795    0.002380631

Table 4.5: The means and variances of the numbers
of agents on each cell, based on theoretical
computation: cells = 3, agents = 3, w = 1.

speed                    s = 1       s = 3

cell 1, mean m_1         0.9858363   0.9525033
cell 1, variance v_1     0.3986543   0.3116688
cell 2, mean m_2         1.028327    1.094993
cell 2, variance v_2     0.2267854   0.225818
cell 3, mean m_3         0.9858363   0.9525033
cell 3, variance v_3     0.3986543   0.3116688
resource, mean           1           1
resource, variance v_a   0.4120937   0.3447643
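
The per-cell means in Table 4.5 can be recomputed directly from the state probabilities in Table 4.4 once each state is mapped to its occupancy vector (state (2) = [(a, 1), (a, 1), (a, 2)] puts two agents on cell 1 and one on cell 2, and so on). A short Python consistency sketch; the occupancy mapping is read off the state list, and the variances are not recomputed here:

```python
# State probabilities from Table 4.4 (theory, cells = 3, agents = 3, w = 1).
p = {1: [0.00183795, 0.088910183, 0.097921592, 0.108320591, 0.400416751,
         0.097921592, 0.005602617, 0.108320591, 0.088910183, 0.00183795],
     3: [0.002380631, 0.056893048, 0.069179073, 0.106450769, 0.517587349,
         0.069179073, 0.012605608, 0.106450769, 0.056893048, 0.002380631]}

# Agents on (cell 1, cell 2, cell 3) in states (1)..(10).
occ = [(3, 0, 0), (2, 1, 0), (2, 0, 1), (1, 2, 0), (1, 1, 1),
       (1, 0, 2), (0, 3, 0), (0, 2, 1), (0, 1, 2), (0, 0, 3)]

means = {}
for s, probs in p.items():
    assert abs(sum(probs) - 1) < 1e-8   # each column is normalized
    means[s] = [sum(pr * o[c] for pr, o in zip(probs, occ)) for c in range(3)]
    print(f"s = {s}:", [round(m, 7) for m in means[s]])
```

This reproduces m_1 = 0.9858363 and m_2 = 1.028327 for s = 1, and m_1 = 0.9525033 and m_2 = 1.094993 for s = 3, matching Table 4.5.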

Table 4.6: The means and variances of the numbers
of agents on each cell by experiments:
cells = 3, agents = 3 and w = 1.

speed                s = 1      s = 3

cell 1, mean         0.962867   1.00076
cell 1, variance     0.489426   0.460477
cell 2, mean         1.08247    1.10298
cell 2, variance     0.55276    0.568136
cell 3, mean         0.954667   0.896267
cell 3, variance     0.478662   0.441494
resource, mean       1          1
resource, variance   0.510227   0.49772

Table 5.1: The probabilities of staying in each state
for cells = 3, agents = 3 and w = 1, based on
theoretical computation.

state    s = 1         s = 8

(1)      0.00183795    0.00003929428
(2)      0.088910183   0.00335992308
(3)      0.097921592   0.00428725627
(4)      0.108320591   0.00829115987
(5)      0.400416751   0.96771624388
(6)      0.097921592   0.00428725627
(7)      0.005602617   0.0003284891
(8)      0.108320591   0.00829115987
(9)      0.088910183   0.00335992308
(10)     0.00183795    0.00003929428

Table 5.2: The expected average number of
occupied cells by theoretical computation:
cells = 3, agents = 3 and w = 1.

speed                  s = 1       s = 8

cell 1, mean m_1       0.8240108   0.9879811
cell 2, mean m_2       0.824081    0.9913469
cell 3, mean m_3       0.8240108   0.9879811
resource, average vm   2.472103    2.967309
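
Two quick arithmetic checks on Tables 5.1 and 5.2: each probability column in Table 5.1 is normalized, and the resource average (row "resource, average vm" in Table 5.2) is the sum of the three per-cell occupancy means. A Python sketch:

```python
# Table 5.1 state probabilities for s = 1 and s = 8.
p1 = [0.00183795, 0.088910183, 0.097921592, 0.108320591, 0.400416751,
      0.097921592, 0.005602617, 0.108320591, 0.088910183, 0.00183795]
p8 = [0.00003929428, 0.00335992308, 0.00428725627, 0.00829115987,
      0.96771624388, 0.00428725627, 0.0003284891, 0.00829115987,
      0.00335992308, 0.00003929428]
for col in (p1, p8):
    assert abs(sum(col) - 1) < 1e-8   # each column is a distribution

# Table 5.2: the resource average is the sum of the per-cell means.
avg1 = 0.8240108 + 0.824081 + 0.8240108    # s = 1 -> 2.472103
avg8 = 0.9879811 + 0.9913469 + 0.9879811   # s = 8 -> 2.967309
print(round(avg1, 6), round(avg8, 6))
```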

Table 5.3: The expected average number of
occupied cells by experiments:
cells = 3, agents = 3 and w = 1.

speed            s = 1     s = 8

cell 1, mean     0.77805   0.9083
cell 2, mean     0.7782    0.8809
cell 3, mean     0.77705   0.8219
resource, mean   2.3328    2.61035
COPYRIGHT 2012 The Society of Digital Information and Wireless Communications

 
Author: Shioya, Isamu; Miura, Takao
Publication: International Journal of Digital Information and Wireless Communications
Article Type: Report
Date: Jan 1, 2012
Words: 7896