
An algorithm for wavelet data compression in wireless sensor networks.

1. Data Compression Algorithm for Wireless Sensor Networks

Owing to the characteristics of wireless sensor networks, nodes consume most of their energy during information transfer. Introducing learning automata into wireless sensor network data fusion allows the network to dynamically select the optimal fusion path for the data it carries. Multi-mode data compression is then applied to the fused data in the network. Together these measures raise the data fusion and data compression rates and extend the lifetime of the wireless sensor network.

Assume that N sensor nodes S_1, S_2, ..., S_N are scattered randomly in an L × L rectangular area A and are used to monitor a particular phenomenon ξ, such as temperature or humidity. The location of node S_i is denoted (x_i, y_i). Each node measures ξ and uses multi-hop routing to report the recorded value of ξ to the base station. Given a small positive number ε > 0, if |ξ_i − ξ_j| < ε holds, the two measurements are almost equal and ξ_i and ξ_j can be aggregated.

Suppose that in a dense sensor network the measured value of ξ is essentially the same within a specific region Ω ⊂ A, so that node S_i can aggregate ξ̂_i and ξ̂_j for all S_j ∈ Ω. According to the behaviour of ξ, the area A is divided into M regions Ω_1, Ω_2, ..., Ω_M that satisfy:

(1) |ξ_i − ξ_j| < ε for all S_i, S_j ∈ Ω_k; k = 1, 2, ..., M

(2) Ω_k ≠ ∅ for k = 1, 2, ..., M

(3) Ω_i ∩ Ω_j = ∅ for i ≠ j

(4) Ω_1 ∪ Ω_2 ∪ ... ∪ Ω_M = A

Assume that each region Ω_k can be approximated by a circular area with centre (X_k, Y_k) and radius R_k. Assume also that ξ changes over time and differs from place to place in the network, so Ω_1, Ω_2, ..., Ω_M are not static regions: the location and radius of each region change continuously over the lifetime of the wireless sensor network.

The purpose of the wireless sensor network is to collect periodically, for each region Ω_k, the location of its nodes, the radius of the region and the aggregated measurement ξ̂, and to deliver all of this information periodically to the base station of the network.

After the route discovery phase, each node forwards its data toward the base station once in every time interval t.

During the routing phase, each node performs data fusion and uses a learning automaton to select the best route for transmitting its data packets to the base station; that is, each node learns which neighbour is the best next hop for forwarding information to the base station.

The learning automaton LA_i associated with node S_i offers |RL_i| actions, where RL_i is the routing list of candidate neighbours of S_i; the probability of choosing each action is initialised to 1/|RL_i|. Each action of LA_i corresponds to one of the |RL_i| neighbours that can be used to send information toward the base station. When node S_i is active and holds measured information, it signals its automaton to request that one of its neighbour nodes be chosen for forwarding the measured information toward the base station. The data to be aggregated are routed along the paths learned by the automata. After the automaton makes a choice, the environment returns a reward or penalty signal, and this feedback, given by the neighbour node, changes the probability that the current routing path is selected again.

A node S_i receives two different kinds of packets from its neighbours: data packets and knowledge packets. A data packet stores the measured values, such as temperature, together with location information. A knowledge packet is the reaction issued in answer to a data packet sent by S_i. A data packet carries the following information.

K_i represents the number of packets aggregated into this packet.

ξ̂_i represents the aggregated data, calculated with (1).

ξ̂_i = (Σ_{j=1}^{n} K_j · ξ̂_j) / (Σ_{j=1}^{n} K_j) (1)

where n is the number of packets received by node S_i. The aggregated location information (x_{i,K_i}, y_{i,K_i}) is calculated with (2).

(x_{i,K_i}, y_{i,K_i}) = ( (Σ_{j=1}^{n} K_j · x_{j,K_j}) / (Σ_{j=1}^{n} K_j), (Σ_{j=1}^{n} K_j · y_{j,K_j}) / (Σ_{j=1}^{n} K_j) ) (2)

A received packet is stored temporarily and is processed during the next activity of node S_i.
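To make the packet structure and this count-weighted aggregation concrete, the following C sketch shows one possible representation. It is only an illustration of (1) and (2); the struct fields and the function name aggregate() are assumptions for the example, not definitions taken from the paper.

#include <stddef.h>

/* Hypothetical layout of a data packet: K raw measurements have already been
   aggregated into the value xi and into the location (x, y). */
typedef struct {
    int    K;   /* number of packets aggregated into this packet       */
    double xi;  /* aggregated measurement (temperature, humidity, ...) */
    double x;   /* aggregated x coordinate                             */
    double y;   /* aggregated y coordinate                             */
} DataPacket;

/* Count-weighted aggregation of n received packets in the spirit of (1) and
   (2): each packet contributes in proportion to the number of raw
   measurements it already contains. */
DataPacket aggregate(const DataPacket *pkt, size_t n)
{
    DataPacket out = {0, 0.0, 0.0, 0.0};
    double sum_xi = 0.0, sum_x = 0.0, sum_y = 0.0;
    for (size_t i = 0; i < n; i++) {
        out.K  += pkt[i].K;
        sum_xi += pkt[i].K * pkt[i].xi;
        sum_x  += pkt[i].K * pkt[i].x;
        sum_y  += pkt[i].K * pkt[i].y;
    }
    if (out.K > 0) {
        out.xi = sum_xi / out.K;
        out.x  = sum_x  / out.K;
        out.y  = sum_y  / out.K;
    }
    return out;
}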

The n actions of node S_i are as follows.

S_i sets K_i = 1.

S_i senses the surrounding environment, obtains the measurement ξ̂_i, and initialises its aggregated location to its own coordinates, (x_{i,K_i}, y_{i,K_i}) = (x_i, y_i).

If no new packet is received during the next n − 1 actions, S_i creates a data packet whose contents are (K_i, ξ̂_i, x_{i,K_i}, y_{i,K_i}). The condition for fusing two measurements is

|ξ̂_i − ξ̂_j| < ε (3)

If a packet was received during the n − 1 actions, then each received packet that satisfies (3) is handled as follows; here ε is a threshold giving the maximum difference between measured values for which data fusion may still be executed.

(1) S_i fuses the data using (4):

ξ̂_i = (K_i · ξ̂_i + K_j · ξ̂_j) / (K_i + K_j) (4)

(2) S_i fuses the location information using (5):

x_{i,K_i} = (K_i · x_{i,K_i} + K_j · x_{j,K_j}) / (K_i + K_j),  y_{i,K_i} = (K_i · y_{i,K_i} + K_j · y_{j,K_j}) / (K_i + K_j) (5)

(3) S_i sets K_i = K_i + K_j.

(4) S_i calculates the data aggregation ratio DAR using (6):

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (6)

(5) The environment's feedback is returned as a knowledge packet to the node S_j that sent the packet. S_i then creates a new data packet whose contents are (K_i, ξ̂_i, x_{i,K_i}, y_{i,K_i}). The learning automaton LA_i selects the action corresponding to a neighbour node, which determines to which neighbour the newly aggregated data should be sent; the newly created packet, together with any received packet that does not satisfy the fusion condition (3), is passed to the chosen neighbour S_k.

When node S_i receives the knowledge packet from S_k, LA_i rewards or punishes the corresponding action. If DAR_{k,i} is greater than the acceptable efficiency, action k is rewarded according to (7):

p_k(n + 1) = p_k(n) + α · DAR_{k,i} · (1 − p_k(n))

p_l(n + 1) = p_l(n) − α · DAR_{k,i} · p_l(n),  for all l ≠ k (7)

Otherwise, the action is punished according to (8):

p_k(n + 1) = (1 − β · (1 − DAR_{k,i})) · p_k(n)

p_l(n + 1) = β · (1 − DAR_{k,i}) / (r − 1) + (1 − β · (1 − DAR_{k,i})) · p_l(n),  for all l ≠ k (8)

where r = |RL_i| is the number of actions of LA_i. If node S_i does not receive any knowledge packet from node S_k, LA_i punishes the selected action with (8), in which DAR_{k,i} defaults to 0. Node S_i must then retransmit the measured information and excludes S_k from RL_i. If no other neighbour nodes remain in RL_i, node S_i issues a route request to all of its neighbours; if a neighbour S_j of S_i receives the request and node S_i is not in S_j's list RL_j, S_j replies with route reply information. Node S_i counts the route replies it receives and adds the replying nodes to its own RL_i list. If node S_i receives no route reply at all, the node has become a "dead node" and, for the entire wireless sensor network, the network lifetime is over.
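The updates (7) and (8) are the standard linear reward-penalty rules of a learning automaton. The C sketch below is a minimal illustration under the assumption that the action probabilities are kept in an array p[0..r-1] that sums to 1; the function names la_reward() and la_penalize() are illustrative, not taken from the paper.

/* Reward the selected action k with feedback dar = DAR_{k,i}, following (7). */
static void la_reward(double *p, int r, int k, double dar, double alpha)
{
    for (int l = 0; l < r; l++) {
        if (l == k)
            p[l] += alpha * dar * (1.0 - p[l]);   /* chosen action grows  */
        else
            p[l] -= alpha * dar * p[l];           /* other actions shrink */
    }
}

/* Punish the selected action k, following (8); dar = 0 when no knowledge
   packet was received at all. */
static void la_penalize(double *p, int r, int k, double dar, double beta)
{
    double keep = 1.0 - beta * (1.0 - dar);       /* common shrink factor */
    for (int l = 0; l < r; l++) {
        if (l == k)
            p[l] = keep * p[l];
        else
            p[l] = beta * (1.0 - dar) / (r - 1) + keep * p[l];
    }
}

Both updates keep the probabilities non-negative and summing to 1 for 0 < α, β ≤ 1.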

When selecting a neighbour node, the secondary route selection algorithm of TLA-DA also considers how the choice of routing path affects data fusion between the two nodes. As shown in Figure 1, with the basic routing algorithm node S_1 would randomly select one of the nodes S_2, S_3, S_4 in the same region to forward its packet toward the base station of the sensor network. The secondary route selection algorithm instead selects node S_4, because the next neighbour of S_4, node S_5, lies exactly in the same region as node S_1, which allows better data fusion.

The secondary routing algorithm works as follows. Whenever a node S_i receives a packet from its neighbour S_j during the n-th data fusion step, it calculates the value of DAR_{i,j} with (6) and then calculates E-DAR_{i,j} with (9); the computed E-DAR_{i,j} is returned to node S_j as the feedback of node S_i:

E-DAR_{i,j} = max(DAR_{i,j}, MRR_i(n)) (9)

where MRR_i(n) is the greatest reward that node S_i has received; MRR_i is defined in equation (10):

MRR_i(n) = the maximum of all feedback values received by node S_i (10)

where the maximum is taken over all the feedback that node S_i has received from the wireless sensor network from the beginning of its operation up to the current time n.

Through the secondary route selection algorithm, the neighbour with the greater value is preferred. In the network shown in Figure 1, the MRR values of nodes S_2, S_3 and S_4 are obtained by calculation; DAR_{i,j} is the same for the three nodes, but the value of S_2 is greater than that of the other two, so when node S_1 chooses a neighbour node, the probability that node S_2 is selected is greater than that of node S_3 or node S_4.

When a node cannot find any neighbour in the same region for data fusion, as with node S_5 in Figure 1, it selects from the candidates (nodes S_8 and S_9 in the figure) the node with the maximum residual energy as its data fusion neighbour. In the secondary routing algorithm, a node on the boundary of a region that must transfer data out of the region therefore chooses the node with the most remaining energy for data transmission. In other words, for data fusion within the same region Ω_k the first consideration is the level of the data fusion ratio, whereas for data fusion between different regions the first consideration is to fuse data with the node of highest residual energy.

In the secondary routing algorithm, when node S_i receives a packet from its neighbour S_j, the learning automaton calculates the value of E-DAR_{i,j} at the same time as it produces the feedback. If this aggregation level is higher than the acceptable data fusion level, node S_i returns an acceptance (reward) signal to the learning automaton LA_j. If the adjacent node lies in a different region, then when S_i gives feedback to LA_j it must also take the residual energy of the node into account, and the feedback signal is decided according to how much energy the node has left: if the current residual energy ratio NRE_i is higher than the acceptable ratio, a reward signal is returned to LA_j; otherwise a punishment signal is returned to LA_j.

NRE_i = EL_i / MaxEnergyLevel (11)

where EL_i is the current energy of the sensor node and MaxEnergyLevel is the energy that every sensor network node has at the outset.
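As a hedged illustration of how (9) and (11) combine in this feedback decision, the following C fragment sketches one possible rule; the function name, the threshold parameters and the same_region flag are assumptions made for the example, not definitions from the paper.

#include <stdbool.h>

/* Returns true for a reward signal and false for a punishment signal.
   dar is DAR_{i,j}, mrr is MRR_i(n), el and max_el are the node's current
   and initial energy levels. */
static bool feedback_is_reward(double dar, double mrr, bool same_region,
                               double el, double max_el,
                               double acceptable_dar, double acceptable_nre)
{
    double e_dar = (dar > mrr) ? dar : mrr;   /* E-DAR_{i,j}, equation (9)    */
    if (same_region)
        return e_dar > acceptable_dar;        /* reward good in-region fusion */
    double nre = el / max_el;                 /* NRE_i, equation (11)         */
    return nre > acceptable_nre;              /* across regions: favour nodes
                                                 with high residual energy    */
}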

2. Wavelet multi-mode compression algorithm

In practical applications, each wireless sensor network node usually needs to collect several different types of information. Data of different types often vary in different ways, and these differences show up in the collected data. To make the description easier, some concepts used in the description are defined first.

The wireless sensor network is divided into domains by the learning automata. When a node in a cluster receives P data packets from its neighbours, and each packet contains the measured values of Q different types of measurement information, the node collects P × Q values. The received data are divided into columns, each column containing the signal values of one and the same measurement type; the columns are denoted X_1, ..., X_P. The P columns of data are first processed with the following algorithm.

  Input: P observation sequences of different types {X_1,
X_2, ..., X_P}; the correlation threshold ε
  Output: groups of data sequences whose mutual
correlation is greater than ε
for i from 1 to P
    flag[i] = 0
for i from 1 to P
    if flag[i] == 0
        for j from i + 1 to P
            compute corr(X_i, X_j)
            if corr(X_i, X_j) > ε
                // the correlation exceeds the threshold, so X_j is
                // inserted into the group whose characteristic
                // sequence is X_i
                flag[j] = 1
                insert(j, array[i])


The result of this processing is a set of arrays. The first item of each array is the characteristic sequence of that group, and the other items are the sequences correlated with it. Each array in the set is then processed further.

The sequences within each processed group are strongly correlated, which means there is an approximately linear relationship among them, so a group can be represented by a straight line. Let this relationship be expressed as y = ax + b. The slope and intercept of the straight line are

a = L_xy / L_xx,  b = ȳ − a · x̄

where

x̄ = (1/n) Σ_{i=1}^{n} x_i,  ȳ = (1/n) Σ_{i=1}^{n} y_i,  L_xx = Σ_{i=1}^{n} (x_i − x̄)²,  L_xy = Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ).
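The correlation corr(X_i, X_j) used in the grouping step and the fitted line y = ax + b can be computed from the same sums. The C sketch below is one possible implementation; the function name corr_and_fit() and the choice to return the Pearson correlation are assumptions for illustration.

#include <math.h>
#include <stddef.h>

/* Computes the Pearson correlation of x[] and y[] and, from the same sums,
   the least-squares line y = a*x + b with a = Lxy/Lxx and b = ybar - a*xbar. */
static double corr_and_fit(const double *x, const double *y, size_t n,
                           double *a, double *b)
{
    double xbar = 0.0, ybar = 0.0;
    for (size_t i = 0; i < n; i++) { xbar += x[i]; ybar += y[i]; }
    xbar /= (double)n;
    ybar /= (double)n;

    double lxx = 0.0, lyy = 0.0, lxy = 0.0;
    for (size_t i = 0; i < n; i++) {
        lxx += (x[i] - xbar) * (x[i] - xbar);
        lyy += (y[i] - ybar) * (y[i] - ybar);
        lxy += (x[i] - xbar) * (y[i] - ybar);
    }
    *a = lxy / lxx;                 /* slope:     a = Lxy / Lxx     */
    *b = ybar - (*a) * xbar;        /* intercept: b = ybar - a*xbar */
    return lxy / sqrt(lxx * lyy);   /* Pearson correlation          */
}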

In practical applications, sensor networks usually measure the same kind of data repeatedly, so the data have temporal and spatial correlation, and such data are usually processed with a wavelet transform. Commonly used wavelets include Haar, Daubechies, Coiflet, Symlet, Meyer, Morlet and the Mexican Hat; among them, the Haar wavelet is the simplest wavelet transform. For a signal of length 2^n, S^n = {s_{n,l} | 0 ≤ l < 2^n}, averaging and differencing are applied to the pairs a = s_{n,2l}, b = s_{n,2l+1} (l = 0, 1, ..., 2^{n−1} − 1):

s_{n−1,l} = (s_{n,2l} + s_{n,2l+1}) / 2

d_{n−1,l} = s_{n,2l} − s_{n,2l+1}

The decomposition process of the multi-level Haar wavelet transform and the corresponding reconstruction process are shown in Figure 2 and Figure 3.
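One level of this Haar transform pairs neighbouring samples, stores their average in an approximation array and their difference in a detail array; the inverse recovers the pair exactly. The C sketch below is a minimal illustration with assumed function names.

/* One Haar decomposition level: for each pair (a, b) = (s[2l], s[2l+1]),
   store the average (a + b) / 2 in approx[l] and the detail a - b in
   detail[l]. len must be even. */
static void haar_decompose(const double *s, double *approx, double *detail,
                           int len)
{
    for (int l = 0; l < len / 2; l++) {
        double a = s[2 * l], b = s[2 * l + 1];
        approx[l] = (a + b) / 2.0;
        detail[l] = a - b;
    }
}

/* Exact inverse of one level: a = approx + detail/2, b = approx - detail/2. */
static void haar_reconstruct(const double *approx, const double *detail,
                             double *s, int len)
{
    for (int l = 0; l < len / 2; l++) {
        s[2 * l]     = approx[l] + detail[l] / 2.0;
        s[2 * l + 1] = approx[l] - detail[l] / 2.0;
    }
}

Applying haar_decompose repeatedly to the approximation half yields the multi-level decomposition of Figures 2 and 4.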

The space-frequency structure of a 3-level Haar wavelet decomposition is shown in Figure 4.

When the data collected by the wireless sensor network are transformed with the fast Haar wavelet transform described above, most of the coefficients in the high-frequency part are approximately 0, and most of the energy is concentrated in the low-frequency coefficients. As shown in Figure 4, after the wavelet transform the initial signal is decomposed into three high-frequency parts h0, h1, h2 and a low-frequency part L2. Since most of the high-frequency coefficients of such data are 0, they are well suited to run-length encoding (RLE). RLE is simple and easy to implement; its compression effect depends on the average run length and on how often characters recur in the data to be compressed.
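Because long runs of identical (mostly zero) high-frequency coefficients are expected, RLE can store each run as a (value, count) pair. A small C sketch, assuming the coefficients have already been quantised to integers and that the output buffers are large enough:

/* Run-length encodes in[0..n-1] into (value, count) pairs written to
   out_val[] and out_cnt[]; returns the number of runs. In the worst case
   (no repeats) n pairs are produced, so the output buffers need n entries. */
static int rle_encode(const int *in, int n, int *out_val, int *out_cnt)
{
    int runs = 0;
    for (int i = 0; i < n; ) {
        int v = in[i], c = 1;
        while (i + c < n && in[i + c] == v)
            c++;                 /* extend the current run */
        out_val[runs] = v;
        out_cnt[runs] = c;
        runs++;
        i += c;
    }
    return runs;
}

A high-frequency block such as {0, 0, 0, 0, 7, 0, 0} then costs three pairs instead of seven symbols.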

Huffman compression is based on Huffman coding, proposed in the 1950s, which is a kind of variable-length coding (VLC). Because of its high coding efficiency, fast computation and flexibility, Huffman coding is widely used in applications such as JPEG.

Huffman coding encodes the source according to the probability with which each symbol appears: the higher the probability of a symbol, the shorter the code word designed for it; conversely, the smaller the probability of a symbol, the longer its corresponding code word. This minimises the average code length. Theoretical studies have shown that Huffman coding comes close to the entropy of the source.

Among binary trees with n nodes, the full or complete binary tree has the minimum path length. The weighted path length (WPL) of a binary tree is defined as:

WPL = Σ_{i=1}^{n} W_i · L_i

where n is the number of terminal (leaf) nodes of the binary tree, W_i is the weight of the i-th terminal node, and L_i is the length of the path from the root to the i-th terminal node. Suppose there are n weights {W_1, W_2, ..., W_n} and a binary tree with n terminal nodes is constructed in which the i-th terminal node has weight W_i. Many such binary trees can be constructed, and among them there must be one whose weighted path length is smallest; this binary tree is called the Huffman tree, also known as the optimal tree. In a Huffman tree, the larger the weight of a node, the closer it is to the root.

The pseudo code for constructing the Huffman tree is as
follows:
void CreateHuffmanTree (HuffmanTree T)
{ // construct the Huffman tree; T[m-1] is its root (m = 2n - 1)
    int i, p1, p2;
    InitHuffmanTree (T);  // initialise T
    InputWeight (T);      // read the leaf weights into the weight
                          // fields of T[0..n-1]
    for (i = n; i < m; i++)
    { // n - 1 merges; each new node is stored in T[i]
        SelectMin (T, i - 1, &p1, &p2);
        // select the two roots with the smallest weights in
        // T[0..i-1]; their indices are returned in p1 and p2
        T[p1].parent = i;  T[p2].parent = i;
        T[i].lchild = p1;  // the root with the smaller weight becomes
                           // the left child of the new node
        T[i].rchild = p2;  // the other root becomes the right child
        T[i].weight = T[p1].weight + T[p2].weight;
    } // end for
}


The three binary trees shown in Figure 5 have the same four leaf nodes with weights 5, 4, 1 and 2. Their weighted path lengths are, respectively:

WPL_a = 5 × 2 + 4 × 2 + 1 × 2 + 2 × 2 = 24
WPL_b = 5 × 3 + 4 × 3 + 1 × 1 + 2 × 2 = 32
WPL_c = 5 × 1 + 4 × 2 + 1 × 3 + 2 × 3 = 22

Tree (c) has the smallest weighted path length, so tree (c) is the Huffman tree; this also shows that the tree with the minimum weighted path length is not necessarily a complete binary tree.

For Huffman coding, once the Huffman tree has been created, it is agreed that the left branch denotes the character "0" and the right branch the character "1". The string of branch labels on the path from the root node to a leaf node is then used as the code of the character at that leaf. For the four nodes a, b, c, d of Figure 5 the Huffman codes are: a is encoded as 0, b as 10, c as 110 and d as 111. The encoding process is shown in Figure 6.
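Given an array-based Huffman tree like the one built by CreateHuffmanTree above, the code of each leaf can be read by walking from the leaf up to the root, emitting '0' for a left branch and '1' for a right branch, and then reversing the bits. The C sketch below assumes a node layout (fields weight, parent, lchild, rchild, with parent == -1 at the root and leaves stored in T[0..n-1]) that matches the pseudo code; the type and function names are illustrative.

/* Assumed node layout matching the pseudo code above. */
typedef struct {
    int weight;
    int parent;            /* -1 for the root              */
    int lchild, rchild;    /* child indices, -1 for a leaf */
} HuffmanNode;

/* Writes the Huffman code of leaf `leaf` into code[] (as a '0'/'1' string)
   and returns its length. */
static int huffman_code_of(const HuffmanNode *T, int leaf, char *code)
{
    char rev[64];          /* enough for any realistic code length */
    int  len = 0;
    for (int c = leaf, p = T[c].parent; p != -1; c = p, p = T[c].parent)
        rev[len++] = (T[p].lchild == c) ? '0' : '1';
    for (int i = 0; i < len; i++)
        code[i] = rev[len - 1 - i];   /* reverse into root-to-leaf order */
    code[len] = '\0';
    return len;
}

For the tree of Figure 5 this walk yields exactly the codes a = 0, b = 10, c = 110 and d = 111.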

If the source file is "aaabbbaaaccddabdd", it needs 18 bytes of storage; encoded with the Huffman codes above, the compressed binary string is 000101010000110110111111010111111, which needs only 4 bytes + 1 bit of storage, so Huffman coding saves nearly 14 bytes of storage for this source file.

For data compression in wireless sensor networks, optimising the algorithm and its computation time has a very large impact on the energy consumed during data transmission, while the reconstruction quality obtained when the compressed data are decompressed requires that the energy loss in the wavelet coefficients be kept to a minimum. Therefore, an "adaptive" threshold is assigned to the wavelet transform coefficients according to the actual situation. After the Haar wavelet transform, the low-frequency data are further compressed with Huffman coding, which improves the data compression rate while guaranteeing data quality.

  The combined compression method based on the two
techniques is described in formal language as follows:
  Input: P observation sequences of different types {X1,
X2, ..., Xp}; the correlation threshold ε.
  Output: the compressed data, i.e. the low-frequency data
of the Haar transform after Huffman coding.
  Corr (Xi, Xj), 1 ≤ i ≤ p, 1 ≤ j ≤ p;
  // calculate the correlation between the input data
  // sequences
  if Corr (Xi, Xj) > ε, insert (j, array [i]);
  // preprocess the data: sequences whose correlation is
  // greater than ε are grouped into one data sequence
  if length (array [i]) > 1, Haar (array [i]);
  // if a group contains more than one element, apply the
  // Haar transform to the grouped data
  RLE (h0, h1, h2);   // run-length encode the high-frequency data
  Huffman (L2);       // Huffman-encode the low-frequency data
  end;


3. Simulation experiments and results analysis

The experiment simulates six external environments with different temperatures, positions and movement speeds. The movement of the simulated environments is limited by the boundaries of the wireless sensor network: when an environment reaches the network boundary it moves back in the direction opposite to its original direction of movement, so that all nodes stay within the simulation environment. The movement speed of an environment is drawn at random from a normal distribution with μ = 0.001 and σ = 0.0001. In the experiments the parameters α and β of the learning-automata-based data aggregation algorithm are set to 0.1, and the acceptable compression range is set to 5. For the Q-Learning algorithm, the parameter α is set to 0.1 and β to 0.01. Every 10 s each node sends data to the base station, and the data reception rate is set to 0.85. In these experiments the learning-automata-based data fusion algorithm for wireless sensor networks is called the LA-DA algorithm, and the improved algorithm obtained by adding the secondary route selection to LA-DA is called the TLA-DA algorithm.

Comparison of the number of packets. Figure 7 compares, in the experimental simulation environment, the number of packets received with the learning-automata-based wireless sensor network data fusion algorithm described in this paper and with the other three algorithms.

As shown in Figure 7, the number of received packets rises for all four algorithms as the number of nodes increases, but for the learning-automata-based data fusion algorithm the amount of received data grows far more slowly with the number of nodes than for the other algorithms.

Figure 8 compares the total amount of data received by the base station for a 500-node sensor network as μ varies within the range [0, 0.05].

As shown in Figure 8, when the value of μ is 0 there is essentially no difference in fusion rate between the learning-automata-based data fusion algorithm and the other three methods in terms of received packets. As the value of μ increases, the number of packets received with the learning-automata-based data fusion algorithm becomes significantly smaller than with the other three algorithms.

The experiments also compare the energy consumption of the wireless sensor network nodes. In the learning-automata-based wireless sensor network data fusion algorithm, transferring knowledge packets also consumes some energy, and this energy is included in the overall energy consumption. Averaging the experimental data shows that, for the learning-automata-based data fusion algorithm, the energy consumed by knowledge packets accounts for about 30% of the total energy each node spends on packet transmission. Figure 9 compares the energy consumption of the learning-automata-based data fusion algorithm with that of the other three algorithms during the data transfer process.

As shown in Figure 9, although the knowledge packets of the learning-automata-based wireless sensor network data fusion algorithm consume 30 percent of the energy spent on packet delivery, the overall number of packets transferred is smaller, so the total energy consumed is less than with the other three algorithms.

To demonstrate the impact of the external environment on the energy consumption of the algorithms, the value of μ is varied between 0 and 0.5, and the energy consumption of 500 nodes is measured during this process. The experimental results are shown in Figure 10.

As shown in Figure 10, when μ = 0 the external environment of the network essentially does not change, and the energy consumption of the four algorithms is basically the same. As the value of μ increases and the external environment changes more and more rapidly, the learning-automata-based wireless sensor network data fusion algorithm saves more energy than the other three algorithms, and its advantage becomes more obvious.

4. Conclusions

Wireless sensor networks are widely used in medical information gathering, military information collection, environmental protection and monitoring, traffic monitoring and other fields. Because the energy carried by wireless sensor network nodes is limited, the energy-constrained lifetime of the sensor network is limited as well. This paper analysed the problems of data fusion and data compression in wireless sensor networks: existing network data fusion algorithms overload individual nodes and cannot adapt to dynamic changes of the external environment, and existing network data compression algorithms are likewise unsatisfactory.

Simulation results show that the learning-automata-based algorithm aggregates the data and, at the same time, aggregates the locations of the nodes, and that the learning automata dynamically adjust the aggregation path in response to the environment, which greatly improves the data fusion rate when the environment changes strongly. The improved wavelet multi-mode data compression algorithm greatly improves the compression ratio without any loss of data accuracy, thereby extending the lifetime of the sensor network. For the improved wavelet multi-mode data compression algorithm, when the data coupling threshold is set too small the data compression rate increases but the accuracy of the data falls; this will be the direction of future research.

Received: 11 August 2012, Revised 2 October 2012, Accepted 9 October 2012

5. Acknowledgments

The work is supported by the Zhejiang Provincial Natural Science Foundation of China under Grant No. LY12F03008.


Luo Xiao

School of Information and Engineering

Huzhou Teachers College

Huzhou, China