
This paper presents probabilistic error modeling and topology-based smoothing of indoor localization and tracking data, based on the IEEE 802.15.4a chirp spread spectrum specification.

1. Introduction

Forming situational awareness is typically the main functionality of many context-aware computing platforms and wireless sensor networks [1]. It is often necessary, however, to fuse sensory data with location estimates in order to provide a clear picture of the state of the surrounding environment. Location data are usually provided by a location estimation service (LES), which requires the existence of wireless infrastructure setup in the area, thus forming an integrated localization and tracking platform. The location information can then be utilized in the scope of an overlain location-based service [2] (LBS) that provides additional functionality to the user. The scope of the LBS can influence the design of the indoor localization system based on the variety of requirements for positioning accuracy, scalability, portability, or privacy that are set by the end-users.

A typical indoor localization system may utilize a number of different metrics to infer a location estimate, most commonly time of arrival (ToA) or time difference of arrival (TDoA), angle or direction of arrival (AoA/DoA), and received signal strength (RSS) [3]. The Cricket system [4] is based on TDoA measurements of an RF and an ultrasonic pulse, utilizing a proprietary algorithm to estimate the position of the mobile node. Cricket allows fast deployment of nodes without significant calibration. Studies, however, have shown that when the misalignment angle between transmitter and receiver is increased, the accuracy of this system deteriorates rapidly [5]. Other systems, such as the ToA-based Ubisense real-time location system (RTLS), offer greater accuracy than Cricket but require extensive calibration [6]. Systems based on AoA measurements [3, 7] typically exhibit errors in position estimation due to the possible absence of line of sight (LoS) conditions. RSS-based systems [3, 8-10] similarly suffer from the increased effect of multipath fading [11]. The simplest form of RSS localization is the inference of range based on signal propagation models and RSS measurements; the final position is then estimated through trilateration [12]. This method, however, suffers from decreased accuracy in comparison with ToA trilateration or other RSS-based techniques, such as Location Fingerprinting [13]. According to this method, RSS measurements between the device carried by the mobile user and all present fixed beacons are collected during an "offline" phase. These measurements correspond to a number of points that form a grid covering the localization area, thus constructing a propagation map. Each time the location of a mobile user is estimated, new RSS measurements are collected and compared to the existing propagation map data. Although Fingerprinting is considered a robust method, the construction of the propagation map during the offline measurement phase is extremely time consuming, making Location Fingerprinting a nonviable solution in cases where rapid deployment is necessary.

Within this work, the case for chirp spread spectrum (CSS) ToA-based localization and tracking is considered, in order to address the shortcomings of existing systems. The CSS specification was ratified within IEEE 802.15.4a, offering precision ranging and resilience to fading [14]. Furthermore, the ToA-based trilateration algorithm does not require offline measurements, which makes the location estimation service rapidly deployable in any location. The purpose of this work is to assess the CSS specification in terms of its localization and tracking performance capabilities while putting its resilience to the test. It presents a comprehensive study of CSS tracking based on empirical measurements in four different deployments, featuring different propagation conditions. A clear methodology for the data analysis and error modeling process is presented and the underlying distribution is discussed. The geometric dilution of precision (GDoP) effect and its associated metric are also explored, as it is known to affect all trilateration-based systems [15]. This study also serves as the motivation for the creation of a topology-based smoothing algorithm that aims to alleviate the effect of GDoP and thus reduce the measured error. The method's viability is assessed by applying it to the existing tracking data and comparing it to other common smoothing techniques. Specifically, the ratio of the dilution in each separate axis (x- or y-axis) is taken into account, in order to provide an indication of which location measurement component requires more compensation (i.e., the x- or y-coordinate). The individual dilution of precision values for each axis are used for the calculation of weights used by the smoothing algorithm to compensate for the noisy measurements. Since geometric dilution of precision affects all trilateration-based systems, it can be surmised that the proposed smoothing methodology can easily be applied in the context of other indoor localization systems that utilize range measurements to infer the user's position.

2. Materials and Methods

This section provides the necessary definition of the data collection process, including the ranging methodology and deployment topology of the experimental setup. Furthermore, it delineates the exploratory data analysis and error modeling process that was adopted to provide insight into the collected data. The process is described in detail as it takes into account appropriate considerations to provide a meaningful comparison of data collected from different deployments. It also introduces the topology-based smoothing algorithm that aims to further increase system accuracy.

2.1. Network Architecture and Ranging Methodology. The CSS tracking system is based on three main hardware components, namely, the beacons, the wearable tags, and the base station (Figure 1). The beacons are deployed in fixed locations across the area where localization and tracking services are required. The mobile users carry the lightweight tags, which automatically collect ranging measurements with respect to the deployed beacons. The base station receives the individual range measurements and performs position estimation calculations, thus providing a centralized or network-based [3] tracking service, in the sense that it is performed and monitored by a central node rather than by each individual tag.

The main wireless component of the CSS tracking system is the nanoLOC TRX transceiver. This module is present in tags, beacons, and the base station and operates in the 2.4 GHz ISM band utilizing chirp spread spectrum (CSS) modulation, with a frequency bandwidth of 80 MHz. In CSS, information encoding is based on a frequency-modulated sinusoidal signal with linearly changing frequency, known as a chirp pulse. When the frequency increases, the signal is known as an upchirp, whereas when the signal's frequency decreases, it is considered a downchirp pulse [16].

When the base station becomes operational, it starts polling for active tags. The discovered tags immediately start the symmetric double-sided two-way ranging procedure (SDS-TWR). In SDS-TWR [16], the final range measurement is calculated as the mean value of two round trip time of flight (RTToF) measurements. Specifically, the initiating tag sends a localization request to an active beacon. The beacon immediately sends an acknowledgment back to the tag, thus providing the RTToF measurement. Each measurement is then processed to compensate for the processing delay that occurs within the beacons and utilized to obtain the final range measurement. The tags then respond to the base station with the RTToF measurements collected for every available pairing with deployed beacons. After the ranging cycle is over, the base station calculates the positions of the tags by trilateration. When using trilateration for position estimation, the fixed beacons are assumed to represent the centers of circles with radii corresponding to their respective range measurements. The intersection of these circles denotes the area wherein the active tag lies. Two-way ranging eliminates the need for extensive calibration, as the participating wireless modules do not require strict synchronization [14, 16].
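To make the ranging and position estimation steps concrete, the sketch below implements the standard SDS-TWR time-of-flight average and a linearized least-squares trilateration in Python. The timestamp variable names and the least-squares formulation are illustrative assumptions, not the actual nanoLOC firmware or base station implementation.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def sds_twr_range(t_round1, t_reply1, t_round2, t_reply2):
    """SDS-TWR: averaging two round-trip measurements cancels clock
    offset and the responder's internal processing delay."""
    tof = ((t_round1 - t_reply1) + (t_round2 - t_reply2)) / 4.0
    return C * tof

def trilaterate(beacons, ranges):
    """Least-squares 2D position from >= 3 beacon ranges, obtained by
    linearizing the circle equations against the first beacon."""
    b0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - b0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: four beacons at the corners of a 10 m x 8 m cell.
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [0.0, 8.0]])
true_pos = np.array([4.0, 3.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1)  # ideal, noise-free
print(trilaterate(beacons, ranges))  # ~ [4.0, 3.0]
```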

For a more thorough and detailed description of the hardware and software architecture of the entire localization system, including the User Interface design and the format of data packets, readers are encouraged to refer to [17]. The work in [17] focuses on describing the system architecture of an integrated tracking, ambient sensing, and telemetry platform, namely, the enhanced location estimation and centralized tracking (ELECTRA) system. The present work builds on this content and rather focuses on the CSS specification, by providing a detailed analysis of the CSS localization error in multiple additional propagation environments, including the probabilistic modeling of the normalized error and the introduction of a topology-based smoothing algorithm.

2.2. Deployment Topology. CSS-based localization is assessed in terms of the accuracy it provides by real-time collection of data in four different facilities (Figure 1), provided by the Center for Security Studies (KEMEA) in Athens, Greece, and the National Civil Protection Service (MES) in Montana, Bulgaria. The Athens site is a large indoor space, which was cleared of all possible obstructions prior to the experimental procedure so as to study the system's performance in a near best-case scenario for indoor propagation. In Montana, the system was deployed in three sites used by MES as First Responder training grounds. The International Hotel site is essentially the basement of a derelict building, riddled with debris and with concrete, nonreflective obstacles (walls, columns, etc.). This site was selected as it offers a chance to study the system's accuracy in partial NLoS conditions, representing a more realistic operational environment. The Neochim Chemical Plant is an indoor industrial space with metallic obstructions (such as tanks) and was selected for the reflective nature of the existing obstructions. Finally, the Erden Tunnel was chosen, as the Tunnel environment behaves as a waveguide and can thus offer the chance to study the performance of the system in another unique propagation scenario.

The CSS modules were fully deployed in all four locations, with four beacons positioned to encompass the indoor area (which will henceforth be referred to as the "localization area" or "localization cell"). In the Athens experimental deployment, two sets of measurements were collected: the first one corresponds to points within the localization area (herein referred to as the "Athens cell data" set), whereas the second set corresponds to points in the periphery of the localization cell (herein referred to as the "Athens periphery data" set). In the Montana International Hotel test bed, measurements were collected only within the localization cell, thus comprising the "Hotel data" set. In the Neochim Chemical Plant site, measurements were collected within the localization cell, surrounding the chemical tanks and in the space between them, creating the "Chem Plant data" set. In the Erden Tunnel deployment, two beacons were positioned at each end of the Tunnel. The four deployments are collectively illustrated in Figure 1, at a shared scale.

The concept of GDoP is based on the assumption that the measured localization error depends on the error in individual range measurements, which is greatly affected by the positions of the beacons relative to the tag, and thus on the deployment topology. Multiple definitions are available, although in this work the definition in [2] is utilized to generate appropriate values for a variety of points within the four deployed localization cells.

This concept, though simplified, is better illustrated in Figures 2(a) and 2(b). In Figure 2(a) it is shown that the pseudorange measurements from the deployed beacons denote an area wherein the mobile tag is thought to reside. Due to the relative geometry of the network, the precision in the x-axis is not adequate. Therefore, in the case of Figure 2(a) it would be expected to observe larger error in the x-axis, as compared to Figure 2(b) where the geometry of the network allows for less uncertainty.
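For illustration, one common way to compute per-axis dilution of precision for a 2D range-based system is sketched below, using the usual geometry-matrix formulation; this is an assumed, generic formulation and not necessarily identical to the exact definition in [2].

```python
import numpy as np

def axis_dop(tag, beacons):
    """Per-axis dilution of precision at a 2D point for range-based
    positioning: rows of H are unit vectors from the tag towards each
    beacon, and Q = (H^T H)^{-1} shapes the position error covariance."""
    diff = beacons - tag
    H = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    Q = np.linalg.inv(H.T @ H)
    return np.sqrt(Q[0, 0]), np.sqrt(Q[1, 1])  # (xdop, ydop)

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [0.0, 8.0]])
print(axis_dop(np.array([5.0, 4.0]), beacons))  # center of the cell
print(axis_dop(np.array([9.9, 4.0]), beacons))  # near the cell edge
```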

2.3. Exploratory Data Analysis and Error Modeling. Typically, the performance of localization and tracking systems is assessed in terms of the localization error, which is perceived as the Euclidean distance between the real position of the tag and the one reported by the tracking system. The localization error, however, is an extremely volatile metric due to the fact that it can be affected by many factors including deployment topology, deployment size, propagation environment, and the operating frequency of the localization system. This is the main reason why comparisons among different systems and deployments are often difficult or irrelevant. Within this work, a data analysis process is introduced that takes this problem into account and aims to provide a way to circumvent some of the known issues allowing, in some measure, performance comparisons of CSS tracking in different propagation environments.

In order to provide a better understanding of CSS performance, especially when comparing different deployments, the normalized localization error of the system is taken into account. Specifically, if the localization error for a known point $P$ is $d$ (measured in m), and the total area of the localization cell is represented by $A$ (measured in $\mathrm{m}^2$), then the normalized localization error is calculated as $\pi d^2 / A$ and is expressed as a dimensionless measure. This metric is especially meaningful because it takes into account the total area of the localization cell and can be used as a first means of comparison between different deployments.
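As a minimal illustration (the function name is ours), the metric is computed directly from an error distance and the cell area:

```python
import math

def normalized_error(d_m: float, area_m2: float) -> float:
    """Dimensionless normalized localization error: pi * d^2 / A."""
    return math.pi * d_m**2 / area_m2

# A 1 m error in an 80 m^2 cell:
print(normalized_error(1.0, 80.0))  # ~0.039, i.e. ~3.9%
```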

The statistical modeling of the normalized localization error is considered a required step in order to conclude the analysis of the presented results. By studying the underlying distribution of the normalized localization error, appropriate filtering and compensation techniques can be created, thus improving the overall performance of the system. As a first step of this process, information on the basic characteristics of the probability density function (PDF) and the cumulative distribution function (CDF) is gathered. It is also necessary to define the support interval for each distribution. As illustrated in Figure 1, the tracking beacons were deployed in localization cells that are either rectangular (Athens and Tunnel sites) or convex quadrilateral (Hotel and Chemical Plant sites). Furthermore, they were placed at points that belong to the same plane. In the case of a localization cell of area $A$, the largest diagonal of length $d_{\max}$ is the largest possible value of the localization error $\varepsilon$, corresponding to the worst-case scenario where the tag is located on one end of the diagonal but is falsely reported to be directly opposite (Figure 3).

Since the beacons are deployed in known positions, the area and the value of $d_{\max}$ can be easily computed based on the beacon coordinates. A more generalized approach, however, is herein included. Intuitively, it can be assumed that since the localization error is normalized with respect to the area of the cell, the normalized localization error $\varepsilon_{\text{norm}}$ will not depend on the area of the cell but rather on the topology. For the sake of completeness, however, a simple proof is included in this subsection. The following equation holds for the normalized localization error:

$\varepsilon \in [0, d_{\max}] \;\Rightarrow\; \varepsilon_{\text{norm}} \in \left[0, \dfrac{\pi d_{\max}^2}{A}\right]$. (1)

In the case of a rectangular cell, the Pythagorean theorem will hold. If $r = b/a$ ($r > 0$) is the ratio of the lengths of sides $b$ and $a$ (Figure 3(a)), the following equation holds:

$d_{\max}^2 = a^2 + b^2 = a^2 + r^2 a^2 \;\Rightarrow\; \dfrac{\pi d_{\max}^2}{A} = \dfrac{\pi a^2 (1 + r^2)}{ab} = \pi\, \dfrac{1 + r^2}{r} = \pi \left(r + \dfrac{1}{r}\right)$. (2)

It is therefore shown that the support interval for rectangular cells does not depend on the size of the cell but on the ratio of the sides. Equation (3) defines the support interval:

$\varepsilon_{\text{norm}} \in \left[0, \pi \left(r + \dfrac{1}{r}\right)\right]$. (3)

This equation illustrates that cells that share the same ratio will share the same support regardless of the scale of the cell. For example, if $r = 1$ and the localization cell is a square, the support interval will always be $[0, 2\pi]$. The same conclusion can be drawn similarly in the case of convex quadrilateral cells.

Let $d_{\max}$ denote the largest diagonal, let $q = r d_{\max}$ ($0 < r \le 1$) be the second diagonal, and let $\theta$ be the angle between them (Figure 3(b)). Harries' trigonometric formula [18] for the area of a quadrilateral can be utilized to compute the upper bound of the support interval. Consider

$A = \dfrac{1}{2}\, d_{\max}\, q \sin\theta = \dfrac{r\, d_{\max}^2 \sin\theta}{2} \;\Rightarrow\; \dfrac{\pi d_{\max}^2}{A} = \dfrac{2\pi}{r \sin\theta}$. (4)

Therefore, the support interval is defined as

$\varepsilon_{\text{norm}} \in \left[0, \dfrac{2\pi}{r \sin\theta}\right]$. (5)

Equation (5) also illustrates that the upper bound of the support interval depends on the angle between the diagonals and the ratio of their lengths. Thus, it supports the previous assumption that the normalized localization error for cells of the same shape or topology will share the same support interval regardless of the scale or size of the cell. If $r = 1$ and $\theta = \pi/2$, then the cell is a square and the support interval equals the one previously calculated by (3). It follows that (5) will also hold for special-case quadrilaterals, such as trapezoid cells (when $r = 1$) and rhomboids (when $\theta = \pi/2$). The case for a concave quadrilateral cell deployment is out of the scope of this work, as such topologies would not efficiently cover the localization area. Equations (3) and (5) show that the normalized error can be used as a means to compare deployments of different sizes, while GDoP can provide insight into which topology shape can be more effective.
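A short sketch of the two support-interval bounds (3) and (5) follows; it simply evaluates the closed-form expressions and checks that a square cell yields $2\pi$ either way (function names are illustrative):

```python
import math

def rect_bound(r: float) -> float:
    """Upper bound of the normalized error support for a rectangular
    cell with side ratio r = b/a, per (3)."""
    return math.pi * (r + 1.0 / r)

def quad_bound(r: float, theta: float) -> float:
    """Upper bound for a convex quadrilateral cell with diagonal ratio
    r (0 < r <= 1) and angle theta between the diagonals, per (5)."""
    return 2.0 * math.pi / (r * math.sin(theta))

# A square cell gives the same bound either way:
print(rect_bound(1.0))                 # 2*pi
print(quad_bound(1.0, math.pi / 2.0))  # 2*pi
```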

Given the support intervals and the empirical CDFs, the next step is to plot the acquired data in a histogram and boxplot. The boxplot [19] (also referred to as a box and whiskers plot) was introduced as a one-dimensional plot of the quartiles of an empirical CDF. Assuming $Q_1$ is the first quartile (25th percentile), $Q_2$ is the median, and $Q_3$ is the third quartile (75th percentile), the Interquartile Range is computed as $\text{IQR} = Q_3 - Q_1$. The Interquartile Range is utilized to compute the upper and lower whiskers of the boxplot. If $M_n$ and $M_x$ denote the minimum and maximum value of the empirical measurements, respectively, the whiskers $W_{\text{upper}}$ and $W_{\text{lower}}$ are computed based on the following equations:

$P_{\text{lower}} = Q_1 - k \cdot \text{IQR}$,

$P_{\text{upper}} = Q_3 + k \cdot \text{IQR}$,

$W_{\text{lower}} = \max\{P_{\text{lower}}, M_n\}$,

$W_{\text{upper}} = \min\{P_{\text{upper}}, M_x\}$. (6)
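The whisker computation in (6) translates directly into a few lines of Python (a minimal sketch, assuming the customary maximum whisker length $k = 1.5$ discussed next):

```python
import numpy as np

def boxplot_whiskers(samples, k=1.5):
    """Whiskers per (6); values outside them are flagged as outliers.
    k is the maximum whisker length (the customary 1.5 by default)."""
    samples = np.asarray(samples)
    q1, q3 = np.percentile(samples, [25, 75])
    iqr = q3 - q1
    w_lower = max(q1 - k * iqr, samples.min())
    w_upper = min(q3 + k * iqr, samples.max())
    outliers = samples[(samples < w_lower) | (samples > w_upper)]
    return w_lower, w_upper, outliers
```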

The parameter $k$ is known as the maximum whisker length and is typically chosen as $k = 1.5$ [20]. The whiskers are viewed as the boundaries that define the range of Normal operation of the system. Measurements that fall outside the interval defined by the whiskers are by definition considered outliers [20]. The boxplot and histogram are essential tools as they provide an initial impression of the shape of the underlying distribution, which directly affects the choice of modeling process. If the histogram suggests a unimodal distribution (a distribution with a single central point, i.e., the mean value), the boxplot can be used to provide information on its skewness (i.e., whether it is symmetrical, left-, or right-skewed). Appropriate models can then be chosen, approximating the underlying distribution shape, and then be transformed or truncated to share the same support interval.

Typically, the data are first studied with respect to the Normal distribution, which is symmetrical, although that may not be the case after truncation. Analytical descriptions of the PDF and CDF of the truncated Normal distribution, including the first and second moment, already exist [21]. Let $\phi(x)$ and $\Phi(x)$ denote the PDF and CDF of the standard Normal distribution, which is supported in $(-\infty, +\infty)$, and let the new support interval be $[b_l, b_u]$; the PDF and CDF of the truncated distribution with mean value $\mu$ and standard deviation $\sigma$ are given by the following equations:

$f(x; \mu, \sigma, b_l, b_u) = \dfrac{\phi(\xi)}{\sigma Z}, \qquad F(x; \mu, \sigma, b_l, b_u) = \dfrac{\Phi(\xi) - \Phi(\alpha)}{Z}, \qquad b_l \le x \le b_u$, (7)

where $\xi = (x - \mu)/\sigma$, $\alpha = (b_l - \mu)/\sigma$, $\beta = (b_u - \mu)/\sigma$, and $Z = \Phi(\beta) - \Phi(\alpha)$. The lower bound is set to $b_l = 0$ while the upper bound can be derived from (1)-(5) or from the coordinates of the deployed beacons. The error function $\operatorname{erf}(x)$ is defined as the probability of a random variable with Normal distribution of mean 0 and variance 1/2 falling in the range $[-x, x]$ and is given by the following equation [21]:

$\operatorname{erf}(x) = \dfrac{2}{\sqrt{\pi}} \displaystyle\int_0^x e^{-t^2}\, dt$. (8)

Since the normalized localization error is a positive number, it is appropriate to choose distributions supported in $[0, +\infty)$. The lower bound of the truncation stays the same; therefore, it is easy to remove part of their right tails and define the truncated PDF and CDF by renormalization. Specifically, if the initial distribution's PDF is denoted as $f(x)$, the initial distribution's CDF is $F(x)$, and the truncated distribution is defined within the interval $[0, b_u]$, it is possible to define the truncated distribution PDF $g(x)$ as follows:

$g(x) = \dfrac{f(x)}{F(b_u) - F(0)}, \; 0 \le x \le b_u \;\Rightarrow\; g(x) = \dfrac{f(x)}{F(b_u)}, \; 0 \le x \le b_u$. (9)

The CDF of the truncated distribution is given by

$G(x) = \dfrac{\int_0^x f(t)\, dt}{F(b_u) - F(0)} \;\Rightarrow\; G(x) = \dfrac{F(x)}{F(b_u)}$. (10)

The renormalization in (9) and (10) is necessary so that the truncated CDF $G(x)$ is bounded at the value $G(b_u) = 1$.
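Equations (9) and (10) amount to a one-line renormalization of any distribution supported in $[0, +\infty)$; a minimal sketch using scipy follows (the helper name and the Exponential example are illustrative):

```python
import numpy as np
from scipy import stats

def truncate_to(dist, b_u):
    """PDF and CDF of `dist` truncated to [0, b_u] per (9)-(10), where
    `dist` is a frozen scipy distribution supported in [0, +inf)."""
    norm_const = dist.cdf(b_u)  # F(b_u) - F(0), with F(0) = 0
    pdf = lambda x: np.where((np.asarray(x) >= 0) & (np.asarray(x) <= b_u),
                             dist.pdf(x) / norm_const, 0.0)
    cdf = lambda x: np.clip(dist.cdf(x) / norm_const, 0.0, 1.0)
    return pdf, cdf

# Example: an Exponential model truncated at an upper bound b_u = 2*pi.
pdf, cdf = truncate_to(stats.expon(scale=1.0), b_u=2 * np.pi)
print(cdf(2 * np.pi))  # 1.0 by construction
```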

If the histogram reveals the data to be multimodal, the boxplot should not be used as it does not accurately represent the multiple central points of the distribution. In this case, or if truncated distributions fail to model the data accurately, alternative methods can be employed. The data can be modeled either as a convex weighted distribution sum (11) or by using a methodology known as kernel density estimation (12) [22]. Consider

$P(x) = \displaystyle\sum_{i=1}^{n} w_i f_i(x), \qquad \sum_{i=1}^{n} w_i = 1, \; w_i \ge 0$, (11)

$P(x) = \dfrac{1}{nh} \displaystyle\sum_{i=1}^{n} K\left(\dfrac{x - x_i}{h}\right)$. (12)

A weighted sum of $n$ distributions $f_i$ with weights $w_i$ needs to be convex so that $P(x)$ is a distribution function which integrates to one. Kernel density estimation is an alternative method which approximates the PDF of $n$ samples $\{x_1, \ldots, x_n\}$ as the sum of a kernel function $K$ with bandwidth $h$. The bandwidth controls the scale of the kernel function, which can be any symmetrical function that integrates to one. Common kernel functions [22] include the Normal kernel, uniform kernel, and Epanechnikov kernel. The accuracy of the KDE method depends on the appropriate selection of the kernel function and bandwidth, which is usually a matter of experimentation. Therefore, it could be avoided unless other models fail to represent the experimental data.
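As an illustration of (12), the sketch below evaluates a Normal-kernel density estimate by hand; the toy data and the bandwidth are arbitrary choices, as in practice the bandwidth would be tuned:

```python
import numpy as np

def kde_normal(x, samples, h):
    """Kernel density estimate per (12) with a standard Normal kernel
    of bandwidth h, evaluated at the points in x."""
    u = (x[:, None] - samples[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return K.sum(axis=1) / (len(samples) * h)

rng = np.random.default_rng(0)
samples = rng.weibull(0.9, size=200) * 0.1     # skewed toy data
grid = np.linspace(0.0, samples.max(), 200)
density = kde_normal(grid, samples, h=0.02)
# Riemann-sum sanity check (some kernel mass leaks past the grid):
print((density * (grid[1] - grid[0])).sum())   # close to 1
```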

2.4. Goodness-of-Fit Testing. In order to fit a specific model to the data, a methodology known as maximum likelihood estimation (MLE) is employed [23]. MLE is a flexible method that has been shown to be effective when working with specific models or KDE estimation [23, 24].

In the case of truncated distributions, a set of model parameters is given as input to MLE, which provides appropriate modifications after a number of iterations, taking into account the missing tails [23]. The precise methodology for parameter estimation depends on the selected model. If the selected distributions are defined by two parameters at most, the mean value $\mu_e$ and standard deviation $\sigma_e$ of the empirical measurements can be used to infer the relevant parameters. If a third parameter needs to be estimated, the sample skewness can also be taken into account. In the case of KDE, appropriate adaptations of MLE such as the application of convolution sieves can be utilized and have been shown to provide optimal few-component kernel mixtures [24].

A number of metrics and tests can be used to assess whether a specified model fits the data with accuracy or should be discarded. Some common metrics and statistics [20, 25] that are used for the evaluation of the goodness-of-fit between the empirical and sample CDFs are the root-mean-squared error (RMS error) between the sample and model distributions, the maximum positive deviation $D^+$, the maximum negative deviation $D^-$, the Kuiper $K$-statistic, the Kolmogorov-Smirnov KS-statistic, and the modified Anderson-Darling $A_n^2$ statistic. These metrics were selected as they are nonparametric, meaning that they do not assume an underlying distribution, with the exception of the Anderson-Darling metric. The $A_n^2$ metric is designed to test for normality and thus cannot be classified as a nonparametric test, although appropriate modifications can be made to circumvent this issue. If the sorted empirical measurements are defined as $Y = \{y_1, \ldots, y_n\}$ and $n$ is the number of samples, the empirical CDF is $M(Y)$, and the fitted truncated CDF is $G(Y)$, the mentioned metrics are defined as follows (13):

$D^+ = \max_{1 \le i \le n} \left(\dfrac{i}{n} - G(y_i)\right), \qquad D^- = \max_{1 \le i \le n} \left(G(y_i) - \dfrac{i-1}{n}\right)$,

$\text{KS} = \max(D^+, D^-), \qquad K = D^+ + D^-$,

$\text{RMS} = \sqrt{\dfrac{1}{n} \displaystyle\sum_{i=1}^{n} \left(M(y_i) - G(y_i)\right)^2}$,

$A_n^2 = -n - \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} (2i - 1) \left[\ln G(y_i) + \ln\left(1 - G(y_{n+1-i})\right)\right]$. (13)

The RMS error was chosen as it serves as an estimate of the mean "distance" between the empirical and theoretical CDFs, taking into account the negative and positive deviations that might arise. The Kuiper and Kolmogorov-Smirnov statistics are also categorized as supremum tests, in the sense that they offer information on the maximum deviation between the empirical and theoretical CDFs. The Anderson-Darling statistic was taken into account because it places more emphasis on the distribution tails. Furthermore, tables of critical values can be found in many works [26-30], associating the KS and Anderson-Darling statistics with null hypothesis tests such as the Kolmogorov-Smirnov, Lilliefors, and Anderson-Darling tests. By comparing the critical values with the KS and $A_n^2$ statistic results, it is possible to formally reject or accept the null hypothesis defined as $H_0$: "the empirical data follow the distribution $G(Y)$ for every value in the interval $[0, b_u]$":

$H_0: M(Y) \equiv G(Y), \quad \forall y \in [0, b_u]$. (14)

Critical values for the KS and Lilliefors tests are usually generated through the Kolmogorov and Lilliefors distributions, while approximations also exist for a large number of samples ($n > 35$) [30]. Caution is advised, however: when the parameters of the fitted model have been estimated directly from the data, the KS test critical values are invalid, leading to possible false positive results. In this case, modifications can be applied and the KS test can still be used with a different set of critical values [27, 30].
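A compact sketch of the statistics in (13) is given below; the fitted CDF $G$ is passed in as a callable, and the small clipping guard is an implementation convenience of ours:

```python
import numpy as np

def gof_statistics(samples, G):
    """Goodness-of-fit statistics per (13): supremum deviations (D+,
    D-, KS, Kuiper), RMS error, and the Anderson-Darling statistic."""
    y = np.sort(np.asarray(samples))
    n = len(y)
    i = np.arange(1, n + 1)
    g = np.clip(G(y), 1e-12, 1 - 1e-12)  # guard the logarithms in A^2
    d_plus = np.max(i / n - g)
    d_minus = np.max(g - (i - 1) / n)
    ecdf = i / n  # empirical CDF at the sorted sample points
    rms = np.sqrt(np.mean((ecdf - g) ** 2))
    a2 = -n - np.mean((2 * i - 1) * (np.log(g) + np.log(1 - g[::-1])))
    return {"D+": d_plus, "D-": d_minus, "KS": max(d_plus, d_minus),
            "Kuiper": d_plus + d_minus, "RMS": rms, "A2": a2}
```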

2.5. Smoothing and Compensation. Selected compensation and data smoothing techniques are also applied to the tracking data to determine if considerable improvement of the measured localization error might be achieved. Moving window techniques [31] are usually first explored, such as moving average or moving median. In moving window techniques, an average or median value is calculated for small subsets of the data. The size of the subset is known as the window size and is usually an odd number to facilitate the calculation of the median value. The Kolmogorov-Zurbenko filter is another known technique, which can be easily implemented as a multipass moving average filter [32].

Furthermore, a proof-of-concept topology-based alternative algorithm is herein introduced. The purpose of the smoothing algorithm is to introduce a technique that reduces the localization and tracking errors, especially in terms of the standard deviation. As illustrated in the flowchart in Figure 4, the proposed algorithm receives as input the normalized error models extracted by the previous methodologies, the location data, the dilution of precision values associated with the deployment topology, and the propagation characteristics of the localization area. For example, if a reported position is denoted as $P(x, y)$ and the characteristics of the localization area are known, it is possible to identify the phenomena that contribute most to the underlying localization error and estimate their intensity. This process is considered the classification phase of the algorithm. During the classification phase, it is possible to choose which distributions to sample from and then apply weights to the samples according to the intensity of each phenomenon. The output of the classification phase is a vector $W$ of convex weights applied to each model, with each weight taking values in $[0, 1]$. A weight is set to zero for an unused model, whereas in the case of a zero-sum classification, where one specific model is selected, the vector $W$ consists of zeros with a weight of one applied only to the selected model.

As the next step, it is possible to generate random samples from the defined error distributions by utilizing a method known as inverse transform sampling (ITS) [33]. This method operates on the concept that if a random variable $Y$ is uniformly distributed in the interval $[0, 1]$ and the target distribution CDF is defined as $G$, then the variable $X = G^{-1}(Y)$ will follow the targeted distribution. The function $G^{-1}(\cdot)$ denotes the inverse of the CDF and is also known as the quantile function.

The quantile function always exists: the CDF is by definition nondecreasing and right-continuous, so its generalized inverse is well defined, and for the continuous, strictly increasing CDFs considered here it coincides with the ordinary inverse. Based on this method, it is possible to construct a nonuniform random number generator following a specified model.
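In code, ITS reduces to pushing uniform draws through the quantile function; a minimal sketch follows, with scipy's Exponential quantile (called ppf there) standing in for an arbitrary model:

```python
import numpy as np
from scipy import stats

def its_samples(quantile, n, seed=None):
    """Inverse transform sampling: push Uniform(0, 1) draws through
    the quantile function G^{-1} of the target distribution."""
    rng = np.random.default_rng(seed)
    return quantile(rng.uniform(0.0, 1.0, size=n))

# Example with scipy's Exponential quantile function:
draws = its_samples(stats.expon(scale=2.0).ppf, n=10_000, seed=0)
print(draws.mean())  # ~2.0, the Exponential mean for scale=2
```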

It is also possible to reject samples that fall outside the boundaries defined by the whiskers of each model. For example, if LoS conditions exist, a sample $P(x, y)$ is approximately near the periphery of the localization cell, and the GDoP value is over a specific threshold, the corresponding model will be considered more significant. The randomly drawn samples relate to the normalized error, $\varepsilon_{\text{norm}}$. By reversing the equation for the normalized error, a distance sample, $d$, is inferred (in meters). The dilution of precision observed for each axis ($x_{\text{dop}}$ and $y_{\text{dop}}$) can be used to estimate the amount of compensation that is appropriate for the x- and y-measurement ($d_x$ and $d_y$) according to the weights $w_x$ and $w_y$ as follows:

$w_x = \dfrac{x_{\text{dop}}}{x_{\text{dop}} + y_{\text{dop}}}, \qquad w_y = \dfrac{y_{\text{dop}}}{x_{\text{dop}} + y_{\text{dop}}}, \qquad d_x = w_x d, \qquad d_y = w_y d$. (15)

The scope of the proposed algorithm is to provide a proof of concept for topology-based compensation and to form the basis for a practical filter implementation that will also take into account the user's motion model. It relies on two important components: (a) the simulated DoP values for the specified deployment and (b) the error models used within ITS, which generates random values for error compensation. Since DoP is a phenomenon that affects all trilateration-based systems regardless of specification and implementation, this algorithm can be easily applied in the context of other range-based tracking systems, requiring only that a different error model is used as input.
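A minimal sketch of the per-axis compensation step follows; note that the proportional weight definition used here is our reading of (15), not a verbatim restatement of the system's implementation:

```python
def axis_compensation(d, x_dop, y_dop):
    """Split a sampled error distance d into per-axis compensation
    amounts: the axis with the larger DoP receives proportionally
    more correction (an assumed reading of (15))."""
    w_x = x_dop / (x_dop + y_dop)
    w_y = y_dop / (x_dop + y_dop)
    return w_x * d, w_y * d  # (d_x, d_y)

# A point where the x-axis is diluted twice as much as the y-axis:
print(axis_compensation(0.9, x_dop=1.6, y_dop=0.8))  # (0.6, 0.3)
```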

3. Results and Discussion

3.1. Exploratory Data Analysis. Tables 1 and 2 summarize the basic statistics of the measured localization error $\varepsilon$. In the case of measurements collected within the localization area (i.e., in the Athens and Montana cell data sets), the average localization error was observed to range from 0.70 m to 1.14 m. The 95th percentile is thought to represent the near-worst-case measurements and was measured at 1.51 m to 1.67 m. Figure 5(a) presents the cumulative distribution function (CDF) of the empirical localization error data for the Hotel and Athens data sets, while Figure 5(b) illustrates the CDF of the normalized localization results.

Table 1 also includes statistics for the x- and y-axes separately for the Montana deployment. As indicated by these data, errors in the x- and y-axes contribute almost equally to the measured error, an observation that holds true for the Athens cell data set as well. That, however, was not the case in the Athens periphery data set, where it was frequently observed that errors in either the x- or y-axis were significantly increased, thus contributing to a great degradation of accuracy. In this case, the normalized localization error suffers almost a threefold increase in the higher percentiles, as compared to the Athens cell data (Figures 5(a) and 5(b)).

Given that the two data sets were collected in the same environment, there is indication that this can be attributed to the geometric dilution of precision phenomenon. Figure 2(a) can provide a simple explanation: as the mobile tag is situated between two beacons, along a line parallel to the y-axis in this example, there is more uncertainty in the x-coordinate measurement. Similarly, uncertainty in either the x- or y-axis would be created as the mobile tag moves along the periphery of the cell, greatly impacting the tracking accuracy. Figure 6 illustrates a simulation of GDoP values for the Athens deployment, which corroborates the claim.

By comparing the Montana cell data with the Athens periphery data, it is safe to conclude that, in the case of CSS time of arrival localization, the effects of GDoP on the periphery of the localization cell can impact the localization accuracy more severely as compared to the effect of NLoS propagation conditions. It also serves as an indication that a good deployment topology that minimizes GDoP is an issue of significant importance.

Table 2 contains the localization error statistics for the Chemical Plant and Tunnel deployments. In the case of the "Chem Plant" test bed, the localization area is of smaller size compared to the "Athens" and "Hotel" deployments. The propagation environment, however, is very different, featuring large reflective obstacles (Figure 1), with size exceeding the signal's wavelength multiple times. The canonical and normalized errors in the Chemical Plant deployment (Figures 7(a) and 7(b)) show that the performance of the system is comparable to the Hotel and Tunnel deployments up to the 75th percentile, where the error is lower than 0.10 (10%). In higher percentiles, the normalized localization error is significantly greater, even reaching 0.50 (50%). The data points that contribute this error correspond to measurements that were taken in close proximity to the metallic tanks, in the pathway among them (as illustrated in the schematics in Figure 1). While points in the center of the pathway did not exhibit this behavior, the points within the pathway that were closer to either of the tanks displayed significant errors. Upon closer inspection, it was found that these outliers were sampled at points where the distance between the tag and the tank was very close to the antenna's far field boundary, while the worst-case measurement was sampled at a point where the tank's surface was within the antenna's Fresnel region (also known as the radiative near field). The boundaries for these regions were estimated with respect to the Fraunhofer distance [34], which was calculated to be $d_f = 2D^2/\lambda = 60.84$ cm, for an antenna length of $D = 19.5$ cm and a wavelength $\lambda = 12.5$ cm. Samples taken at points close to the metallic tanks but well beyond the far field boundary of the antenna did not exhibit such behavior. Thus, it is surmised that, for the worst-case measurements, it is the combined effects of the reflections due to the metallic surfaces present near and below the Fraunhofer distance boundary, the Fresnel region phenomena (i.e., Fresnel diffraction), and the LoS-restricted placement of the ELECTRA tags that greatly degraded the localization accuracy of the system.

In the Tunnel deployment, the normalized average localization error was measured at 0.008 (0.8%) while the higher percentiles were well below 0.1 (10%). Furthermore, it was observed that the errors were more significant in the x-axis measurements that correspond to the narrowest dimension of the cell, which can again be attributed to the geometric dilution of precision phenomenon.

Figure 8(a) shows how a narrow deployment can affect the precision in the narrowest axis, while Figure 8(b) shows the results of a simulation of the ratio of dilution of precision between the x- and y-axes for the Tunnel deployment (according to the definition in [2]). The simulation in Figure 8(b) clearly shows that the expected error for the x-coordinate should be greater than the error in the y-coordinate and that this effect intensifies near the middle area of the cell, which is corroborated by the collected measurements. According to the empirical measurements and simulation data, it is safe to conclude that narrow width deployments should be avoided as they may result in significant error in the narrowest dimension.

Figure 9(a) provides an example where two outliers are detected at the right tail of the empirical distribution of the Athens cell data, corresponding to the two worst-case measurements. In the case of the "Chem Plant" data set, the three worst-case measurements are considered outliers as well. In Figure 9(b), the boxplot for the periphery data set is also illustrated, where no outliers are detected. This observation also holds for the "Hotel" and "Tunnel" data sets. The values of the support interval bounds and of the upper and lower whiskers for all data sets are included in Tables 3-5, which consolidate the results of the statistical modeling process.

The whiskers analysis provides a better way to compare the different deployments than relying solely on basic statistics such as the mean value and standard deviation. The results illustrate that the lower boundary of the system's performance stays consistently below 1% of error. The upper boundary varies from approximately 3.5% in benchmark LoS conditions to a maximum of almost 16% in LoS conditions in the periphery of the Athens cell, which is larger than the upper boundary in the NLoS deployments (Hotel, Tunnel, and Chem Plant), signifying the importance of the DoP phenomenon. Furthermore, it is shown in Figure 7(b) that the normalized localization error is fairly consistent among NLoS deployments up to the 75th percentile and that it is the upper 25% of error that mostly contributes to the deviations of the upper bound.

The greatest sources of localization error in the upper 25% of measurements in all data sets were identified to be the effect of GDoP or the existence of reflective obstacles near the antenna's far field boundary. In the first case, the points that contribute larger error, thus making up the upper percentiles of the empirical CDFs, were identified as points that showed large DoP values on either axis or both axes. This observation leads to the assumption that while CSS appears to be resilient to bad propagation conditions up to a certain point, it is not immune to the effects of GDoP due to the use of the trilateration algorithm, which can cause larger deviations in the upper percentiles.

This is especially visible in the case of the Tunnel deployment, as the effect of waveguide propagation appears to be in contention with GDoP. Even though CSS offers a larger frequency bandwidth of 80 MHz (when compared to other spread spectrum techniques) and waveguide propagation is expected to further improve ranging precision, the results in the Tunnel deployment showed that GDoP significantly affected the measurements in the narrowest dimension.

3.2. Error Modeling. The asymmetric boxplots that were generated (as exemplified in Figures 9(a) and 9(b)) indicate that the empirical measurements are significantly right-skewed, an observation that holds for all data sets. Based on this initial analysis, the Normal, Exponential, and Weibull distributions were selected for the modeling of the normalized localization error, truncated to match the support interval of the empirical data. The expressions for the truncated Normal distribution have already been given in (7). The truncated PDFs and CDFs for the Exponential and Weibull distributions, for $x \in [0, b_u]$, are as follows:

$g_e(x) = \dfrac{\lambda_e e^{-\lambda_e x}}{F_e(b_u)}, \qquad G_e(x) = \dfrac{F_e(x)}{F_e(b_u)} = \dfrac{1 - e^{-\lambda_e x}}{1 - e^{-\lambda_e b_u}}$,

$g_w(x) = \dfrac{(k/\lambda)(x/\lambda)^{k-1} e^{-(x/\lambda)^k}}{F_w(b_u)}, \qquad G_w(x) = \dfrac{F_w(x)}{F_w(b_u)} = \dfrac{1 - e^{-(x/\lambda)^k}}{1 - e^{-(b_u/\lambda)^k}}$. (16)

The Exponential distribution is characterized by the rate parameter $\lambda_e > 0$, while the Weibull distribution is characterized by the scale parameter $\lambda > 0$ and the shape parameter $k > 0$. The functions $F_w$ and $F_e$ denote the canonical CDFs of the (nontruncated) Weibull and Exponential distributions, respectively. For the Normal distribution, the empirical mean and standard deviation ($\mu_e$, $\sigma_e$) were considered as the starting point for MLE. In the case of the Exponential distribution, the $\lambda_e$ parameter is computed as $\lambda_e = 1/\mu_e$. The mean and the variance of the Weibull distribution are given by the following:

$\mu_w = \lambda\, \Gamma\left(1 + \dfrac{1}{k}\right)$, (17)

$v_w = \sigma_w^2 = \lambda^2\, \Gamma\left(1 + \dfrac{2}{k}\right) - \mu_w^2$. (18)

By equating the mean and variance with the empirical measurements, we can solve the equations to find the starting values for the shape and scale parameters. Consider

$\dfrac{v_w + \mu_w^2}{\mu_w^2} = \dfrac{\Gamma(1 + 2/k)}{\Gamma^2(1 + 1/k)}$. (19)

Given (19), it is possible to create a lookup table relating values of the shape parameter $k$ to the ratio of gamma functions. Based on the empirical mean and variance, the numerical value for $k$ can be easily found within the lookup table. The table was created with decimal precision up to 4 digits and includes values of $k$ in the interval $[0, 5]$, since for small values of $k$ the Weibull distribution's shape is asymmetrical and right-skewed, resembling the acquired data sets. After a close value for $k$ has been selected from the lookup table, (17) is then utilized to compute the value for the scale parameter $\lambda$.
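A sketch of the lookup-table approach to (19) follows, using scipy's gamma function; the grid resolution matches the 4-digit precision mentioned above, while the example moments are arbitrary:

```python
import numpy as np
from scipy.special import gamma

def estimate_weibull_params(mu_e, var_e):
    """Method-of-moments starting values for MLE: find the k whose
    gamma-function ratio (19) matches (var + mean^2) / mean^2, then
    recover the scale parameter lambda from (17)."""
    k_grid = np.arange(0.05, 5.0, 1e-4)  # 4-digit grid over (0, 5]
    ratio = gamma(1 + 2 / k_grid) / gamma(1 + 1 / k_grid) ** 2
    target = (var_e + mu_e**2) / mu_e**2
    k = k_grid[np.argmin(np.abs(ratio - target))]
    lam = mu_e / gamma(1 + 1 / k)
    return k, lam

print(estimate_weibull_params(mu_e=0.05, var_e=0.004))  # arbitrary moments
```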

The goodness-of-fit of these distributions was assessed based on the metrics and tests defined in Section 2. Furthermore, the following modifications of the Anderson-Darling metric were taken into account [35]:

Modified for the nonstandard Normal: $MA_n^2 = \left(1 + \dfrac{4}{n} - \dfrac{25}{n^2}\right) A_n^2$,

Modified for the Exponential: $MA_n^2 = \left(1 + \dfrac{0.6}{n}\right) A_n^2$,

Modified for the Weibull: $MA_n^2 = \left(1 + \dfrac{0.2}{\sqrt{n}}\right) A_n^2$. (20)

Tables 3-5 list the results of the fitting process, including the mentioned statistics, the support interval used to truncate the distributions, and the upper and lower whiskers of each data set. For both data sets collected in the Athens deployment (Table 3), the Weibull distribution consistently offers the minimum RMS error, KS, Kuiper, and Anderson-Darling statistics. The deviation of the truncated Normal distribution is the highest, with the Anderson-Darling statistic being multiple times higher than that of the Exponential and Weibull fits, indicating strong nonnormality of the empirical measurements. Figures 10(a) and 10(b) illustrate the fit for the Weibull and Exponential distributions, respectively, for the "Athens cell" data set. The Weibull distribution appears to fit the data in the lower percentiles of the empirical CDF more accurately than the Exponential distribution. This observation accounts for the superior results of the Weibull fit, especially in the case of the RMS error and Anderson-Darling statistics.

The same observations hold for the "Athens periphery" data set as well, with the Weibull fit achieving better results, especially in the lower percentiles. The following figures (Figures 11(a) and 11(b)) illustrate the fit for the truncated Normal distribution for the Athens data sets. While the deviation in the upper percentiles is not very large, the truncated Normal CDF retains the sigmoid shape of the Normal CDF and deviates significantly in the lower percentiles, which explains why the fit is not as appropriate as Weibull or Exponential. The sigmoid shape still occurs since the lower bound for the truncation was set to zero, which is a lower value than the empirical mean. It also accounts for the large value of the Anderson-Darling statistic. This observation also holds for all data sets.

Tables 4 and 5 include the results for the "Hotel," "Chem Plant," and "Tunnel" data sets. For all sets, the same observations seem to hold, with the Weibull fit achieving better results in terms of the RMS error and the Kuiper, KS, and Anderson-Darling statistics. The Exponential fit is slightly less accurate, especially in the upper percentiles, while the Normal fit suffers from large deviations in the lower percentiles. Among the tested distributions, the Weibull fit appears to consistently perform more adequately than the Exponential or Normal fit, although the Exponential fit follows closely. The similarity between the two models is explained by their respective PDFs; both are exponent-based and quite similar. For values of $k$ just above 1, the Weibull distribution becomes increasingly skewed, resembling the Exponential distribution, and when $k = 1$ the Weibull distribution reduces to the Exponential distribution.

Although the Weibull model seems to outperform the Exponential model, null hypothesis tests are also applied to formally compare them and assess whether the accuracy of the Weibull model is sufficient. Taking into account the number of samples and selecting a 99% confidence level, the critical values for the KS test are consistently larger than the Weibull KS statistic [36-40]; the model is therefore accepted as sufficient. For example, critical values of 0.162 and 0.188 were used for $n = 30$ and $n = 20$, respectively [39]. The Lilliefors test rejects the Exponential model for the Tunnel and Chem Plant data with critical values at 0.229 and 0.279, respectively, assuming the same 99% confidence level. The Lilliefors test is generally considered a more "strict" test, often discarding cases accepted by KS. This is attributed to the smaller stochastic order of the Lilliefors-generated critical values when compared to the KS critical values. Furthermore, the modified Anderson-Darling test for the Weibull distribution also accepts the Weibull model, as the critical values for the specific sample sizes and for 99% confidence were $v_{\text{crit}} > 1$ [36], while it rejects the Exponential model. The consistently smaller RMS error further supports the claim that the truncated Weibull distribution is a more appropriate selection for the modeling of the normalized localization error of ToA-based localization and tracking based on the chirp spread spectrum specification. Furthermore, the acceptance of the null hypothesis $H_0$ (14), where $G(Y)$ is the truncated Weibull distribution, by the KS and Anderson-Darling tests for all data sets shows that there is no further need to employ other techniques (such as KDE) to improve the fitted Weibull model and that it can be considered adequate.

3.3. Smoothing and Compensation. Given the apparent impact of GDoP on the tracking data, it was considered necessary to explore ways to reduce the localization error in such a way that the effects of the topology are taken into account. Following the selection of the truncated Weibull model, its quantile function can be expressed analytically as follows (with $0 \le p \le 1$):

$G^{-1}(p; \lambda, k, b_u) = \lambda \left(-\ln\left(1 - F_w(b_u)\, p\right)\right)^{1/k}$. (21)

Based on ITS, five different random number generators were constructed for all the modeled data sets. The validity of the random number generators was also verified by extensive sanity tests.
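A minimal sketch of such a generator, built directly on the quantile function (21), could look as follows (the model parameters in the example are arbitrary):

```python
import numpy as np

def truncated_weibull_sampler(lam, k, b_u, seed=None):
    """Random number generator for the truncated Weibull model, via
    inverse transform sampling through the quantile function (21)."""
    rng = np.random.default_rng(seed)
    F_bu = 1.0 - np.exp(-((b_u / lam) ** k))  # F_w(b_u)
    def draw(size):
        p = rng.uniform(0.0, 1.0, size=size)
        return lam * (-np.log(1.0 - F_bu * p)) ** (1.0 / k)
    return draw

draw = truncated_weibull_sampler(lam=0.05, k=0.9, b_u=2 * np.pi)
samples = draw(10_000)
assert samples.max() <= 2 * np.pi  # every sample respects the support
```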

Upon closer inspection of the collected tracking data, it was apparent that the location measurements exhibited biased behavior, in the sense that very similar values for the x- and y-coordinates were always sampled and the variance among the measurements was very small, a fact which probably accounts for the inadequacy of rolling window techniques. Furthermore, nearly all measurements seemed to be directionally biased towards the center of the localization cell. Table 6 includes the results for rolling average smoothing with a window size of $n = 5$ measurements. Other moving window methods, such as Kolmogorov-Zurbenko (KZ) smoothing, are equally ineffective for the same reason, strengthening the case for topology-based smoothing.

The proposed smoothing methodology has been applied to two sets of tracking data, in order to assess its viability. First, the data from the two Athens sets (central and peripheral) were pooled together, totaling 1260 location samples. This pooled data set was selected in order to test the classification stage of the algorithm. Furthermore, the data set from the Tunnel deployment was also selected, as it was shown to suffer greatly from the effects of dilution of precision in the narrowest dimension.

Dilution of precision of each axis was simulated according to the definition in [2]. In the case of the pooled data set and since LoS conditions are always satisfied, only the two models derived from the "Athens periphery" and "Athens cell" data sets were selected during the classification process. Within this simulation, a simplified zero-sum model for the classification process was used, meaning that the algorithm selected one of the two models.

Extensive simulations were used to assess the viability of this method. For each measurement in the pooled and Tunnel data sets, 100 random samples were generated and the DoP threshold was set to equal the mean value of simulated DoP values. Table 6 presents the results for the localization error for the pooled Athens data set in comparison with the results of the proposed algorithm and a simple moving average (MA) filter. Furthermore, it compares the smoothed data to the KZ-filtered data from the Tunnel deployment. The localization error for the pooled and Tunnel data sets is compared to the topology-based smoothing algorithm results in Figures 12(a) and 12(b).

According to Figure 12(a), a slight increase in error was observed in the lower 30% of the measurements, which was attributed to overcompensation of the most accurate measurements. In these measurements, the error was increased by a few centimeters. From the 30th percentile upwards, significant improvement was observed, sometimes reaching over 1 m. The 95th percentile of error was improved by 28.05%. This is a significant reduction in error, especially when compared to the results of the rolling average smoothing algorithm. The standard deviation of the localization error was also significantly reduced, from 0.74 to 0.44.

In the Tunnel deployment (Figure 12(b)), the results showed that the DoP-based algorithm compensated for errors in the x-axis more aggressively and significantly reduced the tracking error, especially in the upper percentiles. For example, the 95th percentile dropped from 1.77 m to 0.46 m. The algorithm reduced the overall error of the system, including the standard deviation (Table 6). This reduction in error is attributed to the increased effect of GDoP on the x-coordinate measurement, especially near the center of the Tunnel cell, where the x-coordinate corrections were more significant (compared to the corrections introduced to the y-coordinate). As expected, the KZ filter did slightly decrease the error but did not significantly impact the bias of the measurements, while the proposed algorithm aimed specifically to reduce it.

Results show that GDoP-based smoothing can successfully compensate for the localization error, especially in the higher percentiles, where the effect of GDoP is more significant. The algorithm was applied to the two data sets where the effects of DoP were most "visible" and in some cases showed significant improvement. It is surmised that as the effects of GDoP increase, the DoP-based corrections perform even better. There is, however, still a risk of overcompensating for accurate measurements, which can be alleviated by calibrating the algorithm to be less aggressive in low-DoP areas.

Concerning a real implementation of this algorithm, it is considered feasible, as the values for DoP can be precomputed with minimum effort while the system is being deployed. The sampling process, however, is slightly more demanding in terms of computational resources. In order to further optimize the process and reduce the computational complexity, it is possible to sample directly from the distribution of the weighted sum of random variables. For example, in the case of CSS tracking, supposing two random variables $X_1$ and $X_2$ (distributed as truncated Weibull) and choosing $w_1$ and $w_2$ as the respective weights, the random variables $Y_1 = w_1 X_1$ and $Y_2 = w_2 X_2$ are also distributed as truncated Weibull variables, renormalized with $1/w_1$ and $1/w_2$ so that each PDF integrates to one. Equation (22) shows the PDF of the distribution of $Y_1$ (and similarly of $Y_2$). Consider

$f_{Y_1}(y) = \dfrac{1}{w_1}\, g_w\left(\dfrac{y}{w_1}\right) = \dfrac{k}{w_1 \lambda\, F_w(b_u)} \left(\dfrac{y}{w_1 \lambda}\right)^{k-1} e^{-(y/(w_1 \lambda))^k}, \qquad 0 \le y \le w_1 b_u$. (22)

The distribution of the random variable that represents the sum Z = [Y.sub.1] + [Y.sub.2] is given by the convolution of their respective PDFs. It is also possible to directly sample the mean value of a large number of samples from [X.sub.1] and [X.sub.2], as for a large number of samples it is expected that the Central Limit Theorem [41] will hold and the mean values will be normally distributed. Therefore, a more generalized smoothing algorithm and its adaptation into a real-time filter can be considered feasible and significant.
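The sketch below illustrates this approach under assumed, illustrative weights and model parameters: it draws the weighted sum $Z = Y_1 + Y_2$ directly and demonstrates the CLT behavior of batch means:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, k, b_u = 0.05, 0.9, 2 * np.pi        # illustrative parameters
F_bu = 1.0 - np.exp(-((b_u / lam) ** k))

def draw(size):
    """Truncated Weibull samples via the quantile function (21)."""
    p = rng.uniform(0.0, 1.0, size=size)
    return lam * (-np.log(1.0 - F_bu * p)) ** (1.0 / k)

w1, w2 = 0.7, 0.3                          # illustrative convex weights
z = w1 * draw(10_000) + w2 * draw(10_000)  # samples of Z = Y1 + Y2
print(z.mean(), z.std())

# Per the Central Limit Theorem, means of large batches are ~Normal:
batch_means = draw((2_000, 100)).mean(axis=1)
print(batch_means.mean(), batch_means.std())
```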

4. Conclusions

Within this work, the performance capabilities and resilience of indoor tracking based on the chirp spread spectrum specification are assessed. The results of an extensive measurement campaign are presented and analyzed based on a proposed framework that incorporates exploratory data analysis and probabilistic modeling processes. The geometric dilution of precision phenomenon was also considered in order to analyze the loss in precision in relation to the deployment topology. Tracking data were collected in four environments, each one corresponding to a unique case of signal propagation. Localization accuracy in line of sight conditions was found to be adequate, although a slight decrease in accuracy was observed in the case of NLoS propagation conditions, such as the measurements taken in the Hotel deployment. Simulations of GDoP showed that there were cases where loss of precision was expected, a fact that was corroborated by the empirical measurements. The tracking data showed that GDoP affects localization and tracking accuracy, especially in the periphery of a localization cell and in narrow deployments such as those that emerge in Tunnel environments. The tracking accuracy of the system was also evaluated in an industrial environment. CSS localization consistently showed adequate performance, with the exception of a few outliers when reflective surfaces were present in close proximity to the tag's antenna with respect to its far field boundary.

The probabilistic modeling of the normalized localization error followed the basic statistical analysis. The support intervals and whiskers for all data sets were estimated, indicating the boundaries that define the range of Normal operation of CSS time of arrival tracking. While the lower boundary showed more stability, the upper boundary varied, depending on the source of error. The measurements indicated that the underlying distribution was asymmetric and right-skewed.

The Normal, Exponential, and Weibull distributions were selected as candidate models of the normalized localization error and were truncated to match the support interval of the data. Multiple metrics and statistics were considered to evaluate the goodness of fit of the selected distributions, including the RMS error, the Kuiper, Kolmogorov-Smirnov, and Anderson-Darling statistics, and the associated null hypothesis tests. Results indicated that the truncated Weibull distribution achieves a better fit in all data sets, irrespective of the propagation environment, and can be considered to adequately approximate the underlying distribution of the normalized localization error in the case of user tracking with CSS-based trilateration. The collective results showed that while CSS can be resilient to poor propagation conditions, the trilateration algorithm is still affected by GDoP, which can be a source of error. This observation motivated the introduction of a novel topology-based smoothing algorithm that takes into account the dilution of precision in each axis together with the extracted error models. The algorithm was applied to the existing tracking data and compared to common data smoothing techniques. Results showed an apparent bias in the range measurements, which rendered traditional smoothing techniques, such as moving-window algorithms, ineffective. The proposed topology-based smoothing algorithm, however, managed to alleviate the effects of DoP and greatly improved tracking performance, especially in the highest percentiles of error. The method can also be used to improve the accuracy of any trilateration-based system, given appropriate error models. Future work includes further validation of the proposed analytical framework with experimental data and the adaptation of the suggested smoothing method towards the practical implementation of a particle filter that provides real-time corrections while also taking the user's motion model into account.
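For completeness, the empirical-distribution statistics reported in Tables 3-5 ($D^+$, $D^-$, Kolmogorov-Smirnov, and Kuiper) can be computed from a sample and a candidate CDF as in the Python sketch below; the truncated Weibull CDF shown here is a generic formulation under assumed parameters, not the fitted models of this study.

    import numpy as np

    def truncated_weibull_cdf(x, k, lam, b):
        # CDF of a Weibull(k, lam) distribution truncated to [0, b].
        F = 1.0 - np.exp(-(np.asarray(x, dtype=float) / lam) ** k)
        return np.clip(F / (1.0 - np.exp(-(b / lam) ** k)), 0.0, 1.0)

    def edf_statistics(sample, cdf):
        # One-sided statistics D+ and D-, the Kolmogorov-Smirnov
        # statistic max(D+, D-), and the Kuiper statistic D+ + D-.
        x = np.sort(np.asarray(sample, dtype=float))
        n = len(x)
        F = cdf(x)
        d_plus = np.max(np.arange(1, n + 1) / n - F)
        d_minus = np.max(F - np.arange(0, n) / n)
        return d_plus, d_minus, max(d_plus, d_minus), d_plus + d_minus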

http://dx.doi.org/10.1155/2014/310410

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported in part by (a) the "DITSEF" EU research project ("Digital Innovative Technologies for the Security and Efficiency of First Responders," DITSEF-FP7-ICT-SEC-2007-1-225404); (b) the "TASS" EU research project ("Total Airport Security System," TASS-FP7-SEC-2009-2.2.2-241905); (c) the "PERSEUS" project ("Protection of European seas and borders through the intelligent use of surveillance," PERSEUS-FP7-SEC-2009-261748); (d) the "AF3" project ("Advanced Forest Fire Fighting," AF3-FP7-SEC-2013.4.1-6-607276), all four projects cofunded by the European Commission and the Hellenic General Secretariat of Research and Technology, Ministries of Development and Education; and (e) a Ph.D. Fellowship of NCSR "Demokritos" and the Hellenic Ministries of Development and Education. The authors would also like to acknowledge Mr. George E. Vastianos for the hardware implementation and his assistance during the collection of measurements in the Bulgarian sites.

References

[1] A. K. Dey and G. D. Abowd, "Towards a better understanding of context and context-awareness," Tech. Rep. GIT-GVU-99-22, Georgia Institute of Technology, 1999.

[2] A. Küpper, Location-Based Services: Fundamentals and Applications, John Wiley & Sons, 2005.

[3] X. Li, K. Pahlavan, M. Latva-Aho, and M. Ylianttila, "Indoor geolocation using OFDM signals in HIPERLAN/2 wireless LANs," in Proceedings of the 11th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC '00), pp. 1449-1453, September 2000.

[4] N. B. Priyantha, A. Chakraborty, and H. Balakrishnan, "Cricket location-support system," in Proceedings of the 6th Annual International Conference on Mobile Computing and Networking (MOBICOM '00), pp. 32-43, Boston, Mass, USA, August 2000.

[5] S. A. Mitilineos, D. M. Kyriazanos, O. E. Segou, J. N. Goufas, and S. C. A. Thomopoulos, "Indoor localization with wireless sensor networks," Progress in Electromagnetics Research, vol. 109, pp. 441-474, 2010.

[6] M. A. Stelios, A. D. Nick, M. T. Effie, K. M. Dimitris, and S. C. A. Thomopoulos, "An indoor localization platform for ambient assisted living using UWB," in Proceedings of the 6th International Conference on Advances in Mobile Computing and Multimedia (MoMM '08), pp. 178-182, November 2008.

[7] I. Amundson, J. Sallai, X. Koutsoukos, and A. Ledeczi, "Mobile sensor waypoint navigation via RF-based angle of arrival localization," International Journal of Distributed Sensor Networks, vol. 2012, Article ID 842107, 15 pages, 2012.

[8] J. V. Marti, J. Sales, R. Marin, and E. Jimenez-Ruiz, "Localization of mobile sensors and actuators for intervention in low-visibility conditions: the ZigBee fingerprinting approach," International Journal of Distributed Sensor Networks, vol. 2012, Article ID 951213, 10 pages, 2012.

[9] S. A. Mitilineos, J. N. Goufas, O. E. Segou, and S. C. A. Thomopoulos, "WAX-ROOM: an indoor WSN-based localization platform," in Signal Processing, Sensor Fusion, and Target Recognition XIX, Proceedings of SPIE, pp. 1-5, Orlando, Fla, USA, April 2010.

[10] T. He, C. Huang, B. M. Blum, J. A. Stankovic, and T. F. Abdelzaher, "Range-free localization and its impact on large scale sensor networks," ACM Transactions on Embedded Computing Systems, vol. 4, no. 4, 2005.

[11] T. S. Rappaport, "Mobile radio propagation: small scale fading and multipath," in Wireless Communications: Principles and Practice, Prentice-Hall Communications Engineering and Emerging Technologies Series, Prentice Hall, Upper Saddle River, NJ, USA, 2nd edition, 2002.

[12] D. E. Manolakis, "Efficient solution and performance analysis of 3-D position estimation by trilateration," IEEE Transactions on Aerospace and Electronic Systems, vol. 32, no. 4, pp. 1239-1248, 1996.

[13] K. Kaemarungsi and P. Krishnamurthy, "Modeling of indoor positioning systems based on location fingerprinting," in Proceedings of the IEEE 23rd Conference on Computer Communications (INFOCOM '04), pp. 1012-1022, March 2004.

[14] E. Karapistoli, F.-N. Pavlidou, I. Gragopoulos, and I. Tsetsinas, "An overview of the IEEE 802.15.4a standard," IEEE Communications Magazine, vol. 48, no. 1, pp. 47-53, 2010.

[15] E. D. Kaplan, "Performance of stand-alone GPS," in Understanding GPS: Principles and Applications, Artech House, Boston, Mass, USA, 2nd edition, 1996.

[16] NanoLOC TRX Transceiver (NA5TR1) User Guide, Version 2.0, Nanotron Technologies, 2008.

[17] G. E. Vastianos, O. E. Segou, K. I. Dimitros, D. M. Kyriazanos, and S. C. A. Thomopoulos, "ELECTRA-X: an enhanced indoor real time localization system with telepresence, telemetry and communication capabilities for usage in first responder operations," in Proceedings of the 8th Security Research Conference (FuSec '13), Berlin, Germany, September 2013.

[18] J. Harries, "Area of a quadrilateral," Mathematical Gazette, vol. 86, 2002.

[19] J. W. Tukey, "Box and whisker plots," in Exploratory Data Analysis, Addison-Wesley, 1977.

[20] O. Thas, "Graphical tools," in Comparing Distributions, Springer Series in Statistics, Springer, 2010.

[21] N. L. Johnson, S. Kotz, and N. Balakrishnan, "Normal distributions," in Continuous Univariate Distributions, vol. 1, John Wiley and Sons, New York, NY, USA, 1994.

[22] A. W. Bowman and A. Azzalini, Applied Smoothing Techniques for Data Analysis, Oxford University Press, New York, NY, USA, 1997.

[23] M. Hazewinkel, "Maximum-likelihood method," in Encyclopedia of Mathematics, Springer, 2001.

[24] M. C. Jones and D. A. Henderson, "Maximum likelihood kernel density estimation: on the potential of convolution sieves," Computational Statistics and Data Analysis, vol. 53, no. 10, pp. 3726-3733, 2009.

[25] W. J. Conover, Practical Nonparametric Statistics, Section 6.1, John Wiley and Sons, New York, NY, USA, 3rd edition, 1999.

[26] H. W. Lilliefors, "On the Kolmogorov-Smirnov test for the exponential distribution with mean unknown," Journal of the American Statistical Association, vol. 64, no. 318, pp. 387-389, 1969.

[27] E. S. Pearson and H. O. Hartley, Biometrika Tables for Statisticians, vol. 2, Cambridge University Press, 1972.

[28] N. Smirnov, "Table for estimating the goodness of fit of empirical distributions," Annals of Mathematical Statistics, vol. 19, pp. 279-281, 1948.

[29] G. R. Shorack and J. A. Wellner, Empirical Processes with Applications to Statistics, John Wiley & Sons, 1986.

[30] L. Sachs, Angewandte Statistik, Springer, 1997.

[31] G. R. Arce, Nonlinear Signal Processing: A Statistical Approach, John Wiley & Sons, Hoboken, NJ, USA, 2005.

[32] I. Zurbenko, The Spectral Analysis of Time Series, North-Holland Series in Statistics and Probability, 1986.

[33] L. Devroye, "Section 2.2: Inversion by numerical solution," in Non-Uniform Random Variate Generation, Springer, New York, NY, USA, 1st edition, 1986.

[34] C. A. Balanis, "Fundamental parameters of antennas," in Antenna Theory: Analysis and Design, John Wiley and Sons, New York, NY, USA, 3rd edition, 2005.

[35] M. A. Stephens, "EDF statistics for goodness of fit and some comparisons," Journal of the American Statistical Association, vol. 69, no. 347, pp. 730-737, 1974.

[36] B. Dodson, The Weibull Analysis Handbook, American Society for Quality, 2nd edition, 2006.

[37] B. W. Woodruff, A. H. Moore, E. J. Dunne, and R. Cortes, "A modified Kolmogorov-Smirnov test for Weibull distributions with unknown location and scale parameters," IEEE Transactions on Reliability, vol. R-32, no. 2, pp. 209-213, 1983.

[38] R. L. Edgeman and R. C. Scott, "Lilliefors's tests for transformed variables," Brazilian Journal of Probability and Statistics, vol. 1, pp. 101-112, 1987.

[39] H. J. Khamis, "The d-corrected Kolmogorov-Smirnov test for the two-parameter Weibull distribution," Journal of Applied Statistics, vol. 24, no. 3, pp. 301-317, 1997.

[40] P. D. T. O'Connor and A. Kleyner, Practical Reliability Engineering, John Wiley & Sons, 2012.

[41] A. Papoulis, "Central limit theorem," in Probability, Random Variables and Stochastic Processes, McGraw-Hill, 3rd edition, 1991.

Olga E. Segou and Stelios C. A. Thomopoulos Integrated Systems Laboratory, Institute of Informatics and Telecommunications, National Center for Scientific Research "Demokritos," Patriarchou Grigoriou 1 and Neapoleos Street, Aghia Paraskevi, 15310 Athens, Greece

Correspondence should be addressed to Olga E. Segou; osegou@iit.demokritos.gr

Received 6 December 2013; Revised 9 May 2014; Accepted 10 May 2014; Published 12 June 2014

Academic Editor: Shuai Li

TABLE 1: Localization error for the Athens and Hotel deployments.

Statistic                          Athens        Athens         Hotel       Hotel      Hotel
                                (cell data)  (periph. data)  (cell data)  (x-axis)   (y-axis)

Area of localization cell (m²)      200           200          267.87      267.87     267.87
Number of points                     37            26             20          20         20
Number of samples                   740           520            400         400        400
Average error (m)                  0.70          1.43           1.14        0.72       0.72
Standard deviation (m)             0.42          0.88           0.46        0.41       0.54
Median error (m)                   0.59          1.31           1.12        0.76       0.54
67th percentile (m)                0.72          1.91           1.47        0.89       1.08
95th percentile (m)                1.51          3.05           1.67        1.46       1.57
Normalized average
localization error                0.010         0.032          0.015         N/A        N/A

TABLE 2: Localization error for the Chemical Plant and Tunnel deployments.

Statistic                       ChemPlant  ChemPlant  ChemPlant   Tunnel   Tunnel    Tunnel
                                  data     (x-axis)   (y-axis)     data   (x-axis)  (y-axis)

Area of localization cell (m²)   101.76     101.76     101.76      180      180       180
Number of points                   20         20         20         30       30        30
Number of samples                 400        400        400        600      600       600
Average error (m)                1.12       0.63       0.70       0.95     0.78      0.40
Standard deviation (m)           0.95       0.46       0.77       0.52     0.55      0.33
Median error (m)                 0.75       0.43       0.46       0.85     0.55      0.34
67th percentile (m)              1.10       0.70       0.68       1.31     1.19      0.46
95th percentile (m)              3.45       1.58       2.52       1.77     1.63      1.11
Normalized average
localization error               0.038       N/A        N/A       0.016     N/A       N/A

TABLE 3: Fitting results for the Athens data sets.

Athens cell data             Normal fit   Exponential fit   Weibull fit

Support interval             [0, 7.8539]    [0, 7.8539]     [0, 7.8539]
Upper whisker (%)              3.6102         3.6102          3.6102
Lower whisker (%)              0.2320         0.2320          0.2320
RMS error                      0.1846         0.0657          0.0486
D+                             0.4076         0.1551          0.1237
D-                             0.0599         0.0703          0.0656
Kuiper                         0.4675         0.2254          0.1893
Kolmogorov-Smirnov             0.4076         0.1551          0.1237
Modified Anderson-Darling     58.4965         1.2054          0.6483

Athens periphery data        Normal fit   Exponential fit   Weibull fit

Support interval             [0, 7.8539]    [0, 7.8539]     [0, 7.8539]
Upper whisker (%)             15.7370        15.7370         15.7370
Lower whisker (%)              0.0723         0.0723          0.0723
RMS error                      0.1522         0.0471          0.0337
D+                             0.3246         0.1694          0.1150
D-                             0.0247         0.0511          0.0638
Kuiper                         0.3493         0.2205          0.1788
Kolmogorov-Smirnov             0.3246         0.1694          0.1150
Modified Anderson-Darling     40.5796         0.9177          0.4190

TABLE 4: Fitting results for the Hotel and Chemical Plant deployments.

Hotel data                   Normal fit   Exponential fit   Weibull fit

Support interval             [0, 25.31]     [0, 25.31]      [0, 25.31]
Upper whisker (%)              3.9500         3.9500          3.9500
Lower whisker (%)              0.0501         0.0501          0.0501
RMS error                      0.0971         0.0738          0.0593
D+                             0.2111         0.1063          0.0872
D-                             0.0842         0.0963          0.0875
Kuiper                         0.2953         0.2027          0.1747
Kolmogorov-Smirnov             0.2111         0.1063          0.0875
Modified Anderson-Darling     24.6208         0.7664          0.4973

ChemPlant data               Normal fit   Exponential fit   Weibull fit

Support interval             [0, 7.37]      [0, 7.37]       [0, 7.37]
Upper whisker (%)             11.9611        11.9611         11.9611
Lower whisker (%)              0.1380         0.1380          0.1380
RMS error                      0.2384         0.0928          0.0474
D+                             0.5833         0.3048          0.1587
D-                             0.0194         0.0306          0.0799
Kuiper                         0.6027         0.3354          0.2386
Kolmogorov-Smirnov             0.5833         0.3048          0.1587
Modified Anderson-Darling     43.9807         3.1102          0.7784

TABLE 5: Fitting results for the Tunnel deployment.

Tunnel data                  Normal fit   Exponential fit   Weibull fit

Support interval             [0, 47.33]     [0, 47.33]      [0, 47.33]
Upper whisker (%)              8.4036         8.4036          8.4036
Lower whisker (%)              0.0109         0.0109          0.0109
RMS error                      0.1388         0.0558          0.0531
D+                             0.3037         0.1204          0.1011
D-                             0.0171         0.1019          0.1037
Kuiper                         0.3208         0.2223          0.2048
Kolmogorov-Smirnov             0.3037         0.1204          0.1037
Modified Anderson-Darling     42.5120         0.6599          0.5699

TABLE 6: Smoothing and compensation results.

Statistic               Pooled Athens   Smoothed      MA        KZ       Smoothed
                          data set     Athens data  Athens    Tunnel     Tunnel

Average error (m)           1.00          0.71       1.00      0.90       0.30
Standard deviation (m)      0.75          0.44       0.74      0.47       0.10
Median error (m)            0.68          0.53       0.71      0.75       0.27
67th percentile (m)         1.18          0.63       1.20      0.92       0.37
95th percentile (m)         2.51          1.81       2.49      1.70       0.46
Normalized average
localization error         0.0158        0.0081     0.0157    0.0142     0.0016

MA: moving average; KZ: Kolmogorov-Zurbenko filter [32].