# Fault-Tolerant Optical Flow Sensor/SINS Integrated Navigation Scheme for MAV in a GPS-Denied Environment

1. Introduction

In recent years, unmanned air vehicles have been widely used in military and civil fields. State-of-the-art navigation schemes for unmanned aerial vehicles (UAVs) typically rely on the high-quality global positioning system (GPS). However, the GPS signal is not always available (indoors, for example), so overreliance on GPS is becoming a prominent weakness of UAVs [1]. As a result, increasing interest in navigation in GPS-denied environments has heightened the need for novel GPS-free integrated navigation schemes. Inspired by flying insects (e.g., honeybees) that are able to fly with great agility, scientists hope that micro air vehicles (MAVs) could fly like insects without the aid of GPS signals [2]. Experiments show that one of the navigation information sources honeybees use when flying to their destination is the velocity at which the image of the world moves across the eye [3]. Researchers believe that what honeybees mainly rely on is the apparent motion of the objects in their field of view, namely, optical flow, which contains the insects' egomotion information relative to the environment [4].

On the other hand, the requirements for more self-reliant (autonomous) navigation systems and for MAVs with a greater understanding of their environment are becoming more and more urgent. Insects like honeybees, even with their small brains and limited intelligence, clearly possess the accurate autonomous navigation ability described above. Accordingly, bioinspired optical flow sensors have been developed, with the advantages of smaller size, lighter weight, lower power consumption, higher update frequency, and lower cost compared to other equipment such as lidar, radar, and magnetometers [5].

Chahl [6] has pointed out that the largest development opportunities may exist in the small and micro UAV domains as a result of the novelty of aerospace engineering on a small scale. Therefore, autonomous navigation or pose estimation using optical flow sensors is valuable for MAVs. This has motivated much research into optical-flow-based MAV navigation systems and algorithms.

Ruffier and Franceschini [7] developed an MAV based on an optical flow regulation loop that is able to take off, cruise, and land automatically and to react appropriately to wind disturbances by keeping the downward optical flow at a constant value. Furthermore, a two-degree-of-freedom (DOF) tethered MAV was shown to land safely on a platform set into motion along the vertical and horizontal directions, without ground height or groundspeed information [8]. The results show that their MAV works very well; however, it has only two DOF, which is far from a practical 6-DOF MAV.

As stated earlier, the optical flow contains the egomotion information of honeybees or MAVs. Consequently, it is a reasonable navigation frame that combines the optical flow and the inertial navigation system (INS) measurement information in order to improve the navigation precision.

Sabo et al. [9] proposed an approach to bioinspired navigation using optical flow. Hallway navigation results show that the roll, pitch, and thrust can be tuned and controlled as expected. The deficiency is that the optical flow was calculated off-board using a computer vision algorithm, rather than taking advantage of optical flow sensors, which require little computation and have short response times.

Zhang et al. [10] fused strapdown inertial navigation system (SINS) information with optical flow through a Kalman filter to correct the SINS attitude when it diverges. Simulation results show that the estimated vehicle attitude has good performance with small error. Nevertheless, the placement scheme of the optical flow sensors is not given in that paper. In addition, the pitch angle is assumed to be sinusoidal while the roll and yaw angles are both zero, which is far from the practical situation.

In order to overcome the drift of INS, Shen et al. [11] designed an INS/optical flow/magnetometer integrated navigation scheme for a GPS-denied environment. The gyro, accelerometer, and magnetometer information is properly fused to estimate the attitude of the MAV. The results show that the proposed scheme can effectively reduce the errors of the navigation parameters and improve navigation precision. However, many devices are involved, although the same function can be realized by fusing only optical flow and inertial information.

Lyu et al. [12] designed a fault-tolerant filter based on optical flow and inertial measurements, greatly improving the velocity estimation accuracy. However, the utilization of the optical flow information could be improved further: it is used only for estimating velocity.

In this paper, a multiple optical flow sensor/SINS onboard integrated navigation scheme for a GPS-denied environment is proposed. Multiple optical flow sensors are fixed on the given MAV according to an optimal placement scheme, making the sensors as sensitive as possible to changes in the motion state of the MAV. Considering that the MAV may perform a large maneuver so that some optical flow sensors face the sky, which causes those sensors to fail, an adaptive decentralized filter with fault detection and isolation ability is developed to overcome this problem. 6-DOF flight simulations are carried out to validate the effectiveness of the navigation algorithm.

The rest of this paper is organized as follows. Section 2 introduces the mathematical model of optical flow sensors. The optical flow sensor/SINS integrated navigation algorithm based on the extended Kalman filter (EKF) is described in Section 3. Section 4 gives the optimal placement scheme of multiple optical flow sensors fixed on a MAV. Section 5 presents an application of the centralized EKF. A fault-tolerant integrated navigation scheme is proposed in Section 6. The last section concludes that the fault-tolerant optical flow sensor/SINS integrated navigation algorithm is effective.

2. Optical Flow Sensor Model

The magnitude of optical flow is defined as the angular rate at which a point on an object moves relative to an optical flow sensor. The perspective projection model of the optical flow sensor [10, 11] is illustrated in Figure 1.

2.1. Optical Flow Definition. Let [P.sub.G] = [[[x.sub.G], [y.sub.G], [z.sub.G]].sup.T] be a point on the ground in the three-dimensional optical flow sensor reference coordinate frame [S.sub.OF]([o.sub.OF][x.sub.OF][y.sub.OF][z.sub.OF]). Let the optical axis of the optical flow sensor be the Z-axis of the optical flow sensor coordinate system. f is the focal length and the origin [o.sub.OF] is the centre of projection. W and H are the pixel numbers of the optical flow sensor image plane along the [x.sub.OF] and [y.sub.OF] directions, respectively. The projection of point [P.sub.G] on the image plane is given by

p = f[P.sub.G]/[z.sub.G]. (1)

Point [P.sub.G] has the following relative motion to the optical flow sensor under the sensor coordinate system [S.sub.OF]:

[V.sub.G] = -[T.sub.G] - [omega] x [P.sub.G], (2)

where [omega] is the angular velocity and [T.sub.G] is the translational component of the motion relative to the optical flow sensor.

Expanding (2) in three dimensions, we get the following equations:

[V.sub.Gx] = -[T.sub.Gx] - [[omega].sub.y][z.sub.G] + [[omega].sub.z][y.sub.G],
[V.sub.Gy] = -[T.sub.Gy] - [[omega].sub.z][x.sub.G] + [[omega].sub.x][z.sub.G],
[V.sub.Gz] = -[T.sub.Gz] - [[omega].sub.x][y.sub.G] + [[omega].sub.y][x.sub.G]. (3)

Taking the derivative of both sides of (1) with respect to time, the relation between the velocity of [P.sub.G] in the optical flow sensor coordinate system and the velocity (i.e., the optical flow) of p in the image plane [13] is obtained:

v = dp/dt = f([V.sub.G][z.sub.G] - [P.sub.G][V.sub.Gz])/[z.sup.2.sub.G]. (4)

Expressing (4) in its x and y components, we get

[v.sub.x] = f[V.sub.Gx]/[z.sub.G] - f[V.sub.Gz][x.sub.G]/[z.sup.2.sub.G], (5)

[v.sub.y] = f[V.sub.Gy]/[z.sub.G] - f[V.sub.Gz][y.sub.G]/[z.sup.2.sub.G]. (6)

Substituting (3) into (5) and (6), the following equations are achieved:

[v.sub.x] = f(-[T.sub.Gx] - [[omega].sub.y][z.sub.G] + [[omega].sub.z][y.sub.G])/[z.sub.G] - f[x.sub.G](-[T.sub.Gz] - [[omega].sub.x][y.sub.G] + [[omega].sub.y][x.sub.G])/[z.sup.2.sub.G],
[v.sub.y] = f(-[T.sub.Gy] - [[omega].sub.z][x.sub.G] + [[omega].sub.x][z.sub.G])/[z.sub.G] - f[y.sub.G](-[T.sub.Gz] - [[omega].sub.x][y.sub.G] + [[omega].sub.y][x.sub.G])/[z.sup.2.sub.G], (7)

where [v.sub.x] and [v.sub.y] are the optical flow components on directions x and y, which can be calculated by a matching algorithm, such as the sum of absolute differences (SAD).
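As a numerical illustration of the projection relations (5) and (6), the sketch below computes the two optical flow components from a point's position and relative velocity in the sensor frame (the function and variable names are my own, not from the paper):

```python
def image_flow(f, P, V):
    """Optical flow (v_x, v_y) of a ground point per equations (5)-(6).

    f: focal length; P = (x_G, y_G, z_G): point coordinates in the
    sensor frame; V = (V_Gx, V_Gy, V_Gz): relative velocity of the
    point in the sensor frame.
    """
    x_G, y_G, z_G = P
    V_Gx, V_Gy, V_Gz = V
    v_x = f * V_Gx / z_G - f * V_Gz * x_G / z_G**2   # equation (5)
    v_y = f * V_Gy / z_G - f * V_Gz * y_G / z_G**2   # equation (6)
    return v_x, v_y

# A point 10 m along the optical axis moving sideways at 1 m/s
# produces a pure x flow of 0.1 rad/s (for f = 1):
print(image_flow(1.0, (0.0, 0.0, 10.0), (1.0, 0.0, 0.0)))
```

Sideways motion yields flow proportional to 1/[z.sub.G], while motion along the optical axis scales with the off-axis position of the point, as (5) and (6) state.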

When an optical flow sensor is mounted on a MAV flying in a three-dimensional space, the optical flow is a function of the MAV motion state (translation states [v.sub.x], [v.sub.y], [v.sub.z] and rotation states [[omega].sub.x], [[omega].sub.y], [[omega].sub.z]) and the distance d from the optical flow sensor to the object point, as shown in Figure 2.

In order to make the optical flow sensor work properly, the object plane should have adequate texture and should be sufficiently illuminated for the sensor to distinguish and track features. The optical flow sensor measures only two quantities: the x and y components of the mean optical flow within its field of view. The local optical flow vector OF in the viewing direction u (i.e., the unit vector along the optical axis) experienced by an optical flow sensor moving at velocity v and rotational rate [omega] is [14]

OF = -(v - (v * u)u)/d - [omega] x u, (8)

where d is the distance between the object point and the optical flow sensor along the viewing direction u and * denotes the dot product.

For a simple example, Figure 2 depicts an optical flow sensor moving relative to its object plane at a rightward velocity v, a height above the ground h, and a yaw rate [omega]. The angle between the viewing direction (the direction of the optical axis) of the optical flow sensor and the plumb line is [theta]. The optical flow OF detected by the sensor in the plane formed by the optical axis and the velocity vector is [5]

OF = v cos [theta]/(h/cos [theta]) + [omega] = v [cos.sup.2][theta]/h + [omega]. (9)

The physical unit of OF in (8) and (9) is "rad/s".
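Equation (9) is simple enough to check directly; a minimal sketch (names are illustrative):

```python
import math

def planar_optical_flow(v, h, theta, omega):
    """Optical flow (rad/s) per equation (9): translation over flat
    ground at speed v and height h, viewing angle theta from the
    plumb line, plus rotation rate omega."""
    return v * math.cos(theta)**2 / h + omega

# Looking straight down (theta = 0) at 2 m/s from 10 m: 0.2 rad/s.
print(planar_optical_flow(2.0, 10.0, 0.0, 0.0))
```

Tilting the sensor away from the plumb line reduces the translational flow by [cos.sup.2][theta], both because the viewed point is farther away and because only the velocity component across the viewing direction contributes.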

2.2. Reference Coordinate Systems. In order to describe the 6 DOF motions of a blended wing body [15] MAV with multiple optical flow sensors in three-dimensional space, three reference coordinate systems as can be seen in Figure 3 [16] are defined.

(i) Navigation coordinate system ([S.sub.n] - [o.sub.n][x.sub.n][y.sub.n][z.sub.n]): a local ENU (east-north-up) frame fixed to the world is selected as the navigation frame. Absolute motion states of the MAV are defined in this frame. If the rotation of the Earth is neglected, the ENU reference coordinate system can be considered an inertial coordinate system.

(ii) MAV body coordinate system ([S.sub.b] - [o.sub.b][x.sub.b][y.sub.b][z.sub.b]): a coordinate system fixed on the MAV body. Its origin is the centre of gravity of the MAV body. The Y-axis is parallel to the longitudinal axis of the MAV body and points forward. The Z-axis points upward.

(iii) Optical flow sensor coordinate system ([S.sub.OF] - [o.sub.OF][x.sub.OF][y.sub.OF][z.sub.OF]): a coordinate system fixed to an optical flow sensor. Its origin is the focus of the optical flow sensor. The Z-axis is the optical axis of the optical flow sensor. The X-axis and Y-axis are parallel to the x and y axes of the sensor's image plane, respectively.

The transformation matrix from the MAV body coordinate system [S.sub.b] to the optical flow sensor coordinate system [S.sub.OF] is

[mathematical expression not reproducible], (10)

where [mu] and [eta] are the mounting angles of the optical flow sensor fixed on the MAV. Namely, [mu] is the angle around the [x.sub.b] axis for rotating [S.sub.b] to [S.sub.OF], and then [eta] is the angle around the [y.sub.b] axis.

Thus, the transformation matrix from the local coordinate system [S.sub.n] to the optical flow sensor coordinate system [S.sub.OF] is

[C.sup.OF.sub.n] = [C.sup.OF.sub.b][C.sup.b.sub.n], (11)

where [C.sup.b.sub.n] is the transformation matrix from the local coordinate system [S.sub.n] to the MAV body coordinate system [S.sub.b].
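The chain of transformations in (10) and (11) can be sketched as follows. The excerpt does not reproduce the elementary rotation matrices, so the sign conventions below are an assumption; only the composition order (first [mu] about the x axis, then [eta] about the y axis, then [C.sup.OF.sub.n] = [C.sup.OF.sub.b][C.sup.b.sub.n]) is taken from the text:

```python
import numpy as np

def rot_x(a):
    """Rotation about the x axis by angle a (one common sign convention)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def rot_y(a):
    """Rotation about the y axis by angle a (same convention)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def C_OF_b(mu, eta):
    """C^OF_b per (10): rotate by mu about x_b, then by eta about y."""
    return rot_y(eta) @ rot_x(mu)

def C_OF_n(mu, eta, C_b_n):
    """C^OF_n = C^OF_b C^b_n per (11)."""
    return C_OF_b(mu, eta) @ C_b_n

# With zero mounting angles the sensor frame coincides with the body frame:
print(np.allclose(C_OF_n(0.0, 0.0, np.eye(3)), np.eye(3)))  # True
```

Whatever convention is adopted, the result must remain a proper rotation matrix (orthonormal, determinant 1), which is a useful sanity check in an implementation.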

2.3. Expression of Optical Flow in Three-Dimensional Space. As the measurements of optical flow sensors are based on the optical flow sensor coordinate system [S.sub.OF], the motion states of the MAV in the inertial coordinate system [S.sub.n] need to be projected onto the optical flow sensor coordinate system [S.sub.OF].

Therefore, based on the coordinate systems defined in the above section, the optical flow experienced by an optical flow sensor in Figure 3 is

[mathematical expression not reproducible], (12)

where [OF.sub.x] and [OF.sub.y] are the x and y components of the optical flow vector defined in (8), which are the measurements of the optical flow sensor. [V.sub.nOF] and [[omega].sub.nOF] are the velocity and rotation rate of the optical flow sensor in the inertial frame [S.sub.n], respectively. Subscripts OF, x and OF, y denote the x and y components in the optical flow sensor coordinate system [S.sub.OF], respectively. [d.sub.OFg] is the distance along [z.sub.OF] from the lens centre of the optical flow sensor to the ground.

Next, we need to find the relationship between the optical flow measurements in the optical flow sensor coordinate system [S.sub.OF] and the motion state of the aircraft in the inertial coordinate system [S.sub.n].

First, the velocity vector of the optical flow sensor in the inertial frame [S.sub.n] can be expressed as

[V.sub.nOF] = [dr.sub.nOF]/dt = d/dt([r.sub.nb]+[r.sub.bOF]) = [dr.sub.nb]/dt + [dr.sub.bOF]/dt. (13)

The projection of [V.sub.nOF] in the [S.sub.OF] coordinate system is

[([V.sub.nOF]).sub.OF] = [C.sup.OF.sub.b]([([V.sub.nb]).sub.b] + [([[omega].sub.nb]).sub.b] x [([r.sub.bOF]).sub.b]). (14)

As mentioned earlier, the navigation coordinate system [S.sub.n] can be considered as an inertial coordinate system, when the rotation of the Earth is neglected. Then, it should be noted that [([[omega].sub.nb]).sub.b] is indeed the rotational angular velocity of the MAV in the inertial reference system, which can be directly measured by SINS.

Second, the direction vector of [z.sub.OF] is defined as [k.sub.OF], which means that [([k.sub.OF]).sub.OF] = [[0 0 1].sup.T]. Thus, the projection of [k.sub.OF] to [S.sub.n] is

[([k.sub.OF]).sub.n] = [C.sup.n.sub.OF][([k.sub.OF]).sub.OF] = [C.sup.n.sub.b][C.sup.b.sub.OF][([k.sub.OF]).sub.OF]. (15)

Therefore, the cosine of the angle between [z.sub.OF] and the downward plumb line is -[([k.sub.OF]).sub.n,z], and

[([k.sub.OF]).sub.n,z] = [C.sub.13] cos [eta] sin [mu] - [C.sub.23] sin [eta] + [C.sub.33] cos [eta] cos [mu], (16)

where [C.sub.ij] is the corresponding element of [C.sup.b.sub.n].

The distance along the axis [z.sub.OF] from the origin of the optical flow sensor coordinate system [S.sub.OF] (i.e., the focus of the optical flow sensor) to the ground is

[d.sub.OFg] = -h/[([k.sub.OF]).sub.n,z], (17)

where h is the height of the optical flow sensor above the ground.

Third, the rotational angular velocity of the optical flow sensor in the optical flow sensor coordinate system is

[([[omega].sub.nOF]).sub.OF] = [([[omega].sub.nb]).sub.OF] = [C.sup.OF.sub.b][([[omega].sub.nb]).sub.b]. (18)

Finally, substituting (14), (17), and (18) into (12), the measurements of an optical flow sensor are

[mathematical expression not reproducible]. (19)

The measurement noise of [OF.sub.OF] in (19) is modelled in the following simulation by a white noise vector with zero mean, whose variance matrix is R = diag([r.sup.2.sub.OF], [r.sup.2.sub.OF]), where [r.sub.OF] = 0.001 rad/s.

3. Information Fusion Using EKF

In order to improve the precision of SINS, the navigation errors are taken as the state vector in this study instead of the navigation parameters themselves. This is because the navigation errors are much smaller than the navigation parameters and change slowly. Therefore, the state transition of the navigation errors can be expressed accurately with linear equations, and the estimation precision of the state can be easily guaranteed.

The state vector is taken as [17]

X = [[[delta]L, [delta][lambda], [delta]h, [delta][V.sub.E], [delta][V.sub.N], [delta][V.sub.U], [[phi].sub.E], [[phi].sub.N], [[phi].sub.U], [[epsilon].sub.cx], [[epsilon].sub.cy], [[epsilon].sub.cz], [[epsilon].sub.rx], [[epsilon].sub.ry], [[epsilon].sub.rz], [[nabla].sub.x], [[nabla].sub.y], [[nabla].sub.z]].sup.T], (20)

where the subscripts E, N, and U stand for the components along the corresponding axes of the east-north-up local coordinate system [S.sub.n]; x, y, and z stand for the components along the corresponding axes of the MAV body coordinate system [S.sub.b]; [delta]L, [delta][lambda], and [delta]h denote the position errors of SINS; [delta][V.sub.E], [delta][V.sub.N], and [delta][V.sub.U] denote the velocity errors of SINS; [[phi].sub.E], [[phi].sub.N], and [[phi].sub.U] denote the attitude errors of SINS; [[epsilon].sub.cx], [[epsilon].sub.cy], and [[epsilon].sub.cz] denote the random bias errors of the gyro in SINS; [[epsilon].sub.rx], [[epsilon].sub.ry], and [[epsilon].sub.rz] denote the random walk process errors of the gyro in SINS; and [[nabla].sub.x], [[nabla].sub.y], and [[nabla].sub.z] denote the random walk process errors of the accelerometer in SINS.
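For implementation, it helps to fix an index for each component of the 18-dimensional error state in (20); the sketch below uses my own shorthand names:

```python
# Index map for the 18-dimensional SINS error state in (20).
# The grouping follows the text; the names are shorthand, not the paper's.
STATE_NAMES = (
    ["dL", "dlambda", "dh"]              # position errors
    + ["dV_E", "dV_N", "dV_U"]           # velocity errors
    + ["phi_E", "phi_N", "phi_U"]        # attitude errors
    + ["eps_cx", "eps_cy", "eps_cz"]     # gyro random bias errors
    + ["eps_rx", "eps_ry", "eps_rz"]     # gyro random walk errors
    + ["nabla_x", "nabla_y", "nabla_z"]  # accelerometer random walk errors
)
STATE_INDEX = {name: i for i, name in enumerate(STATE_NAMES)}
print(len(STATE_NAMES))  # 18, matching the 18 x 18 matrix F in (21)
```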

Then, the state equation of the EKF can be represented as

[??] = FX + Gw, (21)

where F is an 18 x 18 SINS error matrix [17] and G is an 18 x 9 system noise allocation matrix given by

[mathematical expression not reproducible]. (22)

w is the system noise vector with zero mean, that is,

w = [[[[omega].sub.gx], [[omega].sub.gy], [[omega].sub.gz], [[omega].sub.rx], [[omega].sub.ry], [[omega].sub.rz], [[omega].sub.ax], [[omega].sub.ay], [[omega].sub.az]].sup.T], (23)

where [[omega].sub.gx], [[omega].sub.gy], and [[omega].sub.gz] are the gyro random white noises; [[omega].sub.rx], [[omega].sub.ry], and [[omega].sub.rz] are the first-order Markov-driven white noises of the gyro; and [[omega].sub.ax], [[omega].sub.ay], and [[omega].sub.az] are the first-order Markov-driven white noises of the accelerometer.

The covariance matrix of the system noise vector w is

[mathematical expression not reproducible], (24)

where [mathematical expression not reproducible] deg/hour, [mathematical expression not reproducible] deg/hour, and [mathematical expression not reproducible] g/hour.

The estimated navigation errors are used to correct the navigation parameters of SINS. There are two methods to do so [18]: one is to directly correct the outputs of SINS by taking the estimated value of the navigation error as feedback, called the output correction method; the other is to feed the estimated value of the navigation error into the SINS solution process, called the feedback correction method. The output correction method only changes the accuracy of the output, not the error state inside the inertial navigation system. In this way, the errors accumulate gradually over time, making the difference between the ideal mathematical model and the actual navigation system larger and larger, so the accuracy of the integrated navigation system becomes poor. Therefore, the feedback correction method is adopted in this paper in order to increase the accuracy of SINS.
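The two correction strategies can be contrasted in a few lines; this is a conceptual sketch with my own names, not the paper's implementation:

```python
import numpy as np

def output_correction(sins_output, error_estimate):
    """Output correction: only the published output is corrected; the
    SINS keeps integrating with its internal error untouched, so the
    error keeps accumulating."""
    return sins_output - error_estimate

def feedback_correction(sins_state, error_estimate):
    """Feedback correction (adopted in the paper): the estimated error
    is fed back into the SINS solution, and the filter's error state is
    reset to zero afterwards."""
    corrected = sins_state - error_estimate
    reset_error = np.zeros_like(error_estimate)
    return corrected, reset_error
```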

Finally, the structure of the optical flow sensor/SINS integrated navigation scheme based on EKF is shown in Figure 4. It is mainly composed of the following four modules.

(1) SINS. The microelectromechanical system- (MEMS-) based SINS outputs the attitude, position, and velocity measurements with drift and high noise. As time goes by, the navigation errors of SINS gradually increase, so other navigation devices, such as the optical flow sensors introduced above, should take the role of correcting the SINS errors.

(2) A Group of Multiple Optical Flow Sensors. Multiple optical flow sensors are mounted on a MAV at different positions with different viewing directions. As a result, the optical flow information in different directions around the MAV can be adequately detected.

(3) Prediction of the Optical Flow Using the SINS Measurements. The optical flow can be predicted by using (19) with the SINS measurements as the input. [[delta].sub.OF] is the difference between the optical flow sensor measurements and the predicted ideal optical flow and is taken as the measurement of the EKF. So the measurement equation of the EKF is

Z = HX + v, (25)

where

[mathematical expression not reproducible], (26)

H is the measurement matrix [17], and v is the measurement noise vector with zero mean, whose covariance matrix is R = diag([r.sup.2.sub.OF], [r.sup.2.sub.OF]), where [r.sub.OF] = 0.001 rad/s.

(4) Extended Kalman Filter (EKF). A centralized extended Kalman filter is first employed in this study to fuse the inertial and optical flow information. The state equation of the EKF is expressed by (21), and the measurement equation by (25). The navigation errors of SINS can be corrected by feeding back the navigation error estimates of the EKF.
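The fusion loop of module (4) follows the standard discrete Kalman recursion; below is a generic sketch, with F discretized to a transition matrix Phi and all names assumed rather than taken from the paper:

```python
import numpy as np

def ekf_predict(x, P, Phi, Q):
    """Time update: propagate the error state and its covariance."""
    x = Phi @ x
    P = Phi @ P @ Phi.T + Q
    return x, P

def ekf_update(x, P, z, H, R):
    """Measurement update with the optical flow difference z = delta_OF."""
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

With feedback correction, the estimated error state is applied to the SINS solution after each update and x is reset to zero, as described above.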

4. Sensor Placement Optimization

As mentioned in the previous section, multiple optical flow sensors are fixed on the MAV. So the placement scheme of optical flow sensors should be taken into account, which consists of the following three issues:

(a) The positions where the optical flow sensors are mounted

(b) The orientations which the optical flow sensors are facing

(c) The number of optical flow sensors

4.1. Mounting Position of Optical Flow Sensors. As illustrated in Figure 3, different [r.sub.bOF] means that the optical flow sensors are fixed at different positions on the MAV. Several magnitudes of [r.sub.bOF] are used to investigate the influence of the optical flow sensor positions on the navigation performance. The placement positions and angles of the optical flow sensors in the MAV body frame [S.sub.b] are shown in Tables 1-4. The sensor mounting angles in each case are identical.

It can be concluded from Figure 5 that the navigation performances of the cases are nearly the same, though their mounting positions are very different. In other words, the mounting positions of the optical flow sensors have little influence on the performance of the navigation system. On the other hand, if the distances between each optical flow sensor and the centre of gravity of the MAV are as large as possible (case 4), the longitudinal velocity error is slightly smaller than in the other cases, meaning the navigation performance is slightly better.

4.2. Mounting Orientation of Optical Flow Sensors. We expect the optical flow sensors to be as sensitive as possible to the motion state changes of the MAV. From (15), (16), (17), (18), and (19), it can be found that the magnitude of the optical flow is a function of the MAV motion state (x) and the optical flow sensor mounting angles ([mu], [eta]). Thus,

OF = [square root of [OF.sup.2.sub.x]+[OF.sup.2.sub.y]] = g(x,[mu],[eta]), (27)

where g(*) is a function of x, [mu], and [eta].

Let [DELTA]OF represent the optical flow sensitivity, defined as

[DELTA]OF = g([x.sub.0] + [DELTA][x.sub.0], [mu], [eta]) - g([x.sub.0], [mu], [eta]), (28)

where [x.sub.0] is the motion state of the MAV at a certain moment and [DELTA][x.sub.0] is the change of the motion state. The motion state and its change used for investigating the influence of the mounting angles on the sensor sensitivity are shown in Table 5.

Based on the current state shown in Table 5, the mounting angles [mu] and [eta] are varied from 120 to 240 degrees and from -60 to 60 degrees, respectively. The relationship between the measuring sensitivity and the mounting angles of the optical flow sensors is illustrated in Figure 6. The results indicate that when the optical flow change is the most obvious (in other words, when the sensitivity of the optical flow sensor is the highest), the mounting angles lie in the ranges

[mu] [member of] (130[degrees], 185[degrees]) [intersection] [eta] [member of] (-50[degrees],-30[degrees]), [mu] [member of] (175[degrees], 225[degrees]) [intersection] [eta] [member of] (30[degrees], 50[degrees]). (29)
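The search behind (28) and (29) can be sketched as a grid search over the scanned angle ranges. Since the full optical flow expression (19) is not reproduced here, g is left as a caller-supplied function; this harness is an assumption, not the paper's code:

```python
import numpy as np

def sensitivity(g, x0, dx0, mu, eta):
    """Delta-OF per (28): change of the optical flow magnitude when the
    MAV motion state changes from x0 to x0 + dx0 at mounting angles
    (mu, eta)."""
    return abs(g(x0 + dx0, mu, eta) - g(x0, mu, eta))

def best_mounting_angles(g, x0, dx0):
    """Grid search over mu in [120, 240] deg and eta in [-60, 60] deg
    (the ranges scanned in Section 4.2) for the most sensitive pair."""
    best = max(
        (sensitivity(g, x0, dx0, mu, eta), mu, eta)
        for mu in np.deg2rad(np.arange(120, 241, 5))
        for eta in np.deg2rad(np.arange(-60, 61, 5))
    )
    return best[1], best[2]  # (mu, eta) in radians maximizing Delta-OF
```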

4.3. Number of Optical Flow Sensors. More optical flow sensors provide more measurement information, and the redundancy of the navigation system is enhanced, making the system more reliable. However, too much optical flow input will increase the computational burden on the MAV's onboard computer, so the real-time performance of the integrated navigation algorithm may degrade. Moreover, too many optical flow sensors will also increase the payload mass of the MAV. In contrast, too few optical flow sensors will lead to poor estimation accuracy of the EKF.

Different quantities of optical flow sensors are fixed on the MAV to investigate the influence on the navigation performance. The first one, the first two, the first three, and all of the optical flow sensors in Table 4 are installed on the MAV in turn.

Taking the longitudinal velocity error as an example, the results shown in Figure 7 indicate that, with the aid of optical flow information, the navigation error is obviously smaller. If more than one optical flow sensor is used, the navigation error is further reduced, to about 1/10 of the error of the inertial navigation system aided by a single optical flow sensor.

Moreover, if the number of optical flow sensors is greater than two, the navigation performance of the integrated navigation system is the same as that of the inertial navigation system aided by two optical flow sensors. Considering the navigation accuracy, computational cost, payload mass, and redundancy, three optical flow sensors are recommended for mounting on the MAV.

In summary, the optimal placement scheme, shown in Table 6, is to mount three optical flow sensors on the MAV, as far as possible from each other and from the centre of gravity of the MAV. The optimal mounting angles are given by (29).

5. Numerical Simulation Using a Centralized EKF

To test the performance of the optical flow sensor/SINS integrated navigation scheme, three optical flow sensors are mounted on the MAV using the optimal placement scheme shown in Table 6. Some other initial parameters for this simulation can be seen in Table 7.

Navigation errors of the optical flow sensor/SINS integrated navigation scheme using a centralized EKF, compared with those of the navigation system using SINS only, are displayed in Figure 8. The results indicate that the navigation errors (including position, velocity, and attitude errors) become much smaller with the aid of the optical flow sensors. The correctness and effectiveness of the optical-flow-aided inertial navigation scheme are thus verified.

However, the optical flow sensors work well only if the detection plane has sufficient texture and the illumination is sufficient. Hence, if the MAV performs a large maneuver, some optical flow sensor(s) may face the sky, which leads to sensor failure in the form of zero measurements. In this case, the navigation system will diverge because wrong information is input.

For example, let the output of the third optical flow sensor in Table 6 be zero from 300 s to 700 s; the longitude error of the integrated navigation system using the centralized EKF is illustrated in Figure 9.

As can be seen in Figure 9, the navigation error immediately diverges when one of the optical flow sensors fails. Accordingly, an integrated navigation scheme with fault detection and isolation ability is proposed in the following sections.

6. Fault-Tolerant Integrated Navigation Scheme

6.1. Decentralized Filter. There are two ways to achieve an optimal integrated navigation system based on extended Kalman filter technology: one is the centralized Kalman filter, and the other is the decentralized Kalman filter.

The centralized Kalman filter is the method used in the previous sections, which utilizes only one Kalman filter to process all the information of all the navigation subsystems (SINS and optical flow sensors). The centralized EKF integrates the SINS subsystem with the optical flow sensor subsystems optimally to estimate the navigation error state and then correct the SINS subsystem by using the optimal estimation of the navigation error state. Thus, navigation precision can be improved relative to the navigation system using the SINS only.

The decentralized filter used in this study is a federated filter. By using the principle of "information distribution," the best compromise can be obtained by adjusting the distribution weights of the subsystem information in the navigation system. Therefore, the federated filter used in this study achieves the following advantages over the centralized filter:

(1) The fault-tolerant performance of the decentralized filter is better. When one or more navigation subsystems fail, the fault can be detected and isolated. Afterwards, the remaining normal navigation subsystems can be reassembled or reconfigured quickly to get the required accurate filtering solution.

(2) For the centralized filter, the increase in state dimension brings the so-called curse of dimensionality, which makes the computational cost increase dramatically, while reducing the filter dimension loses filtering accuracy and can even cause filtering divergence. For the decentralized filter, the synthesis (fusion) algorithms of both the local filters and the master filter are simple and require less computation, which facilitates real-time implementation of the navigation algorithm.

Referring to Figure 10, the workflow of the federated filter can be summarized as follows:

(1) The initial estimated covariance matrices of the local filters and the master filter are set to [[gamma].sub.i] (i = 1, 2, ..., N, m) times that of the integrated navigation system, where [[gamma].sub.i] satisfies the principle of information conservation, that is,

1/[[gamma].sub.1] + ... + 1/[[gamma].sub.N] + 1/[[gamma].sub.m] = 1,

0 [less than or equal to] 1/[[gamma].sub.i] [less than or equal to] 1. (30)

(2) The process noise covariance matrices of the local filters and the master filter are set to [[gamma].sub.i] times that of the combined system process noise.

(3) Each local filter obtains the local estimation by processing its own measurement information.

(4) The local estimation of each local filter and the estimation of the master filter are integrated according to

[[??].sub.g] = [P.sub.g][N.summation over (i=1)][P.sup.-1.sub.ii][[??].sub.i],

[P.sub.g] = [([N.summation over (i=1)][P.sup.-1.sub.ii]).sup.-1], (31)

where [[??].sub.i] is the ith local state estimation and [P.sub.ii] is the corresponding estimated covariance matrix. The local estimates are assumed uncorrelated with each other, namely, [P.sub.ij] = 0 (i [not equal to] j). [[??].sub.g] is the global optimal estimate, whose corresponding covariance matrix is [P.sub.g].

(5) The global optimal estimate is used to reset the state estimation and covariance matrix of local filters and the master filter.
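The fusion step (31) can be implemented directly; a minimal numpy sketch (the function name is my own):

```python
import numpy as np

def federated_fuse(estimates, covariances):
    """Global fusion per (31): P_g = (sum of P_ii^-1)^-1 and
    x_g = P_g * sum(P_ii^-1 @ x_i), valid for uncorrelated local
    estimates (P_ij = 0 for i != j)."""
    infos = [np.linalg.inv(P) for P in covariances]
    P_g = np.linalg.inv(sum(infos))
    x_g = P_g @ sum(I @ x for I, x in zip(infos, estimates))
    return x_g, P_g

# Two scalar local estimates with equal covariance fuse to their mean,
# and the fused covariance is halved:
x_g, P_g = federated_fuse([np.array([1.0]), np.array([3.0])],
                          [np.eye(1), np.eye(1)])
print(x_g, P_g)
```

The information-weighted sum means that a local filter with a large covariance (low confidence) contributes little to the global estimate, which is exactly what makes isolating a faulty subsystem straightforward.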

The determination of the information distribution coefficients [[beta].sub.i] = 1/[[gamma].sub.i] is crucial to the decentralized filter [19]. In this study, considering the fault tolerance and computational complexity of the decentralized filter, the information distribution coefficient of the master filter is [[beta].sub.m] = 0, and the information distribution coefficient of each local filter is [[beta].sub.i] = 1/N (i = 1, 2, ..., N). Therefore, the estimation of the master filter is actually the global estimation, namely,

[[??].sub.m] = [[??].sub.g] = [P.sub.g]([P.sup.-1.sub.1][[??].sub.1] + ... + [P.sup.-1.sub.N][[??].sub.N]). (32)

6.2. Fault Detection and Isolation. In the fault-tolerant integrated navigation system, the validity of the measurement information of each local filter must be determined in real time so as to decide which local state estimations are used to calculate the global state estimation. This requires a real-time fault detection and isolation algorithm in the local filtering process. Once a fault is detected, it can be isolated, and by reconstructing the system information, the whole navigation system will not fail because of a single subsystem failure.

A discrete system model with failure is expressed by

$X_k = \Phi_{k,k-1} X_{k-1} + \Gamma_{k-1} W_{k-1},$

$Z_k = H_k X_k + V_k + f_{k,\varphi} \gamma,$ (33)

where $Z_k \in \mathbb{R}^m$ is the measurement of the navigation system, $X_k \in \mathbb{R}^n$ is the system state, $\Phi_{k,k-1} \in \mathbb{R}^{n \times n}$ is the one-step state transition matrix, and $\Gamma_{k-1} \in \mathbb{R}^{n \times r}$ is the system noise matrix. $W_k \in \mathbb{R}^r$ and $V_k \in \mathbb{R}^m$ are mutually independent Gaussian white noise sequences with the following constraints:

$E\{W_k\} = 0, \quad E\{W_k W_j^T\} = Q_k \delta_{kj},$

$E\{V_k\} = 0, \quad E\{V_k V_j^T\} = R_k \delta_{kj},$ (34)

where $\delta_{kj}$ is the Kronecker delta. $\gamma$ is a random vector indicating the magnitude of a fault, and $f_{k,\varphi}$ is the piecewise step function

$f_{k,\varphi} = \begin{cases} 0, & k < \varphi, \\ 1, & k \geq \varphi, \end{cases}$ (35)

where $\varphi$ is the moment at which the fault occurs.
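
The faulty measurement model (33) with the step switch (35) can be sketched as follows; the function name and the numerical values are illustrative, and the step form of $f_{k,\varphi}$ follows the piecewise definition above:

```python
import numpy as np

def faulty_measurement(H, x, v, k, phi, gamma):
    # Eq. (33): Z_k = H_k X_k + V_k + f_{k,phi} * gamma,
    # with the step switch of Eq. (35): f = 0 before onset phi, 1 afterwards.
    f = 0.0 if k < phi else 1.0
    return H @ x + v + f * gamma

H = np.eye(2)
x = np.array([1.0, 2.0])
v = np.zeros(2)                  # noise set to zero for clarity
gamma = np.array([5.0, 0.0])     # illustrative fault magnitude
z_before = faulty_measurement(H, x, v, k=10, phi=300, gamma=gamma)   # fault-free
z_after = faulty_measurement(H, x, v, k=400, phi=300, gamma=gamma)   # faulty
```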

Here, the residual $\chi^2$ detection method is used to detect and isolate faults of the navigation system.

The residual of each local Kalman filter is

$r_k = Z_k - H_k \hat{X}_{k/k-1},$ (36)

where the one-step prediction $\hat{X}_{k/k-1}$ is

$\hat{X}_{k/k-1} = \Phi_{k,k-1} \hat{X}_{k-1}.$ (37)

It can be shown that, when no fault occurs, the Kalman filter residual $r_k$ is zero-mean Gaussian white noise, with variance

$A_k = H_k P_{k/k-1} H_k^T + R_k.$ (38)

When a fault occurs, the mean of the residual $r_k$ is no longer zero. Therefore, by monitoring the mean of the residuals, it is possible to determine whether the navigation system has failed.

The fault detection function is defined as

$\lambda_k = r_k^T A_k^{-1} r_k,$ (39)

where $\lambda_k$ follows a $\chi^2$ distribution with $m$ degrees of freedom, namely,

$\lambda_k \sim \chi^2(m),$ (40)

where $m$ is the dimension of the measurement $Z_k$; here, $m = 2$.

Finally, the fault criteria are as follows:

If $\lambda_k > T_D$, a fault occurs;

If $\lambda_k \leq T_D$, no fault occurs, (41)

where the preset threshold $T_D$ is determined by the false alarm rate $P_f$. Given the false alarm rate $P_f = \alpha$, the $\chi^2$ distribution yields the threshold value $T_D$.

In this study, $P_f = \alpha = 0.001$ and $m = 2$; according to the $\chi^2$ distribution, the threshold value is $T_D = 13.82$.
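
Equations (36)-(41) reduce to a few lines of code. The sketch below assumes $m = 2$, for which the $\chi^2$ tail threshold has the closed form $T_D = -2\ln\alpha$ (giving 13.82 for $\alpha = 0.001$, matching the value above); the function name and the test values are illustrative:

```python
import math
import numpy as np

def chi2_fault_test(z, H, x_pred, P_pred, R, alpha=0.001):
    r = z - H @ x_pred                      # residual, Eq. (36)
    A = H @ P_pred @ H.T + R                # residual covariance, Eq. (38)
    lam = float(r @ np.linalg.solve(A, r))  # detection function, Eq. (39)
    # For m = 2 degrees of freedom: P(lambda > T_D) = alpha  =>  T_D = -2 ln(alpha)
    T_D = -2.0 * math.log(alpha)
    return lam, lam > T_D                   # fault criteria, Eq. (41)

H, R = np.eye(2), 0.1 * np.eye(2)
x_pred, P_pred = np.zeros(2), 0.1 * np.eye(2)
lam_ok, fault_ok = chi2_fault_test(np.zeros(2), H, x_pred, P_pred, R)            # healthy
lam_bad, fault_bad = chi2_fault_test(np.array([3.0, 0.0]), H, x_pred, P_pred, R) # faulty
```

A stuck-at-zero sensor makes the residual mean jump by $H_k X_k$, so $\lambda_k$ rises far above $T_D$ within a few samples, which is what Figure 12 shows for $\lambda_3$.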

As each local filter is designed for the measurement information from one optical flow sensor, the fault detection and isolation algorithm in each local filter can detect invalid measurement information and flag the corresponding erroneous local state estimate. Using the remaining correct local state estimates, a reliable state estimate of the whole navigation system can be obtained with the decentralized filter proposed above. The structure of the proposed fault-tolerant optical flow sensor/SINS integrated navigation scheme is shown in Figure 11.

6.3. Numerical Simulation Using the Fault-Tolerant Integrated Navigation Algorithm. As before, three optical flow sensors are mounted on the MAV using the optimal placement scheme shown in Table 6. We assume that the third optical flow sensor in Table 6 fails (or that its observed plane lacks apparent texture) during 300 s to 700 s, producing zero outputs, as can be seen in Figure 12. Other initial parameters are shown in Table 7.

As shown in Figure 12, when the optical flow sensor fails, the value of the fault detection function $\lambda_3$ increases sharply, far exceeding the threshold value $T_D = 13.82$.

In the subsequent figures (Figures 13-15), the "fault zone" is the period during which the optical flow sensor fails; normal operation is restored after 700 s.

The meanings of legends in each figure are as follows:

(1) "DF-no fault": the navigation error of using the decentralized filter for MAV navigation. During the flight of MAV, the optical flow sensors have no fault.

(2) "DF-FDI": the navigation error of using the decentralized filter with fault detection and isolation (FDI) capability for MAV navigation. During 300 s to 700 s, the outputs of the third optical flow sensor in Table 6 are set to zero.

(3) "DF-no FDI": as in the previous case, the outputs of the third optical flow sensor in Table 6 are set to zero during 300 s to 700 s. The difference is that the decentralized filter does not have the FDI capability.

(4) "CF-no fault": the navigation error history of using the centralized filter for MAV navigation. During the flight of MAV, the optical flow sensors have no fault.

(5) "CF-fault": the navigation error of using the centralized EKF for MAV navigation. Of course, the centralized filter does not have the FDI capability. During 300 s to 700 s, the outputs of the third optical flow sensor in Table 6 are set to zero.

For the centralized filter and decentralized filter without FDI capability, the global estimation immediately diverges when the optical flow sensor fails, causing the failure of the whole system, namely, "failure of FDI-free filters," shown in the following figures. In contrast, the decentralized filter with the FDI capability can detect and isolate the sensor failure in time to make the navigation system work properly, namely, "effectiveness of the FDI filter" as can be seen in the following figures. The results verify the correctness and effectiveness of the designed fault-tolerant optical flow sensor/SINS integrated navigation scheme in the presence of system-level failures.

In addition, as can be seen in the figures, the curves of "DF-no fault" and "CF-no fault" are consistent, and the curves of "DF-no FDI" and "CF-fault" are consistent, which indicates that the centralized filter and decentralized filter are mathematically equivalent.

In order to assess the effect of the sampling period on the simulation performance, 1200 s MAV flight simulations using the flight navigation scenario proposed in this manuscript were performed in a desktop environment (Intel Core i7-6700 CPU at 3.40 GHz, Matlab R2015a) under different sampling periods (0.1 s, 0.02 s, 0.01 s, 0.002 s, and 0.001 s). The simulation results are obtained from 50 Monte Carlo runs.

Three navigation schemes are considered in the simulation:

(1) Navigation only based on INS (inertial navigation system), called "INS only" in Table 8

(2) Optical flow sensor/INS-based integrated navigation scheme using the centralized EKF, called "centralized EKF" in Table 8

(3) Optical flow sensor/INS-based integrated navigation scheme using decentralized EKF with FDI (fault detection and isolation) ability, called "FDI-EKF" in Table 8

Here, two criteria are used for evaluating the simulation performance: the computing time and the navigation error. The criterion employed for evaluating the navigation error with respect to the sampling period is the RMSE (root mean square error) over $M$ Monte Carlo runs at a given time $k$, which is defined by

$\mathrm{RMSE}_k^x = \left[ \frac{1}{M} \sum_{i=1}^{M} \left( x_k^{(i)} - \hat{x}_k^{(i)} \right)^2 \right]^{1/2}, \quad k = 1, 2, \ldots,$ (42)

where the superscript $(i)$ denotes quantities from the $i$th Monte Carlo run: $x_k^{(i)}$ is the true value of one of the MAV motion states at time $k$ in the $i$th run, and $\hat{x}_k^{(i)}$ is the corresponding estimate.
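
Equation (42) vectorizes directly in NumPy; the array shapes and values below are illustrative:

```python
import numpy as np

def rmse_per_time(truth, estimates):
    # Eq. (42): root mean square error at each time k over M Monte Carlo runs.
    # truth, estimates: arrays of shape (M, K) -- M runs, K time steps.
    return np.sqrt(np.mean((truth - estimates) ** 2, axis=0))

truth = np.zeros((2, 3))                  # 2 runs, 3 time steps, true state is zero
estimates = np.array([[3.0, 0.0, 1.0],
                      [4.0, 0.0, 1.0]])   # illustrative estimation errors
rmse = rmse_per_time(truth, estimates)    # one RMSE value per time step
```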

Taking the RMSE of $V_N$ as an example, Figure 16 indicates that the shorter the sampling period, the smaller the navigation error. On the other hand, the average computing time over 50 Monte Carlo simulations for the different navigation algorithms under different sampling periods is shown in Table 8: the shorter the sampling period, the longer the computation time. When the decentralized EKF with FDI ability is run at the 0.001 s sampling period, the computing time (1911.1021 s) even exceeds the simulated flight time (1200 s), which means that the navigation algorithm cannot run in real time.

Conversely, as can be seen in Table 8, the larger the sampling period, the shorter the computing time, since increasing the sampling period reduces the amount of computation. However, as shown in Figure 16, if the sampling period is too long (e.g., 0.1 s), the filter diverges.

Accordingly, a series of sampling periods is employed in the simulation to determine an appropriate one for the optical flow sensor/INS-based integrated navigation scheme using the decentralized EKF with FDI ability (FDI-EKF). As mentioned above, the filter diverges if the sampling period is too long. For the FDI-EKF, after many simulations, it is found that the navigation scheme works correctly when the sampling period is 0.02 s or less.

As portrayed in Figure 17, when the sampling period is halved from 0.02 s to 0.01 s, the navigation error is greatly reduced. Further reductions in the sampling period yield only a slight additional decrease in the navigation error, while Table 8 indicates that the computing time grows rapidly, eventually exceeding the 1200 s simulated flight time. After trading off navigation accuracy against computing time, 0.01 s is selected as the sampling period in Table 7.

7. Conclusions

In this paper, an optical flow sensor/SINS integrated navigation scheme for MAVs is developed to reduce the navigation error of the inertial navigation system through a decentralized EKF with fault detection and isolation ability. The EKF fuses the inertial and optical flow measurements to estimate the navigation error, which is then used to correct the SINS outputs. The results show that the integrated navigation algorithm can effectively reduce inertial navigation errors and isolate sensor failures. Experiments will be conducted in future work to test the performance of the fault-tolerant optical flow sensor/SINS integrated navigation scheme.

https://doi.org/10.1155/2018/9678505

Data Availability

The datasets used or analysed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC) (Grant no. 61403007).

References

[1] R. Strydom, A. Denuelle, and M. Srinivasan, "Bio-inspired principles applied to the guidance, navigation and control of UAS," Aerospace, vol. 3, no. 3, 2016.

[2] N. Franceschini, "Small brains, smart machines: from fly vision to robot vision and back again," Proceedings of the IEEE, vol. 102, no. 5, pp. 751-781, 2014.

[3] M. V. Srinivasan, S. Zhang, M. Altwein, and J. Tautz, "Honeybee navigation: nature and calibration of the 'odometer'," Science, vol. 287, no. 5454, pp. 851-853, 2000.

[4] M. V. Srinivasan, M. Lehrer, W. H. Kirchner, and S. W. Zhang, "Range perception through apparent image speed in freely flying honeybees," Visual Neuroscience, vol. 6, no. 5, pp. 519-535, 1991.

[5] X. Liu, Z. Chen, W. Chen, and X. Xing, "Motion estimation using optical flow sensors and rate gyros," in 2016 IEEE International Conference on Mechatronics and Automation, pp. 1708-1715, Harbin, China, August 2016.

[6] J. Chahl, "Unmanned aerial systems (UAS) research opportunities," Aerospace, vol. 2, no. 2, pp. 189-202, 2015.

[7] F. Ruffier and N. Franceschini, "Optic flow regulation: the key to aircraft automatic guidance," Robotics and Autonomous Systems, vol. 50, no. 4, pp. 177-194, 2005.

[8] F. Ruffier and N. Franceschini, "Optic flow regulation in unsteady environments: a tethered MAV achieves terrain following and targeted landing over a moving platform," Journal of Intelligent & Robotic Systems, vol. 79, no. 2, pp. 275-293, 2015.

[9] C. M. Sabo, A. Cope, K. Gurney, E. Vasilaki, and J. Marshall, "Bio-inspired visual navigation for a quadcopter using optic flow," in AIAA Infotech @ Aerospace, San Diego, CA, USA, January 2016.

[10] L. Zhang, Z. Xiong, J. Lai, and J. Liu, "Optical flow-aided navigation for UAV: a novel information fusion of integrated MEMS navigation system," Optik, vol. 127, no. 1, pp. 447-451, 2016.

[11] C. Shen, Z. Bai, H. Cao et al., "Optical flow sensor/INS/magnetometer integrated navigation system for MAV in GPS-denied environment," Journal of Sensors, vol. 2016, Article ID 6105803, 10 pages, 2016.

[12] P. Lyu, J. Lai, H. H. T. Liu, J. Liu, and W. Chen, "A model-aided optical flow/inertial sensor fusion method for a quadrotor," Journal of Navigation, vol. 70, no. 2, pp. 325-341, 2017.

[13] D. Honegger, L. Meier, P. Tanskanen, and M. Pollefeys, "An open source and open hardware embedded metric optical flow CMOS camera for indoor and outdoor applications," in 2013 IEEE International Conference on Robotics and Automation, pp. 1736-1741, Karlsruhe, Germany, May 2013.

[14] M. Verveld, Q.-P. Chu, C. De Wagter, and J. A. Mulder, "Optic flow based state estimation for an indoor micro air vehicle," in AIAA Guidance, Navigation, and Control Conference, Toronto, Ontario, Canada, August 2010.

[15] D. J. Lee, M. Byoung-Mun, T. Min-Jea, B. Hyochoong, and D. H. Shim, "Autonomous flight control system design for a Blended Wing Body," in 2008 International Conference on Control, Automation and Systems, pp. 192-196, Seoul, South Korea, October 2008.

[16] Y. Watanabe, P. Fabiani, and G. Le Besnerais, "Air-to-ground target tracking in a GPS-denied environment using optical flow estimation," in AIAA Guidance, Navigation, and Control Conference, Chicago, IL, USA, August 2009.

[17] X. Liu, Z. Chen, W. Chen, and X. Xing, "Multiple optical flow sensors aiding inertial systems for UAV navigation," in 2016 UKACC 11th International Conference on Control (CONTROL), pp. 1-7, Belfast, UK, September 2016.

[18] J. Li, INS/CNS/GNSS Integrated Navigation Technology, National Defence Industry Press, 2015.

[19] N. A. Carlson, "Federated filter for fault-tolerant integrated navigation systems," in IEEE PLANS '88., Position Location and Navigation Symposium, Record. 'Navigation into the 21st Century', pp. 110-119, Orlando, FL, USA, December 1988.

Zhongyuan Chen,(1) Wanchun Chen, (1) Xiaoming Liu,(1) and Chuang Song (2)

(1) School of Astronautics, Beihang University, Beijing 100191, China

(2) Science and Technology on Complex System Control and Intelligent Agent Cooperation Laboratory, Beijing Electro-Mechanical Engineering Institute, Beijing 100074, China

Correspondence should be addressed to Xiaoming Liu; liuxiaoming_bh@163.com

Received 18 May 2018; Accepted 13 August 2018; Published 20 September 2018

Academic Editor: Paolo Bruschi

Caption: Figure 1: Perspective projection model of an optical flow sensor.

Caption: Figure 2: An optical flow sensor is moving relative to its object plane at a velocity v, a height above the ground h, and a yaw rate [omega]. The angle between the optical axis and the plumb line is [theta]. f is the focal length of the optical flow sensor.

Caption: Figure 3: Three coordinate systems used for modelling the blended wing body MAV [17]. [r.sub.nb], [r.sub.bOF], and [r.sub.nOF] are used to express the relative position of the origin of each coordinate system.

Caption: Figure 4: Optical flow sensor/SINS integrated navigation system. OFS: optical flow sensor.

Caption: Figure 5: History of the longitudinal velocity navigation error.

Caption: Figure 6: The relationship between the sensitivity and the mounting angles of optical flow sensors.

Caption: Figure 7: The influence of the optical flow sensor amount on the estimation performance. The blue, red, yellow, purple, and green curves are the navigation errors of SINS only, one optical flow sensor-aided, two optical flow sensor-aided, three optical flow sensor-aided, and four optical flow sensor-aided inertial navigation system, respectively.

Caption: Figure 8: Navigation errors of SINS only and optical flow sensor-aided integrated navigation system: (a) position errors, (b) velocity errors, and (c) attitude errors.

Caption: Figure 9: Centralized EKF will rapidly diverge if one optical flow sensor fails (e.g., when the output of the optical flow sensor is zero).

Caption: Figure 10: Block diagram of the decentralized filter.

Caption: Figure 11: Structure of the fault-tolerant optical flow sensor/SINS integrated navigation scheme.

Caption: Figure 12: The value of the fault detection function $\lambda_3$ sharply increases as soon as the optical flow sensor fails. (a) Measurements of the optical flow sensor with a fault during 300 s to 700 s. (b) Time history of the fault detection function $\lambda_3$.

Caption: Figure 13: Position errors: (a) longitude error, (b) latitude error, and (c) height error.

Caption: Figure 14: Velocity errors: (a) eastward velocity error, (b) northward velocity error, and (c) upward velocity error.

Caption: Figure 15: Attitude errors: (a) roll angle error, (b) pitch angle error, and (c) yaw angle error.

Caption: Figure 16: The RMSE of $V_N$ from 50 Monte Carlo runs when the centralized EKF is deployed for optical flow sensor/INS integrated navigation. The shorter the sampling period, the smaller the navigation error. A sampling period that is too long (0.1 s) leads to filter divergence.

Caption: Figure 17: The RMSE of $V_N$ from 50 Monte Carlo runs when the decentralized EKF with FDI ability is deployed for optical flow sensor/INS integrated navigation.

Table 1: Case 1: optical flow sensors are mounted at the centre of gravity of the MAV.

Sensor identifier   x_b (m)   y_b (m)   z_b (m)   μ (rad)   η (rad)
1                   0         0         0         π         π/6
2                   0         0         0         5π/6      0
3                   0         0         0         7π/6      0
4                   0         0         0         π         -π/6

Table 2: Case 2: optical flow sensors are mounted closely on the MAV.

Sensor identifier   x_b (m)   y_b (m)   z_b (m)   μ (rad)   η (rad)
1                   0         0.1       0         π         π/6
2                   0.1       0         0         5π/6      0
3                   -0.1      0         0         7π/6      0
4                   0         -0.1      0         π         -π/6

Table 3: Case 3: the coordinate values of the sensor mounting positions are half of those in case 4.

Sensor identifier   x_b (m)   y_b (m)   z_b (m)   μ (rad)   η (rad)
1                   0         0.1       0         π         π/6
2                   0.38      0         0         5π/6      0
3                   -0.38     0         0         7π/6      0
4                   0         -0.1      0         π         -π/6

Table 4: Case 4: mounting positions of the optical flow sensors are the wingtips, nose, and tail of the MAV (the distances between each sensor and the centre of gravity of the MAV are as long as possible).

Sensor identifier   x_b (m)   y_b (m)   z_b (m)   μ (rad)   η (rad)
1                   0         0.2       0         π         π/6
2                   0.76      0         0         5π/6      0
3                   -0.76     0         0         7π/6      0
4                   0         -0.2      0         π         -π/6

Table 5: Motion state of the MAV at a certain moment and its change.

Parameter   Velocity (m/s)   Height (m)   Pitch angle (deg)   Yaw angle (deg)   Roll angle (deg)
x_0         10               10           5                   5                 0
Δx_0        0.1              0.1          0.1                 0.1               0.1

Table 6: Optimal placement scheme of the optical flow sensors.

Sensor identifier   x_b (m)   y_b (m)   z_b (m)   μ (rad)   η (rad)
1                   0         0.2       0         π         π/6
2                   0.76      0         0         5π/6      0
3                   -0.76     0         0         7π/6      0

Table 7: Initial parameters of the simulation.

Parameter                                     Magnitude
Sampling period                               0.01 s
Random bias of the gyro in SINS               10 deg/h
Random walk process noise standard
  deviation of the accelerometer in SINS      0.005 g (g = 9.8 m/s²)
Initial position                              λ = 120°, L = 30°, h = 1000 m
Initial attitude (pitch, roll, and yaw)       θ = π/6 rad, γ = 0 rad, ψ = π/4 rad
Initial velocity                              V_E = 200 m/s, V_N = 200 m/s, V_U = 0 m/s
Initial error state                           X_0 = 0

Table 8: Computing time under different sampling periods.

Sampling period (s)   Computation time (s)
                      INS only   Centralized EKF   FDI-EKF
0.1                   0.9432     4.1114            --
0.02                  3.8230     19.0431           95.7955
0.01                  7.5288     37.9740           193.7792
0.002                 37.9699    188.8640          959.3003
0.001                 74.8733    372.7788          1911.1021

Journal of Sensors, 2018.