# Information-Fusion Methods Based Simultaneous Localization and Mapping for Robot Adapting to Search and Rescue Postdisaster Environments

1. Introduction

Mobile robots perform the simultaneous localization and mapping (SLAM) task in search and rescue (SAR) postdisaster scenarios, building a global exploration map while localizing themselves within it. Decision modeling for information-fusion was first put forward by Torra and Narukawa [1], and the parallel tracking and mapping (PTAM) strategy [2] and keyframe subblock matching [3] have been used for robotic mapping and localization. Murphy et al. [4] argued that SLAM capability is necessary for mobile robots conducting SAR tasks. Motivated by this research, we developed an information-fusion SLAM (IF-SLAM) of multirobots and multisensors (MAM) that autonomously explores an SAR area without recourse to the global positioning system (GPS). Multiple sensors are mounted on board the robots, and the information from several sources, or from different times of the same sensor, is integrated into a single datum so that each robot can carry out its SLAM assignment. An image matching solution extracts overlap regions and their keyframes for reconstruction through subblock matching, and the image fusion of matched features projects 3D objects of interest in the real world onto the 2D global map. The multisensor integration block provides continuous online information about the mobile robots and their local environment.

The integration particle filter (IPF) and optimal improved extended Kalman filter (IEKF) algorithms combine traditional consistent Rao-Blackwellized (R-B) particle filters within the MAM IF-SLAM architecture. This architecture consists of multiple robots carrying multiple sensors, including laser range finder (LRF) sensors, localization sonars, gyro odometry, a Kinect sensor, an RGB-D camera, and other proprioceptive sensors.

Mobile robots have great potential to assist in underground operations, searching ahead of rescue teams and reporting conditions that may be hazardous to them; they can become an invaluable tool. The SLAM challenge appears when a mobile robot attempts to build a map of a cluttered SAR environment while simultaneously localizing itself relative to this map. Robotic SLAM in SAR environments faces difficulties along random-exploration trajectories, including the relationship between map accuracy and the robot's steady-state motion behavior and the need for a simultaneous estimate of both robot and landmark locations. Underground mine rescue confronts the same issues in the use of belays and tethers for operating down voids or on slopes.

Generally, mobile robots do not have the luxury of GPS localization while exploring underground [5]. Multirobot, multisensor SAR SLAM that combines measurements and trajectories improves SLAM performance in object identification, area coverage, and loop-closure, which is necessary for robotic search and rescue tasks.

Multiple robots cooperatively acquiring information about their surroundings have been widely studied [1, 6-9], and an integration strategy for different kinds of measurements was used for multimode 3D perception. Pfingsthorn et al. [7] updated local submaps through their graph-based structures and made them efficiently communicable among the mobile robots' sensors. Using a group of mobile robots equipped with sensors for source concentration measurement, a robot team cooperatively estimates the gradient of the exploration field, moves to the objects of interest, and keeps a predefined formation while moving [8]. Atanasov et al. [9] developed a multisensor active information acquisition method that captures the common characteristics of these scenarios and controls mobile sensing systems to improve the accuracy and efficiency of autonomous information collection.

We developed robotic SLAM that fuses data provided by multiple robots' on-board multisensors, vision fusion of images, and the integration of different kinds of measurements. The on-board proprioceptive sensors perceive the surroundings to supply various information and measurements, and the mobile robot then fuses the acquired data and the exploration trajectories for SLAM processing. Landmarks are usually difficult to assign, and their poses hard to determine, in a collapsed and cluttered SAR disaster site. In the proposed methods, the mobile robots explore and cover the collapsed SAR area while taking relative observations of landmarks, and the sensor data fusion approaches make the few acquired landmark locations maximally consistent with the mobile robot's pose.

According to the theory of Torra and Narukawa and of Milisavljevic [1, 10], information-fusion is the process of embedding specific mathematical algorithms in an integration architecture, which requires special techniques for combining the information. The sensor and data fusion theory in [10] illustrated the strategy of fusing moving-target trajectories with sensor measurements. The literature [2, 3] proposed monocular single-camera SLAM and keyframe localization approaches; however, these methods are not suitable for SAR SLAM in urgent postdisaster areas.

Inspired by the surveyed research, we designed a novel multirobot and multisensor information-fusion SLAM (IF-SLAM) architecture, in which the keyframes and environmental data obtained from the MAM architecture are processed by fusion strategies and a single datum is output for performing SLAM. Multiple robots explore the SAR postdisaster area and perform the SLAM task, maintaining a graph with sensor observations stored in vertices and pose differences, including uncertainty information, stored in edges. The integration particle filter (IPF) and optimal improved extended Kalman filter (IEKF) algorithms are embedded in the MAM platform to improve motion trajectory accuracy, which is well adapted to the SAR postdisaster environment. The proposed information-fusion SAR SLAM methods employ hybrid map representations that fuse the strengths of topological maps and occupancy grids; these representations facilitate multirobot mapping.

The remainder of this paper is organized as follows. In Section 2, we describe the information integration architecture. In Section 3, the information-fusion algorithms are elaborated and derived, and in Section 4 virtual robots conduct SLAM simulations with the information-fusion method. In Section 5, experiments are performed employing a Pioneer LX robot (Adept MobileRobots Pioneer 3, Mobile Robots Inc., http://www.mobilerobots.com) and a crawler mobile robot in indoor cluttered and outdoor collapsed scenarios, respectively. In Section 6, the results are discussed, and in Section 7, we summarize our conclusions and future work.

2. Information Integration Architecture

2.1. Multiple Robots. Mobile robots installed with multisensors and an initial preset metric map have been developed. According to Saeedi et al. [11], multiple robots conduct mapping by fusing two occupancy grid submaps; Deshpande et al. [6] proposed a robotic platform equipped with a camera and various sensors for multimode 3D perception. In the designed multirobot architecture, SLAM is achieved by mapping out an unknown environment and generating a consistent map. In general, single-robot SLAM is achieved through extended Kalman filter (EKF) data fusion; this approach is then extended to multirobot SLAM by using the MobileRobot Operating System (MOS). The multirobots conduct exploration missions and provide a perception of their surroundings for performing SLAM in the SAR environment.

2.2. Multisensors On-Board Robots. A multisensor robotic platform is intended for deployment in environmentally hazardous situations. Thrun et al. [12] equipped several LRFs to register consecutive scans; 2D/3D scan information was acquired from forward-pointing LRFs. A multisensor system was developed for acquiring accurate environmental maps of SAR environments. The system is based on an autonomous robot (i.e., a Pioneer LX) installed with LRFs and sensors; other mobile robots (i.e., Amigo robots) are equipped with various sensors. As displayed in Figure 1, the designed multiple sensors (LRFs, gyros, sonars, and cameras) are mounted on board the multirobots for information integration.

2.3. Integration Strategies. Fusion algorithms and SLAM programs are embedded in the multirobot and multisensor architecture. Zhou and Roumeliotis [13] merged local submaps that were generated by different robots independently, so that a robot team cooperatively built a joint environmental map; Henry et al. [14] used a single RGB-D camera to build a dense 3D map, since a two-dimensional map alone lacks any depth information about surrounding objects.

Locally consistent submaps cannot guarantee global consistency. An optimal algorithm fuses the local submaps created by the multiple robots and sensors. The fusion of visual and shape information acquired from cameras and laser scans is used to construct dense 3D object maps. The rich visual features, verified by random sample consensus (RANSAC), provide fixed data that are fused into the iterative closest point (ICP) optimization. To build consistent maps of the SAR postdisaster environment, the ICP algorithm is used to estimate global correspondences and align the robot exploration trajectories. With fusion algorithms integrating the local occupancy grid submaps, a globally consistent map is formed. The integration operators are developed to combine N measurements within an area A and output a processed value \delta. The integration operator is represented in functional form as

\delta = \Gamma(a_1, \ldots, a_N), (1)

where a_1, \ldots, a_N denote the N measurement inputs, and \Gamma corresponds to the mathematical strategy used for information-fusion.
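As a concrete (but hypothetical) instance of the integration operator \Gamma, the sketch below fuses N scalar measurements by inverse-variance weighting; the paper does not specify this particular \Gamma, so the weighting scheme is an illustrative assumption:

```python
def fuse(measurements, variances):
    """Integration operator Gamma of Eq. (1): combine N measurements
    a_1..a_N into one fused value delta. Here Gamma is assumed to be
    inverse-variance weighting (an illustrative choice)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    # fused value: weighted mean of the inputs
    delta = sum(w * a for w, a in zip(weights, measurements)) / total
    # variance of the fused estimate shrinks as sensors are added
    fused_var = 1.0 / total
    return delta, fused_var
```

For two equally reliable sensors reading 1.0 and 3.0, the operator returns the midpoint 2.0 with half the single-sensor variance, illustrating why fusing several sources yields a single, more reliable datum.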

3. Information-Fusion Algorithms Adapting to SAR SLAM

3.1. Predefined Models. A map of the environment can be parametrized as a set of spatially located landmarks, occupancy grids, surface terrain, and sensor measurements. IF-SLAM builds a joint state consisting of the multiple robot poses and each landmark location, employing a large state vector, whose dimension is on the order of the number of landmarks maintained in the map, to compute the landmark scaling. In the SLAM process, the true locations of a mobile robot are never known or measured directly; the observations are employed to estimate the relation between the true robot locations and the landmark locations. The vector sets are defined as follows.

(1) The history of mobile robot locations:

X_{0:k} = \{x_0, x_1, \ldots, x_k\} = \{X_{0:k-1}, x_k\}. (2)

(2) The history of control inputs:

U_{0:k} = \{u_0, u_1, \ldots, u_k\} = \{U_{0:k-1}, u_k\}. (3)

(3) The set of all landmarks:

m = \{m_0, m_1, \ldots, m_n\}. (4)

(4) The set of all landmark observations:

Z_{0:k} = \{z_0, z_1, \ldots, z_k\} = \{Z_{0:k-1}, z_k\}, (5)

where x_k is the location and orientation state vector; u_k is the control vector, which drives the mobile robot state from x_{k-1} at time k-1 to x_k at time k; m_i is the location vector of the i-th landmark, which is time invariant; z_{ik} is the observation of the i-th landmark, which corresponds to the mobile robot location; and z_k is the set of landmark observations at time k.

In the integration particle filters, the degree of convergence between the EKF-based mobile robot locations and probabilistic localization and mapping is expressed through the joint posterior

P(x_k, m \mid Z_{0:k}, U_{0:k}, x_0), (6)

where P(x_k \mid Z_{0:k}, U_{0:k}, m) is the robot locations' probability distribution and P(x_k, m \mid Z_{0:k}, U_{0:k}, x_0) is the joint probability distribution of the robot pose and the map.

The robot searches for an object in the current scan by employing a greedy iterative algorithm with heuristic constraints, and this object is matched with the nearest unmatched object of the previous scan. Let Q be the current object list; then the set A(t) of landmarks within the SAR exploration area that are available for revisiting at time t is given in (7), and the set B(d) of objects of interest beyond measurement distance d from the mobile robot is given in (8):

A(t) = \{m \in Q \mid T(m) > t\}, (7)

B(d) = \{m \in Q \mid \|(m, r)\|_\infty > d\}, (8)

where T(m) is the time elapsed since object m was last measured, and \|(m, r)\|_\infty is the minimum (infinity-norm) distance from the robot's current location r to the boundary of object m.

Equations (2)-(8) establish the predefined models of mobile robot SLAM adapted to the SAR environment.
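The candidate sets of Eqs. (7)-(8) and the greedy nearest-unmatched association can be sketched as follows; the object record layout (`last_seen`, `pos`) and the use of point positions rather than object boundaries are simplifying assumptions for illustration:

```python
import math

def eligible(objects, robot, now, t, d):
    """Candidate sets of Eqs. (7)-(8): objects last measured more than
    t seconds ago (set A) and farther than d from the robot in the
    infinity norm (set B). The robot considers their intersection."""
    A = {m for m in objects if now - objects[m]["last_seen"] > t}
    B = {m for m in objects
         if max(abs(objects[m]["pos"][0] - robot[0]),
                abs(objects[m]["pos"][1] - robot[1])) > d}
    return A & B

def greedy_match(current_scan, candidates, positions):
    """Greedy iterative association: each detection in the current scan
    is matched to the nearest still-unmatched candidate object."""
    unmatched = set(candidates)
    pairs = []
    for obj in current_scan:
        if not unmatched:
            break
        best = min(unmatched, key=lambda m: math.dist(obj, positions[m]))
        pairs.append((obj, best))
        unmatched.discard(best)
    return pairs
```

The greedy pass is a heuristic: it does not globally optimize the assignment, but it runs in time linear in the scan size per candidate check, which suits the real-time constraints of SAR exploration.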

3.2. IF-SLAM with Integration Particle Filter Algorithms. The on-board proprioceptive sensors incorporate a laser measurement system (LMS), an omnidirectional stereo camera, an inertial measurement unit (IMU), and a Kinect, which enable a mobile robot to build a map of the environment and localize itself within this map at the same time. Olofsson et al. [15] discussed the data and map fusion of two workspaces; the state space is estimated based on the EKF and Rao-Blackwellization.

In the robots' IF-SLAM approach, submaps are constructed from a sequence of motion trajectories by fusing the on-board sensor data. The sensor information algorithms require only minimal robot modeling; the first is based on the improved extended Kalman filter (IEKF) and the second on the Rao-Blackwellized (R-B) particle filter, and the integration particle filter (IPF) algorithm combines the IEKF and R-B particle filters. The mobile robot motion in probabilistic form is

x_k = f(x_{k-1}, u_k) + w_k, (9)

where x_k is the robot's pose state vector from (2), u_k is the control vector, and w_k represents white Gaussian noise. The observation and motion models and the prior and posterior distributions are modeled as follows:

P(x_1, x_2) = P(x_2 \mid x_1) P(x_1), (10)

x^l_{k+1} = F^l x^l_k + B^l n^l_k, (11)

q_{k+1} = f(\omega_k - n^w_k) q_k, (12)

y^q_k = q_k + e^q_k, (13)

P(x^l_k, q_{0:k} \mid y_{0:k}) = P(x^l_k \mid q_{0:k}, y_{0:k}) P(q_{0:k} \mid y_{0:k}), (14)

and the marginalized joint distribution is

P(x_2) = \frac{1}{N} \sum_{i=1}^{N} P(x_2 \mid x^{(i)}_1), (15)

where x^{(i)}_1 is a sample drawn from the state vector set x_k; x^l_k is the linear part; q_k is the nonlinear part; N is the number of weighted particles; and h evaluates the particle weights.
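A marginalized (Rao-Blackwellized) filter step in the spirit of Eqs. (11)-(15) can be sketched in scalar form: each particle samples the nonlinear part q and carries a Kalman mean/variance for the linear part x^l. The concrete models below (q' = q + u + noise, y = q + x^l + noise) are illustrative assumptions, not the paper's exact IPF:

```python
import math
import random

def rb_particle_step(particles, u, y, q_var, r_var, F=1.0, B=1.0):
    """One R-B particle filter step. Each particle is (q, m, P, w):
    sampled nonlinear state q, Kalman mean m and variance P of the
    linear substate x^l, and importance weight w."""
    updated = []
    for q, m, P, w in particles:
        # particle step: propagate the nonlinear part by sampling
        q2 = q + u + random.gauss(0.0, math.sqrt(q_var))
        # Kalman prediction for the linear part, cf. Eq. (11)
        m2 = F * m
        P2 = F * P * F + B * q_var * B
        # Kalman update against y = q + x^l + e, cf. Eq. (13)
        S = P2 + r_var                  # innovation variance
        K = P2 / S                      # Kalman gain
        resid = y - (q2 + m2)
        m2, P2 = m2 + K * resid, (1.0 - K) * P2
        # reweight by the marginal measurement likelihood
        w2 = w * math.exp(-0.5 * resid * resid / S) / math.sqrt(2.0 * math.pi * S)
        updated.append((q2, m2, P2, w2))
    total = sum(p[3] for p in updated) or 1.0   # normalize, cf. Eq. (15)
    return [(q, m, P, w / total) for q, m, P, w in updated]
```

Because the linear substate is handled analytically per particle rather than sampled, far fewer particles are needed than in a plain particle filter over the full state, which is the practical appeal of the R-B factorization in Eq. (14).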

3.3. Sensor Measurement Data Fusion. The measurement data fusion algorithm matches the mobile robot's relative frame-to-frame poses independently. The authors in [16-18] fuse a stereo camera and inertial measurement units (IMUs) for robot state estimation in GPS-denied large-scale unstructured environments, which enables robot exploration using a combination of stereo model-based outlier rejection and spatiotemporal filtering for 3D mapping and localization. In [19], the robot captures 3D scans of points of interest and combines these key points into a topological map.

The multisensor system fuses a stereo camera, inertial measurement units (IMU), and LRF odometry with an EKF algorithm to ensure robust, low-latency SLAM performance. An initial metric map of the SAR environment is preinstalled; this map is then updated and reconstructed in real time by fusing the multi-information acquired from the on-board sensors. A mobile robot explores an unstructured postdisaster area while incrementally building a globally consistent environmental map and locating itself within this map simultaneously. The points of interest are extracted from a group of LRFs; by fusing these interest points into a topological map, we obtain a valuable tool for exploration and localization in cluttered SAR environments. Figure 2 compares each single LRF sensor's trajectory with the trajectory obtained by fusing the three sensors' measurement data.

The sensor measurement fusion makes the nodes or particles maximally consistent with the poses of the mobile robot, and large errors are minimized or even eliminated. Graph-based SLAM builds a simplified estimate by extracting the sensor measurements. In a graph-based map, the nodes correspond to the poses of the robot at various points, and the edges represent the constraints between connected poses. Such a map is computed by searching for the spatial configuration of the nodes that is most consistent with the measurements modeled by the edges. A mobile robot performs perceptive autonomous exploration to reach goal positions in collapsed disaster scenarios by fusing planar laser scans, stereo vision, and proprioceptive sensing. As shown in Table 1, a measurement set corresponding to the experiment of Figure 5 was acquired from the mobile robot running SLAM in a cluttered environment, fusing the LRF, four pairs of sonars, and gyro odometry.
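The node/edge structure described above can be sketched as a minimal pose graph; the (x, y, theta) pose tuple, the scalar edge weight, and the simple subtraction-based error (ignoring angle wrap-around and full information matrices) are simplifying assumptions:

```python
class PoseGraph:
    """Graph-based SLAM representation: nodes hold robot poses
    (x, y, theta); edges hold relative-pose constraints with a weight."""
    def __init__(self):
        self.nodes = {}   # node_id -> pose tuple
        self.edges = []   # (i, j, measured_delta, weight)

    def add_node(self, nid, pose):
        self.nodes[nid] = pose

    def add_edge(self, i, j, delta, weight=1.0):
        self.edges.append((i, j, delta, weight))

    def error(self):
        """Weighted squared mismatch between the node configuration and
        the edge measurements; graph optimization searches for the node
        poses that minimize this quantity."""
        e = 0.0
        for i, j, delta, w in self.edges:
            pred = tuple(b - a for a, b in zip(self.nodes[i], self.nodes[j]))
            e += w * sum((p - d) ** 2 for p, d in zip(pred, delta))
        return e
```

A consistent configuration (e.g., two nodes one meter apart connected by an edge measuring one meter) yields zero error; loop-closure edges that disagree with odometry raise the error and drive the optimizer to redistribute the poses.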

3.4. Trajectory Fusion. According to [20], R-B particle filters fuse the robot's trajectory and local submap into a distribution over potential trajectories tracked by a particle filter and a distribution over potential maps for each particle. Researchers in [21, 22] detected objects by incorporating LRF and vision system measurements into a Kalman filter and built a submap with the combined constraint data association (CCDA) approach, constructing and searching a correspondence graph. The literature [23] considered centralized and decentralized fusion structures; however, the former increases computation and the latter partially reduces system accuracy.

While initially motivated by problems in data association and loop-closure, these methods have resulted in qualitatively different ways of describing the SLAM problem, focusing on trajectory estimation rather than landmark estimation. We combined the two structures: a centralized trajectory and the decentralized trajectories of each robot. A mobile robot explores the unstructured disaster area while estimating the robot trajectory and mapping the environment, offering higher accuracy and improved computation speed.

Trajectory fusion integrates all trajectories acquired from different sensors into one estimated trajectory, which improves the accuracy and efficiency of reaching the objects of interest. In Figure 3, (a) displays the fusion of the three LRF sensors' trajectories, in which the red solid line represents the true trajectory and the blue dash-dotted line the trajectory fused by the proposed algorithms; (b), (c), and (d) show the trajectory errors of sensors 1, 2, and 3 relative to the mean value, in which each red line represents the mean value. As shown in Table 2, the fused trajectory matches the true trajectory well, and each single sensor's mean error is analyzed.
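One simple way to realize trajectory fusion is to weight each sensor's trajectory by the inverse of its error variance at every common time step; the paper does not specify its exact weighting, so this per-step scheme and the synchronized (x, y) sampling are illustrative assumptions:

```python
def fuse_trajectories(trajectories, variances):
    """Fuse K sensor trajectories (each a list of (x, y) points sampled
    at the same instants) into one estimated trajectory, weighting each
    sensor by the inverse of its error variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = []
    for points in zip(*trajectories):   # one tuple of K points per step
        x = sum(w * p[0] for w, p in zip(weights, points)) / total
        y = sum(w * p[1] for w, p in zip(weights, points)) / total
        fused.append((x, y))
    return fused
```

Sensors with smaller mean error (as in Table 2) thus pull the fused trajectory toward themselves, which is consistent with the fused curve tracking the true trajectory more closely than any single sensor.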

3.5. Error Analysis of the Two Fusion Methods. In the proposed optimal IEKF and IPF algorithms, each particle updates the observed landmarks as a simple mapping operation with a known mobile robot pose. Using the optimal IEKF as the core of the pose filter results in a modular system to which new sensor measurements can easily be added. A set of criteria is used to assess the quality of 2D topological map segmentation. Based on spectral clustering of an incremental generalized Voronoi decomposition of discretized metric maps, Liu et al. [24] extracted sparse spatial information from the maps and built an environmental model aimed at simplifying the exploration task for mobile robots; in [25], ICP-based and stereo-based T and R systems processed the respective data offline.

To analyze the localization accuracy, we recorded the sensor data (laser point clouds and stereo images) of all runs during the experiment and compared the proposed IPF algorithm with a state-of-the-art stereo-vision-based technique, showing that the novel approaches have greatly increased robustness to trajectory deviations and are less dependent on environmental conditions. In the error analysis, the error state vector X is defined as

X = [\delta_p \;\; \delta_v \;\; \delta_\theta \;\; g_a \;\; g_\theta]^T, (16)

where \delta_p, \delta_v, and \delta_\theta are the errors in location, velocity, and attitude, respectively; g_a is the sensors' acceleration bias; and g_\theta is the attitude pose bias.

The mobile robot's movement model is expressed as

X_k = A_k X_{k-1} + w_{k-1}, (17)

and the error measurement model is expressed as

E_k = H_k X_k + v_k, (18)

where w_k \sim N(0, \sigma_1^2) and v_k \sim N(0, \sigma_2^2) represent white Gaussian noise; E_k is the error measurement; X_k is the predicted error state vector; A_k is the error system matrix; and H_k is the observation matrix. Algorithm 1 lists the IPF procedure.
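The predict/update cycle implied by Eqs. (17)-(18) can be sketched in scalar form (a, h, q, r standing in for A_k, H_k, \sigma_1^2, \sigma_2^2); a real implementation would use the full matrices and the five-component error state of Eq. (16):

```python
def error_state_step(x, P, a, h, q, r, e_meas):
    """One predict/update cycle of the scalar error-state filter:
    x_k = a * x_{k-1} + w_{k-1} (Eq. 17), e_k = h * x_k + v_k (Eq. 18),
    with w ~ N(0, q) and v ~ N(0, r)."""
    # prediction, Eq. (17)
    x_pred = a * x
    P_pred = a * P * a + q
    # update against the error measurement, Eq. (18)
    S = h * P_pred * h + r        # innovation variance
    K = P_pred * h / S            # Kalman gain
    x_new = x_pred + K * (e_meas - h * x_pred)
    P_new = (1.0 - K * h) * P_pred
    return x_new, P_new
```

Each cycle shrinks the error covariance P whenever a valid error measurement arrives, which is how the filter keeps the fused pose estimate consistent across sensors.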

Figure 4 shows the performance of the two fusion methods by analyzing the errors relative to the mean values: (a) is the fusion of the trajectories of sensors 1, 2, and 3; (b) is the fusion of the measurement data acquired from these sensors, in which each red line represents the error mean value and the blue curves represent the deviations from it.

As displayed in Table 2, the error mean of measurement data fusion is smaller than that of trajectory fusion; that is, measurement fusion is more accurate in reaching the objects of interest. It can also be seen that the errors of both multisensor fusion methods are much smaller than those of any single sensor, which demonstrates that information-fusion is highly valuable in improving robot SLAM performance in SAR environments.

Algorithm 1: Information-fusion algorithms (integration particle filter, IPF, procedure).
(1) Q_W = noise covariance matrix, Eqs. (12), (16).
(2) R_v = observation matrix, Eqs. (1), (9), (11), (15), (17); X_t = F * X_t + W; X_Est1; X_Est2; X_Est3; X_Est123; X_Est321.
(3) Error_variance = Error_sum[], Eqs. (13), (14), (18).

4. SLAM Simulations with Information-Fusion

Sensor IF-SLAM enables truly autonomous robot exploration of an uncertain collapsed area, in which both the exploration trajectory and the positions of all landmarks are estimated in real time; the robot builds a map of the environment and simultaneously uses this map to calculate its location.

Some researchers [26] have written software, available online, to simulate and implement SLAM tasks in Matlab, C++, and Java. In implementing a single robot's trajectory in particle filter SLAM, the robot's poses are sampled and the observed landmarks are updated; hence, the local submap of a single particle is governed by the accuracy of its trajectory. All observed trajectories provide a probabilistic location model of the robot. A robust six-degree-of-freedom relative exploration method combines the iterative closest point (ICP) registration algorithm with a noise-adaptive Kalman filter; this approach fuses measurements from a laser scanner and an inertial measurement unit (IMU) in a closed-loop configuration. Aghili and Su [27] integrated ICP and a Kalman filter using LRFs and IMUs to provide accurate real-time pose estimation even when poor vision data prevent ICP from converging. The authors of [28] presented multisensor data retrieval (MSDR) to share measurements and information, which grants different robots asynchronous access to the fusion system.
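The core ICP loop referenced above can be sketched in a minimal translation-only 2D form; a full implementation, as in [27], would also estimate rotation and fuse the result with an IMU-driven Kalman filter, so this is a didactic reduction rather than the cited method:

```python
import math

def icp_translation(src, dst, iters=10):
    """Minimal ICP sketch (translation only): repeatedly match each
    source point to its nearest destination point, then shift the source
    by the mean residual until the registration converges."""
    tx, ty = 0.0, 0.0
    pts = list(src)
    for _ in range(iters):
        # correspondence step: nearest neighbor in the destination scan
        pairs = [(p, min(dst, key=lambda q: math.dist(p, q))) for p in pts]
        # alignment step: mean residual gives the translation update
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        tx, ty = tx + dx, ty + dy
        pts = [(p[0] + dx, p[1] + dy) for p in pts]
    return tx, ty
```

ICP's well-known weakness, convergence to a local minimum under a poor initial guess, is exactly why the cited work closes the loop with an inertial filter: the IMU prediction seeds ICP near the true pose.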

Following this research, we designed the simulations shown in Figures 5(a)-5(f), in which a virtual mobile robot installed with multisensors runs the SLAM program with an information-fusion algorithm in a simulated SAR environment. In Figure 5(b), the red trajectories represent exploration of the appointed loop-closure area, the red image at the head of the trajectory is the virtual robot, the white dot on the virtual robot represents gyro odometry, the blue band represents LRF sensor information, and the grey rays indicate the sonar signals. The measurements of the mobile robot's pose are marked around the robot in real time. Table 1 displays the measurement data acquired from multisensor information-fusion.

Comparing the single-sensor trajectory analyses in Figures 2 and 3 with the robot's exploration trajectories in Figure 5, we can see that the information-fusion methods improve the performance of robot SLAM adapted to the SAR environment.

5. Experiments for Multi-Information-Fusion SLAM

Milisavljevic's work on sensor and data fusion [10] illustrated a multiple-sensor platform for exploring an environment with many targets, its associated mathematical model, and data fusion performance evaluation for various sensors. An IMU-camera sensor system developed by Panahandeh and Jansson [29] fuses inertial data from the IMU with key features of interest captured by the camera.

Accordingly, the experimental system consists of a ground-facing monocular camera mounted on an inertial measurement unit (IMU); the LRF data are processed to determine the distance and relative speed of the objects (or obstacles) located in the SAR space by laser scanning. These SLAM methods are adapted to outdoor unstructured environments in addressing the identification of SAR objects and the detection of loop-closure.

A typical experimental platform for 2D mapping is a Pioneer LX mobile robot equipped with a SICK-LMS range finder. With the embedded arnlServer software, the Pioneer LX is self-guiding and self-charging. The robot includes a complete control system and an embedded personal computer (PC), a differential drive system with encoder feedback, as well as LRFs, ultrasonic (sonar) sensors, and a bumper panel. As shown in Figure 6(a), a Pioneer LX mobile robot conducts the information-fusion SLAM task with its installed LRF sensors, sonars, and gyro odometry. As a result, an improved updated map is formed in Figure 6(c), compared with the initial metric map in Figure 6(b).

As shown in Figures 7(a) and 7(b), a crawler mobile robot equipped with LRF sensors and a Kinect explores the collapsed SAR area and performs the SLAM task; the robot thus acquires multisensor information including image frames and LRF measurement data.

6. Results and Discussion

Information-fusion systems of mobile robots are designed and programmed to cover a map with their installed multisensors. In Figures 8(a), 8(b), and 8(c), the remote desktop at the PC control center displays the robot's SLAM process.

Torra and Narukawa [1] illustrated decision modeling for information-fusion, and a quality evaluation of performance is set out in Milisavljevic's work [10] through a series of data and fusion criteria. The literature [30] discussed a local sensor fusion technique for event-based SLAM tasks, and [31] presented the strategy and flowchart of a hybrid image registration algorithm; according to Reid et al. [32] and Pappas [33], with a robot operating system, the exploration and mapping tasks are subdivided among multiple robots. The use of multirobots allows SLAM tasks to be completed more efficiently. As the theory of Torra and Milisavljevic [1, 10] and the work of Blum and Liu [31] illustrate, information-fusion is an effective research means, and our main contribution is to develop the novel multirobot and multisensor (MAM) search and rescue SLAM mathematical algorithms on this basis. Compared with general single-sensor SLAM, we improved robotic SAR SLAM performance in loop-closure, object identification, and exploration area coverage.

As observed, the robot explores an appointed area and locates itself in this environment based on a preinstalled metric map; the robot processes the acquired multisensor information using the proposed algorithms. As a result, the robot forms an improved updated map compared with the preinstalled initial metric map.

The information data of each sensor are collected and fused to build the entire map of the covered area. A motion estimation approach is introduced for a vision-aided inertial exploration system. A local multisensor fusion technique with a scene-based global position improves the localization of a mobile robot within a limited SAR scenario on site. The multirobot integration SLAM enables teams of robots to build joint maps without initial knowledge of their relative poses.

Some conventional information-fusion approaches are roughly divided into two groups: non-multiscale-decomposition-based (NMDB) and multiscale-decomposition-based fusion methods. Distributed multisensor trajectory fusion is used to identify information coming from different local particles that mapped the same object of interest. The multisensor information-fusion system can be easily adapted and deployed in a variety of SAR scenarios. On the basis of these useful results, we designed the information-fusion systems for mobile robot SLAM adapted to the SAR environment.

In Figures 6, 7, and 8, the experimental results show the efficiency and accuracy of the information-fusion methods in loop-closure and in the exploration trajectories of robotic SLAM. The trajectory mean errors obtained by single-sensor error analysis, measurement fusion, and trajectory fusion are displayed in Table 1 and Figures 2 and 3. The measurement data fusion estimation and the trajectory fusion estimation were simulated, respectively. The error analysis in Algorithm 1 and Figure 4 demonstrates that the two discussed information estimation fusion methods are much superior to any single trajectory and that, simultaneously, measurement data fusion is slightly better than trajectory fusion in reaching the targets of interest. In measurement data fusion there is essentially no information loss; however, the trajectory fusion method is computationally simpler and faster in exploring and reaching the targets of interest. Comparing Figure 9(a) with Figure 8(c), and Figure 9(b) with Figure 6(c), the results show the deviations of the robots' motion trajectories and the imprecise maps formed by conventional SLAM; using the proposed IF-SLAM, the maps formed in Figures 6 and 8 accurately mark objects, loop-closure, and area coverage.

One challenge confronting information-fusion systems is that if any one of the multisensors fails and supplies large errors, the estimated exploration trajectory can become highly inaccurate and inefficient, possibly without ever reaching the appointed targets. It is therefore strictly required that each of the multisensors works well during SLAM while the mobile robot is exploring an SAR postdisaster environment.

7. Conclusions and Future Work

Information-fusion systems are designed for mobile robot SLAM in unstructured collapsed postdisaster scenarios. The mobile robot acquires various pieces of information from the multisensor systems and then fuses this information into an integrated exploration trajectory by using the optimal algorithms. The information-fusion incorporates two methods, measurement fusion and trajectory fusion, which improve the SLAM performance of mobile robots by estimating the exploration trajectory so as to reach the targets accurately and efficiently.

In this study, information-fusion systems were designed that incorporate mobile robots, multisensors (e.g., LRF sensor, localization sonars, gyro odometry, and Kinect camera), and an algorithm that integrates R-B and EKF into an IPF procedure. We validated the proposed information-fusion methods using a Pioneer LX and a crawler mobile robot equipped with multisensors. The results demonstrate that the designed information-fusion systems are well adapted to SAR scenarios and improve mobile robot SLAM performance in object identification, loop-closure, and exploration area coverage. In future work, we will investigate further multisensors and optimal fusion algorithms, enabling the information-fusion method to integrate more novel sensors and to obtain high-precision exploration trajectory estimation adapted to the SAR environment.

https://doi.org/10.1155/2018/4218324

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research is supported by the NSFC (National Natural Science Foundation of China) under Grants nos. 61573213, 61473174, and 61473179.

References

[1] V. Torra and Y. Narukawa, "Modeling Decisions: Information Fusion and Aggregation Operators," Cognitive Technologies, vol. 13, 2007.

[2] S. Jia, K. Wang, and X. Li, "Mobile robot simultaneous localization and mapping based on a monocular camera," Journal of Robotics, vol. 2016, Article ID 7630340, 11 pages, 2016.

[3] T. Cao, H. Cai, D. Fang, H. Huang, and C. Liu, "Keyframes Global Map Establishing Method for Robot Localization through Content-Based Image Matching," Journal of Robotics, vol. 2017, Article ID 1646095, 2017.

[4] R. R. Murphy, J. Kravitz, S. L. Stover, and R. Shoureshi, "Mobile robots in mine rescue and recovery," IEEE Robotics and Automation Magazine, vol. 16, no. 2, pp. 91-103, 2009.

[5] S. Thrun, S. Thayer, W. Whittaker et al., "Autonomous exploration and mapping of abandoned mines," IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 79-91, 2005.

[6] P. Deshpande, V. R. Reddy, A. Saha, K. Vaiapury, K. Dewangan, and R. Dasgupta, "A next generation mobile robot with multimode sense of 3D perception," in Proceedings of the 17th International Conference on Advanced Robotics, ICAR 2015, pp. 382-387, Istanbul, Turkey, July 2015.

[7] M. Pfingsthorn, M. Slamet, and A. Visser, A Scalable Hybrid Multi-robot SLAM Method for Highly Detailed Maps, Springer, Berlin, Germany, 2008.

[8] S. Li, R. Kong, and Y. Guo, "Cooperative distributed source seeking by multiple robots: Algorithms and experiments," IEEE/ASME Transactions on Mechatronics, vol. 19, no. 6, pp. 1810-1820, 2014.

[9] N. Atanasov, J. Le Ny, K. Daniilidis, and G. J. Pappas, "Decentralized active information acquisition: Theory and application to multi-robot SLAM," in Proceedings of the 2015 IEEE International Conference on Robotics and Automation, ICRA 2015, pp. 4775-4782, Seattle, Wash, USA, May 2015.

[10] N. Milisavljevic, Sensor and Data Fusion, I-Tech Education and Publishing, Vienna, Austria, 2009.

[11] S. Saeedi, L. Paull, M. Trentini, and H. Li, "Occupancy grid map merging for multiple robot simultaneous localization and mapping," International Journal of Robotics and Automation, vol. 30, no. 2, pp. 149-157, 2015.

[12] S. Thrun, D. Hähnel, D. Ferguson et al., "A system for volumetric robotic mapping of abandoned mines," in Proceedings of the 2003 IEEE International Conference on Robotics and Automation, pp. 4270-4275, Taipei, Taiwan, September 2003.

[13] X. S. Zhou and S. I. Roumeliotis, "Multi-robot SLAM with unknown initial correspondence: The robot rendezvous case," in Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2006, pp. 1785-1792, Beijing, China, October 2006.

[14] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, "RGBD mapping: using Kinect-style depth cameras for dense 3D modeling of indoor environments," International Journal of Robotics Research, vol. 31, no. 5, pp. 647-663, 2012.

[15] B. Olofsson, J. Antonsson, H. G. Kortier, B. Bernhardsson, A. Robertsson, and R. Johansson, "Sensor Fusion for Robotic Workspace State Estimation," IEEE/ASME Transactions on Mechatronics, vol. 21, no. 5, pp. 2236-2248, 2016.

[16] J. Ma, S. Susca, M. Bajracharya, L. Matthies, M. Malchano, and D. Wooden, "Robust multi-sensor, day/night 6-DOF pose estimation for a dynamic legged vehicle in GPS-denied environments," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 619-626, Saint Paul, Minn, USA, 2012.

[17] D. Wooden, M. Malchano, K. Blankespoor, A. Howardy, A. A. Rizzi, and M. Raibert, "Autonomous navigation for BigDog," in Proceedings of the 2010 IEEE International Conference on Robotics and Automation, ICRA 2010, pp. 4736-4741, Anchorage, Alaska, USA, May 2010.

[18] M. Bajracharya, J. Ma, M. Malchano, A. Perkins, A. A. Rizzi, and L. Matthies, "High fidelity day/night stereo mapping with vegetation and negative obstacle detection for vision-in-the-loop walking," in Proceedings of the 2013 26th IEEE/RSJ International Conference on Intelligent Robots and Systems: New Horizon, IROS 2013, pp. 3663-3670, Tokyo, Japan, November 2013.

[19] D. Silver, D. Ferguson, A. Morris, and S. Thayer, "Feature extraction for topological mine maps," in Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 773-779, Sendai, Japan, 2004.

[20] C. Brand, M. J. Schuster, H. Hirschmüller, and M. Suppa, "Stereo-vision based obstacle mapping for indoor/outdoor SLAM," in Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2014, pp. 1846-1853, Chicago, Ill, USA, September 2014.

[21] S. Y. Chen, "Kalman filter for robot vision: a survey," IEEE Transactions on Industrial Electronics, vol. 59, no. 11, pp. 4409-4420, 2012.

[22] T. Bailey and H. Durrant-Whyte, "Simultaneous localization and mapping (SLAM): part II," IEEE Robotics and Automation Magazine, vol. 13, no. 3, pp. 108-117, 2006.

[23] A. Assa and F. Janabi-Sharifi, "Virtual visual servoing for multicamera pose estimation," IEEE/ASME Transactions on Mechatronics, vol. 20, no. 2, pp. 789-798, 2015.

[24] M. Liu, F. Colas, L. Oth, and R. Siegwart, "Incremental topological segmentation for semi-structured environments using discretized GVG," Autonomous Robots, vol. 38, no. 2, pp. 143-160, 2014.

[25] P. Krüsi, B. Bücheler, F. Pomerleau, U. Schwesinger, R. Siegwart, and P. Furgale, "Lighting-invariant adaptive route following using iterative closest point matching," Journal of Field Robotics, vol. 32, no. 4, pp. 534-564, 2015.

[26] H. Durrant-Whyte and T. Bailey, "Simultaneous localization and mapping: Part I," IEEE Robotics and Automation Magazine, vol. 13, no. 2, pp. 99-110, 2006.

[27] F. Aghili and C.-Y. Su, "Robust relative navigation by integration of ICP and adaptive Kalman filter using laser scanner and IMU," IEEE/ASME Transactions on Mechatronics, vol. 21, no. 4, pp. 2015-2026, 2016.

[28] L. Wang, M. Liu, and Q. H. Meng, "Real-Time Multisensor Data Retrieval for Cloud Robotic Systems," IEEE Transactions on Automation Science & Engineering, vol. 12, no. 2, pp. 507-518, 2015.

[29] G. Panahandeh and M. Jansson, "Vision-aided inertial navigation based on ground plane feature detection," IEEE/ASME Transactions on Mechatronics, vol. 19, no. 4, pp. 1206-1215, 2014.

[30] L. Marin, M. Vallés, A. Soriano, A. Valera, and P. Albertos, "Event-based localization in Ackermann steering limited resource mobile robots," IEEE/ASME Transactions on Mechatronics, vol. 19, no. 4, pp. 1171-1182, 2014.

[31] R. S. Blum and Z. Liu, Multi-Sensor Image Fusion and Its Applications, Taylor & Francis Group, LLC, New York, NY, USA, 2006.

[32] R. Reid, A. Cann, C. Meiklejohn, L. Poli, A. Boeing, and T. Bräunl, "Cooperative multi-robot navigation, exploration, mapping and object detection with ROS," in Proceedings of the 2013 IEEE Intelligent Vehicles Symposium, IEEE IV 2013, pp. 1083-1088, Gold Coast, Australia, June 2013.

[33] B. Pappas, Multi-Robot Frontier Based Map Coverage Using the ROS Environment [Ph.D. thesis], Graduate Faculty of Auburn University, Auburn, Ala, USA, 2014.

Hongling Wang, (1) Chengjin Zhang, (1) Yong Song, (2) and Bao Pang (1)

(1) School of Control Science and Engineering, Shandong University, Jinan 250101, China

(2) School of Mechanical, Electrical and Information Engineering, Shandong University at Weihai, Weihai 264209, China

Correspondence should be addressed to Chengjin Zhang; cjzhang@sdu.edu.cn

Received 28 June 2017; Revised 29 November 2017; Accepted 19 December 2017; Published 1 February 2018

Academic Editor: Shahram Payandeh

Caption: Figure 1: Multirobots and multisensors integration architecture.

Caption: Figure 2: Sensors' measurement-data fusion: (a) the laser sensor 1 trajectory, (b) the laser sensor 2 trajectory, (c) the laser sensor 3 trajectory, and (d) all three laser sensors' measurements fused into an integrated trajectory; the red lines represent the true exploring trajectory of the mobile robot, the green dash-dotted lines represent the observed exploring trajectory, and the blue dash-dotted lines indicate the measurement-data fusion trajectory obtained with the information algorithms.

Caption: Figure 3: Mobile robot's trajectory fusion and error analysis: (a) the fusion trajectory (the red solid line represents the true exploration trajectory of the mobile robot, and the blue dash-dotted line shows the fusion trajectory from our proposed algorithm); (b) sensor 1 error analysis, (c) sensor 2 error analysis, and (d) sensor 3 error analysis (each red line indicates the mean value).

Caption: Figure 4: Comparison of the two fusion methods' error analyses: (a) trajectory fusion error analysis and (b) measurement-data fusion error analysis, in which each red line represents the error mean value and the blue curves indicate error biases relative to the error mean values.

Caption: Figure 5: A virtual mobile robot performs information-fusion SLAM with LRF sensors, sonars, and gyro odometry; the red trajectories in (a)-(f) represent a loop-closure exploration.

Caption: Figure 6: A Pioneer LX robot runs information-fusion SLAM in an experimental cluttered indoor environment. (a) The mobile robot, fitted with LRF sensors, sonars, and gyro odometry, explores the cluttered environment. (b) The initial metric map. (c) The generated updated map of this environment; the blue and yellow blocks mark the objects of interest.

Caption: Figure 7: Information fusion of Kinect and LRF sensors applied to SLAM in a collapsed scene. (a) A crawler mobile robot equipped with the Kinect and laser sensors. (b) The RGB-D image observed from the Kinect while the robot conducts the information-fusion SLAM task.

Caption: Figure 8: A robot SLAM process observed from remote PC control center. (a) The robot is exploring and locating itself with a preinstalled metric map. (b) Map processing with achieved multisensors' information. (c) The updated complete map of exploration site with marked objects (i.e., the blue blocks (A), (B), (C), (E), and (F)) of interest.

Caption: Figure 9: The deviation of mobile robot trajectories and generated imprecise map. (a) Comparing with Figure 8(c). (b) Comparing with Figure 6(c).

Table 1: A part of the measurement data set of the fused multisensors achieved from the robot SLAM process.

x-direction       y-direction       Orientation
coordinates (m)   coordinates (m)   angles (rad)
 1.08              6.066             -0.9
 0.475             5.095            179.9
-2.923             4.529            -55.3
-1.746             1.633            -80.89
 0.415             0.517            -58.51
 0.015            -0.185            128.6
-0.992             2.223             99.81
-0.840             3.179             42.19
 0.454             3.132             -5.76
 1.144             3.142             -5.71
 1.616             3.204              4.07
 3.245             3.21               3.90

Table 2: The comparison of errors between the mobile robot's single-sensor trajectories and the two fusion approaches.

Different   Single-sensor     Measurements fusion   Trajectories fusion
sensors     error mean (cm)   error mean (cm)       error mean (cm)
Sensor 1    0.6301
Sensor 2    0.8131            0.4125                0.4162
Sensor 3    0.7878

Publication: Journal of Robotics

Date: Jan 1, 2018