
Toward Dynamic Monitoring and Suppressing Uncertainty in Wildfire by Multiple Unmanned Air Vehicle System.

1. Introduction

In any region undergoing some form of environmental distress, it is very important to detect changes occurring on the ground. In some cases the incident changes spatially and spreads steadily; in other cases, it becomes difficult to follow the incident without knowing how it is evolving. A system that follows the event helps rescue human lives, monitor the incident, and allow the human responders to take better actions (as well as deploy assets in an optimal manner to mitigate the incident).

It is of great importance to monitor and respond to natural phenomena (e.g., fires) and national security disasters (e.g., a hazardous emitting source). One needs to be able to explore a wide area and search for the source of hazardous substance emissions or the expansion of a fire front. In 2016 alone, Federal agencies reported 67,595 fires and an estimated fire-suppression cost of approximately 2 billion U.S. dollars [1]. In addition to financial loss and significant damage to the environment, wildfires threaten the lives of firefighters and civilians during fire-extinguishing operations [2].

There are two main reasons why a solution to fire tracking needs to be found. The first relates to modeling: the fire frontier is very challenging to predict, being a stochastic phenomenon dependent on weather conditions, terrain, and fuel (flammable materials) [3].

Secondly, operational aspects are exposed to severe limitations and constraints. The resources to respond to and monitor disasters are still quite limited. In the aviation section of the National Interagency Fire Center's annual summary of wildfire activity in 2012, many requests for large air tankers were Unable To be Filled (UTF). The fraction of requests for air tankers that went unfilled reached a high of 48 percent in 2012 [4]; that is, almost half of all requests for tankers to bomb fires went unanswered due to limited resources.

An incident with a dangerous spread area requires immediate exploration. Some examples are distributed fire spots and chemical threats; however, there are many others. This type of scenario requires surveillance to search for threats, but human observers are difficult to deploy because the task is dull, dirty, and/or dangerous. Wildfire monitoring missions are a perfect example of why a solution needs to be developed. Wildfires, like other natural and national-threat phenomena, require urgent attention and an efficient response to monitor and contain their spread.

Figure 1 shows an example of a progression map of a wildfire incident. The periphery grows in time, and the boundary of the wildfire spreads. Knowing that the terrain is a dominant factor helps to understand the outcome in relation to the vegetation type, and knowing the actual weather explains the direction of propagation. The incident lasted for almost two days and spread out over a distance of 2 km. This points to the necessity for a real-time monitoring system.

1.1. Existing Monitoring Systems. Previous studies have examined two different solutions: one on the ground and one in the air. In the first case, ground vehicles are used to explore the area. Use of ground vehicles depends on how passable the area to be explored is; ground vehicles are not necessarily suitable for scenarios with difficult terrain. In the second case, the deployment is in the air, the motion is smooth, and the area can be observed much more efficiently. In addition, most of these scenarios have a critical time factor. The phenomena these systems deal with evolve in time, and because existing system capabilities are limited, they cannot collect all spatial/temporal information at once. Whenever the observing vehicle is positioned in any one place, the system necessarily misses events in other places within the search area.

Ref. [6] describes existing projects that support disaster management in real time, mainly space-based systems (e.g., GlobalStar) or high-altitude, long-endurance platforms (e.g., Global Hawk). These projects reinforce the importance of tracking events like floods and earthquakes, and show how such tracking helps monitor the incidents and handle them effectively from the ground control segment.

Ref. [7] describes remote sensing techniques and sensor packages that have been used on UAVs (Unmanned Air Vehicles). The author argues that these techniques can serve as the main solution for various disasters. Reviewing the literature on the development of UAVs, including projects with different types of sensors (IR/Visual) and platforms (fixed-wing/rotary-wing), he concludes that Multi-UAVs can be used to avoid the drawbacks of approaches that are based on either land or space.

1.2. Multi-UAV. The ideal mission for a UAV is monitoring a wide area and searching for the source of emission of a hazardous substance or expansion of a fire front. There has been a rising interest in increasing UAV efficiency and reliability. Autonomous vehicles have been used in a variety of applications for surveillance, search, and exploration [8]. In surveillance problems, the target space needs to be surveyed continuously. It differs from a coverage problem, which involves optimal deployment of the sensors for complete coverage of the target area. It also differs from an exploration problem, which deals with gaining more information about the bounded area [9]. This exploration research moves in two directions. The first focuses on how to pinpoint the source of an odor [10,11]. In this area of research, robots are tasked with detecting, tracking and seeking the odor source efficiently.

The second direction of this exploration research focuses on how to establish the boundary or perimeter of a spreading phenomenon in order to monitor the area and prevent human exposure to risk [12]. Because of the spatial limitations of a single UAV, most research currently focuses on how to monitor large areas by operating multiple inexpensive simple UAVs simultaneously [13].

Though the studies mentioned above are significant, they focus on exploring the environment using clues (e.g., aerosol diffusion) for tracing emission sources. Moreover, the techniques used to detect the plume or periphery are strongly dependent on the spatial gradient change of the underlying tracked phenomenon. The research presented in our work proposes to explore the area by using approximate inference methods [14] and statistical reasoning [15]. The developed method takes into consideration the operational aspect of the mission in addition to the statistical characteristics of the underlying phenomenon.

1.3. Coordination. Most of the multi-UAV systems are designed to address problems related to specific research in a particular environment of interest. The UAVs cooperate and share data to obtain information on a certain aspect of the environment. Regardless of the number of UAVs and size of the AOI (Area Of Interest), cooperative systems deliver an improved overall picture of the environment through coordination.

The design of cooperative systems mainly discusses the control strategy (e.g., centralized or decentralized) and the level of autonomy. Framework design inspired by a biological system has been a popular concept of research for some time (to name a few [16-18]). Nevertheless, achieving such complexity through control techniques is considerably challenging.

There are many studies on multi-UAV cooperative control systems that address coordination issues. These focus on designing a system to control and monitor a region. One of the earliest studies proposed using aerial photographs to monitor fires in order to combat them [19]. The objective was to use aerial photographs to map the fire and then coordinate the team on the ground. In the past few years, the literature has included more and more research on systems utilizing a team of small cooperating UAVs to get better surveillance, that is, better response time in missions where time is critical.

Recent studies have focused on special missions that can be efficiently performed with multi-UAV systems. Some address the problems of formation flight and some the problems of coordination. Fewer studies have been done on reconfiguring the coordination [20] or on coordination where the assigned tasks have uncertainty. This paper demonstrates that if the guidance system accounts for real-time events and is able to adjust the flight formation to incorporate changes, then the trajectories are more effective than traditional methods.

Closely related is the work that has been done on multi-UAV coordination for tracking missions such as search and rescue or surveillance [21]. It presents a concept that relies on low-altitude, short-endurance UAVs. The work explores tracking a fire line by using a team of UAVs following the perimeter of the wildfire area. The UAVs return periodically to the ground station to download the collected data. The research focused on how to minimize the latency of the fire perimeter measurement when it is transferred to the ground station.

In [22], the design includes a coordination scheme to control a rotary-wing platform (quadrotor) for a mission similar to the one above. Essentially, each UAV patrols the propagating perimeter. Whenever one UAV approaches another (a rendezvous point), the scheme deconflicts the rendezvous and resolves each UAV's next flight direction. That research assumes, however, that the perimeter of the fire is circular. These studies (and similar ones) examine a specific scenario where the focus is on directly tracking the periphery points. This focus is limited, however, because it overlooks the connection between the uncertainty of the spreading perimeter and the maneuverability of the fleet needed to maintain knowledge of the complete perimeter.

1.4. Observation. In coordination, one of the basic operations is observation sharing. Most of the recent studies in multi-UAV systems address the problem of partial information. This reflects the "real-world" problem where the UAV has limited communications (range or bandwidth). One UAV can communicate with one that is close by, but not with another that is far away. Ref. [23] presents a variety of research problems in which multivehicle systems agree on the value of observed data (consensus), and explores control strategies and a set of solutions for implementing them. Ref. [24] includes a chapter that suggests various deployment algorithms. They consider a distributed algorithm to address the physical limitations of the communication system for observation sharing.

If a coordination algorithm for an environment with uncertainty is available, the overall system still relies on individual sensing capabilities. Even if the system uses the best or most advanced sensors, the sensors can be restricted by environmental conditions; e.g., the sensors carried by the UAV do not have sufficient range [25], and the measured data can only be local and quantized.

The inefficiency of current systems with high-level-control creates significant timing difficulties for achieving the mission objectives. The ongoing mission can leave one vehicle loitering, resulting in a high latency of updates. Based on different studies [21], this represents a large time loss during a mission, with fewer updates, which in some cases can cause the mission to fail in its tracking objective.

1.5. Propagated Periphery Modeling. Disaster growth models, which predict the spatial and temporal dynamic spread rate, may help in evaluating the situation and deciding on a suitable response in a real-time deployment [3]. Appropriate representation and estimation of the spatial uncertainty can improve the prediction or help in developing a simplified model [26]. A mission with an uncertainty model for the AOI stands to benefit substantially from the predicted confidence envelope approach. For example, in segments of the AOI perimeter with an expected high rate of spread (RoS), the allocation can use the availability and priority of the segment to get better results than if it assumed that all segments along the perimeter were identical. Available UAVs can be redirected to new areas instead of merely loitering.

In one of the biggest wildfire research projects done by the Joint Fire Science Program, the researchers developed fire behavior models for operational use. Their main objective was to develop a detailed dynamic model to predict the physical behavior of the ground phenomenon. They considered two simple fire modeling approaches. In both models, the assumption was that the local spread at a point on the perimeter is perpendicular to the fire perimeter into an unburned environment and that the fire has a local RoS normal to the fire line.

2. Problem Definition

This research proposes a system design and implementation for quick deployment of a low-cost, low-power fleet of UAVs with a high-level ground control system. The results of this research project introduce new methods that can serve as high-level-control for operational multiagent systems: a method to estimate a propagated boundary [5] and a scheme for optimal deployment of a fleet of agents in an exploration mission [27].

The problem is one of optimization with respect to time with sparse measurements detected by a fleet of UAVs. The UAVs have a dynamic process to monitor, as quickly as possible, a periphery represented by a set of Control Points (CPs). The complete system design considers the uncertainty of the bounded phenomenon, where each UAV fleet member carries an on-board sensor to distinguish between inside and outside areas.

Figure 2 illustrates the approach taken to represent the boundary with a set of CPs connected by straight lines. Each CP has a nominal spread rate that is considered relative to the origin point of the propagated phenomenon; that is, the spread rate is always pointed outward. Information is gathered by a UAV to provide observations noted as IN or OUT relative to the enclosed periphery. The optimal policy decides which CP the UAV should approach first to reduce uncertainty.
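As an illustration of this representation (a sketch under our own assumptions, not the authors' implementation), the periphery can be held as a list of CPs with outward spread directions, propagated forward in time and queried for IN/OUT observations:

```python
import math

# Hypothetical sketch of the CP-based periphery: CPs connected by straight
# lines, each with a nominal rate of spread pointing away from the origin.
def make_periphery(origin, radius, n_cps, nominal_ros):
    """Initial circular periphery of n_cps control points (illustrative)."""
    cps = []
    for k in range(n_cps):
        a = 2 * math.pi * k / n_cps
        cps.append({"pos": (origin[0] + radius * math.cos(a),
                            origin[1] + radius * math.sin(a)),
                    "ros": nominal_ros})
    return cps

def predict(cps, origin, dt):
    """Move each CP outward (away from the origin) at its nominal spread rate."""
    out = []
    for cp in cps:
        dx = cp["pos"][0] - origin[0]
        dy = cp["pos"][1] - origin[1]
        d = math.hypot(dx, dy) or 1.0
        out.append({"pos": (cp["pos"][0] + cp["ros"] * dx / d * dt,
                            cp["pos"][1] + cp["ros"] * dy / d * dt),
                    "ros": cp["ros"]})
    return out

def classify(point, cps):
    """Quantized observation: IN/OUT relative to the CP polygon (ray casting)."""
    inside = False
    n = len(cps)
    for i in range(n):
        (x1, y1), (x2, y2) = cps[i]["pos"], cps[(i + 1) % n]["pos"]
        if (y1 > point[1]) != (y2 > point[1]):
            xint = x1 + (point[1] - y1) * (x2 - x1) / (y2 - y1)
            if point[0] < xint:
                inside = not inside
    return "IN" if inside else "OUT"
```

The IN/OUT classifier plays the role of the on-board sensor described above; the real system derives these observations from sensor measurements rather than from the estimated polygon itself.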

3. Periphery Estimation Methods

The estimation methods in this research project rely on an earlier developed technique for estimation of a propagated boundary with quantized measurements [5]. The monitoring system involves large numbers of possibly randomly distributed inexpensive sensors, with limited sensing and processing. The estimator incorporates observations gathered by multiple observers and uses the Quantized Kalman Filter (QKF) estimation method [28] to update the expected location and unobserved spread rate. This technique has been extended and lays the groundwork for the Greedy Uncertainty Suppression (GUS) strategy [27].

The estimation is meaningless when the available sensors are located inefficiently (e.g., considerably far away or colocated). The GUS strategy searches for trajectories that improve the estimated boundary of a propagated phenomenon. On-line look-ahead approaches to rerouting the UAVs are computationally intractable [8]. To improve performance further, one can use the new approach to suppress uncertainty: the UAV trajectory is changed to continuously reduce the uncertainty of the CP with the largest covariance, by flying directly to the tip of the major axis of its ellipse. Figure 3 illustrates the basic concept for reducing the uncertainty autonomously. Each associated uncertainty is represented as an ellipse (95% confidence region). In previous work, it has been observed that reducing uncertainty is related to the measuring distance as well as the approach angle toward an arbitrary analyzed CP. Moreover, uncertainty depends on measurement availability; hence uncertainty grows over time when no significant observations have been incorporated. The UAV can approach a CP along the direction of its maximal uncertainty axis (the direction of the major axis of the covariance) and reduce the one-dimensional uncertainty along it. Interestingly, the observed property affecting the uncertainty of a CP is the line-of-sight direction.
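The covariance-driven steering described above can be sketched as follows. The chi-square scaling for a 95% confidence region in two dimensions (5.991) is standard, but the function names and structure are illustrative only, not the authors' code:

```python
import math

# Sketch: steer a UAV toward the tip of the major axis of a CP's
# 95% confidence ellipse, derived from its 2x2 position covariance.
def major_axis(P):
    """Largest eigenvalue and unit eigenvector of a symmetric 2x2
    covariance P = [[a, b], [b, c]] (closed-form eigen-decomposition)."""
    a, b, c = P[0][0], P[0][1], P[1][1]
    tr, det = a + c, a * c - b * b
    lam = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))
    if abs(b) > 1e-12:
        v = (lam - c, b)  # satisfies P v = lam v for symmetric P
    else:
        v = (1.0, 0.0) if a >= c else (0.0, 1.0)
    n = math.hypot(*v)
    return lam, (v[0] / n, v[1] / n)

def ellipse_tip_waypoint(cp_pos, P, chi2_95=5.991):
    """Waypoint at the tip of the major axis of the 95% confidence ellipse."""
    lam, u = major_axis(P)
    r = math.sqrt(chi2_95 * lam)  # semi-major axis length
    return (cp_pos[0] + r * u[0], cp_pos[1] + r * u[1])
```

Approaching the CP through this waypoint aligns the measurement direction with the axis of maximal uncertainty, which is the line-of-sight effect noted above.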

4. Guidance Logic

4.1. Overview. The trajectory addresses the monitoring problem by adopting common principles. The first is bookkeeping: the mission planner keeps track of representative quantities of interest. Reference [29] suggests discretizing the target space. Space gridding is typically performed after an initial space decomposition, which divides the target space into subspaces and later assigns them to different UAVs.

The second fundamental principle is the mission plan. The method considers the UAV deployment stage and preplanned search patterns (e.g., zigzag, spiral, and alternating) in case there is a need to switch back. The third principle is the replanning which adjusts the uncertainty models with observations.

The suggested strategy considers the perpendicular direction of the uncertainty major axis as the next search direction. The periphery is divided into segments based on the number of UAVs and the resolution of the periphery (number of CPs). The planner chooses important CPs which are distributed along the perimeter. Assigning a UAV to one of the CPs will influence the others and reduce the uncertainty accordingly.

Deploying the UAVs is based on the number of available resources. For example, with two UAVs the deployment covers two segments of the predicted polygon: one UAV is assigned to the highest look-ahead uncertainty (weighted by variance and time-to-go), and the second to the highest uncertainty on the remaining segments.

Three benefits are gained from that allocation policy: First, the solution avoids flyby trajectories and potential collision. Second, UAVs are not allocated to the same or a close area. Third, the trajectories are evaluated for dynamic motion feasibility to be carried out by the assigned UAV.

4.2. Greedy Uncertainty Suppression (GUS). The GUS strategy seeks to minimize the maximum uncertainty over all CPs by incorporating observations over a long period. The policy achieves a longer look-ahead with on-line rerouting logic for the fleet members' tasks.

The implementation includes two main parts: coordination and allocation. The basic operation leading to coordination is sharing information about the assigned tasks. The UAVs share their observations with a centralized entity, and the observations are incorporated sequentially in the estimation process. The GUS algorithm is a step-by-step procedure that determines the best task for each UAV. The notation uses superscript j to label a UAV and i as the index of an arbitrary CP.

The first step relies on a previously developed algorithm [5] (QKF). This procedure includes a system coordinate transformation, scalar probability evaluation, and a Kalman filter to estimate the state and covariance of the CPs in the original coordinate frame.

The following steps determine the new policy. Step 2 sorts the CP estimates by their major variances. After correcting the states and updating the covariances, the procedure evaluates the major axes of the projected uncertainties. By sorting the variances and adjusting the waypoints along the compass line, the algorithm generates candidate destinations.

Step 3 evaluates all the alternatives by running the Dubins vehicle algorithm, which provides the lengths of feasible trajectories and the time-to-go for each UAV. The associated trajectory for UAV j is weighted by the cost function, and Step 4 assigns each UAV to its best feasible task. The weights of the cost function address the need to consider additional restrictions (for example, deploying the UAVs to one side of the periphery) or tasks.

Without control, the error between the predicted and actual boundary can grow. An additional task is assigned when a UAV has not crossed the actual boundary for a long duration: the policy includes a special allocation mode that reroutes the UAV toward the origin point and searches for a crossing point. Once that step is done, the algorithm returns to the default allocation mode.
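The allocation steps can be summarized in a rough sketch. Here a straight-line distance stands in for the Dubins path length of Step 3, and the cost weighting is a placeholder, not the paper's actual cost function:

```python
import math

# Rough sketch of the GUS allocation loop (Steps 2-4); Euclidean distance
# stands in for the Dubins path length used by the actual algorithm.
def allocate(uavs, candidates, speed, weight=None):
    """Greedily assign each UAV to its best candidate CP destination.

    uavs: {uav_id: (x, y)}; candidates: {cp_id: {"wp": (x, y), "var": v}}.
    The (placeholder) cost trades time-to-go against the CP's major variance.
    """
    weight = weight or (lambda var, ttg: ttg / (var + 1e-9))
    assignment = {}
    # Step 2: visit candidate CPs sorted by decreasing major variance.
    order = sorted(candidates, key=lambda c: -candidates[c]["var"])
    for cp in order:
        best, best_cost = None, float("inf")
        for uid, pos in uavs.items():
            if uid in assignment:
                continue
            # Step 3: feasible trajectory length -> time-to-go
            # (Dubins length in the real algorithm; Euclidean here).
            ttg = math.dist(pos, candidates[cp]["wp"]) / speed
            cost = weight(candidates[cp]["var"], ttg)
            if cost < best_cost:
                best, best_cost = uid, cost
        if best is not None:
            assignment[best] = cp  # Step 4: assign UAV to its best task
        if len(assignment) == len(uavs):
            break
    return assignment
```

The greedy ordering means the highest-variance CP is served first, which mirrors the deployment rule described in Section 4.1.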

5. Monitoring System Architecture

5.1. Background. For many experimental and operational applications, UAVs can enable or enhance the efforts available to researchers or operational teams. Much work has been done to make UAVs useful in a myriad of scenarios. In some scenarios, operating in the environment requires special skills or training that operational teams do not have; here an autonomous system can enable access that was previously difficult to obtain. In recent years, there has been a rapidly increasing interest in UAVs where the operational problem requires an airborne platform.

Technological progress has made it possible to use inexpensive autopilots on small UAVs. The development of high-density batteries, long-range and low-power radios, cheap airframes, high-performance microprocessors, and powerful electrical motors all make experimental research or operational teamwork with UAVs more practical than ever [30]. The availability of UAVs as a fast deployable resource allows teams to explore many new kinds of scenarios such as wildfires. The flexibility of the system design further allows for quick changes, reducing the project workload.

A modern UAV system consists of an on-board control system (i.e., autopilot) and a Ground Control Station (GCS). The autopilot utilizes various sensors, communication modules, a power supply unit, and embedded software to control the UAV. The autopilot software is the real-time implementation of the guidance, navigation and control algorithm; one of the demands in designing a rapid-prototyping testbed is to enable such control algorithms, discussed briefly in [31].

Autopilots control and guide the UAVs in flight. They rely on data gathered by various sensors and on a central processing unit (CPU), which carries out the instructions of the program. The objective of an autopilot system is to consistently guide the UAVs to follow reference paths or navigate through several waypoints. A UAV autopilot system is a closed-loop control system consisting of two parts: the state observer and the controller. A typical observer is designed to estimate the state (e.g., attitude) from sensor measurements (gyros, accelerometers); advanced control techniques are used in the UAV autopilot systems to guarantee smooth, desirable trajectory navigation.
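As a toy illustration of this two-part structure (not SLUGS code), a one-axis observer/controller pair might pair a complementary filter with a proportional control law:

```python
# Minimal illustration of the observer/controller split in an autopilot loop:
# the observer fuses a gyro rate with an accelerometer-derived angle
# (complementary filter); the controller tracks a reference angle.
def observer_step(theta_est, gyro_rate, accel_theta, dt, alpha=0.98):
    """Complementary filter: integrate the gyro, correct slowly with accel.
    alpha is an assumed blending gain, not a value from the paper."""
    return alpha * (theta_est + gyro_rate * dt) + (1 - alpha) * accel_theta

def controller_step(theta_est, theta_ref, kp=1.5):
    """Proportional controller producing an actuator command."""
    return kp * (theta_ref - theta_est)
```

In a real autopilot the observer and controller are richer (full attitude filters, cascaded loops), but the closed-loop structure is the same: estimate the state, then command against a reference.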

This paper focuses on the design of a multi-UAV system that is used in this research project and future projects. The emphasis is on the need for multi-UAV coordination and high-level-control. Ref. [31] provides a review of the existing autopilot and the migration process from the previous successful rapid prototyping concept to a new design.

SLUGS (Santa Cruz Low-cost Unmanned Aerial Vehicle Guidance, Navigation & Control System) is a platform that includes autopilot software and hardware components that enable a flexible environment for research in GNC applications [32]. The SLUGS was designed primarily for GNC research, and it has already been used in many flight tests. It is also part of the experimentation with fixed-wing UAV systems, presented in the following section.

The SLUGS II design improves on the previous SLUGS design because it provides rapid prototyping control for multi-UAV systems. The control design process is made up of many iterations that can be verified and validated through both simulation in the Simulink environment and autocode generation.

5.2. Software Design. The complete autopilot algorithm is implemented in Simulink using block diagrams and Matlab toolboxes (the MPLAB X Microchip Integrated Development Environment (IDE) and dsPIC digital signal controller support for Embedded Coder). Simulink blocks and Matlab routines are effective software tools that can be used to modify the algorithm and verify the design. Once the model is updated in the Simulink environment, it generates new code with the updated features. R&D work in a model-based environment makes the programming phase easier. Simulink includes tools that automatically generate and compile the code, which is then deployed directly to the autopilot hardware [33].

SLUGS II modifies the design process by adding a verification step for the generated code in a flexible and friendly environment, one committed to the software's sequence of events rather than to guaranteeing strong real-time execution performance. Figure 4 demonstrates the code generation process; the design validation is discussed in detail in the implementation section.

The software design, as presented in Figure 5, introduces the first order constraints of a dynamical system where the vehicles are mobile and the environment domain changes.

The models are software-oriented implementations in which the execution process is guaranteed to reproduce the outcome; hence, the process is deterministic. The UAV model in the software is a mathematical representation of the actual motion of a nonholonomic system.

The propagation model helps anticipate the boundary location in time. The developed model represents the fire-front propagation of a wildland fire. The implementation of this model is a greedy evaluation: each point in a set of grid points along the periphery is evaluated. Wind velocity and ground slope are incorporated into the propagation model and applied directly at each grid point of the boundary.
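A minimal sketch of such a per-grid-point propagation step follows; this is our own simplification, and the coefficients k_wind and k_slope are assumed tuning parameters, not values from the model:

```python
import math

# Illustrative propagation step: each boundary grid point moves outward at a
# rate combining a base RoS with wind and slope components projected onto the
# outward normal (a common simplification of fire-front models).
def propagate_point(point, normal, base_ros, wind, slope_grad, dt,
                    k_wind=0.1, k_slope=0.5):
    """Advance one boundary grid point over dt seconds.

    normal: outward unit normal at the point; wind: (wx, wy) velocity;
    slope_grad: terrain gradient (gx, gy); k_wind, k_slope: assumed gains.
    """
    wind_term = k_wind * (wind[0] * normal[0] + wind[1] * normal[1])
    slope_term = k_slope * (slope_grad[0] * normal[0] +
                            slope_grad[1] * normal[1])
    ros = max(base_ros + wind_term + slope_term, 0.0)  # fire does not recede
    return (point[0] + ros * normal[0] * dt,
            point[1] + ros * normal[1] * dt)
```

Evaluating this step greedily at every grid point along the periphery yields the predicted boundary used by the planner.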

5.3. Hardware Design. The literature on COTS autopilots suggests that the minimum requirements for a research autopilot are robustness and attitude accuracy sufficient for low-altitude surveillance flight. Hardware must include on-board sensors and software for an attitude solution [34]. This hardware design makes an important contribution to the research framework because it introduces a new design. The SLUGS embedded system features two Microchip dsPIC33F microcontrollers. That design allows SLUGS to implement more complex and effective Guidance, Navigation and Control (GNC) algorithms. It provides a high level of safety and fault tolerance features, and it is designed such that the autopilot system would have more than enough processing power. However, it means more maintenance for the research autopilot Integrated Development Environment (IDE), and increased cost.

SLUGS II simplifies the existing design by using reliable commercial off-the-shelf (COTS) hardware. The AUAV3 is a commercial open-hardware development board (all PCB layouts are provided) [35]. It features a single Microchip dsPIC33EP with twice the clock rate of the dsPIC33F. The AUAV3 board (see Figure 6) comprises peripheral circuits for IMU, Magnetometer, Barometer and the standard communication interfaces (SPI, CAN, UART, and I2C). Researchers have examined using the AUAV3 to replace the in-house SLUGS hardware. Ref. [31] discusses the SLUGS II Simulink model migration process in more detail, and the steps for verifying and validating the performance with a series of flight tests.

The design of SLUGS II is such that it can be adapted to various scales. It provides a solution for multiple UAVs in the testing environment when they are needed for research or development. The major challenges of multi-UAV research are handling duplicate systems with low maintenance cost, ensuring reliability, and compensating for researchers' limited specialized skills. UAVs are tied to continually changing technology, so new infrastructure needs to be assessed and adopted in order to improve the existing system. The system design can help with this maintenance by enhancing the new R&D autopilot environment; eliminating the need to maintain multiple platforms reduces overhead costs and development difficulties.

Figure 7 presents the basic hardware configuration of each single UAV employed by the monitoring system. The AUAV3 autopilot manages the GPS receiver, telemetry (recorded by a logger and transmitted to the GCS), remote-control (RC) inputs for a pilot safe mode, and radio transceivers (3DR Radio) for manual and autonomous flight modes.

6. Monitoring System Implementation

6.1. SLUGS II Components. The AUAV3 addresses the issue of the skills needed to develop or maintain in-house hardware. Commercial hardware is constantly being updated, and for the R&D autopilot, this is an opportunity to put all the efforts into developing GNC algorithms and utilizing low-cost COTS hardware. The old hardware is difficult to integrate with newer sensors and sensing technology. Complex applications require a flexible and adaptive R&D autopilot to keep up with a dynamic environment.

The UAV model integrated within the Simulink development model is another challenging component. It needs specialized skills to tune and adjust to different platforms. Porting to the X-Plane simulator reduces the development effort and further cuts costs. Different airplane models can be found in the simulator's local database instead of hand-tuning the aerodynamic coefficients of a six-degree-of-freedom (6DOF) model.

Two components are migrated as part of SLUGS II design. The benchmark configuration takes the MatrixPilot open-source autopilot and deploys the code on the AUAV3 board. Performance benchmarking ensures that the migrated components perform as well as or better than the old components. The new configuration is then evaluated in multi-UAV software in the loop (MSIL) simulation and in real flight tests.

Once the assessment of the AUAV3 board is completed, the Simulink model is then modified. The model adjusts to the new dsPIC configuration. This integration phase includes eliminating the blocks that handle communication between the separate processors, improving the modeling style, optimization, removing dead code, and identifying incompatible porting issues. Configuring the Simulink model to the new AUAV3 board is based on the Microchip dsPIC toolbox (a new revision of the Lubins Blockset [36]). Although the complete process requires significant manual work, the main intellectual property (IP) of the R&D autopilot remains almost untouched.

In the final phase, the newly migrated autopilot is subjected to rigorous testing using test cases applied to the original design (SLUGS) and MatrixPilot [37]. Apart from the functional load testing, testing is carried out to ensure that the necessary performance level is achieved. The migrated autogenerated code is deployed, and parameters are fine-tuned for the new airframe (BixlerII).

6.2. Ground Control Station. The GCS is one of the most important components in a UAV system. It provides an operational interface to monitor and control the tasks assigned to the multiple UAVs. It presents additional information that the autopilot does not require to complete its task but that supports the user monitoring the mission in coordinating with other systems for better decision making. The GCS includes indications for the mission showing the relevant spatial data (i.e., geodetic coordinates) associated with the map of the area of interest (see Figure 8).

The GCS communicates with the UAVs using a bidirectional data link (XBee transceivers). It runs on a mobile laptop computer that can easily be transported to the test site.

The autopilot system needs to consider a complete process that supports a multi-UAV configuration for real-time identification and task allocation. To support this configuration, the SLUGS II design extended the tools for software verification. The multi-UAV IDE offers code verification with complete software-in-the-loop (SIL) simulation.

6.3. Multi-UAV Software in the Loop. MSIL simulation provides a higher level of fidelity for the final steps of developing the high-level controller. MSIL simulation allows running the SLUGS II research autopilot on a computer before running it on the target processor. It communicates with a high-fidelity flight dynamics simulator (X-Plane). The MSIL simulation can run a single- or multi-UAV configuration and supports the external interfaces and built-in internal calls (for example, memory, timing, and peripheral libraries) of every instance of the SLUGS II autopilot code.

The MSIL software includes the generated code, which is compiled together with a handling layer (real-time wrapper software). The RT Wrapper interfaces with the external software through a User Datagram Protocol (UDP) socket or a serial port. The MSIL simulation controls the simulated GPS, telemetry, and remote-control (RC) inputs from a real RC controller (training mode). The autopilot researcher benefits from the ease of integrating the original generated code and from a friendly environment for debugging.

The GCS unit controls the UAVs through a communication bridge that ensures two-way communication between the GCS and the SLUGS II autopilot. The autopilot can manage information directly from the serial port (or, in the case of MSIL, from the serial port buffer). The RT Wrapper (Figure 9) is responsible for managing the buffers and for distributing the MAVLink messages between real UAVs or simulated modules.
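The RT Wrapper's distribution role can be sketched as a simple dispatcher. The following Python sketch is illustrative only, not the actual SLUGS II implementation: the class name, the dict-based message format, and the in-memory queues standing in for serial and UDP endpoints are all assumptions.

```python
class RTWrapper:
    """Illustrative sketch of the RT Wrapper's message-distribution role.

    In-memory queues stand in for the real endpoints (serial ports for
    real UAVs, UDP sockets for MSIL instances); messages are simplified
    MAVLink-style dicts. All names here are hypothetical.
    """

    def __init__(self):
        self.endpoints = {}   # system_id -> outgoing message queue
        self.to_gcs = []      # telemetry destined for the GCS

    def register_uav(self, system_id):
        self.endpoints[system_id] = []

    def route_from_gcs(self, msg):
        # MAVLink convention: a target_system of 0 means broadcast to all.
        target = msg["target_system"]
        if target == 0:
            for queue in self.endpoints.values():
                queue.append(msg)
        else:
            self.endpoints[target].append(msg)

    def route_from_uav(self, msg):
        # Telemetry from any autopilot instance is forwarded to the GCS.
        self.to_gcs.append(msg)
```

The same dispatcher logic applies whether an endpoint is a physical serial link or a simulated MSIL buffer, which is what lets the GCS remain agnostic to the mix of real and simulated UAVs.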

The coordination algorithm is executed in Matlab and works as an extension of the GCS. The RT Wrapper creates a tunnel between Matlab and the SLUGS II software through a physical communication link (UDP) using the MAVLink protocol.

6.4. Multi-UAV Hardware in the Loop. The Multi-UAV Hardware in the Loop (MHIL) simulation runs the SLUGS II software stack on the AUAV3 flight controller using raw sensor data fed in from the simulated environment running on the desktop PC. HIL simulation replaces the UAV and the environment with a simulator (which contains a high-fidelity aircraft dynamics model and an environment model for wind, turbulence, etc.). The physical autopilot hardware (AUAV3) is configured exactly as for flight and connects to a computer running the simulator rather than to the aircraft. In this sense, the AUAV3 does not know it is flying a simulation.

Figure 10 shows the MHIL setup. The units involved in the MHIL configuration are depicted along with their associated interfaces. The AUAV3 and the GCS are connected physically by a telemetry link. The autopilot is connected to a computer running the simulator. The simulator is fed the servo commands and responds with sensor values from the simulated airplane model. The generated sensor values are similar to the IMU output and are injected into the navigation algorithm as the autopilot flies the high-fidelity simulated flight.

In the end, all of the various functionalities must work both as individual subsystems and as parts of the integrated whole: the UAV design, the basic multi-UAV flight formation, and the monitoring system control. Each one is a step in validating the complete system design, which addresses the full multi-UAV monitoring problem.

The system architecture can be utilized in a centralized or a decentralized scheme of operation to enable coordination and information sharing. In a centralized system configuration, the UAVs relay real-time information between each other through the GCS. Alternatively, the UAVs could transmit real-time information between group members (a decentralized scheme configuration).

7. Simulation Results

7.1. Periphery Estimation Evaluation. The simulation is designed to evaluate all major components involved in the GUS strategy. The environmental conditions are simulated based on a model of a propagating wildfire with a random, bounded spread rate (3 ± 0.1 m/s). UAV allocation is implemented in a separate component that incorporates the observations gathered by the simulated UAVs.

The UAV dynamics model is subject to a constant speed of 20 m/s, the approximate speed of the platform developed and examined in the experimental stage of this research. Moreover, the centralized controller comprises the QKF estimator, which fuses the observations and is based on a previously derived technique. The following simulated scenarios explore the efficiency of the suggested concept.
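The simulated spread model above can be sketched as a radial expansion of the boundary control points at a random, bounded rate. This is a minimal illustration of the simulation setup; the function and parameter names are assumptions, not taken from the paper's code.

```python
import math
import random

def propagate_periphery(cps, dt, rate=3.0, jitter=0.1, origin=(0.0, 0.0)):
    """Move each control point (CP) radially outward from the fire origin
    at a random, bounded spread rate of rate +/- jitter (m/s), mirroring
    the simplified propagation model used in the simulation."""
    ox, oy = origin
    advanced = []
    for x, y in cps:
        r = math.hypot(x - ox, y - oy)
        v = rate + random.uniform(-jitter, jitter)  # bounded spread rate
        scale = (r + v * dt) / r
        advanced.append((ox + (x - ox) * scale, oy + (y - oy) * scale))
    return advanced

# A circular initial periphery of radius 100 m sampled at 36 CPs.
cps = [(100.0 * math.cos(a), 100.0 * math.sin(a))
       for a in (2.0 * math.pi * k / 36 for k in range(36))]
cps = propagate_periphery(cps, dt=10.0)  # each radius grows by 29-31 m
```

A richer model would make the rate direction-dependent to capture wind and slope effects, which is how the different scenarios in the following subsections vary.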

The initial setup attempts to adhere to the real problem; therefore, real-time data received from CAL FIRE (San Mateo Santa Cruz Unit, for the Martin Incident) are used. For example, the initial AOI is large (1 km x 1 km) and the time scale is long (i.e., hours). Figure 11 shows the actual periphery with two UAVs deployed from both sides of it.

The propagation model used in the simulation is simplified; however, it allows investigation of the major properties of fire spreading. The dynamic expansion of the boundary, environmental effects (e.g., wind and slope), and feasibility are all considered in the implementation and utilized in different scenarios.

Figure 12 demonstrates the scenario with the GUS running. A local error bar represents the uncertainty of each CP. The size of an error bar is correlated with the size of the perpendicular and tangential variances.

The performance measure offered in [5] accounts for two performance indicators: errors and uncertainty. The error indicator comprises the mean squared error between the predicted and the actual periphery. The uncertainty indicator is simply the mean of the CPs' major variances. Both indicators are weighted equally in the combined performance measure, J = (1/2)(RMSE + mean major variance). In the figures, uncertainty is represented by an error bar in the global coordinate system, and the performance is evaluated relative to the perpendicular component of the local predicted periphery.
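Under the equal-weighting description above, the combined measure can be computed as a plain average of the two indicators. Since the original expression is not reproducible from the source, the 1/2-1/2 split below is an assumption, and the function is only an illustrative sketch.

```python
import math

def combined_performance(errors, major_variances):
    """Combined measure sketch: the RMSE between predicted and actual
    periphery, averaged equally with the mean of the CPs' major variances.
    The equal 1/2 weighting is assumed from 'weighted equally' in the text.
    """
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    mean_var = sum(major_variances) / len(major_variances)
    return 0.5 * (rmse + mean_var)
```

A lower value is better: it falls when the UAVs' observations shrink both the prediction error and the CP variances.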

Figure 13 shows the combined performance measure with its two performance indicators.

If there were no errors and no uncertainty, then traditional periphery tracking would be an optimal approach. In practice, the uncertainty grows with time; although the errors are reduced to a minimum when the UAV crosses a CP, the spread rate is not observable, and the errors resume growing shortly after the location is updated at the nearby CP. The resulting trajectories and performance improve on the benchmark strategy results (see [27]). Figure 14 demonstrates that the GUS and the benchmark have very different performance in a scenario with wind: the GUS reduces the uncertainty much more over the course of the mission.

8. SLUGS II Validation

8.1. System Configuration. The platform used for the first flight tests was a Phoenix R/C aircraft; later the platform was changed to a Hobby King Bixler 2 (shown in Figure 15). Both planes are low-cost foam kits with a flying weight of approximately 2 lbs. The Phoenix and the Bixler 2 both feature a pusher propeller configuration that reduces vibration and increases overall robustness for a belly landing (neither aircraft has landing gear). The wings and fuselage are reinforced with carbon fiber tubes that provide ample rigidity to the airframe [38]. The aircraft is hand launched for take-off. The Bixler 2 wings have a nearly elliptical planform with curved winglets for increased flight efficiency.

The power plant for the Bixler 2 aircraft is a 1200 kV brushless DC electric motor. The power source is a 2200 mAh Lithium Polymer battery, which provides sufficient current for the electric motor, servos, and AUAV3 autopilot board through the Electronic Speed Controller (ESC). The ESC provides a 5.0 V supply to the servos and the AUAV3 autopilot through the Battery Eliminator Circuit (BEC) and also provides a control signal and power to the brushless motor. The BEC is designed to keep the servos and R/C receiver running when the battery voltage has dropped too low to power the motor.

The SLUGS II autopilot, like most other autopilots, uses a Proportional-Integral-Derivative (PID) control method for the low-level control loops [32]. The flight controller is developed as a Simulink model, and although it is relatively easy to alter its structure, redesigning the controls requires extensive knowledge of the inner- and outer-loop structure. The simulation tests were devoted to validating that the flight controller is flyable. This part of the testing covers the tuning process of the PID gains for the various autopilot control loops.
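A low-level PID loop of the kind described can be sketched as follows. The gains, the output limit, and the simple anti-windup scheme are illustrative choices, not the tuned SLUGS II flight values.

```python
class PID:
    """Minimal PID controller sketch for a low-level loop (e.g., a roll
    command). Gains and the saturation/anti-windup scheme are illustrative,
    not the actual SLUGS II implementation."""

    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        # No derivative kick on the very first sample.
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * deriv
        if abs(u) > self.out_limit:
            # Saturate the command and undo this step's integrator
            # accumulation (simple anti-windup).
            self.integral -= error * dt
            u = max(-self.out_limit, min(self.out_limit, u))
        return u
```

During tuning, gains that are too low give the sluggish response seen in the early laps of Figure 17, while gains that are too high drive the actuators into the saturation branch above.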

Figure 16 shows the outcome of the integration of the basic real-time components (the autopilot, the UAV platforms, and the GCS) a moment before a field test.

8.2. Flight Test. The goal of the SLUGS II validation is to support the development of the R&D monitoring system. The validation relies on several factors, including flight controller and path-following performance. The flight controller has been extensively tested within the simulation. The environment supports parameter tuning, which can accommodate hardware changes and flight mode extensions.

The most important feature of the SLUGS II autopilot for the R&D monitoring system is its autonomous waypoint navigation capabilities. The ground operator, through the GCS interface, can specify a sequence of waypoints to define the path the vehicle should follow. Figure 17 describes an example of a running scenario with four waypoints and shows how the vehicle follows the desired path while tuning PID gain parameters. Figure 17 shows that initially, the gains were too low, and the system had a slow response.
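The waypoint sequencing just described can be sketched with a simple acceptance-radius rule. The radius value and the wrap-around lap behavior are illustrative assumptions, not the SLUGS II navigation logic itself.

```python
import math

def next_waypoint(position, waypoints, index, acceptance_radius=30.0):
    """Waypoint sequencing sketch: when the vehicle comes within an
    acceptance radius of the current waypoint, the target advances to the
    next one, wrapping around so the vehicle repeats laps over the
    sequence (as in the tuning flights). Radius value is illustrative."""
    wx, wy = waypoints[index]
    px, py = position
    if math.hypot(wx - px, wy - py) < acceptance_radius:
        index = (index + 1) % len(waypoints)
    return index
```

The path-following controller then steers toward `waypoints[index]` each cycle, so tightening the PID gains shows up directly as a smaller cross-track error between waypoints.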

9. Conclusions

In summary, this paper has presented the design of a monitoring system with a core methodology for coordinating a fleet of UAVs to suppress the uncertainty of a generic ground phenomenon. The coordination technique, integrated with an R&D monitoring system carefully designed to improve the estimation of a propagating periphery, supports decision making in an operational scenario.

The system design comprises the major components of an R&D monitoring system: high-level controller, flight control, and ground control. The development process of the UAV flight controller (autopilot) has been improved with a COTS board and a new development environment for software validation. The SLUGS II autopilot retains in the migrated Simulink model the same functionality found in the original model. The generated code uses on average 60% of the CPU; the reserve computation time leaves enough computational resources for further enhancement and evolution.

MSIL simulation tests the generated code in a flexible and friendly environment that is committed to the sequence of events in the software rather than to guaranteeing hard real-time execution of the code. The system is designed to be agnostic to the type of phenomenon being tracked and can be made to work well for a number of different scenarios.

Wildfire incidents are an example of a stochastic phenomenon, and knowing the fire boundary with high certainty would improve decision making by the ground team.

https://doi.org/10.1155/2018/6892153

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

[1] National Interagency Fire Center, "Federal Firefighting Costs (Suppression Only)," 2016.

[2] NIFC, "Initial Attack," Tech. Rep., 2016.

[3] M. E. Alexander, D. A. Thomas, and D. Bosworth, "Wildland fire studies and analyses," Fire Management Today, Forest Service, vol. 63, no. 3, pp. 1-96, 2003.

[4] NIFC, "National Interagency Coordination Center Wildland Fire Summary and Statistics," Technical report, 2013.

[5] S. Rabinovich, R. E. Curry, and G. H. Elkaim, "A methodology for estimation of ground phenomena propagation," in Proceedings of the 2018 IEEE/ION Position, Location and Navigation Symposium (PLANS), pp. 1239-1244, Monterey, CA, April 2018.

[6] S. Wegener, "UAV Over-the-Horizon Disaster Management Demonstration Projects," Tech. Rep., February 2000.

[7] C. Yuan, Y. Zhang, and Z. Liu, "A survey on technologies for automatic forest fire monitoring, detection, and fighting using unmanned aerial vehicles and remote sensing techniques," Canadian Journal of Forest Research, vol. 45, no. 7, pp. 783-792, 2015.

[8] N. Nigam, "The Multiple Unmanned Air Vehicle Persistent Surveillance Problem: A Review," Machines, vol. 2, no. 1, pp. 13-72, 2014.

[9] X. Cui, T. Hardin, R. K. Ragade, and A. S. Elmaghraby, "A swarm-based fuzzy logic control mobile sensor network for hazardous contaminants localization," in Proceedings of the 2004 IEEE International Conference on Mobile Ad-Hoc and Sensor Systems, pp. 194-203, USA, October 2004.

[10] G. Kowadlo and R. A. Russell, "Robot odor localization: a taxonomy and survey," International Journal of Robotics Research, vol. 27, no. 8, pp. 869-894, 2008.

[11] B. Ristic, D. Angley, B. Moran, and J. L. Palmer, "Autonomous multi-robot search for a hazardous source in a turbulent environment," Sensors, vol. 17, no. 4, 2017.

[12] B. Yamauchi, "Frontier-based exploration using multiple robots," in Proceedings of the 2nd International Conference on Autonomous Agents, pp. 47-53, ACM, May 1998.

[13] W. Burgard, M. Moors, and F. Schneider, "Collaborative Exploration of Unknown Environments with Teams of Mobile Robots," in Advances in Plan-Based Control of Robotic Agents, vol. 2466 of Lecture Notes in Computer Science, pp. 52-70, Springer Berlin Heidelberg, Berlin, Heidelberg, 2002.

[14] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 2013.

[15] E. Rich, K. Knight, and S. B. Nair, Artificial Intelligence, 2009.

[16] M. T. Rashid, M. Frasca, A. A. Ali, R. S. Ali, L. Fortuna, and M. G. Xibilia, "Artemia swarm dynamics and path tracking," Nonlinear Dynamics, vol. 68, no. 4, pp. 555-563, 2012.

[17] M. T. Rashid, A. A. Ali, R. S. Ali, L. Fortuna, M. Frasca, and M. G. Xibilia, "Wireless underwater mobile robot system based on ZigBee," in Proceedings of the 2012 International Conference on Future Communication Networks, ICFCN2012, pp. 117-122, Iraq, April 2012.

[18] A. A. Ali, L. Fortuna, M. Frasca, M. T. Rashid, and M. G. Xibilia, "Complexity in a population of Artemia," Chaos, Solitons & Fractals, vol. 44, no. 4-5, pp. 306-316, 2011.

[19] K. Arnold, "Uses of aerial photographs in control of forest fires," Journal of Forestry, vol. 49, pp. 26-31, 1951.

[20] A. Richards, J. Bellingham, M. Tillerson, and J. How, "Coordination and control of multiple UAVs," in Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit 2002, USA, August 2002.

[21] D. W. Casbeer, D. B. Kingston, R. W. Beard, and T. W. McLain, "Cooperative forest fire surveillance using a team of small unmanned air vehicles," International Journal of Systems Science, vol. 37, no. 6, pp. 351-360, 2006.

[22] K. Alexis, G. Nikolakopoulos, A. Tzes, and L. Dritsas, "Coordination of Helicopter UAVs for Aerial Forest-Fire Surveillance," Applications of Intelligent Control to Engineering Systems, vol. 39, Article ID 169193, pp. 169-193, 2009.

[23] W. Ren and R. W. Beard, Distributed consensus in multi-vehicle cooperative control, 2008.

[24] F. Bullo, J. Cortes, and S. Martinez, Distributed Control of Robotic Networks: A Mathematical Approach to Motion Coordination Algorithms, 2009.

[25] N. Dadkhah and B. Mettler, "Survey of motion planning literature in the presence of uncertainty: Considerations for UAV guidance," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 65, no. 1-4, pp. 233-246, 2012.

[26] R. C. Smith and P. Cheeseman, "On the representation and estimation of spatial uncertainty," International Journal of Robotics Research, vol. 5, no. 4, pp. 56-68, 1986.

[27] S. Rabinovich, R. E. Curry, and G. H. Elkaim, "Multiple Unmanned Air Vehicle Coordination for Monitoring of Ground Phenomena Propagation," in Proceedings of ION GNSS+ 2018, IEEE, 2018.

[28] R. E. Curry, Estimation and Control with Quantized Measurements, MIT Press, 1970.

[29] A. Girard, A. Howell, and J. Hedrick, "Border patrol and surveillance missions using multiple unmanned air vehicles," in Proceedings of the 2004 43rd IEEE Conference on Decision and Control (CDC) (IEEE Cat. No.04CH37601), pp. 620-625 Vol.1, Nassau, Bahamas, December 2004.

[30] H. Chao, Y. Cao, and Y. Chen, "Autopilots for small unmanned aerial vehicles: A survey," International Journal of Control, Automation, and Systems, vol. 8, no. 1, pp. 36-44, 2010.

[31] S. Rabinovich, Multi-UAV Coordination for Uncertainty Suppression of Natural Disasters, Ph.D. dissertation, UC Santa Cruz, 2018.

[32] M. I. Lizarraga, Design, Implementation and Flight Verification of a Versatile and Rapidly Reconfigurable UAV GNC Research Platform, Ph.D. dissertation, 2009.

[33] E. Geaney, UC Santa Cruz Electronic Theses and Dissertations.

[34] R. Garcia and L. Barnes, "Multi-UAV simulator utilizing xplane," Journal of Intelligent & Robotic Systems, vol. 57, no. 1-4, pp. 393-406, 2010.

[35] N. Arsov, AUAV3.

[36] L. Kerhuel, Simulink Embedded Target for dsPIC.

[37] MatrixPilot Autopilot, 2016.

[38] P. Kumar, J. E. Steck, and S. G. Hagerott, "System identification, HIL and flight testing of an adaptive controller on a small-scale unmanned aircraft," in Proceedings of the AIAA Modeling and Simulation Technologies Conference 2015, USA, January 2015.

Sharon Rabinovich (iD), Renwick E. Curry, and Gabriel H. Elkaim

Computer Engineering, University of California Santa Cruz, Santa Cruz,

California, USA

Correspondence should be addressed to Sharon Rabinovich; srabinov@ucsc.edu

Received 16 October 2018; Accepted 11 November 2018; Published 18 November 2018

Academic Editor: L. Fortuna

Caption: Figure 1: Progression map of the Martin Incident. The image was processed after the incident and relies on a number of sources (from CAL FIRE).

Caption: Figure 2: Setup of the boundary representation approach. CP_i and CP_j (squares) are two of many grid points representing the closed predicted periphery (in blue). The origin (circle) is the starting point of the propagated phenomenon, and the UAVs are used to collect observations.

Caption: Figure 3: A periphery estimation with a single UAV on an autonomous mission is illustrated. The red line represents the actual periphery, and the blue line represents the estimated one. The UAV flies over the explored area autonomously. The line of sight to one of the CPs illustrates the directional effect of an arbitrary CP. The QKF method is employed on all the CPs simultaneously, and the UAVs identify the current highest uncertainty to approach next (originally presented in [5]).

Caption: Figure 4: SLUGS II code generation workflow, with a new verification step, Multi-UAV Software in the loop (MSIL).

Caption: Figure 5: Multi-UAV Monitoring System, software block diagram.

Caption: Figure 6: AUAV3 board.

Caption: Figure 7: SLUGS II basic components.

Caption: Figure 8: The graphical user interface (GUI) of the GCS is presented. The open-source software (Qt-Ground-Control: QGC) is adopted and extended to support the design of a multi-UAV monitoring system. The software supports the planning and visualization of the UAVs' trajectories in real time.

Caption: Figure 9: MSIL block diagram.

Caption: Figure 10: MHIL block diagram.

Caption: Figure 11: Initial setup. The UAVs are at the final stage of the deployment phase and located on opposite sides of the boundary. The actual periphery is a solid red line, and the predicted periphery is a dashed blue line. The error bar associated with an arbitrary CP represents its current perpendicular uncertainty (1σ). Note that the error bars are equal and result in a predetermined prediction based on a maximal spread rate.

Caption: Figure 12: Estimation and coordination with the GUS method. The UAVs switched from the deployment phase to track the highest uncertainties. The actual periphery is a solid red line, and the predicted periphery a blue dashed line. The UAV trail is in green where the UAV is OUT and in black where the UAV is IN. The error bar associated with each CP represents its current uncertainty. Note that the error bar decreases as the UAV approaches a CP and that the observations cause the directional uncertainty of the other CPs to decrease.

Caption: Figure 13: Performance analysis. The solid red line represents the average perpendicular standard deviation, the dashed green line shows the cumulative root mean squared error, and the dashed red line is the combined performance measure. Note that the mean value of the uncertainty is reduced during the mission, and the error increases as the periphery evolves since the number of crossings per AOI gets smaller.

Caption: Figure 14: A comparison of strategies with a southwest wind. The solid blue line and the dotted red line represent the combined RMSE performance measure over time for the benchmark and the GUS strategies, respectively.

Caption: Figure 15: RC model plane: Hobby King Bixler 2.

Caption: Figure 16: Experiment hardware is shown. On the right, the two airplane models used during the field test. On the left, the GCS deployed in the field.

Caption: Figure 17: A simulated scenario with a single UAV is presented. The UAV trajectory is in the X-Y Cartesian coordinate frame relative to the Home position. The first segment of the trajectory, starting from take-off, was controlled manually by the safety pilot (RC); the vehicle switched to autonomous mode after 23 seconds. Three laps were tested with different PID gains for tuning the roll command.
COPYRIGHT 2018 Hindawi Limited
Publication: Journal of Robotics
Date: Jan 1, 2018