
A Study for Development of Autonomous Paddy-weeding Robot System -An experimentation for autonomously straight-running based on compass-compensation-.

1 INTRODUCTION

In recent years, aging and labor shortages have been progressing rapidly in the agricultural sector of Japan. For this reason, technologies for labor-saving in agricultural work and for improving productivity are indispensable, and the introduction of robots into the agricultural field has been pushed forward aggressively.

Agricultural robots are roughly classified into vehicle-type robots typified by tractors [1, 2], manipulator-type robots performing movements similar to a human hand [3, 4], facility-type robots performing tasks such as milking, and power-assist-type robots that support a worker [5, 6]. Among these, our group has been conducting research on weeding robots, based especially on the vehicle type [7, 8]. In particular, we have focused on the paddy-weeding robot shown in fig. 1.

Even within the general category of weeding robots, the range of tasks is extensive and complicated; the working environments also vary with the purpose, such as weeding on the levee or in wet rice culture, and grass cutting on slopes in inter-mountain areas.

The concept of the weeding robot system to be developed is shown in fig. 2. Its element technologies include a traveling section with running performance suitable for various working environments, a weeding section for removing weeds, a sensor and control system for autonomous working, and an ICT system for providing information to workers in remote areas. In this paper, we focus on the traveling control algorithm for autonomous working.

To enable autonomous control of the weeding robot, it must be able to avoid obstacles autonomously. However, the working environment is often unmaintained. For this reason, it is indispensable to develop a flexible behavioral algorithm that avoids obstacles by detecting and recognizing them. In this paper, we propose a method to enable autonomous traveling of the weeding robot.

This paper is organized as follows. Section 2 explains why autonomous control is difficult in a weeding situation, and motivates how to estimate the direction and the distance to a boundary. It also details the noise-reduction method used when estimating the direction with a magnetic compass, and the forward-monitoring method using edge detection and a deep-learning package. Section 3 describes the configuration of the verification experiment. Section 4 concludes this work.

2 APPLICATIVE RANGE OF THE ROBOT SYSTEM AND PROBLEM OF WEEDING

As mentioned above, the working environment is often unmaintained. Since the road surface on which the robot travels is also unmaintained, the motor revolutions and the actual speed may not coincide; in this case, it is difficult to estimate the speed, or the direction, from the rotation speed alone. Further, consider the case where the robot operates near the boundary of the working area. If a turning behavior is not performed normally, the robot may deviate out of the area in the worst case; it may then leave management control, or the robot itself may be damaged. Moreover, even with a non-contact sensor, unexpected information may be mixed into the sensor output depending on the outdoor weather conditions. In such a case, the robot may act in a way not intended by the designer, and unfavorable phenomena may occur from the viewpoints of productivity and safety.

To address the problems mentioned above, the crawler-type robot prototyped in this study is shown in fig. 3. This robot is equipped with a GPS module, an ultrasonic sensor, a magnetic compass module with a gyroscope and an accelerometer, and a camera module.

If an obstacle is detected ahead by the ultrasonic sensor, the robot stops temporarily. Next, the front view is photographed by the camera module, and the object is identified by deep learning. When an object harmful to humans, or one that may fall or be damaged, is detected, its position is recorded by GPS and the robot retreats rapidly. When the robot approaches that point again after a certain period of time, it decelerates; if the object is no longer detected there, the recorded position is deleted and normal running is resumed.
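The behavior above can be sketched as a small decision routine. This is only an illustrative sketch: the class name, the hazard-label set, and the classifier/GPS callbacks are our assumptions, since the paper does not specify the software interfaces.

```python
# Sketch of the obstacle-handling behavior (hypothetical helper names;
# the real sensor/actuator interfaces are not given in the paper).

HAZARD_CLASSES = {"person", "animal", "fragile_object"}  # assumed label set

class ObstacleHandler:
    def __init__(self, classify, get_position):
        self.classify = classify          # camera + deep-learning classifier
        self.get_position = get_position  # GPS reading as (lat, lon)
        self.hazard_points = []           # recorded retreat points

    def on_ultrasonic_detection(self):
        """Called when the ultrasonic sensor reports an obstacle ahead."""
        label = self.classify()           # photograph and identify the object
        if label in HAZARD_CLASSES:
            self.hazard_points.append(self.get_position())
            return "retreat"              # back away rapidly
        return "resume"                   # harmless: continue normal running

    def on_approach(self, position, tol=1e-4):
        """Called when nearing a previously recorded hazard point."""
        near = [p for p in self.hazard_points
                if abs(p[0] - position[0]) < tol and abs(p[1] - position[1]) < tol]
        if not near:
            return "run"
        # Re-check: if the object is gone, delete the point and resume running.
        if self.classify() not in HAZARD_CLASSES:
            for p in near:
                self.hazard_points.remove(p)
            return "run"
        return "decelerate"
```

In practice the callbacks would wrap the camera/Inception v3 pipeline and the GPS module; here they are injected so the logic can be exercised in isolation.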

In addition, we verify the practicality of the following items by using the camera module installed in the robot system and the software installed on the Raspberry Pi 3 Model B+ of the system.

(1) Separation of the workspace boundary area from images taken by a monocular camera.

(2) Discrimination of out-of-boundary areas or obstacles using deep learning (Google Inception v3).

(3) Whether working can be continued by a counter-rotation turn based on the results of (1) or (2).

2.1 A Strategy for the Weeding Operation

In our former study, the proposed method could not travel straight along the wet rice line and reach the edge of the field autonomously. A turning technique is necessary to turn back and enter the adjacent row of rice lines, but recognizing the edge of the field was a challenge because the system had been driven by remote control. Therefore, in this study, an image-processing method is applied so that the robot runs straight and performs a spin-turn at the edge of the field, as shown in fig. 4. The decision of the turning point, based on the photographed result using edge detection, is shown in fig. 5.

2.2 How to Spin-turn using Magnetic Compass?

As mentioned earlier, it is difficult to turn right or left by a precise angle using only the difference between the rotation angles of the left and right motors. Therefore, this study focuses on the magnetic compass of the 9-axis sensor complex MPU-9250.

Generally, when the magnetic compass is held horizontal to the ground and rotated through 360 [deg.], the geomagnetic data in the x-axis and y-axis directions trace a circle when plotted. Based on this result, if north is defined as 0 [deg.], and the magnetic flux density on the sensor's x-axis is G_x and that on the y-axis is G_y, the azimuth of the robot given by the sensor output is as follows [9]:

θ = tan⁻¹((G_y − G_y,off) / (G_x − G_x,off)) (1)

G_x,off and G_y,off are offset amounts for canceling the magnetic noise emitted from the motors, the Raspberry Pi, the GPS module, and other electronic devices.
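As a minimal sketch, the flat-mount heading of eq. (1) can be computed as follows. The function name is ours, and atan2 is used instead of a plain arctangent so that the full 0-360 [deg.] range is covered; this is only valid while the compass stays horizontal.

```python
import math

def flat_heading(Gx, Gy, Gx_off=0.0, Gy_off=0.0):
    """Heading per eq. (1): azimuth in degrees, wrapped to [0, 360)."""
    return math.degrees(math.atan2(Gy - Gy_off, Gx - Gx_off)) % 360.0
```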

On the other hand, the magnetic compass of the MPU-9250 used in this study is 3-axis and does not by itself account for the attitude of the sensor, such as pitching. As the attitude of the sensor changes, the vertical component of the geomagnetism leaks into the x-y axes, which introduces an error in the estimated direction. Therefore, the roll and pitch angles of the sensor are estimated from the 3-axis accelerometer of the MPU-9250, and the geomagnetism is corrected based on those values. Using the 3-axis accelerometer outputs a_x, a_y, a_z, the roll angle φ and the pitch angle ψ can be calculated as follows:

φ = tan⁻¹(a_y / a_z) (2)

ψ = tan⁻¹(−a_x / (a_y sin φ + a_z cos φ)) (3)

Therefore, using this angle information and the magnetic compass, the direction υ is given as follows:

υ = tan⁻¹(A / B) (4)

A = (G_z − G_z,off) sin φ − (G_y − G_y,off) cos φ (5)

B = (G_x − G_x,off) cos ψ + (G_y − G_y,off) sin ψ sin φ + (G_z − G_z,off) sin ψ cos φ (6)
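Eqs. (2)-(6) can be combined into one routine. The following is a minimal sketch under our own assumptions: accelerometer readings (a_x, a_y, a_z) and magnetometer readings (G_x, G_y, G_z) are taken in the MPU-9250 body frame, the offsets have already been measured, and atan2 is used for eq. (4) so the heading spans the full circle.

```python
import math

def tilt_compensated_heading(ax, ay, az, Gx, Gy, Gz,
                             Gx_off=0.0, Gy_off=0.0, Gz_off=0.0):
    """Tilt-compensated azimuth in degrees, following eqs. (2)-(6)."""
    # Roll and pitch from the accelerometer, eqs. (2) and (3)
    phi = math.atan2(ay, az)
    psi = math.atan2(-ax, ay * math.sin(phi) + az * math.cos(phi))
    # Offset-corrected magnetometer components
    mx, my, mz = Gx - Gx_off, Gy - Gy_off, Gz - Gz_off
    # Horizontal-plane projections, eqs. (5) and (6)
    A = mz * math.sin(phi) - my * math.cos(phi)
    B = (mx * math.cos(psi) + my * math.sin(psi) * math.sin(phi)
         + mz * math.sin(psi) * math.cos(phi))
    # Eq. (4): heading wrapped to [0, 360)
    return math.degrees(math.atan2(A, B)) % 360.0
```

With the sensor level (a = (0, 0, 1) g), the result reduces to the flat-mount case of eq. (1) up to the sign convention of A.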

However, when the direction is being detected, the output of the magnetic compass may be mixed with high-frequency noise caused by the computation of the Raspberry Pi, the driving noise of the motors, or other devices during control. Thus, in this study, a median filter and a low-pass filter are applied to reduce this noise, and the resulting azimuth-estimation algorithm is used for the spin-turn. Firstly, the median filter is an effective method that can, to some extent, distinguish out-of-range isolated noise from legitimate sensor features. Specifically, the median filter replaces each sample of a 1-dimensional time series by the median, instead of the average, of all values in a neighborhood w of the time-series sensory information:

y(n) = median{x(k) | k ∈ w} (7)

where w represents a user-defined neighborhood centered around time n in the time series of sensor information.

Secondly, a low-pass filter is applied to further reduce noise. In this study, the low-pass filter is given by the following difference equation:

y(nT) = x(nT) + x((n − 1)T), n = 0, 1, 2, ... (8)

where x(nT) is the filter input amplitude at sampling time n, y(nT) is the output amplitude at sampling time n, and T is the sampling interval in seconds.
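The two-stage smoothing can be sketched as follows. This is an illustrative implementation under our assumptions: the median window width w is taken as 3 (the paper does not state it), the window is truncated at the ends of the series, and the low-pass stage follows eq. (8) literally, with x(−T) taken as 0.

```python
def median_filter(x, w=3):
    """Eq. (7): replace each sample by the median of a window of width w (odd)."""
    half = w // 2
    out = []
    for i in range(len(x)):
        window = sorted(x[max(0, i - half): i + half + 1])
        out.append(window[len(window) // 2])
    return out

def low_pass(x):
    """Eq. (8): y(nT) = x(nT) + x((n-1)T), with x(-T) taken as 0."""
    return [x[i] + (x[i - 1] if i > 0 else 0.0) for i in range(len(x))]
```

Applied in sequence, an isolated spike in the heading series is first removed by the median stage and the remainder is then smoothed by the low-pass stage.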

2.3 How to Detect Edge of Boundary Area and Obstacle?

For item (1) of section 2, edge detection was performed using the horizontal-direction Prewitt filter described in eq. (9), and an area that could form the boundary line was cut out from the result (fig. 6). By carrying out this filtering, the traveling road surface can be distinguished from slopes such as the boundary of the workspace in the photographed image.

P = [[−1, −1, −1], [0, 0, 0], [1, 1, 1]] (9)

Further, by identifying in which part of the photographed image the boundary line obtained by the edge processing exists, the distance from the robot to the end of the traveling road surface can be specified. From this result, the turning point is determined from the photographed image, and the turning behavior is taken.
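The filtering and row-sum steps can be sketched on a toy grayscale image as follows. This is our own minimal NumPy version (the paper's actual image pipeline and thresholds are not given): the image is correlated with the standard 3x3 horizontal Prewitt kernel, and the row whose summed edge response is largest is taken as the boundary-line candidate.

```python
import numpy as np

# Standard horizontal-direction Prewitt kernel, as in eq. (9)
PREWITT_H = np.array([[-1, -1, -1],
                      [ 0,  0,  0],
                      [ 1,  1,  1]])

def prewitt_horizontal(img):
    """Valid-mode 2-D correlation of img with the horizontal Prewitt kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * PREWITT_H)
    return np.abs(out)

def boundary_row(img):
    """Row with the strongest horizontal-edge response (boundary candidate)."""
    edges = prewitt_horizontal(img)
    return int(np.argmax(edges.sum(axis=1)))
```

Knowing the image row of the boundary, the distance to the end of the road surface can then be estimated from the camera geometry.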

2.4 How to Predict and Compensate the Action Command in Paddy-field?

In general, the bottom of a paddy field is not necessarily flat; it includes mud, weeds, and other material. Sometimes the wheels of a robot or another vehicle-type machine are affected by these: it may get stuck, or its course may be forced to change. At worst, the rice may be damaged, the robot may leave management control, or the robot itself may be damaged, as mentioned in section 2. To avoid these problems, this study focuses on a prediction method. If a prediction module is mounted, the robot can obtain its course of action from the prediction result. In this paper, the time series of the compass direction υ is used.

For controlling a robot in a dynamic environment, an action can be chosen based on the current result and the former actions and states [10, 11]. In other words, given the former compass-direction series υ, we can obtain a predicted compass direction and an action command for the DC motors. The mechanism of this method is shown in fig. 7. To realize this mechanism, the following equations for controlling the DC motors based on the compass are constructed:

PWM_right = PWM − u (10)

PWM_left = PWM + u (11)

u = k_sync × (υ[t] − υ[t − 1]) (12)

In other words, P control is used for caterpillar synchronization when the crawler runs straight, because the rotation angles of the two DC motors are not the same even when the same PWM duty is applied (shown in fig. 8).
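Eqs. (10)-(12) amount to a one-line proportional controller. The sketch below is ours: the gain k_sync is an assumed tuning value (the paper does not report it), and headings are taken in degrees from the compass pipeline above.

```python
def sync_pwm(base_pwm, heading_now, heading_prev, k_sync=2.0):
    """Return (pwm_right, pwm_left) per eqs. (10)-(12).

    A drift of the heading between two control steps produces a
    correction u that slows one crawler track and speeds up the other,
    steering the body back toward its previous course.
    """
    u = k_sync * (heading_now - heading_prev)   # eq. (12)
    return base_pwm - u, base_pwm + u           # eqs. (10) and (11)
```

For example, if the heading drifts from 100 to 102 degrees between steps, the right PWM is reduced and the left PWM increased by the same amount, counter-rotating the body back toward 100 degrees.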

3 THE VERIFICATION EXPERIMENT

3.1 Outline of the Experiment

Two experiments were carried out. Firstly, in order to confirm the usefulness of the proposed method, whether the crawler can run straight was examined (fig. 9). Secondly, a ridge of a rice field was taken as an example of a terminal wall (fig. 10). Here, photographing the forward view and traveling, including running straight and spin-turning, were repeated. During this process, we verified whether the robot can turn at a right angle near the ridge.

3.2 Experimental Results

3.2.1 An Experiment on the Straight-running Performance of the Crawler

In this experiment, the crawler was placed on a flat floor (fig. 9). The results of applying the proposed method to straight running are shown in fig. 11. It was confirmed that the crawler was able to run straight.

3.2.2 An Experiment on the Turning Performance of the Crawler

Here, a wall was taken as an example of a levee. In this experiment, whether the crawler can turn at a right angle was verified. During the experiment, the crawler repeatedly took a picture of the front view to observe the situation, then ran straight or spin-turned. The running record of the crawler is shown in figs. 13 and 14. From the result, it was confirmed that, after running straight for a certain period of time, the crawler turned to the right as seen from the direction of travel, and then continued running straight for a further period.

3.3 Discussion on Experimental Results

Now, let us focus on these results. Figure 17(d) shows the result of applying the proposed method to the captured image. In this experiment, the horizontal-direction Prewitt filter was applied to detect the edge of the boundary area, as shown in fig. 17(b). From that result, the pixel sums were calculated to detect the boundary area (fig. 17(c)). In this way, it was confirmed that the water surface and the edge of the wall of the traveling road surface could be distinguished. Similarly, in fig. 18, the crawler took pictures of the front view to detect the edge of the levee and the surface; the crawler can thus detect the edge and obtain the location for the spin-turn at the boundary area. Figure 18 was taken by the crawler to obtain the location of the spin-turn; in this picture, over 70 percent of the area is occupied by the levee. Moreover, since segmentation was also possible in images different from figs. 6 and 17, the method was confirmed to be robust against color changes caused by the surrounding environment.

Next, consider the tracking of the movement. Figures 13 and 14 track the crawler while running. In fig. 13, the azimuth of the crawler held almost the same direction (about 100 degrees). After 20 seconds, however, the azimuth decreased and finally converged at around 10 degrees. Figure 14 shows that the crawler first ran straight, then turned and changed its movement direction. In this way, it can be confirmed that the crawler ran straight or spin-turned based on the edge detection.

Now, focus on the result of the magnetic compass. The time waveform of the azimuth obtained by the magnetic compass during the spin-turn (fig. 13) contains many variations that appear to be noise. The following factors can be considered:

* Changes in body posture due to unevenness of the running road surface.

* Magnetic field changes caused by the left and right motors at the time of the spin-turn.

* Magnetic field changes due to increased power consumption during computation by the Raspberry Pi.

Regarding this point, a certain amount of noise can be reduced by stopping for a certain period of time before the spin-turn, then measuring the offset amount again and correcting it. From these viewpoints, we conclude that the experimental results are reasonable.

4 CONCLUSION

In this study, we considered a method of self-generating a behavioral strategy by having the robot generate a map of its activity field by itself while acting in an unmaintained environment. As a preliminary step, this paper considered an autonomous-driving algorithm for a self-running, crawler-type mobile robot equipped with the compact monocular camera shown in fig. 3. In detail, we considered a turning algorithm that performs a spin-turn, based on the magnetic compass of the MPU-9250, at arbitrary positions determined by image processing and recognition.

In the verification experiments, the effectiveness of the turning algorithm was confirmed by implementation: the crawler spin-turned in the 9 o'clock direction at the point where the boundary area, i.e., the end of the workspace, was recognized, and then ran straight again. As the levee was recognized as the terminal location, as shown in fig. 18, the spin-turning behavior was confirmed, and so was the effectiveness of the proposed method. As future work, we plan to learn actual objects related to grasses, assuming grass-cutting and weeding situations. However, there are cases where the weeds are shaken by wind or other causes; to deal with these problems, the implementation of image sharpening and the evaluation of its usefulness are also subjects for future study. In addition, the proposed system will be separated into a Raspberry Pi for the sensing and communication functions and an Arduino for driving-control functions such as motor control. With this plan, load balancing and real-time performance will be improved.

Moreover, the previous study [12] will be applied to make the behavioral algorithms more efficient, so that the proposed system can travel and work at a constant speed on ramps, including uphill roads such as levees. At the same time, it is necessary to investigate algorithms for introducing multiple robots with different body structures and individual roles into the workspace, so that they work cooperatively and efficiently [13].

ACKNOWLEDGMENT

This work was supported by JST A-STEP 18067541.

REFERENCES

[1] S. Yaghoubi, N. A. Akbarzadeh, S. S. Bazargani, S.S. Bazargani, M. Bamizan, and M. I. Asl, "Autonomous robots for agricultural tasks and farm assignment and future trends in agro robots," International Journal of Mechanical & Mechatronics Engineering, Vol.13 No. 3, pp.1-6, 2013.

[2] K. Hosotani, Y. Takayama, M. Kato, H. Inoue, H. Sori, S. Urushihara, and M. Sugimoto, "Climbing performance of the flexible spiked crawler track," In Proceedings of the 2018 JSME Conference on Robotics and Mechatronics (ROBOMEC2018), pp.1-4, 2A1-G05, 2018.

[3] S. S. Mehta and T. F. Burks, "Vision-based control of robotic manipulator for citrus harvesting," Computers and Electronics in Agriculture, Vol.102, pp.146-158, 2014.

[4] L. Bascetta, M. Baur, and G. Gruosso, "ROBI': A prototype mobile manipulator for agricultural applications," Electronics, MDPI, Vol. 6, No. 39, pp.1-23, 2017.

[5] H. Inoue and T. Noritsugu, "Development of upper-limb power-assist machine using linkage mechanism - Mechanism and its fundamental motion," International Journal of Automation Technology, Vol. 8 Issue 2, pp.193-200, 2014.

[6] Y. Imamura, T. Tanaka, and K. Takizawa, "Field test on KEIROKA (Fatigue-reduction) effect and physical fitness change by assistive device for care work," Journal of Nursing Science and Engineering, Vol. 2 No. 3, pp.142-149, 2015.

[7] Y. Inoki, M. Sugimoto, H. Inoue, M. Kato, H. Sori, S. Urushihara, and K. Hosotani, "A study for construction of autonomous behavior strategy in control of agricultural machinery," In Proceedings of the Japan Society for Precision Engineering 2018 Spring Meeting, pp.835-836, 2018.

[8] M. Sugimoto, Y. Inoki, T. Shirakawa, K. Takeuchi, T. Yamaji, M. Endo, H. Inoue, M. Kato, S. Urushihara, K. Hosotani, and H. Sori, "A Study for Development of Autonomous Paddy-weeding Robot System -An experimentation for autonomously workspace-cognition and counter-rotation turn-," In Proceedings of The Fourth International Conference on Electronics and Software Science, pp.115-124, 2018.

[9] P. Ripka, Magnetic Sensors and Magnetometers (Artech House Remote Sensing Library), Artech House on Demand, 2001.

[10] M. Sugimoto and K. Kurashige, "A Study of Effective Prediction Methods of the State-Action Pair for Robot Control Using Online SVR," Journal of Robotics and Mechatronics, Vol. 27, No. 5, pp.469-479, 2015.

[11] M. Sugimoto, and K. Kurashige, "Future Motion Decisions using State-action Pair Predictions," International Journal of New Computer Architectures and their Applications, Vol. 5 No. 2, pp.79-93, 2015.

[12] M. Sugimoto, N. Iwamoto, R. W. Johnston, K. Kanazawa, Y. Misaki, H. Inoue, M. Kato, H. Sori, S. Urushihara, K. Hosotani, H. Yoshimura, and K. Kurashige, "Dimensionality reduction for state-action pair prediction based on tendency of state and action," International Journal of New Computer Architectures and their Applications, Vol. 7 No. 1, pp.18-28, 2017.

[13] M. Sugimoto, "A study for dynamically adjustmentation for exploitation rate using evaluation of task achievement," International Journal of New Computer Architectures and their Applications, Vol. 8 No. 2, pp.53-60, 2018.

Masashi SUGIMOTO (1*)

Kanta TAKEUCHI (1)

Toshiyuki YAMAJI (1)

Hiroyuki INOUE (3)

Kazunori HOSOTANI (3)

Yasuhiro INOKI (1)

Daiki YOSHIOKA (1)

Mio ENDO (1)

Manabu KATO (3)

Tomoki SHIRAKAWA (1)

Haruki FUKITA (1)

Patchara NAMEEYA (2)

Shiro URUSHIHARA (4)

Hitoshi SORI (3)

(1)Department of Electronic Systems Engineering, National Institute of Technology, Kagawa College, 551 Kohda, Takuma, Mitoyo C., Kagawa Pref., 769-1192 Japan. Email: sugimoto-m@es.kagawa-nct.ac.jp (*: corresponding author)

(2)Department of Electronic and Telecommunication Engineering, Rajamangala University of Technology Thanyaburi, 39 Moo 1, Rangsit-Nakhonnayok Rd, Thanyaburi, Khlong Luang, Pathum Thani, 12110 Thailand. Email: Patchara.Na@hotmail.com

(3)Department of Integrated Science and Technology, National Institute of Technology, Tsuyama College, 624-1 Numa, Tsuyama C., Okayama Pref., 708-8509 Japan. Email: {inoue, kato, hosotani, sori}@tsuyama-ct.ac.jp

(4)Department of Electrical and Computer Engineering, National Institute of Technology, Kagawa College, 355 Chokushi, Takamatsu C., Kagawa Pref., 761-8058, Japan. Email: urushi@t.kagawa-nct.ac.jp
COPYRIGHT 2018 The Society of Digital Information and Wireless Communications