Study on the Detection of Dairy Cows' Self-Protective Behaviors Based on Vision Analysis.

1. Introduction

In recent years, the income of individual dairy cow breeders has declined as feed prices and labor costs have risen, which has driven the rapid development of large-scale farming [1]. Although large-scale centralized breeding has many advantages, it also brings new problems. In particular, the excessive concentration of excreta breeds large numbers of harmful dipteran insects, which attack and stress dairy cows. This not only changes the cows' normal behavior but also reduces feed intake and milk production, disturbs homeostasis, and weakens immunity. The spread of disease through bites severely constrains the health and productivity of dairy cows and hinders the development of the dairy industry [2-4]. Self-protective behavior is the instinct of animals to protect their own bodies and maintain stable physiological indices. Studies show that when dipteran insects attack, dairy cows exhibit defensive self-protective behaviors such as tail swishing, head throwing, leg stamping, ear twitching, and skin twitching [5]. The frequency of these self-protective behaviors is positively correlated with the number of dipteran insects irritating the cows' skin. Observing the behavior of dairy cows helps to understand their activities and living patterns under different conditions and to predict their future behaviors [6]. Research on the self-protective behaviors of dairy cows therefore helps to evaluate the breeding environment and animal welfare status, which has great practical significance.

Milk production differs greatly among individual dairy cows, and this is determined in part by temperament type [7]. Ao Ri's team proposed that the degree to which dairy cows are affected by insects is related to temperament type: even when cows of different temperament types are kept under the same environmental conditions, the self-protective behaviors they display when exposed to insects differ markedly [8]. Studying these differences is therefore of great significance for temperament-based breeding. At present, most research in this area relies on manual observation [2-8], which usually requires at least two people. Manual observation is labor-intensive and offers low efficiency and accuracy, which restricts the progress of such research.

Research on the behavior of dairy cows is mainly carried out by monitoring physical parameters and physiological indicators, and there are two main methods for automatic monitoring. One is to attach sensors to the cows, but interference from the sensors being shaken or bumped affects the accuracy of the data. The other is machine vision, but the existing literature does not report automatic detection of dairy cows' self-protective behaviors. For example, Nadimi et al. used wireless sensors to measure the rotational angle and speed of cows' necks [9]. Kwong et al. monitored cows' disease and lameness through wireless sensor networks [10]. Martiskainen et al. used a three-dimensional accelerometer and support vector machines to automatically recognize cows' daily behavior patterns [11]. In 2016, Liu et al. proposed a dynamic background modeling method based on a Gaussian mixture model that can effectively track a single moving cow for 15 seconds, but it cannot track a stationary foreground object for a long time [12]. Gu et al. analyzed the movement behavior of cows in a passageway by detecting the characteristics of the cows' hooves and backs based on the minimum bounding box and contour map [13]. Xiao et al. used ellipse fitting and minimum-distance matching that minimizes a cost function to track targets after multiobjective segmentation, which effectively detects pigs' motion information [14]. However, they noted that automatic single-threshold foreground segmentation performs poorly on colored pigs; it is likewise unsuitable for the black-and-white bodies of the cows studied in this article. Using a Kinect-based multisensor 3D image acquisition device, Zhao et al. achieved accurate segmentation of body regions such as the head, neck, torso, and limbs of dairy cows in depth images [15]. Deng et al. identified cows' body parts from depth images; however, such depth acquisition devices cannot capture the movement of tail swishing, so this approach is not suitable for detecting cows' self-protective behaviors [16].

Many methods have proven effective for recognizing actions in videos [17], and artificial neural networks are a common tool for action recognition [18]. Most studies of cow tracking are based on target segmentation; there are also studies on segmenting and identifying cows' body parts in depth images. The head and trunk are easy to identify because of their distinct image characteristics, and many scholars have identified cows' body parts, but they have been unable to detect tail features, because the image regions of the tail and legs are very similar. Detecting tail features matters because tail swishing is one of the most frequent and characteristic self-protective movements. There is therefore still considerable research space and practical need for identifying cows' self-protective behaviors.

Tail swishing, head throwing, and leg stamping are the most typical self-protective behaviors in dairy cows, and they differ greatly from commonly studied behaviors such as walking, standing, lameness, lying, and resting. According to experts, each bout of self-protective behavior generally lasts only about 1-2 seconds, so the algorithm must respond quickly and accurately. The complexity of the scene, occlusion, and the low contrast between the black-and-white bodies of the cows and the background all add to the difficulty of identifying self-protective behavior.

Computer vision technology can reduce the reliance on manual labor. In high-intensity, repetitive production processes, machine vision can greatly free up the labor force and improve automation efficiency. It is therefore worthwhile to study a method based on video analysis and tracking to replace traditional observational analysis.

2. Principles and Methods

Using the optical flow method to track every pixel in the image makes processing very slow, while too few corner points yield incomplete target information and reduce detection accuracy. Combining the morphological characteristics of head, leg, and tail self-protective behaviors, the optical flow tracking algorithm based on the Shi-Tomasi corner detection method is improved by eliminating nontarget corner points and replenishing target corner points, which improves both computational efficiency and the accuracy of moving-target detection. The flow chart of the proposed method is shown in Figure 1, and the specific steps are as follows (a minimal code sketch of the whole loop follows the list):

(1) The Shi-Tomasi corner detection method detects the feature points, and the pyramid Lucas-Kanade optical flow algorithm locates and tracks them across consecutive video frames.

(2) Delete corner points that do not move continuously, using the motion information between frames.

(3) Retain a feature point only if the length of its tracked trajectory exceeds a threshold; this excludes feature points that do not belong to the foreground target.

(4) When the number of corner points is too small, corner detection is performed again to provide a sufficient number of feature points for optical flow detection.

(5) Use the bounding box of the corner points to track the self-protective behavior; the box effectively distinguishes different behavior characteristics and provides the necessary feature vector parameters.
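For concreteness, the following is a minimal sketch of steps (1)-(5) using the Python bindings of OpenCV (the paper's implementation was built in Visual Studio 2012 with the C++ OpenCV library). The input file name and all numeric thresholds are illustrative assumptions, not the paper's tuned values, and the grace period for newly detected points is our own addition so that fresh corners are not discarded before they can accumulate a trajectory.

```python
import cv2
import numpy as np

MIN_CORNERS = 30     # assumed floor before re-detection (step 4)
MOVE_THRESH = 5.0    # assumed trajectory-length threshold in pixels (step 3)
GRACE = 5            # assumed frames a new point may stay below the threshold

def detect(gray):
    """Step 1: Shi-Tomasi corners as feature points."""
    c = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                qualityLevel=0.01, minDistance=7)
    return c if c is not None else np.empty((0, 1, 2), np.float32)

cap = cv2.VideoCapture("cow.avi")          # hypothetical input file
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = detect(prev)
dist = np.zeros(len(pts))                  # accumulated path length per point
age = np.zeros(len(pts), int)              # frames each point has survived

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if len(pts):
        # Step 1 (cont.): pyramid Lucas-Kanade tracking between frames
        nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
        ok_idx = st.ravel() == 1
        step = np.linalg.norm((nxt - pts).reshape(-1, 2), axis=1)
        pts = nxt[ok_idx]
        dist = dist[ok_idx] + step[ok_idx]
        age = age[ok_idx] + 1

        # Steps 2-3: drop points whose accumulated trajectory stays short,
        # i.e. points that do not keep moving (the grace period is ours)
        keep = (age < GRACE) | (dist > MOVE_THRESH)
        pts, dist, age = pts[keep], dist[keep], age[keep]

    # Step 4: replenish feature points when too few survive
    if len(pts) < MIN_CORNERS:
        fresh = detect(gray)
        pts = np.vstack([pts, fresh]) if len(pts) else fresh
        dist = np.concatenate([dist, np.zeros(len(fresh))])
        age = np.concatenate([age, np.zeros(len(fresh), int)])

    # Step 5: bounding box of the surviving moving feature points
    if len(pts):
        p = pts.reshape(-1, 2)
        x0, y0 = p.min(axis=0)
        x1, y1 = p.max(axis=0)
        cv2.rectangle(frame, (int(x0), int(y0)), (int(x1), int(y1)),
                      (0, 255, 0), 2)

    cv2.imshow("self-protective behavior tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:       # Esc to quit
        break
    prev = gray

cap.release()
cv2.destroyAllWindows()
```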

The Shi-Tomasi corner detection algorithm is an improvement of the Harris corner detection algorithm [19]. Compared with Harris corner detection, a nonmaximum suppression step is added and adjacent Harris corners are removed. This solves the problem of clustered feature points, so corner points are distributed more uniformly over the head, leg, and tail areas while their total number is reduced.
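In OpenCV both scoring functions are exposed through the same corner detector, which makes the comparison easy to try; the snippet below is a sketch with a placeholder image path. Note that OpenCV's minDistance parameter provides the spatial suppression of adjacent corners described above.

```python
import cv2

gray = cv2.imread("cow_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder frame

# Shi-Tomasi score: the minimum eigenvalue of the local gradient matrix
shi = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

# Harris score: det(M) - k * trace(M)^2, selected via useHarrisDetector
harris = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7,
                                 useHarrisDetector=True, k=0.04)

# minDistance suppresses adjacent corners, spreading points more uniformly
print(len(shi), "Shi-Tomasi corners;", len(harris), "Harris corners")
```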

The Lucas-Kanade optical flow algorithm [20] estimates the motion of feature points between two frames and is used for sparse optical flow tracking with a relatively small number of feature points. The pyramid Lucas-Kanade optical flow algorithm extends the basic Lucas-Kanade method; it has a wider range of application and handles fast or discontinuous motion well. Pyramid optical flow tracking starts at the coarsest spatial scale and then progressively refines the motion estimate down the pyramid.
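The pyramid behavior is controlled by a single parameter in OpenCV's implementation; this two-frame sketch (placeholder file names) shows how raising maxLevel lets the tracker follow larger displacements such as a fast tail swish.

```python
import cv2

f0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
f1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
p0 = cv2.goodFeaturesToTrack(f0, maxCorners=100,
                             qualityLevel=0.01, minDistance=7)

# maxLevel=0 is plain single-scale Lucas-Kanade; maxLevel=3 performs a
# coarse-to-fine estimate over four pyramid levels
for lvl in (0, 3):
    p1, st, err = cv2.calcOpticalFlowPyrLK(f0, f1, p0, None,
                                           winSize=(21, 21), maxLevel=lvl)
    print(f"maxLevel={lvl}: {int(st.sum())} of {len(p0)} points tracked")
```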

The improved tracking method proposed in this paper greatly reduces the computational complexity of the tracking algorithm. Although the number of feature points is reduced, their distribution is optimized, and the tracked feature points accurately represent the pixels of the self-protective behavior.

3. Test and Analysis

3.1. Test Conditions. Video frames are 1440 × 1080 pixels. Computer configuration: Intel Core i7-4790 CPU @ 3.6 GHz, 16 GB RAM, NVIDIA GeForce 9500 GT graphics card with 512 MB of memory, Windows 10 operating system, Visual Studio 2012 with the OpenCV library, and the MATLAB R2017a neural network toolbox for building the artificial neural network model.

The experimental site is Xuri Ranch, Tuoketuo County, Hohhot, Inner Mongolia. An independent 2.5 m × 2.5 m semiclosed cowshed was built, with its exterior surrounded by gauze mesh. A Panasonic HDC-HS100 HD digital video camera was used to record 10 healthy Holstein cows, one cow at a time. Because self-protective behavior involves fine body movements, the camera was placed within 3 m of the cow at a height of about 1.4 m and recorded at 25 frames per second.

3.2. Test of Feature Point Detection Algorithm. This article analyzes the typical characteristics of dairy cows' self-protective behavior, which is sudden and instantaneous. To improve the efficiency of the system, the number of feature points can be optimized. When the target is first tracked, corner detection produces many corner points in the initial image (Figure 2(a)), and many of them do not lie in the foreground target area. When the optical flow method tracks all of these feature points, the many nonmoving points waste computing resources. Figure 2(b) shows the trajectory map obtained by the optical flow method: many moving feature points lose tracking after a while, and effective feature points are not replenished in time, so the behavior fails to be tracked. Using the motion information between frames, deleting corners that do not keep moving effectively reduces the number of corner points; when the number of effective corner points becomes too small, corner detection is performed again to supply enough feature points for optical flow detection. This reduces the complexity of the algorithm and improves detection efficiency. The small circles in Figure 2(c) are the detected feature points, most of which describe self-protective behavior characteristics. At the same time, some nonforeground targets, such as gauze blown by the wind in the background, are also detected. To prevent these nonforeground targets from interfering with detection, a feature point is retained only if its tracking distance exceeds the given threshold, which effectively reduces the number of nonforeground feature points such as those on the gauze. Figure 2(d) shows the result of tracking self-protective behavior with a bounding box; the box effectively describes the behavior characteristics and provides the necessary feature vector parameters.

3.3. Detection of Self-Protective Behavior in Complex Conditions. At the feeding fence, parts of the cows' bodies are often occluded by the railings. The algorithm proposed in this paper can detect the occluded body in such a complex environment: corner points still attach to the unoccluded body areas, nonmoving feature points are eliminated first, and continuously moving feature points can still be tracked effectively by the optical flow method. Some feature points lose their tracking information when they move into the occluded area, but continually replenishing new feature points ensures continuous tracking of the moving self-protective behavior. By extracting the feature vector of the bounding box, the common trajectory of multiple moving feature points is treated as a whole, so occluded self-protective behavior can still be detected effectively.

In Figure 3(a), tracking works well when the self-protective behavior features are not occluded. Even when the cow faces away from the camera and the tail is occluded by the fence, the bounding box still describes the characteristics of the movement, and the self-protective behavior is detected effectively and in time.

In Figure 3(b), the guardrails occlude parts of the cow's head and body, but this does not affect the algorithm's detection: it continues tracking the feature points of the cow's head and tail movements, follows the movement trajectory, and accurately identifies the self-protective behavior.

3.4. Artificial Neural Network. Twenty typical unoccluded videos were selected, yielding 2840 training samples of tail swishing, 2310 of leg stamping, and 1160 of head throwing. A 9-dimensional feature vector is extracted: the rotation angle, moving distance, horizontal displacement, and vertical displacement of the feature points; the ordinates of the starting point and end point; and the area, height-to-width ratio, and vertical height difference of the bounding box. To identify cows' self-protective behaviors automatically, an artificial neural network model was built with the MATLAB R2017a neural network toolbox on the basis of these extracted feature values. The input layer of the network has 9 nodes (the 9-dimensional feature vector), and the output layer has 3 nodes (tail swishing, head throwing, and leg stamping).
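The paper names the nine features but not their exact formulas, so the helper below is only our hedged reading of them, computed from a hypothetical sequence of per-frame bounding boxes for one behavior clip.

```python
import numpy as np

def feature_vector(boxes):
    """boxes: array of shape (T, 4) with per-frame (x, y, w, h).
    Returns one 9-dimensional feature vector for the clip.
    Note: image ordinates grow downward."""
    b = np.asarray(boxes, float)
    cx = b[:, 0] + b[:, 2] / 2                    # box-center abscissas
    cy = b[:, 1] + b[:, 3] / 2                    # box-center ordinates
    dx, dy = cx[-1] - cx[0], cy[-1] - cy[0]
    return np.array([
        np.degrees(np.arctan2(dy, dx)),           # rotation angle of motion
        np.hypot(np.diff(cx), np.diff(cy)).sum(), # moving distance (path length)
        dx,                                       # horizontal displacement
        dy,                                       # vertical displacement
        cy[0],                                    # starting-point ordinate
        cy[-1],                                   # end-point ordinate
        (b[:, 2] * b[:, 3]).mean(),               # mean bounding-box area
        (b[:, 3] / b[:, 2]).mean(),               # height-to-width ratio
        cy.max() - cy.min(),                      # vertical height difference
    ])
```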

To use this artificial neural network model, the proportions of training, validation, and test samples must be set; this article uses 80% for training, 10% for validation, and 10% for testing. The smaller the cross-entropy, the better the classification. To prevent overfitting, the validation set is used to improve the accuracy of the classifier model: if the cross-entropy does not decrease for 6 consecutive iterations, training stops. As shown in Figure 4(a), that model stops after 19 iterations with a minimum cross-entropy of 0.010578, and the best validation performance occurs at epoch 13. The model in Figure 4(b) converges and stops after 54 iterations with a cross-entropy of 0.0001927, indicating higher accuracy.
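The paper trains this network with the MATLAB toolbox; as an analogous sketch in Python, scikit-learn's MLPClassifier can reproduce the same training protocol (9 inputs, 3 classes, a held-out validation split, and stopping after 6 iterations without improvement). The hidden-layer size and the placeholder data below are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(6310, 9))      # placeholder for the 6310 feature vectors
y = rng.integers(0, 3, size=6310)   # placeholder labels: 0/1/2 = the 3 behaviors

# Hold out 10% of all samples for the final test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10)

clf = MLPClassifier(hidden_layer_sizes=(10,),  # hidden size is an assumption
                    early_stopping=True,       # carve a validation set from training
                    validation_fraction=1/9,   # 1/9 of 90% is ~10% of all samples
                    n_iter_no_change=6,        # stop after 6 flat iterations
                    max_iter=500)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```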

The receiver operating characteristic (ROC) curve can be used to judge the quality of the classifier model, with the true positive rate (TPR) on the ordinate and the false positive rate (FPR) on the abscissa. The ROC curve is plotted in Figure 5. The artificial neural network model has high accuracy and can serve as an effective detection and identification model for dairy cows.
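As a sketch of how per-class ROC curves could be reproduced outside MATLAB, the snippet below continues from the classifier above (clf, X_test, y_test) and uses scikit-learn's roc_curve; the class ordering is an assumption.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

scores = clf.predict_proba(X_test)                 # class probabilities, (n, 3)
y_bin = label_binarize(y_test, classes=[0, 1, 2])  # one column per behavior
for c, name in enumerate(["tail swishing", "head throwing", "leg stamping"]):
    fpr, tpr, _ = roc_curve(y_bin[:, c], scores[:, c])
    plt.plot(fpr, tpr, label=f"{name} (AUC={auc(fpr, tpr):.3f})")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```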

4. Discussion

To evaluate the algorithm accurately, this section compares its accuracy and running time with those of other methods and with manual statistics.

4.1. Contrast with Artificial Experience. To evaluate the accuracy of the detection algorithm, videos of typical self-protective behaviors were selected from the 100 hours of collected video: 20 sets of video samples, each including tail swishing, head throwing, and leg stamping. Because other video detection methods cannot effectively identify self-protective behaviors, this article compares only against manual statistics. When a behavior is manually marked as one of the three self-protective behaviors, it is counted as TP (True Positive) if the system also recognizes it as the corresponding behavior, and as FN (False Negative) otherwise. When a behavior is not manually marked as one of the three self-protective behaviors, it is counted as FP (False Positive) if the system nevertheless reports a self-protective behavior, and as TN (True Negative) otherwise. Precision reflects the proportion of the system's detections that are truly self-protective behaviors, and recall reflects the proportion of manually marked behaviors that the system correctly detects: precision = TP / (TP + FP); recall = TP / (TP + FN).
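As a tiny worked example of the two formulas (the counts here are made up, not the paper's):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

p, r = precision_recall(tp=44, fp=2, fn=3)   # hypothetical tallies
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.96, recall=0.94
```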

Figure 6 shows the numbers of tail swishing, head throwing, and leg stamping incidents counted by the proposed algorithm and by manual statistics. As can be seen from Figure 6, the precision of the detection algorithm lies in the range [0.88, 1] for tail swishing, head throwing, and leg stamping, and the recall lies in [0.87, 1], indicating that the detection algorithm is close to the manual statistics.

4.2. Comparison of Detection Effect. Figure 7(a) shows the result of an adaptive-threshold frame difference method. Frame differencing can detect moving feature points while a cow is walking, but it cannot distinguish the characteristic pixels of a self-protective behavior from those of normal motion. Figure 7(b) shows the result of a Gaussian mixture model. Although the pixel feature points of the self-protective behavior were detected, erroneous detections due to body shaking and jittering could not be avoided; these spurious feature points cause heavy interference, making it impossible to track and identify the self-protective behavior effectively.
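Both baselines are easy to reproduce with OpenCV, which is useful context for Figure 7. The sketch below uses a fixed difference threshold where the paper's variant adapts it per frame; the file name and parameters are placeholders.

```python
import cv2

cap = cv2.VideoCapture("cow.avi")   # placeholder input
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Frame difference: any sufficiently changed pixel counts as motion, so
    # walking and self-protective movements are indistinguishable
    _, diff_mask = cv2.threshold(cv2.absdiff(gray, prev), 25, 255,
                                 cv2.THRESH_BINARY)

    # Gaussian mixture model: per-pixel background model; body shake and
    # jitter also surface as foreground, the interference noted above
    gmm_mask = backsub.apply(frame)

    cv2.imshow("frame difference", diff_mask)
    cv2.imshow("GMM foreground", gmm_mask)
    prev = gray
    if cv2.waitKey(30) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()
```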

Figure 7(c) shows the result of the improved Shi-Tomasi corner detection optical flow tracking algorithm proposed in this paper, based on the motion characteristics of the swinging tail. The box is the tail bounding box, the dots are the tracked feature points, and the line is the tail's trajectory. With this representation, the position and trajectory of the tail can be clearly observed, and the pixels carrying the self-protective behavior characteristics are traced effectively, which reduces the number of corner points and the computational complexity of the algorithm while improving detection accuracy. The method outperforms the adaptive-threshold frame difference method and the Gaussian mixture background model: it captures the characteristics of self-protective behaviors and enables intelligent identification and classification.

4.3. Comparison of Time Complexity. To compare the running speeds of different detection algorithms, we randomly selected 10 groups of videos from the 100 hours of collected video. The average running time of each of the 6 segmentation algorithms is shown in Figure 8.

The average running times are as follows: direct optical flow method, 1835 ms; Harris corner optical flow method, 1104 ms; adaptive frame difference method, 214 ms; Gaussian mixture model, 616 ms; Shi-Tomasi corner optical flow algorithm, 984 ms; and the proposed method, 251 ms. The results show that the proposed algorithm is stable and effective, and that it runs far faster than the other optical flow variants.
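Average running time can be measured the same way for any of the methods; a minimal harness follows (the per-frame callable and the frame list are whatever is being compared).

```python
import time

def mean_runtime_ms(process_frame, frames):
    """Average per-frame processing time in milliseconds."""
    t0 = time.perf_counter()
    for f in frames:
        process_frame(f)
    return 1000.0 * (time.perf_counter() - t0) / len(frames)

# e.g. mean_runtime_ms(backsub.apply, frames) for the Gaussian mixture model
```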

5. Conclusion

This paper proposes an improved optical flow tracking algorithm based on Shi-Tomasi corner detection for intelligent classification and identification of three typical self-protective behaviors in cows. The main conclusions are as follows:

(1) The targets detected in this paper are three typical self-protective behaviors in dairy cows: tail swishing, head throwing, and leg stamping. Feature vectors are extracted from the movement characteristics to capture the outline of the target, and an artificial neural network classifier built on these features effectively classifies tail swishing, head throwing, and leg stamping.

(2) By combining the morphological characteristics of head, leg, and tail movements, the interference of background movement is eliminated, and effective corner points are replenished in time to ensure detection accuracy. The test results show that, compared with manual statistics, the precision of this method lies in [0.88, 1] and the recall in [0.87, 1].

(3) By removing nonmoving feature points and applying a threshold on the tracking-trajectory displacement, the computation required for corner detection is greatly reduced. The experimental results show that this method runs faster than the Shi-Tomasi corner optical flow algorithm, saving time and greatly improving efficiency. Occluded self-protective behaviors are also detected more accurately, which shows that the proposed detection algorithm is robust.

The method can automatically and accurately track and detect cows' self-protective behaviors, providing effective and accurate statistics for animal behavior researchers and supporting further research in this field.

https://doi.org/10.1155/2018/9106836

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

[1] G. X. Qian and W. Z. Zhao, "Investigation and analysis of cow scale farming in Inner Mongolia [J]," China Dairy Cattle, vol. 6, pp. 25-28, 2014.

[2] F. X. Wei, Effects of Diptera Insect Invasion on Fly-Repelling, Skin Temperature, Heart Rate and Immune Function of the Cattle [D], Inner Mongolia Agricultural University, Hohhot, China, 2014.

[3] Y. M. Li, R. Ao, and J. C. Wang, "Effects of fly activities on the self-protective behavior and cytokines of Holstein cows [J]," Feed Research, vol. 10, pp. 54-57, 2010.

[4] X. Liang, Research on Disservice of Sociality Bunching Behavior Induced by Diptera Insects in Their Active Stage of Holstein Dairy Cow [D], Inner Mongolia Agricultural University, Hohhot, China, 2009.

[5] M. L. Zhang, Cattle Behavioral Response to the Stress Caused by the Diptera Invasion and Its Influence on Neuroendocrine System [D], Inner Mongolia Agricultural University, Hohhot, China, 2014.

[6] T. Okumura, "The Relationship of Attacking Fly Abundance to Behavioral Responses of Grazing Cattle," Japanese Journal of Applied Entomology and Zoology, vol. 21, no. 3, pp. 119-122, 1977.

[7] W. S. Ma, Q. G. Zhang, and W. H. Wei, "Advance the pasture digital technology to realize the fine management of dairy cows [J]," Shandong Journal of Animal Science, p. 84, 2012.

[8] W. Wang, C. Lin, and Y. Zheng, "Experiment and analysis of parameters in particle swarm optimization," Journal of Xihua University, vol. 27, pp. 76-80, 2008 (Chinese).

[9] E. S. Nadimi, H. T. Sogaard, and T. Bak, "ZigBee-based wireless sensor networks for classifying the behaviour of a herd of animals using classification trees," Biosystems Engineering, vol. 100, no. 2, pp. 167-176, 2008.

[10] K. H. Kwong, H. G. Goh, C. Michie et al., "Wireless sensor networks for beef and dairy herd management," in Proceedings of the 2008 ASABE Annual International Meeting, Providence, RI, USA, 2008.

[11] P. Martiskainen, M. Jarvinen, J.-P. Skon, J. Tiirikainen, M. Kolehmainen, and J. Mononen, "Cow behaviour pattern recognition using a three-dimensional accelerometer and support vector machines," Applied Animal Behaviour Science, vol. 119, no. 1-2, pp. 32-38, 2009.

[12] D. Liu, K. Zhao, and D. He, "Real-time target detection for moving cows based on Gaussian mixture model," Nongye Jixie Xuebao/Transactions of the Chinese Society for Agricultural Machinery, vol. 47, no. 5, pp. 288-294, 2016.

[13] J. Gu, Z. Wang, R. Gao, and H. Wu, "Recognition Method of Cow Behavior Based on Combination of Image and Activities," Nongye Jixie Xuebao/Transactions of the Chinese Society for Agricultural Machinery, vol. 48, no. 6, pp. 145-151, 2017.

[14] D. Q. Xiao, J. A. Feng, and M. Q. Yang, "Fast motion detection for pigs based on video tracking [J]," Transactions of the Chinese Society for Agricultural Machinery, vol. 47, no. 10, pp. 351-357, 2016.

[15] K. Zhao, G. Li, and D. He, "Fine Segment Method of Cows' Body Parts in Depth Images Based on Machine Learning," Nongye Jixie Xuebao/Transactions of the Chinese Society for Agricultural Machinery, vol. 48, no. 4, pp. 173-179, 2017.

[16] B. H. Deng, Y. T. Xu, and C. Y. Zhou, "Body shape parts recognition of moving cattle based on DRGB [J]," Transactions of the Chinese Society of Agricultural Engineering, vol. 34, no. 5, p. 166, 2018.

[17] H. Wang, A. Klaser, C. Schmid, and C.-L. Liu, "Action recognition by dense trajectories," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 3169-3176, June 2011.

[18] K. Simonyan and A. Zisserman, "Two-stream convolutional networks for action recognition in videos," in Proceedings of the 28th Annual Conference on Neural Information Processing Systems 2014, NIPS 2014, pp. 568-576, Canada, December 2014.

[19] J. Shi and C. Tomasi, "Good features to track," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 593-600, June 1994.

[20] J.-Y. Bouguet, "Pyramidal implementation of the Lucas-Kanade feature tracker: description of the algorithm," OpenCV Documentation, 1999.

Jia Li, (1) Pei Wu, (1) Feilong Kang, (1) Lina Zhang, (1,2) and Chuanzhong Xuan (1)

(1) College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Inner Mongolia Engineering Research Center for Intelligent Facilities in Grass and Livestock Breeding, Hohhot 010018, China

(2) College of Physics and Electronic Information, Inner Mongolia Normal University, Hohhot 010022, China

Correspondence should be addressed to Pei Wu; jdwp@imau.edu.cn

Received 3 May 2018; Accepted 25 September 2018; Published 10 October 2018

Guest Editor: Chen Gong

Caption: Figure 1: The flow chart of the proposed method.

Caption: Figure 2: Tracking of self-protective behavior feature points.

Caption: Figure 3: Detection of self-protective behavior under shielded conditions.

Caption: Figure 4: Cross-entropy curves of different training samples.

Caption: Figure 5: ROC curve after 54 iterations.

Caption: Figure 6: Comparison of accuracy obtained by detection algorithm and manual statistics.

Caption: Figure 7: Comparative analysis of different video tracking detection algorithms.

Caption: Figure 8: Comparison of running speeds for different detection algorithms.