Article

A Machine Vision-Based Method for Monitoring Scene-Interactive Behaviors of Dairy Calf

by Yangyang Guo, Dongjian He and Lilong Chai
1 College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling 712100, China
2 Key Laboratory of Agricultural Internet of Things, Ministry of Agriculture and Rural Affairs, Yangling 712100, China
3 Shaanxi Key Laboratory of Agricultural Information Perception and Intelligent Service, Yangling 712100, China
4 Department of Poultry Science, College of Agricultural and Environmental Sciences, University of Georgia, Athens, GA 30602, USA
* Authors to whom correspondence should be addressed.
Animals 2020, 10(2), 190; https://doi.org/10.3390/ani10020190
Submission received: 16 December 2019 / Revised: 15 January 2020 / Accepted: 20 January 2020 / Published: 22 January 2020

Simple Summary

Requirements for dairy products are increasing gradually in emerging economies such as China, so it is critical to monitor and maintain the health and welfare of the growing population of dairy cattle, especially dairy calves (over 20% mortality). In this study, a new method was built by combining the background-subtraction and inter-frame difference methods to monitor the behaviors of a dairy calf. By using the new model and the motion characteristics of the calf in different areas of the enclosure, the scene-interactive behaviors of entering or leaving the resting area, turning around, and remaining stationary (no movement) were identified automatically with a 93–97% success rate. This newly developed method provides a basis for inventing evaluation tools to monitor calves’ health and welfare on dairy farms.

Abstract

Requirements for animal and dairy products are increasing gradually in emerging economies. However, it is critical and challenging to maintain the health and welfare of the increasing population of dairy cattle, especially dairy calves (up to 20% mortality in China). Animal behaviors carry considerable information and are used to estimate animal health and welfare. In recent years, machine vision-based methods have been applied to monitor animal behaviors worldwide: collected image or video information containing animal behaviors can be analyzed with computer programs to estimate animal welfare or health indicators. In this study, a new machine vision-based method (i.e., an integration of background subtraction and inter-frame difference) was developed for automatically recognizing dairy calf scene-interactive behaviors (e.g., entering or leaving the resting area, and stationary and turning behaviors at the inlet and outlet of the resting area). Results show that the recognition success rates for the calf’s scene-interactive behaviors of pen entering, pen leaving, staying (standing or lying static behavior), and turning were 94.38%, 92.86%, 96.85%, and 93.51%, respectively. The recognition success rates for feeding and drinking were 79.69% and 81.73%, respectively. This newly developed method provides a basis for inventing evaluation tools to monitor calves’ health and welfare on dairy farms.

1. Introduction

The global population is predicted to reach 9.5 billion by 2050, and the demand for animal protein (e.g., eggs, meat, and milk) is expected to increase by over 70% in 2050 as compared to 2005 [1]. Providing food for the increasing world population with limited natural resources is a grand challenge for animal agriculture. Requirements for dairy products (e.g., milk) are increasing gradually in emerging economies such as China. However, it is critical and challenging to maintain the health and welfare of the increasing population of dairy cattle, as the combined mortality rate of dairy calves and replacement heifers in Chinese Holstein cattle was about 21% [2].
In recent years, animal imaging analysis or phenotyping has been tested for monitoring animal behavior and health on dairy farms [3,4,5,6,7]. The behaviors of dairy cattle reflect their physiological and welfare conditions and can thus be applied to improve our understanding of farm animal production, health, and welfare [8]. Traditionally, direct-contact methods (i.e., attaching sensors directly to the animal’s body) have been the primary form of animal welfare monitoring [9,10,11,12], which may affect animal welfare or health over time. With advances in machine learning technology, image or video information can be analyzed to recognize and classify specific animal behaviors [13,14,15,16,17,18,19].
Progress has been made in analyzing animal scene-interactive behaviors such as feeding, drinking, and locomotion [20,21,22]. By analyzing image/video and scene information of the drinking area, the location and drinking behavior of cows were characterized [23]. Based on the top view of a bull barn, Meunier et al. [24] divided the barn layout into eating, walkway, resting, and milking areas; positional information for dairy cows was obtained with a real-time location system, which generated the real-time behavioral status of cows, the time spent on each individual behavior such as feeding and resting, and interactive behaviors. The behaviors of individual and grouped calves differ from those of dairy cows in many ways (e.g., body size, feeding/drinking, and locomotion), which warrants devising a new monitoring system for calves. As about 50%–60% of dairy production mortality occurs at the calf stage, calf management determines the general economic performance of a dairy farm [25,26]. Therefore, monitoring the interactive behaviors of dairy calves in their living scene (e.g., the calf pen) will provide health and welfare information for producers to improve the early-stage management of on-farm dairy production.
The objectives of this study were to (1) improve the background model for analyzing dairy calf behaviors based on collected image/video information in a scene/pen; and (2) test the newly developed method for recognizing the calf’s entering/leaving, resting, and drinking/feeding behaviors in the pen.

2. Materials and Methods

2.1. Experimental Setup and Image Collection

Two cameras (DS-2CD4012, Hikvision, Hangzhou, China) were set up for video/image collection on a commercial dairy farm (Keyuan Clone Ltd., Yangling, China) to monitor a two-month-old Holstein dairy calf in a rectangular fenced enclosure (4 × 2 × 1.5 m). The experimental setup is shown in Figure 1. Camera A, on the length side of the fence, monitored the calf’s activity from the side with a wide angle of view. It was mounted at half the fence height (i.e., 0.75 m), and its horizontal distance to the fence was set to cover the whole activity area of the calf. Camera B was positioned at a height of 1.8 m on the short side of the fence, inclined slightly downwards, and monitored the calf’s eating and drinking behaviors, as shown in Figure 1 and Figure 2. Image/video data were collected from 07:00 to 18:00 h each day in July 2013, with a single video file generated per day. Video was captured at 25 frames/s and 2000 kb/s, with a resolution of 704 (horizontal) × 576 (vertical) pixels (PAL format). The data-processing computer had a CPU (Intel Core i5-2400, 3.2 GHz), 8 GB of memory, and a 500 GB hard disk. Sample data were read and processed using MATLAB 2014b.
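For orientation, the sketch below shows how sampled frames could be read from one day’s recording for offline processing in MATLAB; the file name and the one-frame-per-second sampling step are illustrative assumptions, not details from the paper.

```matlab
% Minimal sketch: read sampled frames from one day's video for offline
% processing. File name and sampling step are illustrative assumptions.
v = VideoReader('calf_2013-07-01.avi');   % 25 frames/s, 704 x 576 (PAL)
step = 25;                                % one frame per second (assumption)
k = 0;
while hasFrame(v)
    frame = readFrame(v);                 % 576 x 704 x 3 RGB frame
    k = k + 1;
    if mod(k, step) == 1
        gray = rgb2gray(frame);           % grayscale input for detection
        % ... pass 'gray' to the target-detection pipeline (Section 2.2) ...
    end
end
```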

2.2. Calf-Target Detection Method

Common target-detection methods include the inter-frame difference [27], background subtraction [28], the Gaussian Mixture Model [29], and ViBe [30]. The Gaussian Mixture Model and ViBe methods have been used to detect moving targets but are not efficient enough at monitoring the stationary status of animals. The background-subtraction method is able to detect stationary targets but is susceptible to background interference. The inter-frame difference method has stronger anti-interference properties but cannot detect stationary targets. In this study, the background-subtraction and inter-frame difference methods were integrated to rebuild the background model for individual calf detection. The image-processing steps were as follows (a condensed code sketch is given after the list):
(1) Median filtering was performed on the video frames; RGB images were converted to grayscale images, then the background frame was selected, background subtraction was performed, and small areas were removed;
(2) Otsu’s method was used to segment the image. A square 4 × 4 pixel element was selected for the closing operation and hole filling;
(3) The top, bottom, left, and right borders of the non-zero region were expanded outward by five pixels to obtain the new borders of a search box containing as much of the target region as possible. If a border of the search box overlapped with the image border, the image border was taken as the border of the search box (Equation (1)):
$$
\begin{cases}
U_{end} = U_{test} - 5, & \text{if } U_{test} - 5 \ge 0;\ \text{otherwise } U_{end} = 0 \\
D_{end} = D_{test} + 5, & \text{if } D_{test} + 5 \le 576;\ \text{otherwise } D_{end} = 576 \\
L_{end} = L_{test} - 5, & \text{if } L_{test} - 5 \ge 0;\ \text{otherwise } L_{end} = 0 \\
R_{end} = R_{test} + 5, & \text{if } R_{test} + 5 \le 704;\ \text{otherwise } R_{end} = 704
\end{cases} \tag{1}
$$
where $U_{end}$, $D_{end}$, $L_{end}$, and $R_{end}$ are the top, bottom, left, and right boundaries of the target area, and $U_{test}$, $D_{test}$, $L_{test}$, and $R_{test}$ are the top, bottom, left, and right boundaries of the non-zero area.
(4) Using the above steps, the target area was detected and extracted, as shown in Figure 3b, and the parts outside the target area were obtained (Figure 3c). The region corresponding to the target region in the previously synthesized background frame (Figure 3d) was extracted (Figure 3e), and a new background frame was synthesized from Figure 3c and Figure 3e as the background image for the next target-detection frame (Figure 3f).
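A condensed MATLAB sketch of steps (1)–(3) follows. The inputs, variable names, and the 50-pixel small-area threshold are our assumptions for illustration; only the 4 × 4 structuring element, the five-pixel expansion, and the 704 × 576 frame size come from the paper (MATLAB uses 1-based indexing, so the lower clamp is 1 rather than 0).

```matlab
% Condensed sketch of steps (1)-(3): median filtering, background
% subtraction, Otsu segmentation, morphology, and border expansion.
% Inputs (assumed): frame   - current RGB video frame
%                   bgFrame - current synthetic background frame (RGB)
gray    = medfilt2(rgb2gray(frame));          % step (1): filter + grayscale
bgGray  = rgb2gray(bgFrame);
diffImg = imabsdiff(gray, bgGray);            % background subtraction
bw = im2bw(diffImg, graythresh(diffImg));     % step (2): Otsu segmentation
bw = bwareaopen(bw, 50);                      % remove small areas (assumed 50 px)
bw = imfill(imclose(bw, strel('square', 4)), 'holes');   % closing + hole filling

% Step (3): expand the non-zero region by 5 px, clamped to the image borders.
[r, c] = find(bw);
Uend = max(min(r) - 5, 1);                    % top boundary
Dend = min(max(r) + 5, 576);                  % bottom boundary
Lend = max(min(c) - 5, 1);                    % left boundary
Rend = min(max(c) + 5, 704);                  % right boundary
searchBox = bw(Uend:Dend, Lend:Rend);         % search box for the target
```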

2.3. Features Extraction Method of Calf Scene-Interactive Behaviors

The calf entering or leaving the resting area was recorded on the left of the side-video view. When the right border of the target was in Area A (yellow box in Figure 4), the behavior was defined as entering or leaving the resting area. Feeding and drinking behaviors occurred on the right of the side-video view. When the target’s right border reached Area B (blue box in Figure 4), the front video was acquired, the feeding-basin and drinking-basin areas were extracted, and feeding and drinking behaviors were tested.
To extract entering and leaving behaviors in the resting area, the motion characteristics of the individual calf were combined to establish the following behavior-recognition model. As animal behavior is continuous, the average of each characteristic over 10 consecutive frames was taken as the final feature, as shown in Equations (2)–(6):
$$
\begin{cases}
\frac{1}{10}\sum_{i=n-5}^{n+5} B_R(i) - \frac{1}{10}\sum_{i=n}^{n+10} B_R(i) > 10, & n \ge 6 \\
\frac{1}{10}\sum_{i=n-5}^{n+5} B_D(i) - \frac{1}{10}\sum_{i=n}^{n+10} B_D(i) > 10, & n \ge 6 \\
\left| \frac{1}{10}\sum_{i=n-5}^{n+5} B_L(i) - \frac{1}{10}\sum_{i=n}^{n+10} B_L(i) \right| < 30, & n \ge 6 \\
B_L(i) < 30, & i = n-5
\end{cases} \tag{2}
$$
$$
\begin{cases}
\left| \frac{1}{10}\sum_{i=n-5}^{n+5} B_R(i) - \frac{1}{10}\sum_{i=n}^{n+10} B_R(i) \right| < 3, & n \ge 6 \\
\left| \frac{1}{10}\sum_{i=n-5}^{n+5} B_D(i) - \frac{1}{10}\sum_{i=n}^{n+10} B_D(i) \right| < 3, & n \ge 6 \\
\left| \frac{1}{10}\sum_{i=n-5}^{n+5} B_L(i) - \frac{1}{10}\sum_{i=n}^{n+10} B_L(i) \right| < 3, & n \ge 6
\end{cases} \tag{3}
$$
$$
\begin{cases}
\frac{1}{10}\sum_{i=n}^{n+10} B_R(i) - \frac{1}{10}\sum_{i=n-5}^{n+5} B_R(i) > 10, & n \ge 6 \\
\frac{1}{10}\sum_{i=n}^{n+10} B_D(i) - \frac{1}{10}\sum_{i=n-5}^{n+5} B_D(i) > 10, & n \ge 6 \\
\left| \frac{1}{10}\sum_{i=n-5}^{n+5} B_L(i) - \frac{1}{10}\sum_{i=n}^{n+10} B_L(i) \right| < 30, & n \ge 6 \\
B_L(i) < 30, & i = n-5
\end{cases} \tag{4}
$$
$$
\left[
\begin{cases}
\left| \frac{1}{10}\sum_{i=n-5}^{n+5} B_R(i) - \frac{1}{10}\sum_{i=n}^{n+10} B_R(i) \right| < 3, & n \ge 6 \\
\frac{1}{10}\sum_{i=n-5}^{n+5} B_D(i) - \frac{1}{10}\sum_{i=n}^{n+10} B_D(i) > 10, & n \ge 6 \\
\frac{1}{10}\sum_{i=n}^{n+10} B_L(i) - \frac{1}{10}\sum_{i=n-5}^{n+5} B_L(i) > 10, & n \ge 6
\end{cases}
\text{ or }
\begin{cases}
\frac{1}{10}\sum_{i=n}^{n+10} B_R(i) - \frac{1}{10}\sum_{i=n-5}^{n+5} B_R(i) > 10, & n \ge 6 \\
\frac{1}{10}\sum_{i=n}^{n+10} B_D(i) - \frac{1}{10}\sum_{i=n-5}^{n+5} B_D(i) > 10, & n \ge 6 \\
\left| \frac{1}{10}\sum_{i=n}^{n+10} B_L(i) - \frac{1}{10}\sum_{i=n-5}^{n+5} B_L(i) \right| < 3, & n \ge 6
\end{cases}
\right]
\text{ and } B_L(i) < 30,\ i = n-5 \tag{5}
$$
$$
\left[
\begin{cases}
\frac{1}{10}\sum_{i=n-5}^{n+5} B_R(i) - \frac{1}{10}\sum_{i=n}^{n+10} B_R(i) > 10, & n \ge 6 \\
\frac{1}{10}\sum_{i=n-5}^{n+5} B_D(i) - \frac{1}{10}\sum_{i=n}^{n+10} B_D(i) > 10, & n \ge 6 \\
\left| \frac{1}{10}\sum_{i=n}^{n+10} B_L(i) - \frac{1}{10}\sum_{i=n-5}^{n+5} B_L(i) \right| < 3, & n \ge 6
\end{cases}
\text{ or }
\begin{cases}
\left| \frac{1}{10}\sum_{i=n-5}^{n+5} B_R(i) - \frac{1}{10}\sum_{i=n}^{n+10} B_R(i) \right| < 3, & n \ge 6 \\
\frac{1}{10}\sum_{i=n}^{n+10} B_D(i) - \frac{1}{10}\sum_{i=n-5}^{n+5} B_D(i) > 10, & n \ge 6 \\
\left| \frac{1}{10}\sum_{i=n-5}^{n+5} B_L(i) - \frac{1}{10}\sum_{i=n}^{n+10} B_L(i) \right| > 3, & n \ge 6
\end{cases}
\right]
\text{ and } B_L(i) < 30,\ i = n-5 \tag{6}
$$
where $B_R(i)$ is the right border of the target area in the i-th frame, $B_D(i)$ is the distance between the left and right borders of the target area in the i-th frame, and $B_L(i)$ is the left border of the target area in the i-th frame.
As the calf’s resting area was dark and the calf was black and white, the black parts of the calf that overlapped with the resting area could be lost during target detection when the calf entered or left the resting area. Therefore, to account for this bias, we considered that the calf started to enter or leave the resting area when BL(i) < 30 pixels. In addition, we experimentally determined that the moving boundary changed by more than 10 pixels between the averaging windows before and after when the calf entered or left the resting area or turned around, whereas the border fluctuation range was less than three pixels when the calf was stationary.
The calf was considered to be entering the resting area if the three features of the target area satisfied Equation (2), stationary when Equation (3) was satisfied, and leaving the resting area when Equation (4) was satisfied. When Equation (5) or (6) was satisfied, the calf was considered to be turning around. A code sketch of these rule-based tests is given below.
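The following MATLAB sketch spells out the rule structure for Equations (2)–(5); Equation (6) is analogous to Equation (5) with the border roles mirrored. The function name, window handling, and feature layout are our reading of the equations, not the authors’ code.

```matlab
function flags = classifyWindow(BR, BD, BL, n)
% Rule-based behavior tests around frame n (n >= 6), following our reading
% of Equations (2)-(5). BR, BD, BL are per-frame features of the detected
% target area: right border, left-right border distance, and left border.
avg = @(x, a, b) sum(x(a:b)) / 10;            % 10-frame characteristic average
dR = avg(BR, n-5, n+5) - avg(BR, n, n+10);    % change in right border
dD = avg(BD, n-5, n+5) - avg(BD, n, n+10);    % change in border distance
dL = avg(BL, n-5, n+5) - avg(BL, n, n+10);    % change in left border
nearRest = BL(n-5) < 30;                      % bias rule for the dark resting area
flags.entering   =  dR > 10 &&  dD > 10 && abs(dL) < 30 && nearRest;  % Eq. (2)
flags.stationary = abs(dR) < 3 && abs(dD) < 3 && abs(dL) < 3;         % Eq. (3)
flags.leaving    = -dR > 10 && -dD > 10 && abs(dL) < 30 && nearRest;  % Eq. (4)
flags.turning    = nearRest && ...                                    % Eq. (5)
    ( (abs(dR) < 3 && dD > 10 && -dL > 10) || ...
      (-dR > 10 && -dD > 10 && abs(dL) < 3) );
end
```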

2.4. Feeding and Drinking Behaviors Monitoring and Analysis

Background subtraction in the grayscale image was used to detect whether a calf was present in the feeding/drinking area. If no calf was present, the current frame was taken as a new background frame and detection continued until the calf appeared. During this period, median filtering was used to pre-process the data, and Otsu’s method was used for segmentation [31]. When the calf was eating, its head extended into the feeding basin, the bottom border of the acquired target area corresponded to the bottom border of the basin mouth, and the target had a larger area. The bottom border of the basin was denoted Df, the bottom border of the target area Dt, and the area of the target S. Considering the variability in the boundary of the target area, a threshold value of Df − 5 was used in the test, and based on our experiments the area threshold was set to 1950 pixels (Equation (7)):
$$
\begin{cases}
D_t \ge D_f - 5, & D_f = 48 \\
S \ge 1950
\end{cases} \tag{7}
$$
where Df is the bottom border of the basin, Dt is the bottom border of the target area, and S is the area of the target region in pixels. When the detection area satisfied Equation (7), the calf was considered to be feeding; otherwise, it was considered not to be feeding.
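A minimal sketch of this test is shown below; bwFeed (the binary segmentation of the extracted feeding-basin region) and its row indexing are our assumptions about the data layout.

```matlab
% Minimal sketch of the feeding test in Equation (7). bwFeed is assumed to
% be the binary segmentation of the extracted feeding-basin region, with
% rows indexed from the top of that region.
Df = 48;                                   % bottom border of the basin (pixels)
rows = find(any(bwFeed, 2));               % rows containing target pixels
if isempty(rows)
    isFeeding = false;                     % no target detected in the region
else
    Dt = max(rows);                        % bottom border of the target area
    S  = nnz(bwFeed);                      % target area in pixels
    isFeeding = (Dt >= Df - 5) && (S >= 1950);   % Equation (7)
end
```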

3. Results and Discussion

3.1. Target Detection Results

The selected videos containing the calf in the resting and activity areas totaled 20,640 frames. The experiments were performed using the inter-frame difference method, the background subtraction method, the Gaussian mixture model, ViBe, and the new integrated background model developed in this study. The first column in Figure 5 shows detection of the calf in motion, and the second column shows detection of the calf when stationary.
As shown in Figure 5b, the inter-frame difference method had strong noise rejection but could not detect static and slow-moving targets. The conventional background subtraction method was able to detect most areas of dynamic and static targets but exhibited noise and poor adaptability. The Gaussian mixture model and ViBe had better noise immunity and detected dynamic targets, but were still unable to detect static targets or targets with small-amplitude motions. The new integrated background model combined the advantages of the inter-frame difference and conventional background subtraction methods, i.e., strong noise resistance and adaptability, and clearly detected most areas of dynamic and static targets.

3.2. Recognition of Entering/Leaving Behaviors in Resting Area

When the right border of the target was in Area A in Figure 6, identification of the calf entering or leaving the resting area was performed. Area A covered the inlet and outlet of the calf’s resting area, and the behaviors monitored there included entering the resting area, leaving the resting area, remaining stationary (not moving), and turning around. The right border, the left border, and the distance between the two borders were used as classification features. Figure 6 shows four behavioral examples, and the extracted characteristic curves are shown in Figure 7.
As shown in Figure 7a, when the calf was approaching the resting area, the right border, the distance between the target’s right and left borders, and the left border all started to decrease; once the calf was entering the resting area, the left border was essentially unchanged. In Figure 7b, the calf was static for the first 102 frames, where the three features were more or less unchanged; when the calf was leaving the resting area, the right border and the distance between the right and left borders started to increase. In Figure 6c, the head of the calf was facing the resting area. For the first 480 frames, the target’s right border was unchanged, while the left border and the distance between the left and right borders changed suddenly because of a slight twisting of the front half of the calf. After the first 480 frames, the right border started to decrease, then became stable, and finally increased again; the distance between the left and right borders gradually decreased and then increased; and finally the left border gradually increased as the calf turned around.
The video segments containing the behaviors of entering the resting area, leaving the resting area, remaining static, and turning around totaled 42,950 frames. The recognition rates (compared against manual video review) are shown in Table 1.
The recognition rates for the calf’s entering, leaving, stationary, and turning behaviors at the inlet and outlet of the resting area were 94.38%, 92.86%, 96.85%, and 93.51%, respectively (Table 1). Failures in recognizing the entering or leaving behaviors arose because the resting area was dark and the calf was black and white, so the parts of the calf that overlapped with the resting area could be missed during target detection. Because we used the average of 10 consecutive frames to calculate the characteristic values, static behavior was occasionally misjudged as entering or leaving the resting area when the calf started to enter or leave from the stationary state. In addition, head swinging led to misjudgment of static behavior as turning around, and during turning the detected left and right borders sometimes remained essentially unchanged even though both the forelimbs and hindlimbs were moving, resulting in misjudgment or missed detection.

3.3. Feeding and Drinking Behaviors Identification

When the target’s right border reached the feeding/drinking area, the front video could be acquired. Based on the front video, the feeding-basin area (91 × 91 pixels) and drinking-basin area (251 × 192 pixels) were extracted (Figure 8). A square 4 × 4 pixel structuring element was used for the closing operation, followed by extraction of the maximum-area region and hole filling (Figure 8c,f).
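For illustration, the region extraction and cleanup could look like the MATLAB sketch below; the crop origin is an invented assumption, since only the region sizes are reported in the paper.

```matlab
% Minimal sketch: crop the drinking-basin region from the front-view frame
% and clean its binary segmentation. The crop origin (r0, c0) is an
% illustrative assumption; only the 251 x 192 region size is reported.
r0 = 100; c0 = 300;                           % assumed top-left corner
basinGray = rgb2gray(frame(r0:r0+191, c0:c0+250, :));  % 251 x 192 crop
bw = im2bw(basinGray, graythresh(basinGray)); % Otsu segmentation
bw = imclose(bw, strel('square', 4));         % closing with 4 x 4 element
bw = bwareafilt(bw, 1);                       % keep the maximum-area region
bwDrink = imfill(bw, 'holes');                % hole filling -> cleaned mask
```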
During drinking, the calf’s head accounted for a large proportion of the field of view. When the calf’s head had just entered the basin, it was not yet in the drinking state and occupied a smaller proportion of the view (Figure 9a). In addition, ‘looking’ behavior occurred in the drinking area (Figure 9b) but occupied a small area. In this study, drinking and non-drinking behaviors were therefore distinguished by a detected-area threshold of St = 2900 pixels: when the detected area was greater than St, the behavior was classified as drinking; otherwise it was classified as non-drinking.
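Expressed in code, the test is a single comparison; bwDrink is the cleaned binary mask from the previous sketch, and the variable names are ours.

```matlab
% Minimal sketch of the drinking test: the calf is judged to be drinking
% when the segmented head area in the drinking-basin region exceeds the
% 2900-pixel threshold St.
St = 2900;                     % detected-area threshold (pixels)
S  = nnz(bwDrink);             % segmented area in the drinking-basin region
isDrinking = S > St;           % otherwise: preparing or looking around
```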
In total, 1080 frames were sampled in the feeding area and 2045 frames in the drinking area. When the calf’s head was detected in these areas, the characteristics of the target area were extracted and used to identify whether the calf was feeding or drinking. The recognition accuracy was estimated as TP/(TP + TN), where TP is the number of correctly identified samples and TN is the number of erroneously identified samples; for feeding and drinking behaviors, the accuracies were 79.69% and 81.73%, respectively.
Problems with the recognition of feeding behavior could occur if the calf’s head was stationary in the feeding-basin before or after eating, or if the shadow of the calf’s head was mistakenly recognized as feeding behavior. Feeding is a continuous process, and it was difficult to separate pre- and post-feeding behavior from feeding behavior in this study. Licking the basin edge and smelling the basin resulted in failures in identifying drinking behaviors. In addition, only one calf was used to test the newly developed method, because commercial dairy farms usually put only one calf in a pen. As some farms may keep a number of calves in a large pen, future studies will be required to optimize the method for behavior-tracking of individual and grouped calves on commercial dairy farms.
In this study, only daytime video was recorded. In the future, we will further develop the algorithm to recognize the behavior of calves at night. Besides animal behavior monitoring with 2D cameras, other non-invasive/remote monitoring technologies (e.g., heart rate monitors and infrared thermal imaging) can also be added to the existing system to expand the functions or increase the accuracy of the dairy calf behavior monitoring system.

4. Conclusions

In this study, a new method (i.e., an integrated background model) was built by combining the background-subtraction and inter-frame difference methods to monitor the behaviors of a dairy calf. By using the new model and the motion characteristics of the calf in different areas of the enclosure, we successfully identified the behaviors of entering the resting area (94.38%), leaving the resting area (92.86%), remaining stationary (96.85%), turning around (93.51%), feeding (79.69%), and drinking (81.73%).
Compared with the inter-frame difference and background subtraction methods, the Gaussian Mixture Model, and ViBe, the new method showed satisfactory detection performance, such as anti-interference characteristics for both dynamic and static targets. This newly developed method provides a basis for inventing evaluation tools to monitor calves’ health and welfare on dairy farms.

Author Contributions

D.H. was the project PI. Y.G. and D.H. designed the experiment and conducted the field study. Y.G. and D.H. tested the method. Y.G. and L.C. analyzed the data and wrote the manuscript. L.C. submitted the manuscript to the journal for review. All authors have read and agreed to the published version of the manuscript.

Acknowledgments

This work was supported by the general program of the National Natural Science Foundation of China (grant number 61473235) and the National Key Technology R&D Program of China (grant number 2017YFD0701603). Lilong Chai thanks the seed grant for international research collaboration from the College of Agricultural and Environmental Sciences, University of Georgia, USA. Yangyang Guo thanks the China Scholarship Council (CSC) for sponsoring his study at the University of Georgia, USA. The authors thank all team members and staff on the dairy farm for their help in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. United Nations. World Population Prospects. 2017. Available online: https://population.un.org/wpp/Publications/Files/WPP2017_DataBooklet.pdf (accessed on 15 November 2019).
  2. Zhang, H.; Wang, Y.; Chang, Y.; Luo, H.; Brito, L.F.; Dong, Y.; Shi, R.; Wang, Y.; Dong, G.; Liu, L. Mortality-Culling Rates of Dairy Calves and Replacement Heifers and Its Risk Factors in Holstein Cattle. Animals 2019, 9, 730.
  3. He, D.J.; Liu, D.; Zhao, K.X. Review of perceiving animal information and behavior in precision livestock farming. Trans. Chin. Soc. Agric. Mach. 2016, 47, 231–244.
  4. Chapinal, N.; Tucker, C.B. Validation of an automated method to count steps while cows stand on a weighing platform and its application as a measure to detect lameness. J. Dairy Sci. 2012, 95, 6523–6528.
  5. Hoffmann, G.; Ammon, C.; Rose-Meierhöfer, S.; Burfeind, O.; Heuwieser, W.; Berg, W. Monitoring the body temperature of cows and calves using video recordings from an infrared thermography camera. Vet. Res. Commun. 2013, 37, 91–99.
  6. Li, D.; Chen, Y.; Zhang, K.; Li, Z. Mounting Behaviour Recognition for Pigs Based on Deep Learning. Sensors 2019, 19, 4924.
  7. Porto, S.M.C.; Arcidiacono, C.; Anguzza, U.; Cascone, G. A computer vision-based system for the automatic detection of lying behaviour of dairy cows in free-stall barns. Biosyst. Eng. 2013, 115, 184–194.
  8. Dell, A.I.; Bender, J.A.; Branson, K.; Couzin, I.D.; de Polavieja, G.G.; Noldus, L.P.J.J.; Pérez-Escudero, A.; Perona, P.; Straw, A.D.; Wikelski, M.; et al. Automated image-based tracking and its application in ecology. Trends Ecol. Evol. 2014, 29, 417–428.
  9. Smith, D.; Rahman, A.; Bishop-Hurley, G.J.; Hills, J.; Shahriar, S.; Henry, D.; Rawnsley, R. Behavior classification of cows fitted with motion collars: Decomposing multi-class classification into a set of binary problems. Comput. Electron. Agric. 2016, 131, 40–50.
  10. González, L.A.; Bishop-Hurley, G.J.; Handcock, R.N.; Crossman, C. Behavioral classification of data from collars containing motion sensors in grazing cattle. Comput. Electron. Agric. 2015, 110, 91–102.
  11. Alsaaod, M.; Römer, C.; Kleinmanns, J.; Hendriksen, K.; Rose-Meierhöfer, S.; Plümer, L.; Büscher, W. Electronic detection of lameness in dairy cows through measuring pedometric activity and lying behavior. Appl. Anim. Behav. Sci. 2012, 142, 134–141.
  12. Reith, S.; Brandt, H.; Hoy, S. Simultaneous analysis of activity and rumination time, based on collar-mounted sensor technology, of dairy cows over the peri-estrus period. Livest. Sci. 2014, 170, 219–227.
  13. Ahn, S.J.; Ko, D.M.; Choi, K.S. Cow Behavior Recognition Using Motion History Image Feature. Image Anal. Recognit. 2017, 10317, 626–633.
  14. Jabbar, K.A.; Hansen, M.F.; Smith, M.L.; Smith, L.N. Early and non-intrusive lameness detection in dairy cows using 3-dimensional video. Biosyst. Eng. 2017, 153, 63–69.
  15. Poursaberi, A.; Bahr, C.; Van Nuffel, A.; Berckmans, D. Real-time automatic lameness detection based on back posture extraction in dairy cattle: Shape analysis of cow with image processing techniques. Comput. Electron. Agric. 2010, 74, 110–119.
  16. Porto, S.M.C.; Arcidiacono, C.; Anguzza, U.; Cascone, G. The automatic detection of dairy cow feeding and standing behaviours in free-stall barns by a computer vision-based system. Biosyst. Eng. 2015, 133, 46–55.
  17. Gu, J.Q.; Wang, Z.H.; Gao, R.H.; Wu, H.R. Recognition Method of Cow Behavior Based on Combination of Image and Activities. Trans. Chin. Soc. Agric. Mach. 2017, 48, 145–151.
  18. Wen, C.J.; Wang, S.S.; Zhao, X.; Wang, M.; Ma, L.; Liu, Y.T. Visual Dictionary for Cow Behavior Recognition. Trans. Chin. Soc. Agric. Mach. 2014, 45, 266–274.
  19. Guo, Y.Y.; He, D.J.; Song, H.B. Region detection of lesion area of knee based on colour edge detection and bilateral projection. Biosyst. Eng. 2018, 173, 19–31.
  20. Weissbrod, A.; Shapiro, A.; Vasserman, G.; Edry, L.; Dayan, M.; Yitzhaky, A.; Hertzberg, L.; Feinerman, O.; Kimchi, T. Automated long-term tracking and social behavioural phenotyping of animal colonies within a semi-natural environment. Nat. Commun. 2013, 4, 2018.
  21. Lao, F.D.; Teng, G.H.; Li, J.; Yu, L.G.; Li, Z. Behavior recognition method for individual laying hen based on computer vision. Trans. Chin. Soc. Agric. Eng. 2012, 28, 157–163.
  22. Yang, Q.M.; Xiao, D.Q.; Zhang, G.X. Pig Drinking Behavior Recognition Based on Machine Vision. Trans. Chin. Soc. Agric. Mach. 2018, 49, 232–238.
  23. Benvenutti, M.A.; Coates, T.W.; Imaz, A.; Flesch, T.K.; Hill, J.; Charmley, E.; Hepworth, G.; Chen, D. The use of image analysis to determine the number and position of cattle at a water point. Comput. Electron. Agric. 2015, 118, 24–27.
  24. Meunier, B.; Pradel, P.; Sloth, K.H.; Cirié, C.; Delval, E.; Mialon, M.M.; Veissier, I. Image analysis to refine measurements of dairy cow behaviour from a real-time location system. Biosyst. Eng. 2018, 173, 32–44.
  25. Zhao, X.W. Prevention and Control Measures for the Frequent Diseases of Newborn Calves. Shandong J. Anim. Sci. Vet. Med. 2014, 35, 52–53.
  26. He, D.J.; Meng, F.C.; Zhao, K.X.; Zhang, Z. Recognition of Calf Basic Behaviors Based on Video Analysis. Trans. Chin. Soc. Agric. Mach. 2016, 47, 294–300.
  27. Zhao, K.X.; He, D.J. Target detection method for moving cows based on background subtraction. Int. J. Agric. Biol. Eng. 2015, 8, 42–49.
  28. Yin, X.; Wang, B.; Li, W.; Liu, Y.; Zhang, M. Background Subtraction for Moving Cameras Based on Trajectory-Controlled Segmentation and Label Inference. KSII Trans. Internet Inf. Syst. 2015, 9, 4092–4107.
  29. Hua, Y.; Liu, W. Moving object detection algorithm of improved Gaussian mixture model. J. Comput. Appl. 2014, 34, 580–584.
  30. Ye, Y.; Cao, M.; Feng, Y. EVibe: An improved ViBe algorithm for detecting moving objects. Chin. J. Sci. Instrum. 2014, 35, 924–931.
  31. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
Figure 1. Video collection setup. 1. Rest area; 2. Activity area; 3. Feeding-basin; 4. Drinking-basin. A. Front-view camera; B. Side-view camera.
Figure 2. Test site. Red canvas was used to reduce background interference in the side view.
Figure 3. Model of background updating: (a) target detection result, with the red box indicating the target area; (b) target area; (c) background area after removing the target area; (d) previous synthetic background frame; (e) background area corresponding to the target area; and (f) new synthetic background frame.
Figure 4. Calf scene-interactive behavior detection areas. The calf’s entering and leaving behaviors were detected in Area A (yellow box; its horizontal width equals the body length of the calf, and the resting area is located to the left of Area A). Feeding and drinking behaviors were detected in Area B (blue box; its left border is the left border of the feeding-basin).
Figure 5. Dairy calf detection with different methods. The left column shows the detection results of the different methods for the calf in motion; the right column shows the detection results for the calf when stationary. (b)–(f) correspond to the detection results of the inter-frame difference method, the background subtraction method, the Gaussian mixture model, ViBe, and the method proposed in this paper.
Figure 6. Example behaviors in Area A: (a) the calf is entering the resting area; (b) the calf is leaving the resting area; (c) the calf is stationary; and (d) the calf is turning around.
Figure 7. Extraction of behavioral characteristics: (a) entering resting area; (b) stationary and leaving resting area; and (c) stationary and turning around.
Figure 8. Feeding-basin and drinking-basin areas. (a) Feeding basin; (b) Calf feeding; (c) Result of binary image acquisition; (d) Drinking basin; (e) Calf drinking; (f) Result of binary image acquisition.
Figure 9. Target behavior in drinking area. (a) Preparing for drinking; (b) Looking around; (c) Drinking.
Table 1. Results of calf behavior recognition rates (%). Columns other than Actual Behavior are the classification results.

Actual Behavior               | Entering the Resting Area | Leaving the Resting Area | Still | Turning | Missed Detection
(1) Entering the resting area | 94.38                     | -                        | -     | -       | 5.62
(2) Leaving the resting area  | -                         | 92.86                    | -     | -       | 7.14
(3) Static behavior           | 0.43                      | 0.57                     | 96.85 | 1.43    | 0.72
(4) Turning around            | -                         | -                        | 5.56  | 93.51   | 0.93

Note: The recognition rate is the ratio of correctly identified frames to the total number of frames of a behavior sample; the off-diagonal values are the ratios of misclassified frames to the total number of frames of that behavior.

