Article

A Method for Broccoli Seedling Recognition in Natural Environment Based on Binocular Stereo Vision and Gaussian Mixture Model

College of Engineering, China Agricultural University, Qinghua Rd.(E) No.17, Haidian District, Beijing 100083, China
*
Authors to whom correspondence should be addressed.
Sensors 2019, 19(5), 1132; https://doi.org/10.3390/s19051132
Submission received: 28 January 2019 / Revised: 27 February 2019 / Accepted: 28 February 2019 / Published: 6 March 2019
(This article belongs to the Section Remote Sensors)

Abstract
Illumination in the natural environment is uncontrollable, and the field background is complex and changeable, all of which lead to poor-quality broccoli seedling images. The colors of weeds and broccoli seedlings are close, especially under weedy conditions. These factors strongly affect the stability, speed and accuracy of broccoli seedling recognition based on traditional 2D image processing technologies. Broccoli seedlings stand higher than the soil background and weeds because of the growth advantage of transplanted crops. A method of broccoli seedling recognition in natural environments based on Binocular Stereo Vision and a Gaussian Mixture Model is proposed in this paper. Firstly, binocular images of broccoli seedlings were obtained by an integrated, portable and low-cost binocular camera. Then the left and right images were rectified, and a disparity map of the rectified images was obtained by the Semi-Global Matching (SGM) algorithm. The original 3D dense point cloud was reconstructed using the disparity map and the internal parameters of the left camera. To reduce the operation time, a non-uniform grid sample method was used to obtain a sparse point cloud. After that, Gaussian Mixture Model (GMM) clustering was exploited and the broccoli seedling points were recognized from the sparse point cloud. An outlier filtering algorithm based on k-nearest neighbors (KNN) was applied to remove discrete points from the recognized broccoli seedling points. Finally, an ideal point cloud of the broccoli seedlings was obtained, and the broccoli seedlings were recognized. The experimental results show that the Semi-Global Matching (SGM) algorithm can meet the matching requirements of broccoli images in the natural environment, with an average operation time of 138 ms, and that it is superior to the Sum of Absolute Differences (SAD) and Sum of Squared Differences (SSD) algorithms.
The recognition results of the Gaussian Mixture Model (GMM) outperform those of K-means and Fuzzy c-means, with an average running time of 51 ms. For a pair of images with a resolution of 640×480, the total running time of the proposed method is 578 ms, and the correct recognition rate over 247 pairs of images is 97.98%. The average sensitivity is 85.91%, and the average percentage of the theoretical envelope box volume to the measured envelope box volume is 95.66%. The method provides a low-cost, real-time and high-accuracy solution for crop recognition in natural environments.

1. Introduction

Broccoli is rich in nutrients and is widely planted in China. However, in the field environment, the weeding and pesticide spraying of broccoli are still mainly manual. It is therefore urgent to develop intelligent weeding robots and targeted spraying equipment in China. The recognition and location of broccoli seedlings based on machine vision play a decisive role in the development of such intelligent weeding robots and targeted spraying equipment.
However, the natural field environment is unstructured and the illumination conditions are uncontrollable. The soil background is complex and changeable, and the colors of weeds and crops are close, especially under weedy conditions. Because of these conditions and the inherent limitations of traditional 2D image processing technologies, it is hard to identify crops in natural fields accurately and stably. In recent years, with the development and application of high-performance computers, many experts and scholars have begun to explore the application of 3D stereo information in agriculture. The acquisition methods and equipment for 3D stereo information of plants in the field include binocular stereo vision [1,2], multi-view vision [3,4], RGB-D cameras [5,6], structured light [7], multispectral 3D vision systems [8,9], laser scanning [10,11], etc. As 3D stereo information contains both the RGB and position information of a crop, it can be widely used in the identification [12] and positioning [13,14] of plants, phenotypic parameter acquisition [15,16], and so on.
In terms of the application of stereo vision to broccoli seedling recognition, Li et al. [17] introduced a method to identify broccoli seedlings and green bean plants based on 3D imaging under weedy conditions. Firstly, images of broccoli seedlings and green bean plants were taken in the field with a 3D time-of-flight camera, and sparse noise points were filtered out by means of a height threshold. Then both 2D and 3D features were extracted to recognize the broccoli seedlings and green bean plants, but manually set thresholds were needed throughout the plant identification process. Andujar et al. [18] used depth cameras to obtain structural parameters of broccoli under laboratory conditions to assess its growth state and yield.
The 3D point cloud contains not only the RGB information of the plants but also their spatial position information, so in addition to broccoli seedlings, stereo vision technologies can also be applied to the identification and localization of other crops in the field, in plant factories or in the greenhouse. Avendano et al. [19] collected videos of coffee branches with a portable image acquisition device. Structure from Motion (SFM) and Patch Multi-View Stereo (PMVS) were used to obtain the 3D point cloud of coffee branches, and a Support Vector Machine (SVM) was applied to distinguish the six nutrient structures of coffee branches. Nguyen et al. [20] used an RGB-D camera to recognize and locate apples in an apple orchard. Firstly, distance and color filters were exploited to remove points of leaves, branches, and trunks from the original point cloud. Then a Euclidean clustering algorithm was utilized to identify apple points. Punica granatum has been recognized and located by use of stereo vision [21]. Wang et al. [22] proposed a method of litchi fruit positioning in the natural environment based on binocular stereo vision. A wavelet transform algorithm was used to unify the illumination of the images. Then the K-means algorithm was used to segment the litchi fruit, and the Normalized Cross-Correlation (NCC) algorithm was developed for stereo matching of the litchi fruit. The 3D positioning information of the litchi fruit was finally obtained by 3D reconstruction. Meanwhile, binocular stereo vision can also be used to identify and locate sweet pepper stems [23] and to distinguish crops and weeds in the field [24]. Vazquez-Arellano et al. [25] used a time-of-flight camera for the 3D reconstruction and positioning of maize in the field. Experimental results show that the average error and standard deviation of the positioning of maize are 3.4 cm and ±1.3 cm, respectively, but their method is computationally heavy. Mehta et al. [26] made use of multi-view vision to recognize and locate fruit in 3D point cloud space.
In recent years, more and more experts and scholars have begun to explore mounting stereo vision imaging devices on picking robots [27], automated guided vehicles [28,29], unmanned aerial vehicles [30], and industrial manipulators [31], to acquire 3D models of plants more flexibly and conveniently. These 3D models can then be applied to plant organ extraction [32], 3D phenotypic parameter acquisition [33,34], crop monitoring [35], biomass assessment [30], pest detection [8,9], crop yield prediction [36], and crop growth database establishment [37]. All the works above were ultimately applied to guide agricultural production.
In terms of plant phenotypic parameter acquisition, stereo vision is widely used because it is non-destructive, non-contact and highly precise. Hui et al. [38] proposed a method of plant phenotypic parameter acquisition and plant monitoring based on multi-view vision under laboratory conditions. Firstly, a Multi-View Stereo and Structure from Motion (MVS-SFM) algorithm and the VisualSFM software were used to obtain the 3D point cloud of a plant. Then, plant phenotypic parameters were calculated in 3D point cloud space, and the Hausdorff distance between the reconstruction and laser-scan data was calculated; the results showed that the method had high precision. Li et al. [39] built a portable, low-cost stereo vision system integrating a network camera and a 3D time-of-flight (ToF) camera, which can be applied to the phenotypic parameter extraction of maize under laboratory conditions. An et al. [40] developed a system composed of 18 cameras to obtain images of plants in the greenhouse, and the Agisoft PhotoScan software was used for 3D point cloud reconstruction. The crop phenotypic parameters were then acquired and finally exploited for crop monitoring, but the system is costly and the method has a heavy computational burden. Bao et al. [41,42] developed an image acquisition system composed of six pairs of binocular cameras for the phenotypic parameter acquisition of sorghum crops in the field, using a Semi-Global Matching (SGM) algorithm for binocular stereo matching. A hyperspectral pushbroom sensor unit was used for hyperspectral image acquisition of the crop, and a Perceptron laser triangulation scanner was applied for crop 3D modelling.
Hyperspectral 3D plant models were then obtained by fusing the spectral and spatial information, and the models were finally used for crop phenotypic parameter acquisition, crop lesion identification, and crop tissue classification [43]. Golbach et al. [44] developed a system composed of 10 cameras for seedling phenotypic parameter measurement, and a shape-from-silhouette method was exploited for 3D point cloud reconstruction.
Santos et al. [45] used a handheld camera for crop image acquisition. A Multi-View Stereo and Structure from Motion (MVS-SFM) algorithm was then used to obtain crop 3D models, and a spectral clustering algorithm was exploited to identify individual leaf blades; finally, the phenotypic parameters of single blades could be measured in 3D point cloud space. An Artec Spider 3D optical scanner and the 3D-Bunch-Tool software were exploited to obtain phenotypic parameters of grapefruit under laboratory conditions [46]. Moriondo et al. [47] developed a method for the phenotypic parameter acquisition of olive leaves based on stereo vision technologies under laboratory conditions. The Agisoft PhotoScan software and the Structure from Motion (SFM) algorithm were used for olive tree 3D point cloud reconstruction, and a Random Forest algorithm was used to segment olive leaf points from the olive 3D point cloud. Then a connected components labelling algorithm (CCA) was used to identify single olive leaves. Finally, the leaf area, leaf inclination, and leaf azimuth of each olive leaf could be obtained. Duan et al. [48] used stereo vision technologies for the 3D point cloud reconstruction and phenotypic parameter acquisition of wheat. Chaivivatrakul et al. [49] used stereo vision technologies for the 3D point cloud reconstruction and phenotypic parameter acquisition of maize under laboratory conditions. Rose et al. [50] introduced a method for tomato fruit phenotypic parameter measurement under laboratory conditions based on a laser scanner; the Pix4DMapper software and a Multi-View Stereo and Structure from Motion (MVS-SFM) algorithm were used for 3D point cloud reconstruction. After manual denoising and segmentation of leaves and stems, the phenotypic parameters of the leaves and stems were finally calculated.
In summary, given the extensive application of stereo vision in agriculture, a method of broccoli seedling recognition based on Binocular Stereo Vision and a Gaussian Mixture Model is proposed in this paper: (1) broccoli seedling images were acquired by a portable integrated binocular camera in the field under natural environment conditions; (2) the MATLAB calibration toolbox was used for binocular camera calibration to obtain the internal and external parameters of the binocular camera; (3) the images were epipolar-rectified; (4) the Semi-Global Matching (SGM) algorithm was exploited to obtain disparity maps; (5) the 3D point cloud was reconstructed; (6) invalid points were removed; (7) the 3D point cloud was down-sampled using a non-uniform grid sample method; (8) broccoli seedling points were recognized by Gaussian Mixture Model clustering; (9) broccoli seedling points were denoised by means of the k-nearest neighbors (KNN) algorithm. The aim of the proposed method is to solve the problem of broccoli seedling recognition in the field under natural environment conditions, including different exposures, different weed conditions and different camera heights.

2. Materials and Methods

2.1. Image Acquisition and Experiment Platform

The broccoli seedlings were bred on March 25, 2018 and transplanted on April 28, 2018. The broccoli seedling images were acquired from 10:00 a.m. to 12:00 p.m. on May 23, 2018, at the Beijing International Urban Agricultural Science and Technology Park (116°47′57″E, 39°52′7″N). The image acquisition device is an integrated binocular camera (VR (Virtual Reality) Camera, BOBOVR, Shenzhen, China) with a resolution of 1280×480, a frame rate of 30 fps, a CMOS sensor, a 120° field of view, a baseline length of 60 mm, a working distance of 500–2000 mm, a USB 2.0 interface, and a price of 600 RMB. The working platform is an OMEN by HP Desktop PC 880-p1xx with 8 GB RAM, an Intel Core i7-8700 @ 3.20 GHz, and a Windows 10 64-bit system (DirectX 12). The software used was MATLAB R2016b (MathWorks Corporation, Natick, MA, USA) and Adobe Photoshop CS6 (64 bit, Adobe, San Jose, CA, USA). A checkerboard calibration board with a square size of 30 mm × 30 mm was used for camera calibration. The broccoli seedling images and binocular camera are shown in Figure 1.

2.2. Methods

2.2.1. Binocular Camera Calibration

The MATLAB Stereo Camera Calibrator App [51] was exploited for binocular camera calibration, and the intrinsic parameter matrices, distortion coefficients, essential matrix, fundamental matrix, rotation matrix and translation vector of the binocular camera were obtained, which can be used for binocular stereo rectification and broccoli seedling point cloud reconstruction. Checkerboard calibration board images are shown in Figure 2.

2.2.2. Stereo Rectification

The purpose of stereo rectification is to eliminate the radial and tangential distortion of the binocular images and to make the left and right images satisfy the epipolar constraint, meaning that the same point of the same object lies on the same horizontal line in the rectified images. The disparity search is thus reduced from a 2D planar search to a 1D linear search. Bouguet's stereo rectification method [52] was adopted.

2.2.3. Semi-Global Matching Algorithm and 3D Reconstruction

The Semi-Global Matching (SGM) algorithm, first proposed by Hirschmuller [53] in 2005, was used for stereo matching, and the disparity map was obtained. The disparity map and the intrinsic parameter matrix of the left camera were then used for 3D reconstruction, and the original broccoli seedling point cloud was finally obtained.
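The reconstruction step above (disparity map plus left-camera intrinsics mapped to 3D points) can be sketched as follows. This is a minimal numpy sketch, not the paper's implementation: the focal length, principal point and baseline values are illustrative placeholders rather than calibrated parameters, and square pixels (fx = fy) are assumed.

```python
import numpy as np

def disparity_to_point_cloud(disparity, fx, cx, cy, baseline):
    """Reproject a disparity map to 3D via the left camera intrinsics:
    Z = fx * B / d, X = (u - cx) * Z / fx, Y = (v - cy) * Z / fx."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pixels with non-positive disparity become invalid (Inf) points
    Z = np.where(disparity > 0, fx * baseline / np.maximum(disparity, 1e-9), np.inf)
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fx
    return np.stack([X, Y, Z], axis=-1)  # ordered cloud of shape (h, w, 3)

# Illustrative values: 600 px focal length, 60 mm baseline, uniform 30 px disparity
pts = disparity_to_point_cloud(np.full((480, 640), 30.0), 600.0, 320.0, 240.0, 60.0)
```

A uniform 30-pixel disparity with these placeholder parameters maps every pixel to a depth of 600 × 60 / 30 = 1200 mm, which is a quick sanity check on the geometry.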

2.2.4. Invalid Points Removal and Down-Sampling

The original point cloud contained a large number of invalid points, which were removed by an invalid point removal operation. Invalid point removal reduces the number of points in the broccoli seedling point cloud, and it transforms the ordered point cloud into a disordered one. A non-uniform box grid filter [54] was then used for down-sampling, to further reduce the number of points in the broccoli seedling point cloud.
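The two operations above can be sketched in numpy: invalid (Inf/NaN) points are dropped, and a box grid filter replaces all points falling in the same cubic cell by their average. This is an illustrative sketch of the grid-average idea, not the exact non-uniform filter of [54]; the cell size below is arbitrary.

```python
import numpy as np

def remove_invalid(points):
    """Drop Inf/NaN rows; an ordered cloud becomes an unordered (n, 3) one."""
    pts = points.reshape(-1, 3)
    return pts[np.isfinite(pts).all(axis=1)]

def grid_downsample(points, cell):
    """Box grid filter: average all points that fall into the same cubic cell."""
    idx = np.floor(points / cell).astype(np.int64)
    # Group points by cell index and return one averaged point per occupied cell
    _, inv = np.unique(idx, axis=0, return_inverse=True)
    sums = np.zeros((inv.max() + 1, 3))
    np.add.at(sums, inv, points)
    counts = np.bincount(inv).astype(float)
    return sums / counts[:, None]

# Toy cloud: two points in one cell, one far point, one invalid point
cloud = np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2], [5.0, 5.0, 5.0], [np.inf, 0, 0]])
sparse = grid_downsample(remove_invalid(cloud), cell=1.0)
```

On the toy cloud, the invalid row is dropped and the two nearby points collapse into one averaged point, leaving two representative points.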

2.2.5. Gaussian Mixture Model Clustering

A Gaussian mixture model [55] is a linear combination of multiple Gaussian distribution functions. Let $\varphi = \{\varphi_n\},\ n = 1, 2, \ldots, N$, represent the broccoli seedling point cloud obtained by down-sampling; then the Gaussian mixture model can be expressed as:

$$p(\varphi) = \sum_{k=1}^{K} \pi_k \mathcal{N}(\varphi \mid \mu_k, \Sigma_k),\qquad(1)$$

where $\pi_k$ is the mixing coefficient, i.e., the prior probability that the $k$th component is selected. Let $\gamma(z_{nk})$ represent the posterior probability that point $\varphi_n$ belongs to the $k$th cluster. Then $\gamma(z_{nk})$ can be obtained by Bayes' theorem:

$$\gamma(z_{nk}) = \frac{\pi_k \mathcal{N}(\varphi_n \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j \mathcal{N}(\varphi_n \mid \mu_j, \Sigma_j)},\qquad(2)$$

Theoretically, $\mu_k$, $\Sigma_k$, $\pi_k$ can be obtained by using $\gamma(z_{nk})$:

$$\mu_k = \frac{1}{\sum_{n=1}^{N} \gamma(z_{nk})} \sum_{n=1}^{N} \gamma(z_{nk})\,\varphi_n,\qquad(3)$$

$$\Sigma_k = \frac{1}{\sum_{n=1}^{N} \gamma(z_{nk})} \sum_{n=1}^{N} \gamma(z_{nk})\,(\varphi_n - \mu_k)(\varphi_n - \mu_k)^{T},\qquad(4)$$

$$\pi_k = \frac{\sum_{n=1}^{N} \gamma(z_{nk})}{N},\qquad(5)$$
The steps of the Expectation-Maximization algorithm are as follows:
(1)
Let K be the number of clusters of the broccoli seedling point cloud, and set initial values of $\pi_k$, $\mu_k$, $\Sigma_k$ separately.
(2)
Calculate the posterior probabilities $\gamma(z_{nk})$ using Equation (2) with the current $\pi_k$, $\mu_k$, $\Sigma_k$.
(3)
Calculate the new $\pi_k^{new}$, $\mu_k^{new}$, $\Sigma_k^{new}$ using Equations (3)–(5).
(4)
Calculate the logarithmic likelihood function of Equation (1):

$$\ln p(\varphi) = \sum_{n=1}^{N} \ln\Big[\sum_{k=1}^{K} \pi_k \mathcal{N}(\varphi_n \mid \mu_k, \Sigma_k)\Big],\qquad(6)$$

(5)
Check whether the parameters $\pi_k$, $\mu_k$, $\Sigma_k$ or the function (6) have converged; if not, return to step (2).
(6)
If they have converged, calculate the posterior probability $\gamma(z_{nk})$ of each point of the broccoli seedling point cloud separately, and assign each point to the cluster for which $\gamma(z_{nk})$ has the maximum value.
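The EM iteration above can be sketched with numpy as follows. This is an illustrative implementation, not the paper's: the deterministic initialization, the fixed iteration count in place of the convergence check of step (5), and the small covariance regularizer are simplifying assumptions, and a library implementation would normally be used in practice.

```python
import numpy as np

def gmm_em(X, K, iters=50):
    """Fit a K-component Gaussian mixture to points X (n, d) with EM steps (1)-(6)."""
    n, d = X.shape
    # Step (1): initial parameters (means spread over the data, shared covariance)
    mu = X[np.linspace(0, n - 1, K).astype(int)].copy()
    Sigma = np.stack([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(K)])
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step (2): responsibilities gamma(z_nk) via Bayes' theorem, Equation (2)
        logp = np.empty((n, K))
        for k in range(K):
            diff = X - mu[k]
            inv = np.linalg.inv(Sigma[k])
            _, logdet = np.linalg.slogdet(Sigma[k])
            maha = np.einsum("ij,jk,ik->i", diff, inv, diff)
            logp[:, k] = np.log(pi[k]) - 0.5 * (d * np.log(2 * np.pi) + logdet + maha)
        logp -= logp.max(axis=1, keepdims=True)        # numerical stability
        gamma = np.exp(logp)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step (3): update parameters with Equations (3)-(5)
        Nk = gamma.sum(axis=0)
        pi = Nk / n
        mu = (gamma.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            Sigma[k] = (gamma[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(d)
    # Step (6): assign each point to the cluster with maximal responsibility
    return pi, mu, Sigma, gamma.argmax(axis=1)

# Two well-separated 3D "blobs" standing in for background and seedling points
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 3)), rng.normal(5.0, 0.1, (50, 3))])
pi, mu, Sigma, labels = gmm_em(X, K=2)
```

With two well-separated blobs, the hard assignment of step (6) recovers the two groups exactly, and the mixing coefficients sum to one as required.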

2.2.6. Outlier Filtering by K-Nearest Neighbors (KNN) Algorithm

Broccoli seedling points were recognized by Gaussian Mixture Model clustering, but some outliers remained among the recognized points, so the K-Nearest Neighbors (KNN) algorithm [56] was exploited for outlier filtering, and an ideal broccoli seedling point cloud was finally acquired.
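A common KNN-based outlier rule removes points whose mean distance to their k nearest neighbors is unusually large. The sketch below illustrates this idea with a brute-force distance matrix; the paper's exact variant [56] may differ, and k and the threshold multiplier are illustrative choices.

```python
import numpy as np

def knn_outlier_filter(points, k=4, n_std=1.0):
    """Keep points whose mean k-nearest-neighbor distance is below the
    global mean + n_std * standard deviation of that statistic."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude each point from its own neighbors
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)
    keep = knn_mean <= knn_mean.mean() + n_std * knn_mean.std()
    return points[keep]

# A tight cluster of 30 points plus one far-away outlier
cluster = np.random.default_rng(0).normal(0.0, 0.05, (30, 3))
cloud = np.vstack([cluster, [[10.0, 10.0, 10.0]]])
clean = knn_outlier_filter(cloud)
```

The isolated point has a much larger mean neighbor distance than the cluster points, so it is the only one removed.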

3. Results

3.1. Stereo Rectification Analysis

Figure 3 shows that the distortion of original RGB images was eliminated in rectified images. The broccoli seedling occupies less area in rectified images because of the interpolation operation and image cutting in the stereo rectification process. After stereo rectification, the resolution of the image changes from 640 × 480 to 791 × 547.

3.2. Stereo Matching Results Analysis

As shown in Figure 4a–c, disparity maps can be obtained by the SGM algorithm. Figure 4a,c shows that the broccoli seedling regions were matched smoothly, and the marginal parts were also preserved completely. In Figure 4b, due to camera overexposure, there are some mismatched areas in the upper and left blades of the broccoli seedlings, but the boundary between the broccoli seedlings and the background was matched clearly; for this reason, these mismatched areas did not affect the subsequent reconstruction and identification of the broccoli seedlings.
As shown in Figure 4d–i, the disparity maps obtained by the SSD algorithm [57] are superior to those obtained by the SAD algorithm [57], but the quality of both is inferior to the disparity maps obtained by the SGM algorithm, because of their lack of smoothness, the ambiguous boundaries between the broccoli seedlings and the background, and the large noise areas in the background, shown as the orange, red and yellow areas. The matching window size, maximum disparity and matching time of SAD, SSD and SGM are shown in Table 1.
As shown in Table 1, for the SAD and SSD algorithms a large matching window is required to obtain an ideal disparity map, so a matching window size of 55×55 pixels was selected. As can be seen in Figure 5, when the matching window size is 55×55 pixels and the maximum disparity is 130 pixels, the best matching disparities at point (400,400) are 89, 75 and 45 pixels for the SAD algorithm and 90, 75 and 45 pixels for the SSD algorithm, respectively. Therefore, to obtain ideal disparity maps of images with different shooting heights, a larger maximum disparity value should be selected. However, for a fixed matching window size, the operation time of the SAD and SSD algorithms increases with the maximum disparity. As shown in Table 1, when the matching window size is 55×55 pixels and the maximum disparities are 130, 120 and 110 pixels, respectively, for the images in Figure 1, the average operation time of the SAD algorithm is 1475 ms and that of the SSD algorithm is 1498 ms.
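The brute-force SAD/SSD disparity search benchmarked above can be sketched as follows, which also makes clear why runtime grows with both window size and maximum disparity. The toy images, window size and disparity range are illustrative, and the benchmarked implementations may differ.

```python
import numpy as np

def block_match(left, right, window, max_disp, cost="sad"):
    """1D disparity search on rectified images: for each pixel, slide a window
    along the same row of the right image and keep the lowest-cost disparity."""
    h, w = left.shape
    r = window // 2
    disp = np.zeros((h, w), dtype=np.int64)
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(np.float64)
                diff = np.abs(ref - cand)
                c = diff.sum() if cost == "sad" else (diff ** 2).sum()
                if c < best:
                    best, best_d = c, d
            disp[y, x] = best_d
    return disp

# Toy rectified pair: the right image is the left image shifted 3 px to the left
left = np.tile(np.arange(40), (20, 1)) % 7 * 30
right = np.roll(left, -3, axis=1)
disp = block_match(left, right, window=5, max_disp=6)
```

The three nested loops (pixels × disparities × window pixels) show the cost structure directly: doubling the maximum disparity or the window area roughly doubles the work, which matches the timing trend reported for SAD and SSD.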
For the SGM algorithm, when the matching window size is 15 × 15 pixels and the maximum disparity is 128 pixels, ideal disparity maps can be obtained in real time under different weed conditions, different shooting heights, and different exposure intensities. The average operation time of the SGM algorithm is 138 ms, which is only 9.36% of that of the SAD algorithm and 9.21% of that of the SSD algorithm.

3.3. Reconstruction, Invalid Points Removal and Down-Sampling Results Analysis

As shown in Figure 6a–c, the broccoli seedlings, weeds, pipelines, and soil were reconstructed successfully, and the height advantage of the broccoli seedlings is highlighted. However, because of matching errors of the SGM algorithm, there are still some outliers and invalid points in the original 3D point cloud.
The Inf points were removed from the original point cloud by the invalid point removal operation; at the same time, the ordered point cloud was transformed into a disordered one. As shown in Table 2, the number of points dropped from 430,000 to around 300,000 after invalid point removal. Although the number of points is significantly reduced, the maps in Figure 6a–c and Figure 6d–f look the same, because only the valid points of the 3D point cloud can be displayed. As can be seen in Figure 6g–i, there is a distinct black boundary between the broccoli seedlings and the background. The reason is the pinhole imaging principle of the camera: because the broccoli seedlings are closer to the camera, they occupy a larger region in the image plane and block part of the background. After reconstruction, the broccoli seedlings return to their actual size, and the occluded part becomes a black boundary region between the broccoli seedlings and the background. This makes the clustering and recognition of broccoli seedling points feasible and simple.
A non-uniform grid sample algorithm was adopted for point cloud down-sampling, and the sparse point clouds obtained are shown in Figure 6j–l. The sparse point cloud contains 4,096 points, yet all the characteristics of the dense point cloud were completely preserved. The reduction in the number of points also reduces the computation time of the GMM algorithm.

3.4. Broccoli Seedling Points Clustering and Recognition Results Analysis

As shown in Figure 7a–c, the GMM algorithm can completely recognize the broccoli seedling points from the point clouds in Figure 6j–l when the component number of the GMM is 10. With the K-means algorithm [58], the broccoli seedlings can only be recognized in Figure 7e, and in Figure 7d,f there are a large number of background points.
As can be seen in Figure 7g,h, the broccoli seedling points can be recognized by the Fuzzy c-means algorithm [59], but in Figure 7h there are still a few background points, and in Figure 7i the recognition of broccoli seedling points by Fuzzy c-means failed. As can be seen in Figure 7f,i, the broccoli seedling points cannot be recognized by either the K-means algorithm or the Fuzzy c-means algorithm when the shooting height is the highest and the broccoli seedling occupies a smaller area in the image plane. In brief, the GMM algorithm is superior to the K-means and Fuzzy c-means algorithms in terms of broccoli seedling recognition. There were still some outliers in the broccoli seedling points recognized by the GMM algorithm, so the KNN algorithm was used for outlier filtering. As shown in Figure 7j–l, all of the outliers were removed from the broccoli seedling points while the details of the broccoli seedlings were fully preserved, as can be seen in the red ellipses in Figure 7j,l.
As can be seen from Table 3, the average computation time of the GMM algorithm is 51 ms, that of the K-means algorithm is 8 ms, and that of the Fuzzy c-means algorithm is 173 ms, which is 3.39 times that of the GMM algorithm. The average computation time of the K-means algorithm is the shortest; however, the K-means algorithm is susceptible to outliers and has poor stability.
To further illustrate the stability of the GMM algorithm, clustering was run 10 times for each broccoli seedling, and the $\pi_k$ of the broccoli seedling component was recorded. A line diagram was drawn, as shown in Figure 8. The standard deviations of the three sets of $\pi_k$ are 5.54 × 10−4, 6.24 × 10−4 and 1.85 × 10−3, respectively, which shows that the values within each set of $\pi_k$ change little. The three lines in Figure 8 are close to horizontal, which shows that the GMM algorithm has good stability.

3.5. Completeness of Broccoli Seedling Recognition

Furthermore, to illustrate the completeness of broccoli seedling recognition by the proposed method, sensitivity [60] was selected as the metric, considering that the machine vision task of an intelligent weeding robot or targeted spraying equipment is to identify the crop and set a protected or spraying area around it. Images were segmented manually, and the top views of the broccoli seedling points obtained by the GMM algorithm are shown in Figure 9. The manual pixel area, the theoretical pixel area, and the intersection area are shown in Table 4. The average sensitivity is 85.91%, so the proposed method achieves good completeness of broccoli seedling recognition.
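Sensitivity as used here (the intersection of the recognized and manually segmented regions divided by the manual region) can be computed directly from binary pixel masks. The masks below are toy examples for illustration, not the paper's data.

```python
import numpy as np

def sensitivity(manual_mask, recognized_mask):
    """Sensitivity = |manual ∩ recognized| / |manual|, on boolean pixel masks."""
    manual = manual_mask.astype(bool)
    inter = np.logical_and(manual, recognized_mask.astype(bool)).sum()
    return inter / manual.sum()

# Toy masks: the manual region is a 10x10 square; recognition covers 85 of its pixels
manual = np.zeros((20, 20), dtype=bool)
manual[5:15, 5:15] = True
recognized = manual.copy()
recognized[5:15, 5:6] = False   # miss one 10-pixel column
recognized[5:10, 6:7] = False   # and 5 more pixels
s = sensitivity(manual, recognized)  # 85 / 100 = 0.85
```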

3.6. Measured and Theoretical Envelope Box Volumes

To further illustrate the effectiveness and the general adaptability of the proposed algorithm, the measured and theoretical envelope boxes of a cabbage plant are shown in Figure 10, and the measured and theoretical envelope box volumes are shown in Table 5. As can be seen in Table 5, the measured and theoretical envelope box volumes are very close, and the average percentage of the theoretical volume to the measured volume is 95.66%. The percentage for Figure 9b is 83.61%, because only the canopy height of a plant can be obtained by the proposed method, whereas the measured height of a plant is its height above the ground. For all plants, the theoretical length and width are very close to the measured length and width, so the proposed method has a high canopy parameter acquisition precision.

4. Conclusions

(1)
A method of broccoli seedling recognition based on Binocular Stereo Vision and Gaussian Mixture Model clustering was proposed in this paper for different weed conditions, different shooting heights, and different exposure intensities in a natural field. The method was proposed for the rapid identification of transplanted broccoli seedlings with a growth advantage. The experimental results on 247 pairs of images proved that the correct recognition rate of this method is 97.98%, and the average operation time to process a pair of original images with a resolution of 640×480 was 578 ms. The average sensitivity is 85.91%. For cabbage plants, the average percentage of the theoretical envelope box volume to the measured envelope box volume is 95.66%.
(2)
The SGM algorithm was applied to pairs of broccoli seedling images with a resolution of 791×547 after stereo rectification, and was compared with the SAD and SSD algorithms. The SGM algorithm can meet the matching requirements of all broccoli seedling images when the matching window size is 15×15 pixels and the maximum disparity is 128 pixels, with an operation time of 138 ms. The experimental results showed that the SGM algorithm is superior to the SAD and SSD algorithms.
(3)
GMM clustering was adopted to recognize broccoli seedling points rapidly and stably. The experimental results showed that the proposed GMM algorithm was better than the K-means and Fuzzy c-means algorithms in recognition effect and stability. The average calculation time of the GMM algorithm was only 51 ms, which satisfies the real-time requirements. The KNN algorithm was used for outlier filtering of the broccoli seedling points recognized by GMM clustering, and a complete and pure broccoli seedling point cloud was finally obtained.

Author Contributions

Conceptualization, L.G., C.Z., Y.T. and W.L.; Data curation, Z.Y., Z.S., G.Z. and M.Z.; Formal analysis, L.G. and Z.Y.; Funding acquisition, Y.T.; Investigation, L.G., Z.Y., Z.S., G.Z., M.Z., K.Z. and C.Z.; Methodology, L.G., C.Z., Y.T. and W.L.; Project administration, Y.T.; Resources, L.G., Z.Y., Z.S., Y.T. and W.L.; Software, G.Z.; Supervision, Z.Y., Z.S., M.Z., K.Z., C.Z., Y.T. and W.L.; Validation, L.G.; Visualization, L.G. and Z.Y.; Writing—original draft, L.G.; Writing—review & editing, L.G.

Funding

This work was supported by The National Key Research and Development Program of China (2017YFD0701303).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, C.; Tang, Y.; Zou, X.; Luo, L.; Chen, X. Recognition and Matching of Clustered Mature Litchi Fruits Using Binocular Charge-Coupled Device (CCD) Color Cameras. Sensors 2017, 17, 2564. [Google Scholar] [CrossRef] [PubMed]
  2. Li, H.; Luo, M.; Zhang, X. 3D Reconstruction of Orchid Based on Virtual Binocular Vision Technology. In Proceedings of the International Conference on Information Science and Control Engineering, Changsha, Hunan, China, 21–23 July 2017; pp. 1–5. [Google Scholar]
  3. Stein, M.; Bargoti, S.; Underwood, J. Image based mango fruit detection, localisation and yield estimation using multiple view geometry. Sensors 2016, 16, 1915. [Google Scholar] [CrossRef] [PubMed]
  4. Kaczmarek, A.L. Improving depth maps of plants by using a set of five cameras. J. Electron. Imaging 2015, 24. [Google Scholar] [CrossRef]
  5. Wang, Z.; Walsh, K.B.; Verma, B. On-Tree Mango Fruit Size Estimation Using RGB-D Images. Sensors 2017, 17, 2738. [Google Scholar] [CrossRef] [PubMed]
  6. Andujar, D.; Dorado, J.; Fernandez-Quintanilla, C.; Ribeiro, A. An approach to the use of depth cameras for weed volume estimation. Sensors 2016, 16, 972. [Google Scholar] [CrossRef] [PubMed]
  7. Feng, Q.; Cheng, W.; Zhou, J.; Wang, X. Design of structured-light vision system for tomato harvesting robot. Int. J. Agric. Biol. Eng. 2014, 7, 19–26. [Google Scholar] [CrossRef]
  8. Liu, H.; Lee, S.H.; Chahl, J.S. Registration of multispectral 3D points for plant inspection. Precis. Agric. 2018, 19, 513–536. [Google Scholar] [CrossRef]
  9. Liu, H.; Lee, S.H.; Chahl, J.S. A multispectral 3D vision system for invertebrate detection on crops. IEEE Sens. J. 2017, 17, 7502–7515. [Google Scholar] [CrossRef]
  10. Wahabzada, M.; Paulus, S.; Kersting, K.; Mahlein, A.K. Automated interpretation of 3D laserscanned point clouds for plant organ segmentation. BMC Bioinform. 2015, 16, 1–11. [Google Scholar] [CrossRef] [PubMed]
  11. Chaudhury, A.; Barron, J.L. Machine Vision System for 3D Plant Phenotyping. IEEE/ACM Trans. Comput. Biol. Bioinf. 2018. [Google Scholar] [CrossRef] [PubMed]
  12. Barnea, E.; Mairon, R.; Ben-Shahar, O. Colour-agnostic shape-based 3D fruit detection for crop harvesting robots. Biosyst. Eng. 2016, 146, 57–70. [Google Scholar] [CrossRef]
  13. Si, Y.; Liu, G.; Feng, J. Location of apples in trees using stereoscopic vision. Comput. Electron. Agric. 2015, 112, 68–74. [Google Scholar] [CrossRef]
  14. Ji, W.; Meng, X.; Qian, Z.; Xu, B.; Zhao, D. Branch localization method based on the skeleton feature extraction and stereo matching for apple harvesting robot. Int. J. Adv. Robot. Syst. 2017, 14, 1–9. [Google Scholar] [CrossRef]
  15. Gong, L.; Chen, R.; Zhao, Y.; Liu, C. Model-based in-situ measurement of pakchoi leaf area. Int. J. Agric. Biol. Eng. 2015, 8, 35–42. [Google Scholar] [CrossRef]
  16. Polder, G.; Hofstee, J.W. Phenotyping large tomato plants in the greenhouse using a 3D light-field camera. In Proceedings of the American Society of Agricultural and Biological Engineers Annual International Meeting 2014, Montreal, QC, Canada, 13–16 July 2014; pp. 153–159. [Google Scholar]
  17. Li, J.; Tang, L. Crop recognition under weedy conditions based on 3D imaging for robotic weed control. J. Field Robot. 2018, 35, 596–611. [Google Scholar] [CrossRef]
  18. Andujar, D.; Ribeiro, A.; Fernandez-Quintanilla, C.; Dorado, J. Using depth cameras to extract structural parameters to assess the growth state and yield of cauliflower crops. Comput. Electron. Agric. 2016, 122, 67–73. [Google Scholar] [CrossRef]
  19. Avendano, J.; Ramos, P.J.; Prieto, F.A. A System for Classifying Vegetative Structures on Coffee Branches based on Videos Recorded in the Field by a Mobile Device. Expert Syst. Appl. 2017, 88, 178–192. [Google Scholar] [CrossRef]
  20. Nguyen, T.T.; Vandevoorde, K.; Wouters, N.; Kayacan, E.; De Baerdemaeker, J.G.; Saeys, W. Detection of red and bicoloured apples on tree with an RGB-D camera. Biosyst. Eng. 2016, 146, 33–44. [Google Scholar] [CrossRef]
  21. Jafari, A.; Bakhshipour, A. A novel algorithm to recognize and locate pomegranate on the tree for the harvesting robot using stereo vision system. In Proceedings of the Precision Agriculture 2011—Papers Presented at the 8th European Conference on Precision Agriculture 2011, Prague, Czech Republic, 11–14 July 2011; pp. 133–142. [Google Scholar]
  22. Wang, C.; Zou, X.; Tang, Y.; Luo, L.; Feng, W. Localisation of litchi in an unstructured environment using binocular stereo vision. Biosyst. Eng. 2016, 145, 39–51. [Google Scholar] [CrossRef]
  23. Bac, C.W.; Hemming, J.; Van Henten, E.J. Stem localization of sweet-pepper plants using the support wire as a visual cue. Comput. Electron. Agric. 2014, 105, 111–120. [Google Scholar] [CrossRef]
  24. Chen, Y.; Jin, X.; Tang, L.; Che, J.; Sun, Y.; Chen, J. Intra-row weed recognition using plant spacing information in stereo images. In Proceedings of the American Society of Agricultural and Biological Engineers Annual International Meeting 2013, Kansas City, MO, USA, 21–24 July 2013; pp. 915–921. [Google Scholar]
  25. Vazquez-Arellano, M.; Reiser, D.; Paraforos, D.S.; Garrido-Izard, M.; Burce, M.E.C.; Griepentrog, H.W. 3-D reconstruction of maize plants using a time-of-flight camera. Comput. Electron. Agric. 2018, 145, 235–247. [Google Scholar] [CrossRef]
  26. Mehta, S.S.; Ton, C.; Asundi, S.; Burks, T.F. Multiple camera fruit localization using a particle filter. Comput. Electron. Agric. 2017, 142, 139–154. [Google Scholar] [CrossRef]
  27. Fernandez, R.; Salinas, C.; Montes, H.; Sarria, J. Multisensory system for fruit harvesting robots. Experimental testing in natural scenarios and with different kinds of crops. Sensors 2014, 14, 23885–23904. [Google Scholar] [CrossRef] [PubMed]
  28. Shafiekhani, A.; Kadam, S.; Fritschi, F.B.; DeSouza, G.N. Vinobot and Vinoculer: Two Robotic Platforms for High-Throughput Field Phenotyping. Sensors 2017, 17. [Google Scholar] [CrossRef] [PubMed]
  29. Moeckel, T.; Dayananda, S.; Nidamanuri, R.R.; Nautiyal, S.; Hanumaiah, N.; Buerkert, A.; Wachendorf, M. Estimation of Vegetable Crop Parameter by Multi-temporal UAV-Borne Images. Remote Sens. 2018, 10. [Google Scholar] [CrossRef]
  30. Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A. UAV-based automatic tree growth measurement for biomass estimation. In Proceedings of the 23rd International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Congress, Prague, Czech Republic, 12–19 July 2016; pp. 685–688. [Google Scholar]
  31. Lu, H.; Tang, L.; Whitham, S.A. Development of an automatic maize seedling phenotyping platform using 3D vision and industrial robot arm. In Proceedings of the American Society of Agricultural and Biological Engineers Annual International Meeting 2015, New Orleans, LA, USA, 26–29 July 2015; pp. 4001–4013. [Google Scholar]
  32. Paulus, S.; Dupuis, J.; Mahlein, A.K.; Kuhlmann, H. Surface feature based classification of plant organs from 3D laserscanned point clouds for plant phenotyping. BMC Bioinform. 2013, 14. [Google Scholar] [CrossRef] [PubMed]
  33. Ni, Z.; Burks, T.F.; Lee, W.S. 3D reconstruction of small plant from multiple views. In Proceedings of the American Society of Agricultural and Biological Engineers Annual International Meeting 2014, Montreal, QC, Canada, 13–16 July 2014; pp. 661–668. [Google Scholar]
  34. Yeh, Y.H.F.; Lai, T.C.; Liu, T.Y.; Liu, C.C.; Chung, W.C.; Lin, T.T. An automated growth measurement system for leafy vegetables. In Proceedings of the 4th International Workshop on Computer Image Analysis in Agriculture, Valencia, Spain, 8–12 July 2012; pp. 43–50. [Google Scholar]
  35. Zhang, Y.; Teng, P.; Shimizu, Y.; Hosoi, F.; Omasa, K. Estimating 3D leaf and stem shape of nursery paprika plants by a novel multi-camera photography system. Sensors 2016, 16. [Google Scholar] [CrossRef] [PubMed]
  36. Rose, J.C.; Kicherer, A.; Wieland, M.; Klingbeil, L.; Topfer, R.; Kuhlmann, H. Towards automated large-scale 3D phenotyping of vineyards under field conditions. Sensors 2016, 16. [Google Scholar] [CrossRef] [PubMed]
  37. Wen, W.; Guo, X.; Wang, Y.; Zhao, C.; Liao, W. Constructing a Three-Dimensional Resource Database of Plants Using Measured in situ Morphological Data. Appl. Eng. Agric. 2017, 33, 747–756. [Google Scholar] [CrossRef]
  38. Hui, F.; Zhu, J.; Hu, P.; Meng, L.; Zhu, B.; Guo, Y.; Li, B.; Ma, Y. Image-based dynamic quantification and high-accuracy 3D evaluation of canopy structure of plant populations. Ann. Bot. 2018, 121, 1079–1088. [Google Scholar] [CrossRef] [PubMed]
  39. Li, J.; Tang, L. Developing a low-cost 3D plant morphological traits characterization system. Comput. Electron. Agric. 2017, 143, 1–13. [Google Scholar] [CrossRef]
  40. An, N.; Welch, S.M.; Markelz, R.J.C.; Baker, R.L.; Palmer, C.M.; Ta, J.; Maloof, J.N.; Weinig, C. Quantifying time-series of leaf morphology using 2D and 3D photogrammetry methods for high-throughput plant phenotyping. Comput. Electron. Agric. 2017, 135, 222–232. [Google Scholar] [CrossRef]
  41. Bao, Y.; Tang, L. Field-based Robotic Phenotyping for Sorghum Biomass Yield Component Traits Characterization Using Stereo Vision. In Proceedings of the 5th IFAC Conference on Sensing, Control and Automation Technologies for Agriculture, Seattle, WA, USA, 14–17 August 2016; pp. 265–270. [Google Scholar]
  42. Bao, Y.; Tang, L.; Schnable, P.S.; Salas-Fernandez, M.G. Infield Biomass Sorghum Yield Component Traits Extraction Pipeline Using Stereo Vision. In Proceedings of the 2016 ASABE Annual International Meeting, Orlando, FL, USA, 17–20 July 2016. [Google Scholar]
  43. Behmann, J.; Mahlein, A.K.; Paulus, S.; Dupuis, J.; Kuhlmann, H.; Oerke, E.C.; Plumer, L. Generation and application of hyperspectral 3D plant models: methods and challenges. Mach. Vis. Appl. 2016, 27, 611–624. [Google Scholar] [CrossRef]
  44. Golbach, F.; Kootstra, G.; Damjanovic, S.; Otten, G.; van de Zedde, R. Validation of plant part measurements using a 3D reconstruction method suitable for high-throughput seedling phenotyping. Mach. Vis. Appl. 2016, 27, 663–680. [Google Scholar] [CrossRef]
  45. Santos, T.T.; Koenigkan, L.V.; Barbedo, J.G.A.; Rodrigues, G.C. 3D plant modeling: localization, mapping and segmentation for plant phenotyping using a single hand-held camera. In Proceedings of the 13th European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 247–263. [Google Scholar]
  46. Rist, F.; Herzog, K.; Mack, J.; Richter, R.; Steinhage, V.; Topfer, R. High-Precision Phenotyping of Grape Bunch Architecture Using Fast 3D Sensor and Automation. Sensors 2018, 18. [Google Scholar] [CrossRef] [PubMed]
  47. Moriondo, M.; Leolini, L.; Stagliano, N.; Argenti, G.; Trombi, G.; Brilli, L.; Dibari, C.; Leolini, C.; Bindi, M. Use of digital images to disclose canopy architecture in olive tree. Sci. Hortic. 2016, 209, 1–13. [Google Scholar] [CrossRef]
  48. Duan, T.; Chapman, S.C.; Holland, E.; Rebetzke, G.J.; Guo, Y.; Zheng, B. Dynamic quantification of canopy structure to characterize early plant vigour in wheat genotypes. J. Exp. Bot. 2016, 67, 4523–4534. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Chaivivatrakul, S.; Tang, L.; Dailey, M.N.; Nakarmi, A.D. Automatic morphological trait characterization for corn plants via 3D holographic reconstruction. Comput. Electron. Agric. 2014, 109, 109–123. [Google Scholar] [CrossRef]
  50. Rose, J.C.; Paulus, S.; Kuhlmann, H. Accuracy analysis of a multi-view stereo approach for phenotyping of tomato plants at the organ level. Sensors 2015, 15, 9651–9665. [Google Scholar] [CrossRef] [PubMed]
  51. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  52. Tsai, R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Rob. Autom. 1987, 3, 323–344. [Google Scholar] [CrossRef]
  53. Hirschmuller, H. Accurate and Efficient Stereo Processing by Semi-Global Matching and Mutual Information. In Proceedings of the Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 807–814. [Google Scholar]
  54. Pomerleau, F.; Colas, F.; Siegwart, R.; Magnenat, S. Comparing ICP variants on real-world data sets. Auton. Robot. 2013, 34, 133–148. [Google Scholar] [CrossRef]
  55. McLachlan, G.; Peel, D. Finite Mixture Models; John Wiley & Sons Inc.: Hoboken, NJ, USA, 2000. [Google Scholar]
  56. Rusu, R.B.; Marton, Z.C.; Blodow, N.; Dolha, M.; Beetz, M. Towards 3D Point cloud based object maps for household environments. In Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 927–941. [Google Scholar]
  57. Abbeloos, W. Real-Time Stereo Vision. Ph.D. Thesis, Karel de Grote-Hogeschool University College (KDG IWT), Antwerp, Belgium, 2010. [Google Scholar]
  58. Kumar, A.; Jain, P.K.; Pathak, P.M. Curve reconstruction of digitized surface using K-means algorithm. In Proceedings of the 24th DAAAM International Symposium on Intelligent Manufacturing and Automation, Univ Zadar, Zadar, Croatia, 23–26 October 2013; pp. 544–549. [Google Scholar]
  59. Bezdec, J.C. Pattern Recognition with Fuzzy Objective Function Algorithms; Plenum Press: New York, NY, USA, 1981. [Google Scholar]
  60. Teimouri, N.; Omid, M.; Mollazade, K.; Rajabipour, A. A novel artificial neural networks assisted segmentation algorithm for discriminating almond nut and shell from background and shadow. Comput. Electron. Agric. 2014, 105, 34–43. [Google Scholar] [CrossRef]
Figure 1. (a) Lowest shooting height, relatively large amount of grass, moderate exposure intensity; (b) moderate shooting height, largest amount of grass, strongest exposure intensity; (c) highest shooting height, small amount of grass, normal exposure intensity; (d) the binocular camera (VR Camera).
Figure 2. (a–c) Checkerboard calibration board images.
Figure 3. (a–c) Original left RGB images; (d–f) the corresponding rectified RGB images.
Figure 4. (a–c) Disparity maps obtained with the SGM algorithm; (d–f) disparity maps obtained with the SAD algorithm; (g–i) disparity maps obtained with the SSD algorithm.
Figure 5. (a) SAD values at point (400, 400) of the images in Figure 3d–f, with a 55 × 55 pixel matching window and a maximum disparity of 130 pixels; (b) SSD values at the same point, window size and maximum disparity.
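The cost curves of Figure 5 come from block matching: for one left-image pixel, the matching cost is evaluated at every candidate disparity and the disparity with the minimum cost is selected. A minimal NumPy sketch for the SAD case (the window size, disparity range and image data below are illustrative, not the paper's 55 × 55 window/130-pixel settings):

```python
import numpy as np

def sad_cost_curve(left, right, row, col, window=5, max_disp=16):
    """SAD matching cost at (row, col) of the left image for every
    candidate disparity, as plotted in Figure 5a."""
    half = window // 2
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    costs = []
    for d in range(max_disp):
        c = col - d  # candidate column in the right image
        if c - half < 0:
            break  # candidate window would fall outside the image
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        costs.append(np.abs(ref.astype(np.int32) - cand.astype(np.int32)).sum())
    return np.array(costs)
```

The matched disparity is the argmin of the returned curve; replacing the absolute difference with a squared difference gives the SSD curve of Figure 5b.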
Figure 6. (a–c) Original 3D point clouds; (d–f) point clouds after invalid-point removal from (a–c); (g–i) top views of (d–f); (j–l) sparse point clouds after down-sampling of (d–f).
Figure 7. (a–c) Broccoli seedling clustering results of the GMM algorithm; (d–f) clustering results of the K-means algorithm; (g–i) clustering results of the Fuzzy c-means algorithm; (j–l) outlier-filtering results of (a–c) by the KNN algorithm.
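The KNN-based outlier filtering shown in Figure 7j–l can be sketched as statistical outlier removal in the manner of Rusu et al. [56]: a point is discarded when its mean distance to its k nearest neighbours lies far above the cloud-wide mean. The values of k and the threshold ratio below are illustrative, not the paper's settings:

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_outlier_filter(points, k=8, std_ratio=1.0):
    """Drop points whose mean k-NN distance exceeds the global mean
    of that statistic by std_ratio standard deviations."""
    tree = cKDTree(points)
    # query k+1 neighbours: every point is its own nearest neighbour
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= thresh]
```

Points inside the seedling cluster have small neighbour distances and survive; isolated mis-clustered points stand out and are removed.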
Figure 8. Mixing coefficient πk of the broccoli seedling component.
Figure 9. (a–c) Manually segmented images; (d–f) cropped binary images of (a–c); (g–i) top views of the broccoli seedling points obtained by the GMM algorithm; (j–l) binary images of (g–i).
Figure 10. (a–c) Envelope boxes from practical measurement; (d–f) envelope boxes obtained theoretically.
Table 1. Matching window size, maximum disparity and matching time of SAD, SSD, and SGM.

| Image | SAD window (pixels) | SAD max. disparity (pixels) | SAD time (ms) | SSD window (pixels) | SSD max. disparity (pixels) | SSD time (ms) | SGM window (pixels) | SGM max. disparity (pixels) | SGM time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| a | 55 × 55 | 130 | 1598 | 55 × 55 | 130 | 1610 | 15 × 15 | 128 | 142 |
| b | 55 × 55 | 120 | 1472 | 55 × 55 | 120 | 1502 | 15 × 15 | 128 | 135 |
| c | 55 × 55 | 110 | 1356 | 55 × 55 | 110 | 1383 | 15 × 15 | 128 | 138 |
Table 2. Point number of point cloud.

| Image | Points in original point cloud | Points after invalid-point removal | Points in sparse point cloud |
|---|---|---|---|
| a | 432,677 | 296,053 | 4096 |
| b | 432,677 | 277,858 | 4096 |
| c | 432,677 | 340,974 | 4096 |
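Table 2 shows each filtered cloud reduced to a fixed 4096-point sparse cloud before clustering. The paper's non-uniform grid sampling scheme is not specified in this section; a plain voxel-grid centroid down-sample, shown here as a common simplification with an illustrative voxel size, captures the idea of one representative point per occupied grid cell:

```python
import numpy as np

def voxel_downsample(points, voxel=0.01):
    """Replace all points falling in the same voxel by their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    out = np.zeros((inv.max() + 1, 3))
    for dim in range(3):  # centroid per voxel, one axis at a time
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out
```

Shrinking the voxel size until roughly 4096 points remain would reproduce the fixed cloud size of Table 2.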
Table 3. Running time of the three clustering algorithms.

| Image | GMM (ms) | K-means (ms) | Fuzzy c-means (ms) |
|---|---|---|---|
| a | 51 | 61 | 71 |
| b | 52 | 71 | 76 |
| c | 49 | 111 | 72 |
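The GMM clustering timed in Table 3 fits a mixture of Gaussians to the sparse 3D points and assigns each point to a component; the seedling component is the one standing above the soil plane. A sketch using scikit-learn (an assumed library choice; the component count and covariance type are illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(points, n_components=2):
    """Cluster a 3-D point cloud with a GMM; returns per-point labels
    and the mixing coefficients pi_k of each component (cf. Figure 8)."""
    gm = GaussianMixture(n_components=n_components,
                         covariance_type="full",
                         random_state=0).fit(points)
    return gm.predict(points), gm.weights_
```

Because transplanted broccoli seedlings sit higher than soil and weeds, the component whose mean has the largest height can be taken as the seedling class.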
Table 4. Manual area and theoretical area.

| Image | Area of broccoli seedling obtained manually (pixels) | Area obtained theoretically (pixels) | Intersection of manual and theoretical areas (pixels) | Sensitivity |
|---|---|---|---|---|
| a | 1.08 × 10^5 | 9.76 × 10^4 | 8.48 × 10^4 | 86.91% |
| b | 7.00 × 10^4 | 6.48 × 10^4 | 5.74 × 10^4 | 82.07% |
| c | 3.79 × 10^4 | 3.73 × 10^4 | 3.36 × 10^4 | 88.75% |
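Assuming sensitivity here is the usual true-positive rate, i.e. the intersection area divided by the manually segmented area (the exact denominator is not stated in this section), it can be computed directly from the binary masks of Figure 9:

```python
import numpy as np

def sensitivity(manual_mask, algo_mask):
    """Fraction of manually segmented seedling pixels also covered by
    the algorithm's segmentation: TP / (TP + FN)."""
    manual_mask = manual_mask.astype(bool)
    algo_mask = algo_mask.astype(bool)
    inter = np.logical_and(manual_mask, algo_mask).sum()
    return inter / manual_mask.sum()
```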
Table 5. Measured and theoretical envelope box volumes.

| Plants | Measured length (mm) | Measured width (mm) | Measured height (mm) | Measured volume (mm³) | Theoretical length (mm) | Theoretical width (mm) | Theoretical height (mm) | Theoretical volume (mm³) | (Theoretical volume)/(Measured volume) |
|---|---|---|---|---|---|---|---|---|---|
| a | 263.68 | 240.60 | 151.48 | 9.61 × 10^6 | 264.16 | 234.12 | 153.98 | 9.52 × 10^6 | 99.09% |
| b | 307.86 | 306.12 | 230.46 | 2.17 × 10^7 | 318.46 | 305.63 | 186.58 | 1.82 × 10^7 | 83.61% |
| c | 316.68 | 245.24 | 155.22 | 1.21 × 10^7 | 319.66 | 252.43 | 155.78 | 1.26 × 10^7 | 104.28% |
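The theoretical envelope boxes of Table 5 and Figure 10d–f are axis-aligned bounding boxes of the recognized seedling cloud: the box extents are the per-axis ranges of the points, and the volume is their product. A minimal sketch:

```python
import numpy as np

def envelope_box(points):
    """Axis-aligned envelope box of a point cloud: returns the
    (length, width, height) extents and the box volume."""
    extents = points.max(axis=0) - points.min(axis=0)
    return extents, float(np.prod(extents))
```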
Ge, L.; Yang, Z.; Sun, Z.; Zhang, G.; Zhang, M.; Zhang, K.; Zhang, C.; Tan, Y.; Li, W. A Method for Broccoli Seedling Recognition in Natural Environment Based on Binocular Stereo Vision and Gaussian Mixture Model. Sensors 2019, 19, 1132. https://doi.org/10.3390/s19051132