Article

Image Recognition of Male Oilseed Rape (Brassica napus) Plants Based on Convolutional Neural Network for UAAS Navigation Applications on Supplementary Pollination and Aerial Spraying

Zhu Sun, Xiangyu Guo, Yang Xu, Songchao Zhang, Xiaohui Cheng, Qiong Hu, Wenxiang Wang and Xinyu Xue

1 Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China
2 Sino-USA Pesticide Application Technology Cooperative Laboratory, Nanjing 210014, China
3 College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China
4 Oil Crops Research Institute, Chinese Academy of Agricultural Sciences, Wuhan 430062, China
5 Key Laboratory of Aviation Plant Protection, Ministry of Agriculture and Rural Affairs, Anyang 455000, China
* Authors to whom correspondence should be addressed.
Agriculture 2022, 12(1), 62; https://doi.org/10.3390/agriculture12010062
Submission received: 18 November 2021 / Revised: 23 December 2021 / Accepted: 30 December 2021 / Published: 5 January 2022
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)

Abstract

To ensure hybrid oilseed rape (OSR, Brassica napus) seed production, two things are essential: stamen sterility on the female OSR plants and effective pollen spread from the male OSR plants onto the pistils of the female OSR plants. The unmanned agricultural aerial system (UAAS) has developed rapidly in China and has been used for supplementary pollination and aerial spraying during hybrid OSR seed production. This study developed a new method to rapidly recognize male OSR plants and extract the row center line to support UAAS navigation. A male OSR plant recognition model was constructed based on a convolutional neural network (CNN). Sequence images of the male OSR plants were extracted, and the feature regions and points were obtained from the images through morphological and boundary processing methods and horizontal segmentation, respectively. The male OSR plant image recognition accuracies of different CNN structures and segmentation sizes were discussed. The male OSR plant row center lines were fitted using the least-squares method (LSM) and the Hough transform. The results showed that the segmentation algorithm could segment the male OSR plants from the complex background. The highest average recognition accuracy was 93.54%, and the minimum loss function value was 0.2059, with three convolutional layers, one fully connected layer, and a segmentation size of 40 pix × 40 pix. The LSM performed better for center line fitting. The average recognition model accuracies on the original input images were 98% and 94%, and the average root mean square errors (RMSE) of the angle were 3.22° and 1.36°, under cloudy-day and sunny-day lighting conditions, respectively. The results demonstrate the potential of using digital imaging technology to recognize male OSR plant rows for UAAS visual navigation in hybrid OSR supplementary pollination and aerial spraying, a meaningful supplement to precision agriculture.

1. Introduction

Oilseed rape (OSR, Brassica napus) is the third most important oil crop in the world [1,2,3,4,5], and its planting area is expanding rapidly, especially for hybrid OSR [6]. Seed production is the guarantee for large-scale OSR planting. Hybrid vigor, generally achieved through cross-pollination, is an important means of obtaining high OSR seed yield because it ensures genetic variability. Therefore, during OSR seed production, it is necessary both to spread pollen from the stamens of the male OSR plants to the pistils of the female OSR plants and to ensure stamen sterility of the female OSR plants [7,8,9]. In agronomic practice, an anthericide is sprayed on the female OSR plants to make them sterile, and pollen spreads from the male OSR plants to the female ones to achieve cross-pollination during the flowering period, usually forming siliques. To improve the level of agricultural production mechanization, more and more new intelligent agricultural machinery has been introduced into agriculture; the unmanned agricultural aerial system (UAAS), also called the unmanned aerial vehicle (UAV) or unmanned aircraft system (UAS), is a typical representative. Carrying different airborne mission equipment, the UAAS can perform remote sensing [10,11], aerial spraying [12,13,14], particle fertilizing [15], aerial seeding [16], etc.
The UAAS has also been applied to OSR, including Sclerotinia sclerotiorum control, foliar fertilizer spraying, rapeseed aerial seeding, and supplementary pollination [9]. Guided by a Global Navigation Satellite System (GNSS), such as the BeiDou Navigation Satellite System (BDS) or the Global Positioning System (GPS), the UAAS can fly fully autonomously along planned routes at specified speeds and heights while efficiently completing agricultural operations [17,18]. However, during autonomous UAAS flight, the planned routes are mostly straight, U-shaped, or other regular routes derived from straight ones [19,20,21,22,23]. In practice, because of the seeding methods or the irregularity of the fields, the OSR planting rows may not be straight, so route planning based only on GNSS signals should be improved for UAAS applications on OSR. In addition, GNSS signals may be weak or unstable in remote areas, or even absent. These factors are likely to prevent the UAAS from flying strictly and accurately over the male OSR plant rows. It is therefore necessary to introduce other methods to improve and supplement navigation for UAAS applications on OSR.
Navigation of UAVs using vision systems has been an important field of research in recent years, and image-based navigation technologies combined with appropriate algorithms have been widely used in agricultural machinery [24,25,26]. Based on the analysis of the differences between the targets and the background, Peng et al. [27] proposed an image-based inter-row path recognition method for management and harvest in dense jujube orchards; the results showed that the algorithm accuracy could exceed 83.4% without noise. To overcome the limitations of GPS navigation in harsh environments, Liu et al. [28] developed a positioning system using a UAV-based computer vision method, which provided position information for an agricultural airboat performing autonomous fertilizing and herbicide application in paddy fields; the navigation experiments showed lateral RMSEs of 0.17 m, 0.10 m, and 0.11 m on three predefined paths, better than those of differential GPS. To address the absence of GPS signals indoors, Maravall et al. [29] used an onboard camera to capture environmental images and developed a vision-based controller for autonomous UAV navigation; in the controller, a topological map was used to efficiently classify the landmark images captured by the onboard camera, and control commands were generated in real time. Tang et al. [30] developed a control method with image-based visual servoing (IBVS) for a four-rotor UAV, which flew through a window and landed on a designated pad autonomously using control commands deduced directly from captured image features. Based on both machine learning and standard machine vision algorithms, Opromolla et al. [31] presented an original vision-based approach to sense and avoid small UAS; the results proved that the proposed approach allowed the target to be declared at a relatively long range and tracked with a line-of-sight rate accuracy of tenths of degrees per second.
In summary, the above research indicates that methods based on image recognition technology can be used for autonomous flight navigation of UAAS, and high accuracy can be achieved if the algorithm is appropriate. However, there is still a lack of research on OSR plant recognition for UAAS navigation. In this study, the authors attempted to identify the male OSR plant rows based on a CNN and to provide the UAAS with an image-based navigation method for flying precisely above the male OSR plant rows in irregular fields or in environments where the GNSS signal is weak or unstable. This could support UAAS applications for supplementary pollination and aerial spraying in hybrid OSR seed production.

2. Materials and Methods

2.1. OSR Plant Image Collecting Site and OSR Characteristics

The OSR plant images were collected at the OSR seed production base of the Oil Crops Research Institute, Chinese Academy of Agricultural Sciences, in Xiangyang City, Hubei Province, China. The images were collected from 8:00 to 12:00 each day from March 1st to 10th, during the OSR flowering period. The produced seed variety is Dadi 199 (Registration Number: GPD Rape (2017) 420,056, Ministry of Agriculture and Rural Affairs of the People's Republic of China), a three-line hybrid combination of the Polima cytoplasmic male sterile line 11CA (female parent) and the restorer line R11 (male parent) [32]. The OSR plants were established in the field by manual transplanting. The main characteristics of the OSR plants and the weather conditions are shown in Table 1.

2.2. Image Collecting UAV and Method

A DJI Phantom 4 (DJI Co., Ltd., Shenzhen, China; shown in Figure 1) was used to collect the OSR plant images. The DJI Phantom 4 is a four-rotor UAV with GPS/GLONASS. Through the app DJI GO 4 (DJI Co., Ltd., Shenzhen, China), the operator can observe the scene captured by the UAV in first-person view. The main technical parameters are shown in Table 2.
In this study, we mainly collected male OSR plant images, and the subsequent image recognition analysis also focused on male OSR plants. Two image collection methods, static manual holding photography (SMHP, shown in Figure 1) and low altitude hovering photography (LAHP, shown in Figure 1), were adopted to collect OSR plant images of different sizes. In the SMHP method, the UAV was raised manually above the center line of the male OSR plant rows and held stable about 2 m from the OSR plant canopy. In the LAHP method, the UAV hovered above the center line of the male OSR plant rows about 3 m from the canopy. In both methods, the camera's forward view was aligned with the OSR plant row direction, and the angle between the lens and the vertical direction was set to 60° to ensure that enough plant rows appeared in the visual field. Through the remote control and a mobile phone running DJI GO 4, the operator fine-tuned the location of the UAV and the lens angle to achieve high-quality images and triggered the camera to capture them. The images were stored in JPEG format on the SD memory card.

2.3. Image Processing Using CNN

The CNN is a highly parallelized method based on the forward and backward propagation algorithm; it automatically learns to extract distributed features of the input data in its convolutional layers, and the deep features generated by a CNN are more efficient and robust than handcrafted features [33]. A typical CNN architecture usually consists of convolutional (Conv) layers, activation layers, pooling (POOL) layers, and fully connected (FC) layers [34], as shown in Figure 2. The Conv layers are made up of a series of convolution kernels, each of which extracts certain features from the input images, starting from basic features such as edges and shapes at the initial layers and progressing to more complex and specific features at the last layers [35]. The activation layer behind a Conv or FC layer is a non-linear operation that learns a non-linear representation of the output volume of the previous Conv or FC layer through a non-linear activation function [36]. The POOL layer reduces the spatial dimensions of the extracted feature maps and the number of network parameters through non-linear numerical operations [36]. The output of the FC layers represents the output of the CNN model, and the number of output nodes depends on the number of labeled classes [33].

2.3.1. Male OSR Plant Row Recognition Model Construction

The 2/3 area of each image's proximal visual field was cropped and then resized to 720 pix × 1720 pix after normalization. Rectangular segmentation of the images was conducted for three size groups: 40 pix × 40 pix, 20 pix × 20 pix, and 10 pix × 10 pix. The segmented images were converted to grayscale for sample training in order to reduce the training time. The following principles were used to define the samples in the image dataset built from the segmented images: (1) images containing only male OSR plant rows were assigned to the male image set; (2) images containing only female OSR plant rows or soil areas were assigned to the female image set; (3) images containing both male and female OSR plant rows were assigned to the male image set if the male plant area was larger than the female one, and otherwise to the female image set.
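As a concrete sketch of the segmentation step, the following Python/NumPy fragment (the function name and array handling are ours, not taken from the paper) slices a normalized grayscale image into non-overlapping square patches; each patch can then be labeled according to principles (1)–(3) above:

```python
import numpy as np

def segment_patches(gray_img: np.ndarray, size: int = 40):
    """Split an (H, W) grayscale image into non-overlapping (size, size) patches.

    Returns the stacked patches and their (row, col) block indexes in the grid,
    so predictions can later be mapped back to image positions.
    """
    h, w = gray_img.shape
    patches, grid_pos = [], []
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            patches.append(gray_img[r:r + size, c:c + size])
            grid_pos.append((r // size, c // size))
    return np.stack(patches), grid_pos
```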
A two-dimensional (2-D) CNN model with 2-D convolution filters was used to extract image features by computing the response of the 2-D learned filters on the input image. Suppose the size of the input image is N1 × N1 and there are m1 2-D convolution kernels of size k1 × k1 in the first 2-D Conv layer; the first 2-D Conv layer then generates m1 feature maps of size (N1 − k1 + 1) × (N1 − k1 + 1) (depending on the size of the 2-D convolution kernels) by the 2-D convolution operation. Each feature map is obtained by taking the dot product between the weight matrix ω and the local area at position (x, y), and the value of a neuron $V_{i,j}^{x,y}$ at position (x, y) on the jth feature map in the ith layer can be calculated by Equation (1) [34].
$V_{i,j}^{x,y} = \varphi \left( b_{ij} + \sum_{m=0}^{N} \sum_{k=0}^{K_i - 1} \sum_{p=0}^{P_i - 1} \omega_{ijm}^{kp} \, V_{(i-1)m}^{(x+k)(y+p)} \right)$ (1)
where $\varphi$ denotes the activation function of the ith layer, $b_{ij}$ is an additive bias of the jth feature map at the ith layer, m indexes the connection between the feature maps in the (i − 1)th layer and the current (jth) feature map, $K_i$ and $P_i$ are the height and width of the 2-D convolution kernel, respectively, and $\omega_{ijm}^{kp}$ is the weight for the input $V_{(i-1)m}^{(x+k)(y+p)}$ at offset (k, p) in the 2-D convolution kernel.
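Equation (1) can be transcribed almost literally into code. The following NumPy sketch computes the response of a single neuron; the activation function and all variable names are placeholders, since the paper does not specify them:

```python
import numpy as np

def conv_neuron(prev_maps: np.ndarray, w: np.ndarray, b: float,
                x: int, y: int, phi=np.tanh) -> float:
    """Equation (1): value of neuron V_{i,j}^{x,y} on the jth map of layer i.

    prev_maps: (M, H, W) feature maps of layer i-1
    w:         (M, K_i, P_i) kernel weights omega_{ijm}^{kp}
    b:         additive bias b_{ij}
    phi:       activation function (tanh is an arbitrary placeholder)
    """
    M, K, P = w.shape
    total = b
    for m in range(M):            # sum over connected maps of layer i-1
        for k in range(K):        # kernel height offset k
            for p in range(P):    # kernel width offset p
                total += w[m, k, p] * prev_maps[m, x + k, y + p]
    return phi(total)
```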
In this study, the authors designed CNN structures consisting of one Conv layer with one FC layer (C1 + FC1), two Conv layers with one FC layer (C2 + FC1), three Conv layers with one FC layer (C3 + FC1), four Conv layers with one FC layer (C4 + FC1), and five Conv layers with one FC layer (C5 + FC1) to filter out the appropriate Conv layer settings, and then filtered out the optimized FC layer settings. The size of all convolution kernels in this study is 3 × 3.
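A minimal sketch of how such "Cn + FCn" structures could be assembled is given below (TensorFlow/Keras here, since the paper does not name its framework; the filter counts, pooling layers, optimizer, and loss are assumptions, while the 3 × 3 kernels and the Conv/FC depths follow the text):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(n_conv: int = 3, n_fc: int = 1, input_size: int = 40):
    """Assemble a 'Cn + FCn' net with 3 x 3 kernels for two classes."""
    stack = [tf.keras.Input(shape=(input_size, input_size, 1))]  # grayscale patch
    filters = 16                                  # assumed starting filter count
    for _ in range(n_conv):
        stack.append(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        stack.append(layers.MaxPooling2D())
        filters *= 2
    stack.append(layers.Flatten())
    for _ in range(n_fc - 1):
        stack.append(layers.Dense(64, activation="relu"))  # hidden FC width assumed
    stack.append(layers.Dense(2, activation="softmax"))    # male vs. female/soil
    model = models.Sequential(stack)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

c3_fc1 = build_cnn(n_conv=3, n_fc=1)  # the structure finally adopted in Section 3
```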
The average recognition model accuracy (ARMA) is calculated by Equation (2).
$\mathrm{ARMA} = \dfrac{m}{n} \times 100\%$ (2)

where m is the number of recognized male OSR plants in an image and n is the total number of male OSR plants in the image.
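As a trivial sketch, Equation (2) is a one-line helper:

```python
def arma(recognized_male: int, total_male: int) -> float:
    """Equation (2): average recognition model accuracy, in percent."""
    return recognized_male / total_male * 100.0

print(arma(29, 31))  # illustrative numbers only -> 93.5...
```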

2.3.2. Male OSR Plant Row Feature Point Extraction Method

The normalized images were divided into blocks by rows and columns, and sequence images with a size of 18 pix × 30 pix were obtained by the trained CNN model. Indexes of the male plant regions were marked 1; all others were marked 0. Each pixel area was then expanded by m × m to obtain the index images. The index images were denoised and their edges smoothed to eliminate the morphological influence and boundary loss of the plant rows, so that the male OSR plant row region and the feature points of each normalized image were fully retained, in preparation for the subsequent center line extraction of the male OSR rows.
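A possible implementation of the index-image construction and clean-up is sketched below, assuming OpenCV morphological opening and closing for the denoising and edge smoothing (the paper does not state which operators it used, and the kernel size is an assumption):

```python
import cv2
import numpy as np

def build_index_image(block_labels: np.ndarray, m: int = 40, k: int = 9) -> np.ndarray:
    """Expand 0/1 block labels to pixel scale, then denoise and smooth edges.

    block_labels: 2-D array of CNN outputs (1 = male OSR block, 0 = other)
    m:            expansion factor (each block becomes an m x m pixel area)
    """
    idx = np.kron(block_labels.astype(np.uint8), np.ones((m, m), np.uint8))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
    idx = cv2.morphologyEx(idx, cv2.MORPH_OPEN, kernel)   # remove isolated noise blocks
    idx = cv2.morphologyEx(idx, cv2.MORPH_CLOSE, kernel)  # fill gaps, smooth the boundary
    return idx
```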

2.3.3. Male OSR Plant Row Center Line Extraction and Fitting Method

Based on the above feature regions and points of the male OSR plant rows, the male OSR plant row center line was extracted and recognized as follows: (1) scan the index image line by line from left to right; (2) for the nth line, when the index value changes from 0 to 1, record the current pixel region coordinate as L(xn1, yn1), and when the index value changes from 1 to 0, record the current pixel region coordinate as R(xn2, yn2); (3) calculate the center point coordinate Cn(xn, yn) by Equations (3)–(5); (4) repeat steps (1) to (3), scanning 10 lines per index image; (5) fit the center points of the lines to obtain the center line of the male OSR plant row. Figure 3 shows the method workflow of this study.
$C_n(x_n, y_n) = \dfrac{L(x_{n1}, y_{n1}) + R(x_{n2}, y_{n2})}{2}$ (3)

$x_n = \dfrac{x_{n1} + x_{n2}}{2}$ (4)

$y_n = \dfrac{y_{n1} + y_{n2}}{2}$ (5)
The male OSR plant row center lines were fitted using the least-squares method (LSM) [37,38] and Hough transform [39]. Based on the artificially calibrated center line results, the accuracies of the center lines fitted by the two methods were compared and analyzed [40,41].
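Steps (1)–(5) and the LSM fit can be sketched as follows (NumPy; taking the first 0→1 and the last 1→0 transition per scan line is a simplification for a single male row per image, and a Hough-transform variant could use cv2.HoughLines on the index image instead):

```python
import numpy as np

def extract_center_points(idx: np.ndarray, n_scan: int = 10) -> np.ndarray:
    """Steps (1)-(4): scan rows, record 0->1 (L) and 1->0 (R) transitions,
    and average them into center points per Equations (3)-(5)."""
    h, _ = idx.shape
    centers = []
    for y in np.linspace(0, h - 1, n_scan).astype(int):  # 10 scan lines per image
        row = idx[y]
        rises = np.flatnonzero((row[:-1] == 0) & (row[1:] == 1))  # L(x_n1, y_n1)
        falls = np.flatnonzero((row[:-1] == 1) & (row[1:] == 0))  # R(x_n2, y_n2)
        if rises.size and falls.size:
            centers.append(((rises[0] + falls[-1]) / 2.0, float(y)))  # Eq. (4)
    return np.asarray(centers)

def fit_center_line_lsm(centers: np.ndarray):
    """Step (5): least-squares line x = a*y + b; x as a function of y suits
    the near-vertical row direction in the images."""
    a, b = np.polyfit(centers[:, 1], centers[:, 0], 1)
    return a, b
```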

3. Results

Figure 4 shows images captured by the UAV under cloudy and sunny conditions with different lighting intensities. There was no obvious difference in texture or morphology between the male and female plants, while the color difference was significant owing to the different light reflectance caused by the different sunlight intensities.
Figure 5 shows the sequence image, denoised image, and feature point image of the male OSR plants. Based on the images obtained by the processing methods above, the recognition accuracies and the center line fitting are analyzed as follows.

3.1. Effect of Different Segmented Training Image Size on CNN Model Accuracy

The segmented image size affects the accuracy of the CNN recognition model and the quality of the sequence images. In this study, the images were processed in three size groups (40 pix × 40 pix, 20 pix × 20 pix, and 10 pix × 10 pix) for CNN model construction. The results showed that, as the segmented image size decreased, the ARMA decreased and the loss function value (LFV) generally increased, while the training recognition time (TRT) per image decreased. The detailed training results are shown in Table 3.
From Table 3, it can be seen that the ARMA was highest, at 93.54%, and the LFV minimum, at 0.2059, when the images were segmented by 40 pix × 40 pix, while the ARMA was lowest, at 85.37%, when the images were segmented by 10 pix × 10 pix. In terms of TRT, a longer time was needed to recognize the higher-pixel images; the average TRT was 0.06 s for each 40 pix × 40 pix image. Considering that the ARMA directly affects the reconstruction of the male OSR plant row center line, images segmented by 40 pix × 40 pix were used for training and further analysis in this study, although more recognition time was required.

3.2. Recognition Accuracies of Different CNN Structures

Different combinations of Conv layers and FC layers were tested to optimize the CNN structure. The ARMAs were 90.19%, 93.36%, 93.54%, 93.37%, and 93.19% for the C1 + FC1, C2 + FC1, C3 + FC1, C4 + FC1, and C5 + FC1 structures, respectively, as shown in Figure 6. The recognition accuracies all exceeded 90% for the different numbers of Conv layers with one FC layer, the highest being 93.54% with the C3 + FC1 structure. Therefore, the appropriate FC layer setting was screened under the three-Conv-layer (C3) condition.
The ARMAs were 93.54%, 93.80%, 93.71%, 92.11%, and 92.14% for the C3 + FC1, C3 + FC2, C3 + FC3, C3 + FC4, and C3 + FC5 structures, respectively, as shown in Figure 7. The recognition accuracies again all exceeded 90% for the different numbers of FC layers with three Conv layers (C3). The highest accuracy was 93.80% with the C3 + FC2 structure, followed by 93.71% with C3 + FC3 and 93.54% with C3 + FC1, the combinations with the higher recognition accuracies.

3.3. Robustness Analysis of Different CNN Structures

The quality of a CNN model depends not only on its recognition accuracy but also on its robustness. The LFV was used to evaluate the robustness of the CNN models with different structures. The LFVs of different Conv layers with one FC layer and of different FC layers with three Conv layers are shown in Table 4 and Table 5.
The minimum LFV was 0.2059, with the C3 + FC1 structure. The ARMA of the C3 + FC2 structure was 93.80%, the highest among all the structures, but its LFV was 0.2367: compared with C3 + FC1, the LFV increased by 14.96% while the ARMA increased by only 0.28%. Because the texture and color features of the male and female OSR plants are not obviously different when manually selecting feature points, especially during the flowering period, CNN model robustness was prioritized in this study to accurately obtain the center line of the male OSR plant rows. Therefore, the C3 + FC1 structure, with the minimum LFV, was considered optimal and adopted for center line extraction.

3.4. Male OSR Plant Row Center Line Fitting

Line fitting methods include the LSM, the Hough transform, the vertical projection method, etc. [37,39,41]; the LSM and the Hough transform were used in this study to fit the center line of the male OSR plant rows. To judge the accuracy of the extracted line, the artificially calibrated male OSR plant row center line was used as the evaluation standard. The angle between the artificially calibrated line and the fitted line is defined as the error angle; when the average root mean square error (RMSE) of the angle is greater than 5°, the center line extraction is considered invalid [42]. Two hundred index images containing male OSR plant rows were selected randomly as input images to analyze the accuracy of center line fitting by the LSM and the Hough transform. The results indicated that the LSM was better: its average accuracy was 97.50%, higher than the 85.50% of the Hough transform, and its average single-image processing time was 0.20 s, less than the 0.26 s of the Hough transform (see Table 6). The LSM was therefore preferred for center line fitting.
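The error-angle evaluation can be sketched as follows (assuming lines are parameterized as x = a·y + b, as in the fitting sketch above; the 5° validity threshold follows [42]):

```python
import numpy as np

def error_angle_deg(a_fit: float, a_ref: float) -> float:
    """Angle between the fitted line and the artificially calibrated line."""
    return abs(np.degrees(np.arctan(a_fit) - np.arctan(a_ref)))

def angle_rmse(errors_deg) -> float:
    e = np.asarray(errors_deg, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))

# The center line extraction is considered valid only if
# angle_rmse(per_image_errors) <= 5.0 degrees.
```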
Illumination is an important factor affecting the accuracy of visual navigation. When the UAAS is flying above the field, the illumination is uncontrollable and may vary; therefore, the extracted center line for vision-based navigation should be adaptable to different lighting conditions. To verify the applicability of the proposed method in different operating environments, fifty images collected under each lighting condition (cloudy day and sunny day) were used as input images to recognize the male OSR plants and to extract and fit the center line with the CNN model and the LSM. The results are shown in Table 7. For the images collected under cloudy conditions, the ARMA reached 98%, the total processing time for a single image was 1.18 s, and the average RMSE of the angle was 3.22°. For the images collected under sunny conditions, the ARMA was 94%, the total processing time for a single image was 1.72 s, and the average RMSE of the angle was 1.36°. It can be concluded that the algorithm in this study has an average recognition accuracy of over 94% for male OSR plants, with the average angle RMSE within 3.22°. Figure 8 shows the extraction and fitting results of the male OSR plant row center line under different lighting conditions.

4. Discussion

The segmented image size affected the recognition performance of the CNN model: the ARMA decreased, the LFV generally increased, and the average TRT decreased as the segmented image size decreased. When the images were segmented by 40 pix × 40 pix, the ARMA was highest at 93.54%, the LFV was minimum at 0.2059, and the average TRT was 0.06 s per image. This is consistent with expectations: the higher the resolution, the more feature information a patch contains and the more fully the model can learn the underlying information of the image.

Regarding the effect of the CNN structure on recognition accuracy, the accuracy first ascended and then declined as the structure changed from one to five Conv layers with one FC layer. This may be because the feature regions and points could not be fully extracted with too few Conv layers, while more and more features, such as texture information, were extracted as the number of Conv layers increased; as the number continued to increase, noise information such as soil features was over-extracted, lowering the recognition accuracy. The highest CNN recognition accuracy was 93.54%, with three Conv layers and one FC layer, which is higher than the result of Zhang et al. [38]. A smaller LFV indicates stronger model robustness. The LFVs increased as the number of FC layers increased with three Conv layers, and the minimum LFV was 0.2059 with one FC layer and three Conv layers. Therefore, the CNN structure with three Conv layers and one FC layer (C3 + FC1) was filtered out as optimal.

The LSM was used to fit the center lines of the male OSR rows. The recognition accuracies were all over 94%, and the average angle RMSE was within 3.22°. This average angle RMSE is larger than the results of Zhang et al. [38], probably because the background in reference [38] is a water surface, which is less complex than the OSR plant field in this study. Comparing the results of image recognition and center line fitting under cloudy and sunny conditions, the recognition accuracy of images collected on cloudy days was higher and the recognition and center line fitting time was shorter, but the angle error was larger; for images collected on sunny days, the recognition accuracy was lower and a single image took longer to recognize and fit, but the angle error was smaller. The reason may be that strong light made it more difficult to distinguish the color boundary between the male and female OSR plant rows, whereas the color and texture contrast was stronger when the light was relatively weak, while the images collected under high brightness contained more plant feature information. This is similar to the result of Zeng et al. [40].

5. Conclusions

Finding a new navigation method is important for UAAS applications on OSR, especially for hybrid OSR seed production; it is also a supplement to the current navigation methods, which are mainly based on GNSS. Since the male and female OSR plants differ very little in morphology, color, texture, etc., they are difficult to distinguish visually, especially during the flowering period. This study proposed an image recognition method based on a CNN to recognize the male OSR plants; on this basis, feature points of the male OSR plant rows were extracted and center lines were fitted to provide a reference for UAAS navigation.
The study showed that the CNN-based recognition model could accurately recognize the male OSR plants. Both the CNN structure and the image segmentation size affect the recognition accuracy; the highest recognition accuracy reached 93.54% with the C3 + FC1 structure when the image segmentation size was 40 pix × 40 pix. The LSM and Hough transform were used to fit the male OSR plant row center lines, and the LSM had the advantage in both fitting accuracy and average single-image processing time. Illumination conditions also influence the recognition of the male OSR plants and the center line fitting. Original OSR plant images collected under cloudy and sunny weather conditions were analyzed with the constructed model, and the male OSR plant row center lines were fitted. The recognition accuracy of images collected in cloudy weather was higher and the average single-image processing time shorter, while the average RMSE of the center line fitting angle was larger, more than twice that under sunny weather conditions.

6. Future Work

This study aimed to explore a new navigation method for UAAS applications in hybrid OSR seed production. We developed a CNN-based method for the visual recognition of male OSR plants. The results showed high accuracies of image recognition and center line fitting, which proved the feasibility of the method. Several further studies are still needed.
Plant morphological characteristics are among the most critical factors affecting the quality of image recognition, and prominent morphological characteristics are a prerequisite for accurate and successful recognition. The method developed in this study has only been verified on a single variety of male OSR plants [32]; further verification is needed on more varieties with different plant heights, leaf shapes, colors, textures, etc.
In addition, illumination is an important factor affecting image recognition, as shown in this study. OSR plant images under cloudy and sunny conditions were collected as input images. The quantified light intensity at different levels under different weather conditions may also be an important factor affecting image recognition accuracy, image processing time, and navigation center line fitting; this is worthy of further study.
Finally, as mentioned above, the purpose of this study was to explore a new visual navigation method for UAAS. The applicability of other image processing methods, such as artificial neural networks [29] and YOLOv3 [38], to navigation is also worth studying. Continuous and in-depth studies are needed to develop a route planning method and system based on the extracted center line.

Author Contributions

Conceptualization, Z.S., S.Z. and X.G.; methodology, Z.S., X.G., Y.X. and S.Z.; software, X.G. and S.Z.; validation, X.G., X.C., S.Z., Q.H. and W.W.; formal analysis, Z.S., S.Z. and X.X.; investigation, X.C., Q.H. and W.W.; resources, S.Z. and Y.X.; data curation, Z.S., S.Z. and Y.X.; writing—original draft preparation, Z.S., X.G. and S.Z.; writing—review and editing, Z.S. and S.Z.; visualization, Z.S., X.G. and S.Z.; supervision, X.X.; project administration, X.X.; funding acquisition, X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the China Agriculture Research System of MOF and MARA (Grant No. CARS-12), the National Key Research and Development Program of China (Grant No. 2017YFD0701000), the Agricultural Science and Technology Innovation Project of the Chinese Academy of Agricultural Sciences, Crop Protection Machinery Team (Grant No. CAAS-ASTIP-CPMT), the Technology Innovation Guidance Plan of Gansu Province Science and Technology Project (Grant No. 21CX6NG291), and the Jiangsu Science and Technology Development Plan (Grant No. BE2019305).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, Q.; Ren, T.; Zhang, Y.; Li, X.; Gong, R.; Liu, S.; Fan, X.; Lu, J. Evaluating the application of controlled release urea for oilseed rape on Brassica napus in a regional scale: The optimal usage, yield and nitrogen use efficiency responses. Ind. Crop. Prod. 2019, 140, 111560.
  2. Lu, J. Scientific Fertilization Technology for Oilseed Rape; God Shield Press: Beijing, China, 2010.
  3. Delgado, M.; Felix, M.; Bengoechea, C. Development of bioplastic materials: From rapeseed oil industry by-products to added-value biodegradable biocomposite materials. Ind. Crop. Prod. 2018, 125, 401–407.
  4. Szubert, K. Synthesis of organofunctional silane from rapeseed oil and its application as a coating material. Cellulose 2018, 25, 6269–6278.
  5. Shim, Y.; Falk, K.; Ratanapariyanuch, K.; Reaney, M.J.T. Food and fuel from Canadian oilseed grains: Biorefinery production may optimize both resources. Eur. J. Lipid Sci. Technol. 2017, 119, 1438–7697.
  6. Cong, R.; Wang, Y.; Li, X.; Ren, T.; Lu, J. Differential responses of seed yield and yield components to nutrient deficiency between direct sown and transplanted winter oilseed rape. Int. J. Plant Prod. 2020, 14, 77–92.
  7. Vollmann, J.; Rajcan, I. Oilseed Rape in Oil Crops; Springer: New York, NY, USA, 2009; pp. 91–126.
  8. Requier, F.; Odoux, J.F.; Tamic, T.; Moreau, N.; Henry, M.; Decourtye, A.; Bretagnolle, V. Honey bee diet in intensive farmland habitats reveals an unexpectedly high flower richness and a major role of weeds. Ecol. Appl. 2015, 25, 881–890.
  9. Zhang, S.; Cai, C.; Li, J.; Sun, T.; Liu, X.; Tian, Y.; Xue, X. The airflow field characteristics of the unmanned agricultural aerial system on oilseed rape (Brassica napus) canopy for supplementary pollination. Agronomy 2021, 11, 2035.
  10. Zhang, S.; Xue, X.; Chen, C.; Sun, Z.; Sun, T. Development of a low-cost quadrotor UAV based on ADRC for agricultural remote sensing. Int. J. Agric. Biol. Eng. 2019, 12, 82–87.
  11. Wang, X.; Wang, M.; Wang, S.; Wu, Y. Extraction of vegetation information from visible unmanned aerial vehicle images. Trans. Chin. Soc. Agric. Eng. 2015, 31, 152–159.
  12. Zhang, S.; Qiu, B.; Xue, X.; Sun, T.; Peng, B. Parameters optimization of crop protection UAS based on the first industry standard of China. Int. J. Agric. Biol. Eng. 2020, 13, 29–35.
  13. Xue, X.; Lan, Y.; Sun, Z.; Chang, C.; Hoffmann, W. Develop an unmanned aerial vehicle based automatic aerial spraying system. Comput. Electron. Agric. 2016, 128, 58–66.
  14. Ahmad, F.; Qiu, B.; Dong, X.; Ma, J.; Huang, X.; Ahmed, S.; Chandio, F.A. Effect of operational parameters of UAV sprayer on spray deposition pattern in target and off-target zones during outer field weed control application. Comput. Electron. Agric. 2020, 172, 105350.
  15. Huang, X.; Zhang, S.; Luo, C.; Li, W.; Liao, Y. Design and experimentation of an aerial seeding system for rapeseed based on an air-assisted centralized metering device and a multi-rotor crop protection UAV. Appl. Sci. 2020, 10, 8854.
  16. Cai, G.; Dias, J.; Seneviratne, L. A survey of small-scale unmanned aerial vehicles: Recent advances and future development trends. Unmanned Syst. 2014, 2, 175–199.
  17. Wang, G.; Wang, Q.; Luo, J. Positioning design of plant protection unmanned aerial vehicle based on Beidou navigation. Intell. Comput. Appl. 2017, 7, 46–49.
  18. Wang, Y. Research on autonomous navigation of agricultural UAV based on Beidou. Master's Thesis, Northwest A & F University, Xi'an, China, 16 May 2017.
  19. Xu, B.; Chen, L.; Tan, Y.; Xu, M. Route planning algorithm and verification based on UAV operation path angle in irregular area. Trans. Chin. Soc. Agric. Eng. 2015, 31, 173–178.
  20. Huang, X.; Zhang, L.; Tang, L.; Li, X.; He, X. Path planning for autonomous operation of drone in fields with complex boundaries. Trans. Chin. Soc. Agric. Mach. 2020, 51, 34–42.
  21. Lan, Y.; Wang, L.; Zhang, Y. Application and prospect on obstacle avoidance technology for agricultural UAV. Trans. Chin. Soc. Agric. Eng. 2018, 34, 104–113.
  22. Zhou, J.; He, Y. Research progress on navigation path planning of agricultural machinery. Trans. Chin. Soc. Agric. Mach. 2021, 52, 1–14.
  23. Cao, G.; Li, Y.; Nan, F.; Liu, D.; Chen, C.; Zhang, J. Development and analysis of plant protection UAV flight control system and route planning research. Trans. Chin. Soc. Agric. Mach. 2020, 51, 1–16.
  24. Meng, Q.; Qiu, R.; He, J.; Zhang, M.; Ma, X.; Liu, G. Development of agricultural implement system based on machine vision and fuzzy control. Comput. Electron. Agric. 2015, 112, 128–138.
  25. Yang, L.; Noguchi, N. Human detection for a robot tractor using omni-directional stereo vision. Comput. Electron. Agric. 2012, 89, 116–125.
  26. Zhang, M.; Ji, Y.; Li, S.; Cao, R.; Xu, H.; Zhang, Z. Research progress of agricultural machinery navigation technology. Trans. Chin. Soc. Agric. Mach. 2020, 51, 1–18.
  27. Peng, S.; Kan, Z.; Li, J. Extraction of visual navigation directrix for harvesting operation in short-stalked and close-planting jujube orchard. Trans. Chin. Soc. Agric. Eng. 2017, 33, 45–52.
  28. Liu, Y.; Noguchi, N.; Liang, L. Development of a positioning system using UAV-based computer vision for an airboat navigation in paddy field. Comput. Electron. Agric. 2019, 162, 126–133.
  29. Maravall, D.; Lope, J.D.; Fuentes, J.P. Vision-based anticipatory controller for the autonomous navigation of an UAV using artificial neural networks. Neurocomputing 2015, 151, 101–107.
  30. Tang, Z.; Cunha, R.; Cabecinhas, D.; Hanel, T.; Silvstre, C. Quadrotor going through a window and landing: An image-based visual servo control approach. Control Eng. Pract. 2021, 112, 104827.
  31. Opromolla, R.; Fasano, G. Visual-based obstacle detection and tracking, and conflict detection for small UAS sense and avoid. Aerosp. Sci. Technol. 2021, 119, 107167.
  32. Available online: http://www.moa.gov.cn/govpublic/nybzzj1/201710/t20171011_5837449.htm (accessed on 1 November 2021).
  33. Ciocca, G.; Napoletano, P.; Schettini, R. CNN-based features for retrieval and classification of food images. Comput. Vis. Image Und. 2018, 176, 70–77.
  34. Liu, Y.; Pu, H.; Sun, D.W. Efficient extraction of deep image features using convolutional neural network (CNN) for applications in detecting and analysing complex food matrices. Trends Food Sci. Tech. 2021, 113, 193–204.
  35. Zhou, L.; Zhang, C.; Liu, F.; Qiu, Z.; He, Y. Application of deep learning in food: A review. Compr. Rev. Food Sci. Food Saf. 2019, 18, 12492.
  36. Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. Deep learning classifiers for hyperspectral imaging: A review. ISPRS J. Photogramm. 2019, 158, 279–317.
  37. Si, Y.; Jiang, G.; Liu, G.; Gao, R.; Liu, Z. Early stage crop rows detection based on least square method. Trans. Chin. Soc. Agric. Mach. 2010, 41, 163–167, 185.
  38. Zhang, Q.; Wang, J.; Li, B. Extraction method for centerlines of rice seedlings based on YOLOv3 target detection. Trans. Chin. Soc. Agric. Mach. 2020, 51, 34–43.
  39. Sun, K.; Zhang, T. A new GNSS interference detection method based on rearranged Wavelet–Hough transform. Sensors 2021, 21, 1714.
  40. Zeng, H.; Lei, J.; Tao, J.; Zhang, W.; Liu, C. Navigation line extraction method for combine harvester under low contrast conditions. Trans. Chin. Soc. Agric. Eng. 2020, 36, 18–25.
  41. Guan, Z.; Chen, K.; Ding, Y.; Wu, C.; Liao, Q. Visual navigation path extraction method in rice harvesting. Trans. Chin. Soc. Agric. Mach. 2020, 51, 19–28.
  42. Yang, Y.; Zhang, B.; Zha, J.; Wen, X.; Chen, L.; Zhang, T.; Dong, X.; Yang, X. Real-time extraction of navigation line between corn rows. Trans. Chin. Soc. Agric. Eng. 2020, 36, 162–171.
Figure 1. OSR plant image collecting in the test field: (a) static manual holding photography; (b) low altitude hovering photography.
Figure 2. A typical CNN architecture for OSR plant recognition and analysis.
Figure 3. The workflow for OSR plant recognition model construction and center line fitting.
Figure 4. The OSR plants captured by the UAV: (a) under cloudy day conditions; (b) under sunny day conditions.
Figure 5. The processed images of the male OSR plant feature region: (a) the sequence image; (b) the denoised image; (c) the feature point image.
Figure 6. CNN recognition accuracy analysis of different Conv layers with one FC layer: (a) C1 + FC1 structure; (b) C2 + FC1 structure; (c) C3 + FC1 structure; (d) C4 + FC1 structure; (e) C5 + FC1 structure.
Figure 7. CNN recognition accuracy analysis of different FC layers with three Conv layers: (a) C3 + FC1 structure; (b) C3 + FC2 structure; (c) C3 + FC3 structure; (d) C3 + FC4 structure; (e) C3 + FC5 structure.
Figure 8. Extraction and fitting results of the male OSR plant row center line under different lighting conditions: (a) cloudy day; (b) sunny day. Note: the red and blue solid lines are the male OSR row center lines extracted and fitted by the method in this paper.
Table 1. The OSR plant characteristics and weather conditions.

Test time: 1–10 March 2021
Growth period: flowering
OSR plant mean height (cm): 100 ± 5 (male), 80 ± 5 (female)
Width of OSR plant line (m): 0.25 (male), 2.45 (female)
Row proportion of male to female: 4:6
Mean wind speed (m/s): 0.50 ± 0.20
Mean temperature (°C): 13.85 ± 0.30
Table 2. The main technical parameters of the DJI Phantom 4.

Total weight: 1375 g
Field of view (FOV): horizontal 60°, vertical ±27° (front and rear); 70° front and rear, 50° left and right (down)
Camera sensor: 1-inch CMOS, 20 million effective pixels
Lens: 8.8 mm/24 mm, f/2.8–f/11
Focus and aperture: automatic
Image format and memory: JPEG, secure digital (SD) memory card
Shutter speed: 8–1/8000 s (electronic shutter, remote-control trigger); 8–1/2000 s (mechanical shutter)
App for mobile devices: DJI GO 4
Communication frequency: 2.4 GHz and 5.8 GHz
Table 3. The recognition model training results based on different image sizes.

Segmented Size (pix × pix)   Training Accuracy (%)   ARMA (%)   LFV      TRT per Image (s)
40 × 40                      100                     93.54      0.2059   0.06
20 × 20                      100                     90.86      0.3512   0.03
10 × 10                      100                     85.37      0.3222   0.02
Table 4. LFVs of different Conv layers with one FC layer.

CNN Structure   LFV
C1 + FC1        0.2918
C2 + FC1        0.3655
C3 + FC1        0.2059
C4 + FC1        0.3227
C5 + FC1        0.3601
Table 5. LFVs of different FC layers with three Conv layers.

CNN Structure   LFV
C3 + FC1        0.2059
C3 + FC2        0.2367
C3 + FC3        0.2561
C3 + FC4        0.2754
C3 + FC5        0.3011
Table 6. Comparison of the two line-fitting methods.

Method            Image Quantity   ARMA (%)   Average Single-Image Processing Time (s)
LSM               200              97.50      0.20
Hough transform   200              85.50      0.26
Table 7. Recognition and fitting results under different lighting conditions.

Weather Condition   Image Quantity   ARMA (%)   Average Single-Image Processing Time (s)   Average RMSE of Angle (°)
Cloudy day          50               98         1.18                                       3.22
Sunny day           50               94         1.72                                       1.36
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
