Article

Influence of Spatial Scale Effect on UAV Remote Sensing Accuracy in Identifying Chinese Cabbage (Brassica rapa subsp. Pekinensis) Plants

1
School of Geography & Environmental Science, School of Karst Science, Guizhou Normal University, Guiyang 550025, China
2
State Key Laboratory Incubation Base for Karst Mountain Ecology Environment of Guizhou Province, Guiyang 550025, China
3
State Engineering Technology Institute for Karst Desertification Control, Guiyang 550025, China
*
Author to whom correspondence should be addressed.
Agriculture 2024, 14(11), 1871; https://doi.org/10.3390/agriculture14111871
Submission received: 12 September 2024 / Revised: 19 October 2024 / Accepted: 22 October 2024 / Published: 23 October 2024
(This article belongs to the Special Issue Application of UAVs in Precision Agriculture—2nd Edition)

Abstract
The exploration of the impact of different spatial scales on the low-altitude remote sensing identification of Chinese cabbage (Brassica rapa subsp. Pekinensis) plants provides an important theoretical reference for balancing plant identification accuracy with work efficiency. This study focused on Chinese cabbage plants during the rosette stage; RGB images were obtained by drones at different flight heights (20 m, 30 m, 40 m, 50 m, 60 m, and 70 m). Spectral sampling analysis was conducted on different ground backgrounds to assess their separability. From four vegetation indices commonly used for crop recognition, the Excess Green Index (ExG), Red Green Ratio Index (RGRI), Green Leaf Index (GLI), and Excess Green Minus Excess Red Index (ExG-ExR), the optimal index was selected for extraction. Image processing methods such as frequency domain filtering, threshold segmentation, and morphological filtering were used to reduce the impact of weed and mulch noise on recognition accuracy. The recognition results were vectorized and combined with field data for the statistical verification of accuracy. The research results show that (1) the ExG can effectively distinguish between soil, mulch, and Chinese cabbage plants; (2) images of different spatial resolutions differ in the optimal type of frequency domain filtering and convolution kernel size, and the threshold segmentation effect also varies; (3) as the spatial resolution of the imagery decreases, the optimal window size for morphological filtering also decreases accordingly; and (4) at flight heights of 30 m to 50 m, the recognition effect is best, achieving a balance between recognition accuracy and coverage efficiency.
The method proposed in this paper is beneficial for agricultural growers and managers in carrying out precision planting management and planting structure optimization analysis and can aid in the timely adjustment of planting density or layout to improve land use efficiency and optimize resource utilization.

1. Introduction

Chinese cabbage is widely cultivated globally, with the majority of planting concentrated in the Asian region. China stands as the world’s largest producer of Chinese cabbage, with the sown area accounting for approximately 15% of the country’s total vegetable sown area, making it one of the vegetables with the largest cultivation area [1]. Its growth requires a mild climate, reasonable planting density, timely pest and disease management, the avoidance of continuous cropping, and a suitable planting season, all of which, together, ensure healthy growth and good yield. Precision agriculture is a fundamental way of achieving high efficiency and sustainable agriculture with a low input [2,3]. By conducting precise identification of Chinese cabbage plants during the rosette stage, it is possible to further optimize resource utilization, enhance crop yield and quality, and reduce production costs. Currently, research has used deep learning models [4,5,6,7], machine learning methods [8,9,10], and multispectral imaging systems [11] for information extraction, recognition, and growth monitoring in Chinese cabbage. However, different unmanned aerial vehicle (UAV) flight heights and various growth stages of crops lead to different remote sensing identification characteristics in Chinese cabbage, and this differentiation is particularly crucial to precision agriculture. The aforementioned remote sensing identification technologies generally face issues such as complex design and training processes and high data redundancy in feature extraction methods. These issues may reduce the efficiency of data analysis and application, affecting the implementation of precision agriculture. Currently, there is a lack of research on the impact of multi-scale effects in low-altitude remote sensing scenarios on the remote sensing identification of Chinese cabbage plants. 
It is necessary to explore the influence of identification features at different flight heights and how multi-scale effects affect recognition accuracy, to provide more reliable data support for precision agriculture and assist farmers in achieving more efficient, environmentally friendly, and economical agricultural production.
In the context of precision agriculture, the application of UAV remote sensing technology is important in crop identification and field management. UAV remote sensing technology, with its strong timeliness, high spatial resolution, reusability, low application cost [12], and flexible and convenient operation [13], has become an important means of crop identification. Drones equipped with various sensors such as visible light, hyperspectral, multispectral, thermal infrared, and LiDAR capture images and spectral information of crops from the air. These data are used to analyze the growth conditions of crops, vegetation indices, and other key information, enabling the precise identification and monitoring of crops [14,15,16,17,18,19,20,21]. The application of UAV remote sensing in the precise identification of crops provides important technological support for modern agricultural production. This technology enables the identification, information extraction, classification, and monitoring of the growth status of different crop objects, thereby achieving precise fertilization, irrigation, and pest and disease control. It not only effectively enhances agricultural production efficiency and crop quality but also promotes the rational use of resources and sustainable development, laying the foundation for an efficient and environmentally friendly modern agricultural model. In terms of identification targets, UAV remote sensing technology can be used to identify different types of crops. Huang et al. [22] identified and calculated the different growth states of Chinese yam plants based on UAV visible light imagery. Zhou et al. [23] and Huang et al. [24] used low-altitude remote sensing technology with UAVs to achieve the identification and extraction of dragon fruit plants. Li et al. [25] and Huang et al. [26] achieved the identification of tobacco plants. In the field of crop information extraction, Yang et al. 
[27] conducted the identification and extraction of winter wheat lodging area information based on UAV remote sensing imagery. In the area of crop classification, Böhler et al. [28] used UAV data to achieve crop recognition and classification in a heterogeneous arable landscape. In crop monitoring, Kim et al. [29] developed a crop growth estimation model based on UAV visible light images to quantify various biophysical parameters of Chinese cabbage throughout the growing season for a quantitative analysis of its growth status.
The type and performance of sensors carried by UAVs, the flight altitude of UAVs, the growth stages of crops, and data processing techniques have all become factors that affect crop identification [30]. Among them, the flight altitude of UAVs directly affects the spatial resolution of the sensors that they carry, and the choice of UAV flight altitude is closely related to the scale effect [31]. At different flight altitudes, due to changes in scale, UAV remote sensing technology may observe different surface features and patterns. A lower flight altitude can provide a higher spatial resolution, allowing for a more detailed observation of surface features, including the identification of crop plants. However, as the flight altitude increases, the spatial resolution decreases, which may result in the surface features becoming blurred, affecting the ability to identify crop details. In crop identification, the spatial scale effect manifests as differences in the expression of crop features in remote sensing imagery at different scales [32,33,34]. Therefore, studying the impact of the spatial scale effect on crop identification is of great significance in improving the accuracy and precision of crop identification. Ma et al. [35] improved the U-Net on the basis of multi-scale input and attention mechanisms, effectively and accurately identifying Chinese cabbage and weeds, providing technical support for the study of crop identification in terms of scale effects; Zhang et al. [36] explored how to select the appropriate spatial scale to improve the classification accuracy of crops by conducting multi-scale quantitative analysis of image classification features through wavelet transformation; and Lu et al. [32] proposed a multi-scale feature fusion semantic segmentation model MSSNet for crop identification, achieving refined image classification. Although these studies have achieved good results on specific datasets, they may require more computational resources. 
In practical applications, cost effectiveness needs to be considered, which could be a limiting factor. Additionally, the impact of various real-field conditions, such as different occlusions and crop density, is also an important consideration for the model.
Although UAV visible light remote sensing has achieved notable results in agriculture, most research still focuses on field crops such as wheat, soybeans, cotton, corn, and rice, with relatively little work on forestry, fruit, vegetable, and tuber crops. Moreover, the surface features and patterns observed by UAV remote sensing differ with scale. A lower flight altitude provides more detailed information about crop plants but a limited observation range; conversely, a higher flight altitude covers a larger surface area at the expense of the ability to observe crop details. This trade-off must be resolved according to the specific monitoring purpose and requirements to find an appropriate flight altitude. Therefore, it is necessary to further explore whether multi-scale imagery can effectively improve recognition accuracy and to reveal the impact of scale effects on the identification of leafy crops. It is also necessary to determine whether a specific altitude range exists that balances recognition accuracy with work efficiency, ensuring an optimal flight altitude range for high recognition accuracy.
Based on existing research, this article focuses on Chinese cabbage in the rosette stage as the research subject, utilizing visible light images captured by drones for analysis. Images were acquired from different flight heights (20 m, 30 m, 40 m, 50 m, 60 m, and 70 m), and spectral feature analysis was conducted for different ground object backgrounds within the study area to explore the separation characteristics between different types of ground objects. Four widely used color indices were selected, including the Excess Green Index (ExG), Red Green Ratio Index (RGRI), Green Leaf Index (GLI), and Excess Green Minus Excess Red Index (ExG-ExR), to further determine the best-performing index. On this basis, a series of image processing techniques were applied, including frequency domain filtering, threshold segmentation using the Otsu method, and morphological filtering, to compare the performance of different processing methods in terms of crop identification effectiveness and accuracy. This study aimed to enhance adaptability in scenarios with limited resources or the need for flexible deployment by reducing costs and assisting growers and managers in optimizing planting structure analysis. This will enable them to adjust planting density and layout, improve land use efficiency, optimize resource allocation, and achieve the effective identification and monitoring of crop plants. The main objectives include (1) revealing the impact of spatial scale effects on the accuracy of UAV remote sensing in identifying Chinese cabbage plants and exploring the optimal flight height range that balances recognition accuracy with work efficiency and (2) analyzing the influence of different image processing methods at different scales on the recognition accuracy.

2. Materials and Methods

2.1. Study Area

The study area is located in the Vegetable Industry Demonstration Park, Xixiu District, Anshun City, Guizhou Province, with central coordinates of 26°23′9″ N and 106°7′51″ E. This region experiences distinct seasons, with mild winters and summers without extreme heat. The climate is temperate with relatively low solar radiation, and the average temperature ranges from 13.2 to 15 °C. Although rainfall is abundant, its distribution is uneven, with more precipitation in the northwest and less in the east, leading to a long-term average annual rainfall of 1250–1400 mm. Based on national agricultural climate classification, the area falls within the “Eastern Monsoon Agricultural Climate Zone” of the country. The natural vegetation types and soil varieties in the area are diverse, including seven soil types, such as paddy soil, limestone soil, yellow soil, and yellow-brown soil. The main soil utilization type is paddy soil, with a larger area of cultivated land and smaller areas of pasture and forest land. The growth of Chinese cabbage requires fertile and well-drained soil and an appropriate amount of water and is well suited to crop rotation. The planting time varies by region and variety, with spring and autumn being the typical seasons for planting. However, some varieties are adapted for cultivation in the summer or winter. Currently, in the study area, Chinese cabbage is commonly transplanted in the autumn and harvested in the winter. There are many varieties suitable for cultivation but few varieties suitable for spring and summer planting, and white plastic mulch is widely used in the planting process. As shown in Figure 1, the study area consists of images collected by a UAV-mounted visible light flight platform (DJI Technology Co., Ltd., Shenzhen, China), featuring different growth conditions and background characteristics.

2.2. Data Acquisition and Processing

The rosette stage is a critical phase in the growth of Chinese cabbage, in which leaves overlap and form a more compact plant structure, with a decisive impact on the final yield and quality of the cabbage. Weather conditions and the intensity and uniformity of light can affect the image features and quality of the cabbage plants, producing shadowed areas of different sizes in the images. The study area is characterized by frequent clouds and rain, with a short sunshine duration. To avoid the loss of textural feature information in the images due to various climatic factors, the image acquisition time was set during the rosette stage of cabbage growth (30 October 2021, at 15:00). The weather was cloudy with good visibility, and the wind was at force 1, which met the safety requirements for drone operations. This study used the DJI Phantom 4 Pro V2.0 quadcopter as the image data collection platform. Its compact size and flexibility make it suitable for data collection in mountainous environments with steep terrain and fragmented plots, as it does not require a specialized landing site. The drone is equipped with a 20-megapixel Hasselblad camera with a 1-inch CMOS sensor and a photo resolution of 5472 × 3648 pixels. The lens has an equivalent focal length of 28 mm, and the maximum wind resistance is level 5. The sensor captures light in the three primary colors, red, green, and blue, thereby creating a color image. The center wavelength of the red band (Red, R) is approximately 650 nm, with a bandwidth of about 16 nm; the wavelength range of the green band (Green, G) is 560 nm ± 16 nm; and the center wavelength of the blue band (Blue, B) is around 450 nm, with a bandwidth of approximately 16 nm. During the flight mission, waypoint hovering was used to ensure remote sensing image quality, with 80% forward overlap and 75% side overlap, resulting in clear and high-quality images.
The Pix4Dmapper 4.4.12 software was used for image preprocessing, including initialization, spatial correction, and point cloud densification. This process corrects deformations, distortions, blurriness, and noise caused by drone vibrations, and performs image enhancement, color normalization, cropping, and reconstruction to obtain visible light remote sensing images of the study area at different flight altitudes.

2.3. Research Method

Based on visible light images acquired from drones at different altitudes, preprocessing is carried out. Firstly, based on the geometric size, color, and texture characteristics of the Chinese cabbage plants, as well as the spectral analysis of their good separability from the mulch film and soil, four color indices widely used for extracting vegetation information are compared: the ExG [37], RGRI [37], GLI [38], and ExG-ExR [39]. The optimal color index is selected. Secondly, the results based on the optimal color index are enhanced using frequency domain filtering methods, preserving the high-frequency information of plants. Subsequently, threshold segmentation (Otsu) is employed on the enhanced image to extract the optimal threshold, compensating for the misclassification and omissions that may arise from using a single color index. To further improve recognition accuracy, the most suitable morphological filtering operation is selected to remove weed noise. Finally, in combination with field investigations, the results of images captured at altitudes of 20 m, 30 m, 40 m, 50 m, 60 m, and 70 m are statistically analyzed, and the algorithm performance is validated to achieve the identification and extraction of Chinese cabbage plants and quantitatively assess the feasibility of the method. The technical route is illustrated in Figure 2.

2.3.1. Vegetation Color Index

In combination with the sensors carried by the UAV remote sensing platform, this paper selects four commonly used color indices in UAV remote sensing imagery, namely, the ExG, RGRI, GLI, and ExG-ExR, for color index method comparison. Among them, the ExG is often used in the field of agricultural identification for the automatic separation of crops from soil [40], and studies have shown that the use of the Excess Green Index can effectively separate crops from soil [41,42]. The calculation formulas for the four color indices are as follows:
ExG = 2 × G − R − B
RGRI = R / G
GLI = (2 × G − R − B) / (2 × G + R + B)
ExG − ExR = 3 × G − 2.4 × R − B
In the formula, R represents the red band, G represents the green band, and B represents the blue band.
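As an illustration of the four formulas above, a minimal NumPy sketch could be used (this is not the authors' code; the function name and the float RGB array layout are assumptions):

```python
import numpy as np

def color_indices(img):
    """Compute ExG, RGRI, GLI, and ExG-ExR from an RGB image.

    `img` is assumed to be a float array of shape (H, W, 3) holding
    the R, G, and B bands; a small epsilon guards the two divisions.
    """
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-6
    exg = 2 * G - R - B                                      # Excess Green Index
    rgri = R / np.maximum(G, eps)                            # Red Green Ratio Index
    gli = (2 * G - R - B) / np.maximum(2 * G + R + B, eps)   # Green Leaf Index
    exg_exr = 3 * G - 2.4 * R - B                            # ExG - ExR (ExR = 1.4R - G)
    return exg, rgri, gli, exg_exr
```

Thresholding any of these index images, for example with the Otsu method described in Section 2.3.3, then yields a vegetation mask.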

2.3.2. Image Frequency Domain Filtering

The image calculated by the color index contains noise that interferes with the recognition of the Chinese cabbage plants, and this noise must be suppressed to enhance the edge and texture information of the plants. Therefore, frequency domain filtering is used to remove the low-frequency information in the image and retain the high-frequency information while suppressing the slowly changing background, which is beneficial for segmenting and recognizing the target. In this study, high-pass processing is used to eliminate the low-frequency components of the image and enhance the texture and edge information of the plants, and Gaussian high-pass processing is used to enhance detail by boosting the high-frequency components and attenuating the low-frequency components, thereby better displaying small features and thin lines. By comparing the recognition effects of the two filters during image processing, the optimal filter for recognizing Chinese cabbage plants in UAV visible images at different scales is explored.
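A minimal sketch of Gaussian high-pass filtering in the frequency domain is shown below (illustrative only; the function name and the `sigma` cutoff parameter are our assumptions, not values from the paper):

```python
import numpy as np

def gaussian_highpass(gray, sigma=30.0):
    """Gaussian high-pass filtering of a 2-D image in the frequency domain.

    `gray` is a 2-D float array (e.g. an ExG index image); `sigma`,
    in frequency-domain pixels, controls the cutoff. The DC component
    (slowly varying background) is fully suppressed.
    """
    F = np.fft.fftshift(np.fft.fft2(gray))        # centre the spectrum
    h, w = gray.shape
    y, x = np.ogrid[:h, :w]
    d2 = (y - h / 2) ** 2 + (x - w / 2) ** 2      # squared distance to centre
    H = 1.0 - np.exp(-d2 / (2.0 * sigma ** 2))    # Gaussian high-pass mask
    return np.fft.ifft2(np.fft.ifftshift(F * H)).real
```

A plain (ideal) high-pass filter would instead use a hard mask `H = (d2 > cutoff**2)`, which passes more ringing artifacts than the smooth Gaussian transition.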

2.3.3. Image Thresholding Segmentation

Image segmentation techniques can divide digital images into multiple groups of pixels [43], with the primary goal of simplifying or converting the corresponding image samples into images that are easier to analyze. Among these, the thresholding method is one of the most widely used image segmentation techniques due to its simplicity [44]. Typically, thresholding techniques can be categorized into global thresholding and local thresholding. The Otsu thresholding method used in this study, also known as the maximum inter-class variance method, is one of the most widely applied threshold algorithms due to its robustness and adaptability. The Otsu method is an extraction technique that automatically selects a global threshold by statistically analyzing the characteristics of the entire histogram [45,46]. It divides the image into two classes (target and background) based on the grayscale characteristics of the image. The larger the inter-class variance between the two classes, the greater the difference between the two parts, and the threshold that maximizes the inter-class variance is optimal [47]. Its calculation process is simple and stable [45], capable of fast arithmetic, and has a small probability of misclassification [48], which is suitable for recognizing and extracting Chinese cabbage targets. Defining the image I(x, y) and the threshold value T, the calculation formulas [49] are as follows:
W0 = N0 / (M × N)
W1 = N1 / (M × N)
N0 + N1 = M × N
W0 + W1 = 1
μ = W0 × μ0 + W1 × μ1
σ = W0 × (μ0 − μ)² + W1 × (μ1 − μ)²
In the formulas, W0 is the proportion of target pixels in the entire image, with an average gray level of μ0, and N0 is the number of pixels with gray levels less than T; W1 is the proportion of background pixels, with an average gray level of μ1, and N1 is the number of pixels with gray levels greater than or equal to T; the image size is M × N. μ is the overall average gray level of the image, and σ represents the inter-class variance. The threshold T that maximizes σ is the optimal threshold.
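The threshold selection described above can be sketched as follows (a direct, unoptimized transcription of the formulas; 8-bit gray levels and the function name are assumptions):

```python
import numpy as np

def otsu_threshold(gray):
    """Select the global threshold T that maximizes inter-class variance.

    `gray` is assumed to be an integer image with values in [0, 255].
    Returns the first T achieving the maximum variance.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                     # gray-level probabilities
    best_t, best_sigma = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class proportions W0, W1
        if w0 == 0 or w1 == 0:
            continue                          # skip empty classes
        mu0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        mu = w0 * mu0 + w1 * mu1                       # overall mean
        sigma = w0 * (mu0 - mu) ** 2 + w1 * (mu1 - mu) ** 2
        if sigma > best_sigma:
            best_t, best_sigma = t, sigma
    return best_t
```

In practice a vectorized library routine (e.g. OpenCV's `cv2.threshold` with the `THRESH_OTSU` flag) would be used instead of this explicit loop.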

2.3.4. Morphological Filtering

When target extraction is performed on UAV visible light images, a small number of scattered weeds and fallen leaves are found in the field; these are difficult to separate directly because their color characteristics are similar to those of Chinese cabbage, which introduces noise into the segmented image. Therefore, the segmented image needs further processing to improve recognition accuracy. The opening filter in the morphological method is used to smooth the edge information of the image, eliminate isolated pixels, and sharpen the maximum and minimum value information in the image; the closing filter is used to smooth the edge information of the image, fuse narrow slits and slender parts, eliminate holes in the image, and fill gaps in the image edges. By comparing the edge smoothness, image noise, and the number of holes produced by the two filters in the recognition process, the optimal morphological filter for Chinese cabbage plant recognition is explored.
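For illustration, the two filters can be sketched with plain NumPy on a binary mask (a hypothetical, unoptimized implementation with a square structuring element; the function names are ours, and zero padding shrinks objects touching the image border):

```python
import numpy as np

def _erode(img, k):
    """Binary erosion with a k-by-k square structuring element (zero-padded)."""
    pad = k // 2
    p = np.pad(img, pad, constant_values=0)
    out = np.ones_like(img)
    for dy in range(k):                      # AND over all window offsets
        for dx in range(k):
            out &= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def _dilate(img, k):
    """Binary dilation with a k-by-k square structuring element (zero-padded)."""
    pad = k // 2
    p = np.pad(img, pad, constant_values=0)
    out = np.zeros_like(img)
    for dy in range(k):                      # OR over all window offsets
        for dx in range(k):
            out |= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def opening(img, k=3):
    """Erosion then dilation: removes isolated noise and thin bridges."""
    return _dilate(_erode(img, k), k)

def closing(img, k=3):
    """Dilation then erosion: fills small holes and narrow gaps."""
    return _erode(_dilate(img, k), k)
```

Production code would normally use a library routine such as `scipy.ndimage.binary_opening`; the loops above only make the erode/dilate composition explicit.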

2.3.5. Precision Evaluation

The applicability of this method was quantitatively evaluated by referring to previous research methods [50,51,52] and combining the characteristics of the study area. FP, TP, and FN are defined as follows in the identification and extraction results: FP represents the cabbage plants that are misclassified, TP represents the Chinese cabbage plants that are correctly classified, and FN represents the cabbage plants that are not extracted. The evaluation indexes of the branching factor (BF), missing factor (MF), detection rate (DP), and completeness (QP) were calculated as follows [22]:
BF = FP / TP
MF = FN / TP
DP = TP / (TP + FN) × 100%
QP = TP / (TP + FN + FP) × 100%
In this context, BF is directly proportional to the number of misclassifications, while MF is inversely proportional to the number of correct classifications. DP represents the percentage of plants correctly identified; QP reflects the quality of the extraction, with higher values indicating better extraction results. Overall, the smaller the branch factor and the omission factor, the higher the detection rate and completeness, indicating a better extraction performance [32].
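The four indicators translate directly into code; a trivial sketch (the function and argument names are ours):

```python
def accuracy_metrics(tp, fp, fn):
    """Branching factor, missing factor, detection rate, and completeness.

    tp, fp, fn are counts of correctly classified, misclassified,
    and missed Chinese cabbage plants, respectively.
    """
    bf = fp / tp                      # BF: misclassifications per correct plant
    mf = fn / tp                      # MF: missed plants per correct plant
    dp = tp / (tp + fn) * 100         # DP: detection rate, %
    qp = tp / (tp + fn + fp) * 100    # QP: completeness (quality), %
    return bf, mf, dp, qp
```

For example, 90 correct detections with 10 misclassifications and 10 misses give DP = 90% but QP of only about 81.8%, since QP penalizes both error types.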

3. Results and Analysis

3.1. Image RGB Value Extraction and Analysis

Different types of land cover have distinct characteristic values in the red (R), green (G), and blue (B) bands [53]. In the visible light spectrum, plants have a high reflectance of green light while strongly absorbing red and blue light. Based on this characteristic, the combination of the red, green, and blue bands can enhance the distinction between vegetation and other land features [54]. The Chinese cabbage in the study area is grown under white plastic film, and the soil between the rows contains unrecovered film fragments, withered grass, and leaves left from field management. To obtain the spectral characteristics of the relevant ground objects from the visible light images captured by drones and to further analyze the separability and indicative value among these objects, and taking into account the different growth conditions and background characteristics, a uniform sampling length was used to collect spectral samples of Chinese cabbage, soil, mulch film (with and without water droplets), Chinese cabbage–soil, and Chinese cabbage–mulch film in the study area. A spectral feature map composed of the three channels (red, green, and blue) is constructed, as shown in Figure 3.
Comparative analysis of the reflection characteristics of the soil, plastic film, and Chinese cabbage plants in the red, green, and blue bands based on drone visible light images reveals the following: (1) the plant image (a) shows that both regions exhibit high reflection characteristics in the green band, while the reflection rates in the red and blue bands are lower. The yellow region shows that the values in the red and green bands gradually approach each other without crossing, indicating good separability between different bands. (2) The soil background (b) is uniformly lit, with reflection rates decreasing sequentially in the red, green, and blue bands. The red band shows a clear indication and has good separability from the blue and green bands. (3) In the plastic film background (c), when there is water on the film, the reflection characteristics of the red band are strong. This is primarily due to the fact that the wet film predominantly reflects the underlying soil, which results in a clear separability from the green band. When the film is dry, the blue band values of the background range between 230 and 260, while the red and green band values are between 220 and 230. The film shows good indication in the blue band and maintains good separability from the green band. (4) The image of the green-leaved plant and soil (d) shows that the difference in the red- and green-band values on both sides of the curve is between 30 and 50, indicating good separability between the green-leaved plants and the soil. (5) The image of the yellow-leaved plant and soil (e) reveals that although the difference between the green and red bands decreases compared to the green-leaved plants, there is no crossing or overlap between the two bands, maintaining good separability. (6) The image of the plant and plastic film (f) indicates a high degree of separation between the plastic film background and the plants. 
Although the reflection degree of the water-droplet-covered plastic film background is close to that of the soil, it can still be effectively separated from the plants.

3.2. Comparison of Color Index Calculation Results

By calculating four color indices, it is possible to separate plants, soil, and mulch in drone imagery. Combined with field survey data, considering the overlap of leaves among crops due to factors such as planting time and management practices, we can conclude the following (Table 1):
(1)
In the case of multiple overlapping Chinese cabbage leaves, the performance of the ExG and ExG-ExR is superior to that of RGRI and GLI. However, the results for the ExG-ExR show that the edge information is not clear enough, with connected edges between plants. In contrast, the ExG has clear edges, successfully eliminating background leakage components, while enhancing other crops and weeds around the planting area and providing better separation from the soil. Images processed with the ExG Index can retain information about Chinese cabbage plants relatively well, but there is still some noise and minor background leakage components.
(2)
In the calculation results of RGRI, there is poor separability between individual plants, as well as between plants and soil or mulch. Observations indicate that the edges of individual plants exhibit an expansion phenomenon. Furthermore, when water droplets are attached to the mulch, the background leakage issue becomes more severe, leading to further reduced separability between plants and mulch.
(3)
The analysis results of GLI indicate that in areas where multiple Chinese cabbage leaves overlap, the separation degree between plants and soil is low, leading to some confusion of background information. Additionally, the clarity of the leaf edges is insufficient, resulting in a low contrast between plants and soil.

3.3. Image Frequency Domain Filtering and Otsu

To identify Chinese cabbage plant information more accurately and suppress noise that interferes with recognition, thereby enhancing the edge and texture information of the plants, high-pass or Gaussian high-pass filtering in the frequency domain is first applied to the results of the selected best color index (the ExG) to remove the low-frequency information while retaining the high-frequency information, suppressing the slowly changing background. Then, Otsu's threshold segmentation method is applied. Multiple manual interactive experiments showed that the best filtering type and convolution kernel size differ for images of different spatial resolutions. The effects of frequency domain filtering and threshold segmentation are shown in Table 2.
(1)
Images of Chinese cabbage at altitudes of 20 m to 40 m are not suitable for Otsu’s method after frequency domain filtering. The images segmented using frequency domain filtering exhibit large areas of salt-and-pepper noise, with numerous holes in the center of the plants and unclear edges. Performing threshold segmentation based on the ExG better captures the edge information of the Chinese cabbage target.
(2)
Chinese cabbage images captured at a 50 m flight altitude are suited to Otsu segmentation after high-pass processing. Compared with threshold segmentation based directly on the ExG, the high-pass results improve the separability between plants and contain less salt-and-pepper noise. Threshold segmentation after Gaussian high-pass processing, however, introduces a large amount of salt-and-pepper noise, giving poor separability between plants and the background.
(3)
Chinese cabbage images captured at altitudes of 60 m to 70 m are suited to threshold segmentation after Gaussian high-pass processing. After threshold segmentation based on high-pass processing, the identified targets are heavily confused with the background and the separability between plants is poor, which hinders the identification and extraction of cabbage targets. By contrast, compared with images segmented directly from the ExG, images segmented after Gaussian high-pass processing show higher separation between plants, which benefits recognition accuracy.
(4)
As the drone’s shooting altitude above the ground increases, the alignment of the processed results with the plant edges gradually deteriorates, and the separation between plants decreases correspondingly. Within the 20 m to 50 m altitude range, the optimal convolution kernel size for frequency domain filtering is 19 × 19 or 21 × 21; as the altitude increases further, the optimal kernel size also increases.
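The filter-then-threshold pipeline of this section can be sketched as follows (a minimal NumPy illustration; the exact implementation and parameter values used in this study are not given, so the `d0` cutoff and function names are illustrative assumptions standing in for the reported "convolution kernel sizes"):

```python
import numpy as np

def gaussian_highpass(img, d0=30.0):
    """Frequency-domain Gaussian high-pass filter,
    H(u,v) = 1 - exp(-D^2 / (2 d0^2)), applied via the 2-D FFT."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2   # squared distance from centre
    h = 1.0 - np.exp(-d2 / (2.0 * d0 ** 2))  # high-pass transfer function
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * h)))

def otsu_threshold(img):
    """Otsu's method for an 8-bit image: pick the threshold that
    maximises the between-class variance of the grey-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                # cumulative class-0 probability
    mu = np.cumsum(p * np.arange(256))  # cumulative class-0 mean mass
    mu_t = mu[-1]                       # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))
```

Applying `gaussian_highpass` to the ExG result, rescaling to 0–255, and binarizing at `otsu_threshold` reproduces the workflow described for the 60–70 m imagery; an ideal high-pass variant would replace the Gaussian transfer function with a hard cutoff.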

3.4. Comparison of Morphological Filtering Processing Results

Applying morphological operations to the binary image produced by threshold segmentation, with the optimal filter type and structuring element size, can further improve the separation between plants, eliminate salt-and-pepper noise, and fill holes. “Opening” first erodes the image and then dilates it using the same structuring element (transformation kernel), whereas “closing” first dilates the image to fill holes and then erodes it with the same structuring element. With an appropriately chosen structuring element, erosion removes noise and the subsequent dilation restores the detailed features of the target without substantially altering the object’s original contour [48]. The results of morphological filtering based on Otsu threshold segmentation are shown in Figure 4. The recognition results after “closing” still exhibit noise interference and poor separability between plants, which affects recognition accuracy to some extent. In contrast, the “opening” results better preserve plant edge information, with smoother edges than the “closing” results; opening also effectively eliminates small noise and can separate connected plants, yielding higher recognition accuracy. Therefore, applying the “opening” operation to the thresholded binary image is more beneficial for recognizing and extracting Chinese cabbage targets.
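The opening and closing operations described above can be sketched with SciPy (a minimal illustration; the structuring-element shapes used in the study are assumed square here):

```python
import numpy as np
from scipy import ndimage

def open_binary(mask, window=5):
    """Morphological opening (erosion followed by dilation) with a square
    structuring element. `window` plays the role of the paper's optimal
    window size (e.g. 9x9 at 20 m, 5x5 at 30 m, 3x3 at 40-70 m)."""
    struct = np.ones((window, window), dtype=bool)
    return ndimage.binary_opening(mask.astype(bool), structure=struct)

def close_binary(mask, window=5):
    """Morphological closing (dilation followed by erosion); as Figure 4
    shows, it tends to retain small noise that opening removes."""
    struct = np.ones((window, window), dtype=bool)
    return ndimage.binary_closing(mask.astype(bool), structure=struct)
```

Opening removes any foreground component in which the structuring element cannot fit (small weed or noise blobs) while restoring the outline of larger plant regions, which is why it separates connected plants better than closing.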

3.5. Accuracy Verification

After converting the “opening”-processed raster results to vector format, it was found that gaps between the leaves of the Chinese cabbage plants caused background leakage, producing holes within plants and increasing the number of false identifications. These holes were coded as “0” or “−1” in the converted vectors; after removing them, the accuracy of the opening results was compared across different flight heights and window sizes (Table 3). The results show that both the branch factor and the omission factor were small, and the numbers of misclassified and undetected Chinese cabbage plants were very low. The detection rate (DP) and completeness (QP) were both high, and the extracted counts were close to the actual number of plants, indicating that the method presented in this paper has good applicability. The QP value reflects the quality of the extraction: the higher the QP value, the better the extraction effect.
The precision comparison shows that the optimal window size for Chinese cabbage imagery varies with flight altitude. In general, the larger the window, the fewer the misidentified plants; conversely, as the window size increases, the number of missed extractions rises. For imagery at 20 m, a 9 × 9 window yields the best extraction: the number of misclassified plants and the branch factor (BF) are lowest, and the QP value is highest. For imagery at 30 m, a 5 × 5 window achieves the best extraction, with few missed and misidentified plants and the highest QP value. For imagery at 40–70 m, a 3 × 3 window gives the highest extraction accuracy, with both DP and QP at their maxima. Although the number of correctly identified plants in the 20 m imagery is close to the actual count, the number of misidentifications is relatively high because weeds are difficult to distinguish from Chinese cabbage plants. By comparison, the QP values at 60 m and 70 m are both above 99%, but the missed-identification rate is high and the fit to the plant edges is poorer. Therefore, at flight altitudes of 30–50 m, both missed and misidentified plants are few and the gap between the identification results and the actual values is small; drone imagery collected within this altitude range yields the best results for Chinese cabbage identification.
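The accuracy measures reported in Table 3 can be sketched as follows. The formulas are not printed in this excerpt, so the standard definitions from the object-extraction literature are assumed here (tp = correctly identified plants, fp = misidentified plants, fn = missed plants):

```python
def extraction_accuracy(tp, fp, fn):
    """Assumed standard definitions of the Table 3 measures:
    branch factor, omission factor, detection rate, completeness."""
    return {
        "BF": fp / tp,                      # branch factor: false hits per correct hit
        "OF": fn / tp,                      # omission factor: misses per correct hit
        "DP": 100.0 * tp / (tp + fn),       # detection rate (%)
        "QP": 100.0 * tp / (tp + fp + fn),  # completeness / extraction quality (%)
    }
```

Under these definitions, a larger opening window lowers fp (and hence BF) but raises fn (and OF), which matches the trade-off described above.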

4. Discussion

4.1. Effect of Different Flight Altitudes on Recognition Accuracy

When the drone captures images at a lower flight altitude, the spatial resolution of the imagery is higher, allowing plant details and edge information to be captured more clearly and helping to improve the recognition accuracy of individual Chinese cabbage plants. These high-resolution images perform excellently in plant recognition, especially within the 20–40 m flight altitude range. Within this range, however, the number of misidentifications increases owing to interference from weeds scattered during field management and from leaf remnants, which closely resemble poorly growing Chinese cabbage plants in color and area and thus cause frequent misjudgments [55]. Although lowering the flight altitude enhances the ability to resolve weeds and leaves, the overall recognition accuracy falls because the additional misidentifications degrade the final recognition results. Moreover, at lower flight altitudes the coverage area is smaller, so additional flight paths are needed to cover the entire field, increasing the data collection workload and processing time and reducing overall work efficiency. As the drone’s flight altitude increases (50–70 m), the field of view expands and the area covered in a single shot grows, which improves work efficiency and reduces the number of required flight paths [56,57]. However, the spatial resolution of the imagery gradually decreases and details become blurred, which in turn degrades the quality of the recognition results [58]. The results (Table 3) show that, at flight altitudes of 50–70 m, the number of missed identifications rises and becomes the main source of accuracy loss owing to the low-resolution imagery.
Additionally, as the flight altitude increases, the separation between plants decreases, exacerbating missed identifications. The study found that recognizing Chinese cabbage plants during the rosette stage at a flight altitude of 30 m to 50 m provides sufficient resolution to ensure recognition accuracy while allowing a reasonable field of view to improve work efficiency, effectively balancing recognition accuracy and work efficiency in the context of precision agriculture.
Although higher flight altitudes can reduce the number of misidentifications, the number of missed identifications tends to increase. This further confirms that drone imagery collected at high altitudes has insufficient recognition capability for unevenly growing Chinese cabbage plants, which easily leads to the detected count falling below the true number and thus reduces overall recognition accuracy [59]. Therefore, to improve the recognition and counting accuracy of Chinese cabbage plants, future research should focus on optimizing drone flight altitude and image processing techniques to balance misidentifications against missed identifications.

4.2. Effect of Frequency Domain Filtering Processing on Recognition Accuracy on Different Scales

Differences in spatial resolution across image scales affect the optimal filter type and convolution kernel size, and hence the effect of threshold segmentation [60]. High-resolution images contain rich detail and typically require a finer filter to avoid excessive smoothing and the loss of critical information [61]. For instance, when Gaussian filtering is used, a smaller convolution kernel effectively preserves the high-resolution characteristics of the image, and the threshold must be set more carefully to achieve precise segmentation between regions. In contrast, low-resolution images with fewer details can be smoothed with a larger convolution kernel to reduce noise [62,63], although excessive blurring must be avoided, which may require a broader threshold range to accommodate the coarseness of the image. In addition, applying the appropriate filter effectively eliminates salt-and-pepper noise generated during recognition, enhances the separation between plants, and thereby improves recognition accuracy. For high-resolution images, threshold segmentation based directly on the ExG yields better results: Chinese cabbage imagery captured at 20 m to 40 m is not suited to threshold segmentation after frequency domain filtering, while imagery at 50 m is better suited to high-pass processing. For Chinese cabbage imagery at 60 m to 70 m, threshold segmentation following Gaussian high-pass processing performs better. In the higher-resolution 20 m to 50 m drone images, the optimal convolution kernel is smaller (19 × 19 or 21 × 21), whereas in the lower-resolution 60 m and 70 m images, larger kernels such as 65 × 65 and 67 × 67 are needed to reduce noise effectively.
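For reference, the Gaussian high-pass filter discussed here has the standard frequency-domain transfer function given in the image-processing literature [60], where \(D(u,v)\) is the distance from the center of the \(M \times N\) frequency rectangle and \(D_0\) is the cutoff frequency (the kernel-size values above play the role of \(D_0\) only by analogy):

```latex
H(u,v) = 1 - e^{-D^{2}(u,v) / \left(2 D_{0}^{2}\right)}, \qquad
D(u,v) = \sqrt{\left(u - M/2\right)^{2} + \left(v - N/2\right)^{2}}
```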

4.3. Effect of Morphological Filtering Processing on Recognition Accuracy at Different Scales

During field management, a small number of weeds often appear among the plants. These weeds are similar to Chinese cabbage in color, making them difficult to distinguish during target extraction; this similarity introduces a degree of weed noise into the segmented images, interfering with accurate crop counting and growth assessment. Morphological filtering therefore becomes particularly important, as it can enhance the distinction between crops and weeds. After morphological filtering, we observed that the optimal window size varies with the spatial resolution of the imagery, as follows: (1) As the spatial resolution of the imagery decreases, the required window size also decreases. At higher spatial resolutions, a window that is too small makes it difficult to distinguish Chinese cabbage plants from weeds, whereas at lower resolutions, a window that is too large increases the number of missed identifications, degrading recognition accuracy. (2) As the window size increases, the number of missed identifications rises to varying degrees across spatial resolutions, while the number of false identifications correspondingly decreases; overall, the lower the resolution, the fewer the false identifications but the more the missed identifications. (3) Recognition of Chinese cabbage plants is optimal at flight altitudes of 30 m to 50 m, where the best opening-operation window is 5 × 5; this effectively reduces both missed and false identifications, and the difference between the recognition results and the actual values is smaller, achieving the highest recognition accuracy.
Therefore, within this flight altitude range, the recognition effect of Chinese cabbage in the drone imagery collected is the most ideal.

4.4. Shortcomings and Prospects

(1)
In this study, the cultivation and management of Chinese cabbage were relatively standardized. Future research could focus on more complex field management environments, including identification needs under seasonal changes and climate impacts, to enhance the applicability of the proposed methods. Additionally, it is recommended that recognition studies be conducted in multi-crop environments beyond Chinese cabbage, as this would help to improve the universality of this method and provide reliable data support for a broader range of agricultural applications.
(2)
The background planting structure of the study area was relatively simple, lacking the interference of complex features and thus resulting in fewer interference factors during recognition. At the same time, the data used only represent a single growth period for the crop. In subsequent research, attention should be paid to recognition and extraction across multiple growth periods, as well as applications in complex planting structures and diverse background environments. Furthermore, methods to improve plant separability in cases of overlapping leaves of multiple Chinese cabbage plants need to be further explored.
(3)
The exploration of combining neural network models (such as YOLO) with this method will help to optimize the recognition efficiency, enabling real-time detection in video streams or high-frequency monitoring and thereby enhancing the applicability and real-time capabilities of this method.

5. Conclusions

This study utilized visible light imagery captured by drones at various flight altitudes (20 m, 30 m, 40 m, 50 m, 60 m, and 70 m) and selected four commonly used color indices for crop recognition. We proposed a multi-scale algorithm that combines image frequency domain filtering, threshold segmentation, and morphological filtering, among other image processing techniques. The study particularly explored the impact of different flight altitudes on the accuracy of Chinese cabbage recognition and work efficiency. The main conclusions of the study are as follows:
(1)
Recognition accuracy of multi-scale images: Multi-scale images at different resolutions exhibit varying recognition accuracies, and optimizing plant recognition is feasible. For instance, under conditions of overlapping leaves, the Excess Green Index demonstrates better separation capability. Different spatial resolutions require different types of filtering and convolution kernel sizes, which also affects the effectiveness of threshold segmentation. As the spatial resolution of the imagery decreases, the optimal window size for morphological filtering also decreases. Simultaneously, as the window size increases, the number of missed recognitions in images of different spatial resolutions increases, while the number of misidentifications decreases.
(2)
Optimal flight altitude: Flight altitudes between 30 m and 50 m can achieve better recognition results. This altitude ensures sufficient resolution to meet recognition demands while maintaining a high operational efficiency within the field of view, balancing recognition accuracy and coverage efficiency.
(3)
Future research directions: The data collection conditions in this study are relatively ideal, and the impact of complex environments on recognition accuracy has not been considered. Future research could test and enhance this method in more complex backgrounds, during different growth stages, and across larger planting areas, while exploring how to ensure consistency in recognition effects under different hardware conditions.

Author Contributions

Conceptualization, X.D. and D.H.; methodology, X.D.; software, X.D.; validation, D.H.; formal analysis, X.D.; data curation, X.D. and D.H.; writing—original draft preparation, X.D.; writing—review and editing, X.D., Z.Z. and D.H.; visualization, X.D.; supervision, D.H. and Z.Z.; project administration, Z.Z.; funding acquisition, Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Guizhou Provincial Key Technology R&D Program: Qiankehe [2023] General No. 211; The Guizhou Provincial Basic Research Program (Natural Science): Qiankehe [2021] General No. 194; The National Natural Science Foundation of China: 41661088; The Guizhou Province High-level Innovative Talent Training Plan “Hundred” Level Talents: Qiankehe Platform Talents [2016] 5674.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. National Bureau of Statistics. Available online: https://www.stats.gov.cn/sj/ (accessed on 6 October 2024).
  2. Pathak, H.S.; Brown, P.; Best, T. A Systematic Literature Review of the Factors Affecting the Precision Agriculture Adoption Process. Precis. Agric. 2019, 20, 1292–1316. [Google Scholar] [CrossRef]
  3. Pierce, F.J.; Nowak, P. Aspects of Precision Agriculture. In Advances in Agronomy; Elsevier: Amsterdam, The Netherlands, 1999; Volume 67, pp. 1–85. [Google Scholar]
  4. Chen, X.; Shi, Y.X.; Li, X. Research on Recognition Method of Chinese Cabbage Growth Periods Based on Swin Transformer and Transfer Learning. Appl. Eng. Agric. 2023, 39, 381–390. [Google Scholar] [CrossRef]
  5. Zheng, J.J.; Lan, Y.B.; Xiong, W.J.; Li, S.; Yang, R.N.; Dong, X. Identification method of Pak choi pests and diseases based on improved YOLOv5s. Trans. Chin. Soc. Agric. Eng. 2024, 40, 124–133. [Google Scholar]
  6. Asano, M.; Onishi, K.; Fukao, T. Robust Cabbage Recognition and Automatic Harvesting under Environmental Changes. Adv. Robot. 2023, 37, 960–969. [Google Scholar] [CrossRef]
  7. Jiang, P.; Qi, A.; Zhong, J.; Luo, Y.H.; Hu, W.W.; Shi, Y.X.; Liu, T. Field Cabbage Detection and Positioning System Based on Improved YOLOv8n. Plant Methods 2024, 20, 96. [Google Scholar] [CrossRef]
  8. Aguilar-Ariza, A.; Ishii, M.; Miyazaki, T.; Saito, A.; Khaing, H.P.; Phoo, H.W.; Kondo, T.; Fujiwara, T.; Guo, W.; Kamiya, T. UAV-Based Individual Chinese Cabbage Weight Prediction Using Multi-Temporal Data. Sci. Rep. 2023, 13, 20122. [Google Scholar] [CrossRef]
  9. Luo, W.; Du, Y.Z.; Zhang, H.L. Discrimination of Varieties of Cabbage with Near Infrared Spectra Based on Principal Component Analysis and Successive Projections Algorithm. Spectrosc. Spectr. Anal. 2016, 36, 3536–3541. [Google Scholar]
  10. Jiang, A.; Ding, L.; Song, E.Z.; Shao, G.C.; Wang, Y.; Lu, J. UAV-based Multispectral Multi-temporal Chard SPAD Monitoring Study. China Rural. Water Hydropower 2024, 6, 193–202. [Google Scholar]
  11. Dong, M.; Zhang, Y.; Zhang, X.X.; Zhu, J.W.; Yang, W.P. The research of extracting the characteristic wavebands for pakchoi applying for ABS algorithm based on the LCTF imaging techniques. Opt. Instrum. 2016, 38, 243–247. [Google Scholar]
  12. Ji, J.C.; Zhao, Y.; Zou, X.J.; Xuan, K.F.; Wang, W.P.; Liu, J.L.; Li, X.P. Advancement in Application of UAV Remote Sensing to Monitoring of Farmlands. Acta Pedol. Sin. 2019, 56, 773–784. [Google Scholar]
  13. Yao, L.M.; Li, M.; Xie, J.X.; Sun, X. Application of UAV Remote Sensing Technology in Crop Breeding. Hunan Agric. Sci. 2020, 11, 108–112. [Google Scholar]
  14. Zhang, H.D.; Wang, L.Q.; Tian, T.; Yin, J.H. A Review of Unmanned Aerial Vehicle Low-Altitude Remote Sensing (UAV-LARS) Use in Agricultural Monitoring in China. Remote Sens. 2021, 13, 1221. [Google Scholar] [CrossRef]
  15. Eladl, S.G.; Haikal, A.Y.; Saafan, M.M.; ZainEldin, H.Y. A Proposed Plant Classification Framework for Smart Agricultural Applications Using UAV Images and Artificial Intelligence Techniques. Alex. Eng. J. 2024, 109, 466–481. [Google Scholar] [CrossRef]
  16. Anderegg, J.; Tschurr, F.; Kirchgessner, N.; Treier, S.; Graf, L.V.; Schmucki, M.; Caflisch, N.; Minguely, C.; Streit, B.; Walter, A. Pixel to Practice: Multi-Scale Image Data for Calibrating Remote-Sensing-Based Winter Wheat Monitoring Methods. Sci. Data 2024, 11, 1033. [Google Scholar] [CrossRef] [PubMed]
  17. Li, C.M.; Teng, X.; Tan, Y.; Zhang, Y.; Zhang, H.C.; Xiao, D.; Luo, S.J. Spatio-Temporal Mapping of Leaf Area Index in Rice: Spectral Indices and Multi-Scale Texture Comparison Derived from Different Sensors. Front. Plant Sci. 2024, 15, 1445490. [Google Scholar] [CrossRef]
  18. Yang, G.F.; He, Y.; Feng, X.P.; Li, X.Y.; Zhang, J.N.; Yu, Z.Y. Methods and New Research Progress of Remote Sensing Monitoring of Crop Disease and Pest Stress Using Unmanned Aerial Vehicle. Smart Agric. 2022, 4, 1–16. [Google Scholar]
  19. Song, Y.; Chen, B.; Wang, Q.; Su, W.; Sun, L.X.; Zhao, J.; Han, H.Y.; Wang, F.Y. Research advances of crop diseases and insect pests monitoring by unmanned aerial vehicle remote sensing. Cotton Sci. 2021, 33, 291–306. [Google Scholar]
  20. Zhang, X.; Zhang, F.; Qi, Y.X.; Deng, L.F.; Wang, X.L.; Yang, S.T. New Research Methods for Vegetation Information Extraction Based on Visible Light Remote Sensing Images from an Unmanned Aerial Vehicle (UAV). Int. J. Appl. Earth Obs. Geoinf. 2019, 78, 215–226. [Google Scholar] [CrossRef]
  21. Chen, J.F.; Zhao, L.; Yue, Y.K.; Fu, H.Y.; Xu, M.Z.; Zhang, L.; Yang, R.F.; Cui, G.X.; She, W. Research Progress of UAV Multispectral Remote Sensing Application in Crop Growth Monitoring. Crop Res. 2022, 36, 391–395. [Google Scholar]
  22. Huang, D.H.; Zhou, Z.F.; Wu, Y.; Zhu, M.; Yin, L.J.; Cui, L. Identification of Yam Plants in Karst Plateau Hill Basin Based on Visible Light Images of an Unmanned Aerial Vehicle. Trop. Geogr. 2019, 39, 12. [Google Scholar]
  23. Zhou, Z.F.; Peng, R.W.; Li, R.S.; Li, Y.Q.; Huang, D.H.; Zhu, M. Remote Sensing Identification and Rapid Yield Estimation of Pitaya Plants in Different Karst Mountainous Complex Habitats. Agriculture 2023, 13, 1742. [Google Scholar] [CrossRef]
  24. Huang, D.H.; Zhou, Z.F.; Zhang, Z.Z.; Zhu, M.; Yin, L.J.; Peng, R.W.; Zhang, Y.; Zhang, W.H. Recognition and Counting of Pitaya Trees in Karst Mountain Environment Based on Unmanned Aerial Vehicle RGB Images. J. Appl. Rem. Sens. 2021, 15, 0424021–04240224. [Google Scholar] [CrossRef]
  25. Li, Q.X.; Yan, L.H.; Zhou, Z.F.; Huang, D.H.; Xiao, D.N.; Huang, Y.Y. Study on Tobacco Plant Cross-Level Recognition in Complex Habitats in Karst Mountainous Areas Based on the U-Net Model. J. Indian Soc. Remote Sens. 2024, 52, 2099–2114. [Google Scholar] [CrossRef]
  26. Huang, Y.Y.; Yan, L.H.; Zhou, Z.F.; Huang, D.H.; Li, Q.X.; Zhang, F.X.M.; Cai, L. Complex Habitat Deconstruction and Low-Altitude Remote Sensing Recognition of Tobacco Cultivation on Karst Mountainous. Agriculture 2024, 14, 411. [Google Scholar] [CrossRef]
  27. Yang, H.B.; Lan, Y.B.; Lu, L.Q.; Gong, D.C.; Miao, J.C.; Zhao, J. New Method for Cotton Fractional Vegetation Cover Extraction Based on UAV RGB Images. Int. J. Agric. Biol. Eng. 2022, 15, 172–180. [Google Scholar] [CrossRef]
  28. Böhler, J.E.; Schaepman, M.E.; Kneubühler, M. Crop Classification in a Heterogeneous Arable Landscape Using Uncalibrated UAV Data. Remote Sens. 2018, 10, 1282. [Google Scholar] [CrossRef]
  29. Kim, D.W.; Yun, H.S.; Jeong, S.J.; Kwon, Y.S.; Kim, S.G.; Lee, W.S.; Kim, H.J. Modeling and Testing of Growth Status for Chinese Cabbage and White Radish with UAV-Based RGB Imagery. Remote Sens. 2018, 10, 563. [Google Scholar] [CrossRef]
  30. Tian, T.; Zhang, Q. Application Research Progress of Unmanned Aerial Vehicle Remote Sensing in Crop Monitoring. Crops 2020, 5, 1–8. [Google Scholar]
  31. Li, D.R.; Li, M. Research Advance and Application Prospect of Unmanned Aerial Vehicle Remote Sensing System. Geomat. Inf. Sci. Wuhan Univ. 2014, 39, 505–513+540. [Google Scholar]
  32. Lu, T.Y.; Gao, M.X.; Wang, L. Crop Classification in High-Resolution Remote Sensing Images Based on Multi-Scale Feature Fusion Semantic Segmentation Model. Front. Plant Sci. 2023, 14, 1196634. [Google Scholar] [CrossRef]
  33. Yang, G.X.; Li, X.R.; Liu, P.Z.; Yao, X.; Zhu, Y.; Cao, W.X.; Cheng, T. Automated In-Season Mapping of Winter Wheat in China with Training Data Generation and Model Transfer. ISPRS J. Photogramm. Remote Sens. 2023, 202, 422–438. [Google Scholar] [CrossRef]
  34. Liu, Z.; Liu, D.; Zhu, D.; Zhang, L.; Tong, L. Review on Crop Type Fine Identification and Automatic Mapping Using Remote Sensing. Trans. Chin. Soc. Agric. Mach. 2018, 49. [Google Scholar] [CrossRef]
  35. Ma, Z.Y.; Wang, G.; Yao, J.R.; Huang, D.Y.; Tan, H.W.; Jia, H.L.; Zou, Z.B. An Improved U-Net Model Based on Multi-Scale Input and Attention Mechanism: Application for Recognition of Chinese Cabbage and Weed. Sustainability 2023, 15, 5764. [Google Scholar] [CrossRef]
  36. Zhang, C.; Liu, J.J.; Su, W.; Qiao, M.; Yang, J.Y.; Zhu, D.H. Optimal scale of crop classification using unmanned aerial vehicle remote sensing imagery based on wavelet packet transform. Trans. Chin. Soc. Agric. Eng. 2016, 32, 95–101. [Google Scholar]
  37. Mao, W.H.; Wang, Y.M.; Wang, Y.Q. Real-Time Detection of Between-Row Weeds Using Machine Vision. In Proceedings of the 2003 ASAE Annual Meeting, Las Vegas, NV, USA, 27–30 July 2003; American Society of Agricultural and Biological Engineers: St. Joseph, MI, USA, 2003. [Google Scholar]
  38. Louhaichi, M.; Borman, M.M.; Johnson, D.E. Spatially Located Platform and Aerial Photography for Documentation of Grazing Impacts on Wheat. Geocarto Int. 2001, 16, 65–70. [Google Scholar] [CrossRef]
  39. Meyer, G.E.; Neto, J.C. Verification of Color Vegetation Indices for Automated Crop Imaging Applications. Comput. Electron. Agric. 2008, 63, 282–293. [Google Scholar] [CrossRef]
  40. Woebbecke, D.M.; Meyer, G.E.; Bargen, K.V.; Mortensen, D.A. Color Indices for Weed Identification Under Various Soil, Residue, and Lighting Conditions. Trans. ASAE 1995, 38, 259–269. [Google Scholar] [CrossRef]
  41. Lv, M. Research of the improved parallel line detection algorithm based on Hough transform. Image Process. Multimed. Technol. 2010, 29, 27–29. [Google Scholar]
  42. Torres-Sánchez, J.; Peña, J.M.; De Castro, A.I.; López-Granados, F. Multi-Temporal Mapping of the Vegetation Fraction in Early-Season Wheat Fields Using Images from UAV. Comput. Electron. Agric. 2014, 103, 104–113. [Google Scholar] [CrossRef]
  43. Tavares, J.M.R.S.; Jorge, R.M.N. Advances in Computational Vision and Medical Image Processing; Springer: Dordrecht, The Netherlands, 2009; Volume 13. [Google Scholar]
  44. Sahoo, P.K.; Soltani, S.; Wong, A.K.C. A Survey of Thresholding Techniques. Comput. Vis. Graph. Image Process. 1988, 41, 233–260. [Google Scholar] [CrossRef]
  45. Zheng, X.L.; Zhang, X.F.; Cheng, J.Y.; Ren, X. Using the multispectral image data acquired by unmanned aerial vehicle to build an estimation model of the number of seedling stage cotton plants. J. Image Graph. 2020, 25, 520–534. [Google Scholar]
  46. Zhou, J.F.; Khot, L.R.; Boydston, R.A.; Miklas, P.N.; Porter, L.D. Low Altitude Remote Sensing Technologies for Crop Stress Monitoring: A Case Study on Spatial and Temporal Monitoring of Irrigated Pinto Bean. Precis. Agric. 2018, 19, 555–569. [Google Scholar] [CrossRef]
  47. Ding, L.L.; Li, Q.Z.; Du, X.; Tian, Y.C.; Yuan, C. Vegetation extraction method based on color indices from UAV images. Remote Sens. Land Resour. 2016, 28, 78–86. [Google Scholar]
  48. Li, J.Y.; Zhang, W.; Kang, Y.; Xu, X.Y.; Qi, L.Q.; Shi, W.Q. Research on soybean seedling number estimation based on UAV remote sensing technology. J. Chin. Agric. Mech. 2022, 43, 83–89. [Google Scholar]
  49. Huang, D.H.; Zhou, Z.F.; Zhang, Z.Z.; Zhu, M.; Peng, R.W.; Zhang, Y.; Li, Q.X.; Xiao, D.N.; Hu, L.W. Extraction of Agricultural Plastic Film Mulching in Karst Fragmented Arable Lands Based on Unmanned Aerial Vehicle Visible Light Remote Sensing. J. Appl. Rem. Sens. 2022, 16, 036511. [Google Scholar] [CrossRef]
  50. Cui, S.Y.; Yan, Q.; Reinartz, P. Complex Building Description and Extraction Based on Hough Transformation and Cycle Detection. Remote Sens. Lett. 2012, 3, 151–159. [Google Scholar] [CrossRef]
  51. Chen, M.T.; Yan, D.M.; Wang, G. Algorithm of high-resolution remote sensing image matching based on Harris corner and SIFT descriptor. J. Image Graph. 2012, 17, 1453–1459. [Google Scholar]
  52. Tao, C.; Tan, Y.H.; Cai, H.J.; Du, B.; Tian, J.W. Object-oriented Method of Hierarchical Urban Building Extraction from High-resolution Remote-Sensing Imagery. Acta Geod. Cartogr. Sin. 2010, 39, 39–45. [Google Scholar]
  53. Kumar, T.; Verma, K. A Theory Based on Conversion of RGB Image to Gray Image. Int. J. Comput. Appl. 2010, 7, 5–12. [Google Scholar] [CrossRef]
  54. Zheng, S.Y.; Hai, Y.; He, M.Q.; Wang, J.X. Construction of Vegetation Index in Visible Light Band of GF-6 Image with Higher Discrimination. Spectrosc. Spectr. Anal. 2023, 43, 3509–3517. [Google Scholar]
  55. Liu, S.; Zhu, H. Object-oriented land use classification based on ultra-high resolution images taken by unmanned aerial vehicle. Trans. Chin. Soc. Agric. Eng. 2020, 36, 87–94. [Google Scholar]
  56. Anderson, K.; Gaston, K.J. Lightweight Unmanned Aerial Vehicles Will Revolutionize Spatial Ecology. Front. Ecol. Environ. 2013, 11, 138–146. [Google Scholar] [CrossRef]
  57. Colomina, I.; Molina, P. Unmanned Aerial Systems for Photogrammetry and Remote Sensing: A Review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  58. Elhadary, A.; Rabah, M.; Taha, E.G.M.A. The Influence of Flight Height and Overlap on UAV Imagery over Featureless Surfaces and Constructing Formulas Predicting the Geometrical Accuracy. NRIAG J. Astron. Geophys. 2022, 11, 210–223. [Google Scholar] [CrossRef]
  59. He, Y.; Du, X.Y.; Zheng, L.Y.; Zhu, J.P.; Cen, H.Y.; Xu, L.J. Effects of UAV flight height on estimated fractional vegetation cover and vegetation index. Trans. Chin. Soc. Agric. Eng. 2022, 38, 63–72. [Google Scholar]
  60. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2008. [Google Scholar]
  61. Theodoridis, S.; Koutroumbas, K. Pattern Recognition, 4th ed.; Publishing House of Electronics Industry: Beijing, China, 2016. [Google Scholar]
  62. Kittler, J.; Illingworth, J. On Threshold Selection Using Clustering Criteria. IEEE Trans. Syst. Man Cybern. 1985, 5, 652–655. [Google Scholar] [CrossRef]
  63. Yang, Y.; Zhao, Q.H. Multi-level threshold segmentation of high-resolution panchromatic remote sensing imagery. Opt. Precis. Eng. 2020, 28, 2370–2383. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the study area. Different growth conditions: (a) uneven growth, (b) generally poor growth, (c) generally good growth. Different background characteristics: (d) covered with white plastic mulch (with and without water adhesion), (e) plants with multiple features (including connected plants and yellow and green leaves).
Figure 2. Research technology roadmap.
Figure 3. Spectral curves of the image. The red, green, and blue curves correspond to the red, green, and blue bands, respectively. The X-axis is the sampling distance, i.e., the position along a line segment (profile line) drawn on the image; the Y-axis is the spectral value at each point along that profile line. Note: (a) plants with green and yellow leaves; (b) spectral curve of soil; (c) spectral curve of the mulch film background; (d) plants with green leaves against soil; (e) plants with yellow leaves against soil; (f) plants against mulch film.
Figure 4. Comparison of different morphological filtering results. Note: (b) shows the Otsu thresholding result, while (a,c) show the results of the “closing” and “opening” operations applied to (b), respectively. Red boxes indicate image noise; black boxes indicate connected regions between adjacent plants.
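The “opening” and “closing” operations compared in Figure 4 are compositions of binary erosion and dilation. A minimal NumPy sketch, assuming a square structuring element (the function names are illustrative, not the authors’ implementation):

```python
import numpy as np

def _windows(mask, k):
    # Pad with False so the structuring element can slide over the borders,
    # then view every k x k neighbourhood of the original grid.
    padded = np.pad(mask, k // 2, constant_values=False)
    return np.lib.stride_tricks.sliding_window_view(padded, (k, k))

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element (k odd)."""
    return _windows(mask, k).all(axis=(-2, -1))

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element (k odd)."""
    return _windows(mask, k).any(axis=(-2, -1))

def opening(mask, k=3):
    # Erosion followed by dilation: removes speckle noise smaller
    # than the structuring element (the red-boxed noise in Figure 4).
    return dilate(erode(mask, k), k)

def closing(mask, k=3):
    # Dilation followed by erosion: fills small holes and can merge
    # nearby foreground regions (the black-boxed connected plants).
    return erode(dilate(mask, k), k)
```

This is why closing tends to aggravate the plant-connection problem while opening suppresses isolated noise pixels, as Figure 4 illustrates.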
Table 1. Calculated color index of Chinese cabbage plant images.

| Type | Growth Situation | Image | ExG | RGRI | GLI | ExG-ExR |
|------|------------------|-------|-----|------|-----|---------|
| I | Single plant | Agriculture 14 01871 i001 | Agriculture 14 01871 i002 | Agriculture 14 01871 i003 | Agriculture 14 01871 i004 | Agriculture 14 01871 i005 |
| II | Two overlapping leaves | Agriculture 14 01871 i006 | Agriculture 14 01871 i007 | Agriculture 14 01871 i008 | Agriculture 14 01871 i009 | Agriculture 14 01871 i010 |
| III | Three overlapping leaves | Agriculture 14 01871 i011 | Agriculture 14 01871 i012 | Agriculture 14 01871 i013 | Agriculture 14 01871 i014 | Agriculture 14 01871 i015 |
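The four indices compared in Table 1 can be computed directly from the RGB bands. A sketch using the standard published formulas for these indices (normalized chromatic coordinates for ExG and ExR; the function name is illustrative):

```python
import numpy as np

def vegetation_indices(rgb):
    """Compute four RGB vegetation indices.
    rgb: float array of shape (H, W, 3), values in [0, 1]."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = R + G + B + 1e-8                 # avoid division by zero
    r, g, b = R / total, G / total, B / total
    exg = 2 * g - r - b                      # Excess Green
    exr = 1.4 * r - g                        # Excess Red (for ExG-ExR)
    rgri = R / (G + 1e-8)                    # Red-Green Ratio Index
    gli = (2 * G - R - B) / (2 * G + R + B + 1e-8)  # Green Leaf Index
    return {"ExG": exg, "RGRI": rgri, "GLI": gli, "ExG-ExR": exg - exr}
```

Green vegetation pushes ExG and GLI positive and RGRI below 1, while soil and mulch backgrounds do the opposite, which is what makes thresholding these index images effective.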
Table 2. Comparison of different frequency domain filtering and their Otsu results.

| Flight Altitude | ExG | Otsu (ExG) | ExG→HP | Otsu (HP) | ExG→GHP | Otsu (GHP) |
|---|---|---|---|---|---|---|
| 20 m | Agriculture 14 01871 i016 | Agriculture 14 01871 i017 | Agriculture 14 01871 i018 (19 × 19) | Agriculture 14 01871 i019 | Agriculture 14 01871 i020 (19 × 19) | Agriculture 14 01871 i021 |
| 30 m | Agriculture 14 01871 i022 | Agriculture 14 01871 i023 | Agriculture 14 01871 i024 (21 × 21) | Agriculture 14 01871 i025 | Agriculture 14 01871 i026 (21 × 21) | Agriculture 14 01871 i027 |
| 40 m | Agriculture 14 01871 i028 | Agriculture 14 01871 i029 | Agriculture 14 01871 i030 (21 × 21) | Agriculture 14 01871 i031 | Agriculture 14 01871 i032 (21 × 21) | Agriculture 14 01871 i033 |
| 50 m | Agriculture 14 01871 i034 | Agriculture 14 01871 i035 | Agriculture 14 01871 i036 (19 × 19) | Agriculture 14 01871 i037 | Agriculture 14 01871 i038 (19 × 19) | Agriculture 14 01871 i039 |
| 60 m | Agriculture 14 01871 i040 | Agriculture 14 01871 i041 | Agriculture 14 01871 i042 (65 × 65) | Agriculture 14 01871 i043 | Agriculture 14 01871 i044 (65 × 65) | Agriculture 14 01871 i045 |
| 70 m | Agriculture 14 01871 i046 | Agriculture 14 01871 i047 | Agriculture 14 01871 i048 (67 × 67) | Agriculture 14 01871 i049 | Agriculture 14 01871 i050 (67 × 67) | Agriculture 14 01871 i051 |

Note: HP, high-pass processing; GHP, Gaussian high-pass processing. Notations such as “19 × 19” give the optimal convolution kernel size for the corresponding filter.
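The GHP and Otsu steps in Table 2 can be sketched in a few lines of NumPy. The transfer function H(u, v) = 1 − exp(−D²/(2D₀²)) is the textbook Gaussian high-pass filter; note that the cutoff parameter `d0` below is an illustrative assumption and is not the same quantity as the spatial kernel sizes reported in the table:

```python
import numpy as np

def gaussian_high_pass(img, d0=20.0):
    """Gaussian high-pass filtering in the frequency domain:
    H(u, v) = 1 - exp(-D^2 / (2 * d0^2)), D = distance to spectrum centre."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))        # centre the spectrum
    y, x = np.ogrid[:h, :w]
    d2 = (y - h / 2) ** 2 + (x - w / 2) ** 2
    H = 1.0 - np.exp(-d2 / (2.0 * d0 ** 2))      # attenuates low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

def otsu_threshold(gray):
    """Otsu's threshold for an 8-bit image: maximise between-class variance."""
    p = np.bincount(gray.ravel(), minlength=256) / gray.size
    omega = np.cumsum(p)                          # class-0 probability
    mu = np.cumsum(p * np.arange(256))            # cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))
```

High-pass filtering removes the slowly varying background component before thresholding, which is why the Otsu (HP) and Otsu (GHP) columns segment plants more cleanly than Otsu applied to the raw ExG image.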
Table 3. Precision comparison of “opening” processing results at different flight altitudes.

| Flight Altitude (m) | Window Size | BF (%) | MF (%) | DP (%) | QP (%) |
|---|---|---|---|---|---|
| 20 | 3 × 3 | 6.36 | 0.21 | 99.79 | 93.83 |
|    | 5 × 5 | 5.93 | 0.47 | 99.53 | 93.99 |
|    | 7 × 7 | 5.17 | 1.42 | 98.60 | 93.82 |
|    | 9 × 9 | 2.62 | 3.00 | 97.08 | 94.67 |
| 30 | 3 × 3 | 2.48 | 0.14 | 99.86 | 97.44 |
|    | 5 × 5 | 0.85 | 0.52 | 99.48 | 98.65 |
|    | 7 × 7 | 0.12 | 2.69 | 97.38 | 97.26 |
|    | 9 × 9 | 0.01 | 6.23 | 94.13 | 94.12 |
| 40 | 3 × 3 | 0.57 | 0.27 | 99.73 | 99.17 |
|    | 5 × 5 | 0.07 | 1.35 | 98.67 | 98.60 |
|    | 7 × 7 | 0.00 | 5.15 | 95.10 | 95.10 |
|    | 9 × 9 | 0.00 | 18.75 | 84.21 | 84.21 |
| 50 | 3 × 3 | 1.01 | 0.09 | 99.91 | 98.91 |
|    | 5 × 5 | 0.07 | 1.70 | 98.33 | 98.27 |
|    | 7 × 7 | 0.02 | 20.65 | 82.88 | 82.87 |
|    | 9 × 9 | 0.00 | 394.44 | 20.22 | 20.22 |
| 60 | 3 × 3 | 0.47 | 0.13 | 99.87 | 99.40 |
|    | 5 × 5 | 0.00 | 2.64 | 97.43 | 97.43 |
|    | 7 × 7 | 0.00 | 31.55 | 76.02 | 76.02 |
|    | 9 × 9 | 0.00 | 609.38 | 14.10 | 14.10 |
| 70 | 3 × 3 | 0.28 | 0.61 | 99.40 | 99.12 |
|    | 5 × 5 | 0.01 | 6.16 | 94.19 | 94.18 |
|    | 7 × 7 | 0.00 | 78.59 | 56.00 | 56.00 |
|    | 9 × 9 | 0.00 | 2932.03 | 3.30 | 3.30 |
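The four measures in Table 3 are numerically consistent with the standard branching factor, miss factor, detection percentage, and quality percentage used in object-extraction accuracy assessment. Assuming those definitions (an assumption; TP, FP, and FN are correctly detected, falsely detected, and missed plant counts), a sketch:

```python
def accuracy_metrics(tp, fp, fn):
    """Branching factor (BF), miss factor (MF), detection percentage (DP)
    and quality percentage (QP), all expressed in percent.
    Assumed standard definitions:
        BF = FP/TP, MF = FN/TP, DP = TP/(TP+FN), QP = TP/(TP+FP+FN)."""
    bf = 100.0 * fp / tp
    mf = 100.0 * fn / tp
    dp = 100.0 * tp / (tp + fn)
    qp = 100.0 * tp / (tp + fp + fn)
    return bf, mf, dp, qp
```

Under these definitions the 20 m / 3 × 3 row (BF 6.36, MF 0.21) implies DP = 100/(1 + 0.0021) ≈ 99.79 and QP = 100/(1 + 0.0636 + 0.0021) ≈ 93.8, consistent with the table; BF penalizes over-detection, MF penalizes omission, and QP penalizes both.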
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Du, X.; Zhou, Z.; Huang, D. Influence of Spatial Scale Effect on UAV Remote Sensing Accuracy in Identifying Chinese Cabbage (Brassica rapa subsp. Pekinensis) Plants. Agriculture 2024, 14, 1871. https://doi.org/10.3390/agriculture14111871
