Article

Fusion of UAV-Acquired Visible Images and Multispectral Data by Applying Machine-Learning Methods in Crop Classification

by Zuojun Zheng 1,2, Jianghao Yuan 1,2,3, Wei Yao 1, Paul Kwan 4, Hongxun Yao 5, Qingzhi Liu 6 and Leifeng Guo 2,*

1 College of Information Science and Technology, Hebei Agricultural University, Baoding 071001, China
2 Institute of Agricultural Information, Chinese Academy of Agricultural Sciences, Beijing 100081, China
3 Academy of National Food and Strategic Reserves Administration, Beijing 100039, China
4 School of Engineering and Technology, CQUniversity Brisbane, 160 Ann St, Brisbane City, QLD 4000, Australia
5 School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
6 Information Technology Group, Wageningen University and Research, 6700 HB Wageningen, The Netherlands
* Author to whom correspondence should be addressed.
Agronomy 2024, 14(11), 2670; https://doi.org/10.3390/agronomy14112670
Submission received: 5 October 2024 / Revised: 29 October 2024 / Accepted: 8 November 2024 / Published: 13 November 2024
(This article belongs to the Section Precision and Digital Agriculture)

Abstract

The sustainable development of agriculture is closely related to the adoption of precision agriculture techniques, and accurate crop classification is a fundamental aspect of this approach. This study explores the application of machine learning techniques to crop classification by integrating RGB images and multispectral data acquired by UAVs. The study focused on five crops: rice, soybean, red bean, wheat, and corn. To improve classification accuracy, we extracted three key feature sets: band values and vegetation indices, texture features derived from the gray-level co-occurrence matrix, and shape features. These features were combined with five machine learning models: random forest (RF), support vector machine (SVM), k-nearest neighbour (KNN), classification and regression tree (CART), and artificial neural network (ANN). The results show that the RF model consistently outperforms the other models, with an overall accuracy (OA) of over 97% and a significantly higher Kappa coefficient. Fusion of RGB images and multispectral data improved accuracy by 1–4% compared with using a single data source. Our feature importance analysis showed that band values and vegetation indices had the greatest impact on classification results. This study provides a comprehensive analysis from feature extraction to model evaluation, identifying the optimal combination of features to improve crop classification and providing valuable insights for advancing precision agriculture through data fusion and machine learning techniques.

1. Introduction

The sustainable development of agriculture is a prerequisite for the realization of precision agriculture. Accurate crop identification and classification serve as foundational steps in this process, as they enable advanced solutions to key challenges, such as plant counting, weed detection, and crop health monitoring. Agricultural ecosystems are highly susceptible to abnormal weather events and climate-related disasters, such as droughts and floods, making regular crop monitoring and yield prediction particularly important [1]. Traditionally, agricultural experts have relied on field surveys to determine the distribution of different crops, but field surveys are time-consuming and labor-intensive. With the advancement of remote-sensing technology, however, there has been growing interest in its application to crop monitoring and thematic mapping within the agricultural sector, as it can regularly provide localized information [2].
By utilizing satellite remote-sensing imagery and combining traditional machine learning with deep learning-based semantic segmentation methods, researchers have achieved remarkable success in farmland object classification. For example, Al Awar et al. [3] conducted crop-classification research in the Bekaa Valley of Lebanon, focusing on wheat and potatoes, using Sentinel-1 and Sentinel-2 satellite data. They applied support vector machines (SVMs), random forests (RFs), classification and regression trees (CARTs), and backpropagation networks (BPNs), achieving an overall accuracy (OA) of over 95%. Similarly, Song et al. [4] studied a crop-growing region in Gaomi City, Shandong Province, using panchromatic and multispectral images from the high-resolution Gaofen-2 satellite. They applied an improved multi-temporal spatial segmentation network (MSSN) to classify farmland objects. Their experimental results showed that the model achieved a strong pixel accuracy (PA) of 95%, an F1 score of 0.92, and an Intersection over Union (IoU) of 0.93 on the test set.
In addition to classification, remote-sensing technology has wide applications in crop health monitoring, yield estimation, land cover-based crop classification, and drought monitoring [5,6,7,8]. However, crop classification presents challenges due to the similarities in texture and color among early-stage crops. While satellite remote-sensing data are freely available, their low resolution, along with the impact of atmospheric particles and cloud cover—especially when cloud coverage exceeds 90%—significantly limits classification accuracy. In contrast, unmanned aerial vehicle (UAV) remote sensing offers high flexibility, short revisit periods, minimal environmental impact, and ease of acquiring small-scale agricultural remote-sensing data [9]. UAVs can fly at low altitudes, delivering high-spatial-resolution imaging, which makes them highly promising for precision agriculture applications [10,11]. By integrating various sources of remote-sensing data, precision agriculture enables farmers to make informed and timely agronomic decisions [12].
The development of automated methods for UAV applications in agriculture holds great potential, particularly in accurately identifying crop locations using high-resolution imagery. For instance, Wan et al. [13] achieved a high classification accuracy (R2 = 0.89) by combining visible and multispectral images acquired by UAVs with the K-means method to classify oilseed rape field images. Li et al. [14] used a half-Gaussian fitting method to estimate maize crop coverage from UAV imagery. Similarly, Marcaccio et al. [15] classified major vegetation types using high-resolution synthetic UAV images, significantly improving mapping accuracy. Yu et al. [16] introduced compressed sensing technology and proposed an integrated compressed sensing image-fusion algorithm, which was applied to the fusion of UAV and Synthetic Aperture Radar (SAR) images, yielding positive results. Torres-Sánchez et al. [17] achieved a 90% crop-classification accuracy by utilizing dual sensors on UAVs combined with an object-based automatic thresholding algorithm. Additionally, Shackelford et al. [18] reduced classification fragmentation using an object-oriented classification method, further improving classification accuracy.
With the increase in remote-sensing data and the reduction in decision-making time, developing accurate and efficient classification algorithms has become critical [19]. Researchers have developed various machine-learning algorithms for crop classification, among which artificial neural networks, decision trees, random forests (RFs), and support vector machines are the most commonly used. The random forest algorithm is widely applied due to its speed and robustness [20,21]. Studies have shown that the RF classifier is less sensitive to sample size, yielding significant classification results, and particularly suitable for high-spatial-resolution drone-image data, making it an ideal choice for agricultural plot mapping [22,23].
Vegetation indices are key features in solving crop-classification problems. Peñá et al. [24] developed an object-based crop identification and mapping method based on several vegetation indices (VIs) and texture features derived from visible, near-infrared, and short-wave infrared (SWIR) bands, and the results showed that spectral variables based on the vegetation indices contributed about 90% to the model. Duke et al. [25] generated 11 agriculture-related vegetation indices from UAV multispectral images and, by combining a canopy height model (CHM) with machine-learning classification algorithms, achieved an overall accuracy of more than 93%. Zhang et al. [26] proposed a new green–red vegetation index (NGRVI) based on unmanned aerial vehicle (UAV) visible-light images; their experiments showed that the NGRVI can accurately extract vegetation information in arid and semi-arid areas, with an extraction accuracy of more than 90%. Barrero et al. [27] extracted the Normalized Difference Vegetation Index (NDVI) and the Normalized Green–Red Difference Index (NGRDI) from UAV remote-sensing images and combined them with a neural network (NN) detection system to identify weeds, with good results. Cimtay et al. [28] created a new vegetation index algorithm based on the short-wavelength infrared (SWIR) band to detect vegetation; their tests showed that soil and water areas, in addition to vegetation areas, could be successfully detected in SWIR-band hyperspectral images, with average recall and precision of 98.9% and 98.5%, respectively.
In recent years, data-fusion techniques have gained increasing attention for their ability to enhance classification accuracy. By integrating data from different sensors, the dimensionality of information can be enriched, thereby improving classification accuracy. For instance, Fan et al. [29] extracted ground-cover and plant-height information for potatoes using UAV hyperspectral and RGB images and constructed four fusion-feature parameters by incorporating green edge parameters. Their results showed that the R2 values of the optimal fusion parameter model were greater than 0.2 for all growth stages. Ma et al. [30] improved model accuracy by fusing RGB and multispectral images, increasing the R2 value from 0.47 to 0.63, significantly enhancing model performance. Bauer et al. [31] developed a classification model for sugar beet rust by combining visible and multispectral remote-sensing data, further improving classification accuracy. These studies demonstrate that fusing multi-source data can significantly enhance model robustness and performance.
Although existing studies have made some progress using visible and multispectral data obtained from UAVs for crop classification, challenges remain in achieving optimal accuracy. Effectively integrating multi-source data to enhance classification performance is still an unresolved issue. Therefore, the application of UAV-based visible and multispectral data fusion technology for crop classification holds significant research value. This paper proposes a crop-classification method based on the fusion of visible and multispectral data from UAVs. First, visible and multispectral images of farmland are captured using UAVs. Next, data fusion techniques are applied to integrate the multispectral and visible data. Finally, the random forest algorithm is used to classify crops based on the fused data. This study aims to validate the improvement in classification accuracy through data fusion and to further explore the potential of the random forest algorithm in remote-sensing data classification.

2. Materials and Methods

2.1. Study Area

The study area is located in Suibin County, Hegang City, Heilongjiang Province, China (Figure 1). Within this region, two experimental farmlands were selected, situated in the delta area at the confluence of the Heilong and Songhua Rivers (47°35′37.74″ N, 132°0′40.54″ E). The climate is cold-temperate continental, with an average temperature of about 15.5 °C from April to October. The area receives abundant rainfall, with an average annual precipitation of approximately 502.5 mm. The soil consists mainly of meadow soil and brown forest soil, which are fertile and support lush vegetation. Agriculture is the dominant industry in the region, primarily involving the cultivation of rice, soybeans, corn, wheat, and some cash crops. The total output of grains and beans reached 380,000 tons in 2021, with a per capita income of CNY 2814. The region has over 6000 large agricultural machines, achieving a comprehensive mechanization level of over 96%.

2.2. UAV-Based Remote-Sensing Data Acquisition

Data collection for this study was conducted using a DJI M300 RTK industrial-grade surveying and inspection drone (Shenzhen DJI Innovative Technology Co., Ltd., Shenzhen, China), equipped with a DJI Zenmuse P1 camera (Shenzhen DJI Innovative Technology Co., Ltd., Shenzhen, China) and a Changguang Yuchen MS600 Pro multispectral sensor (Qingdao Changguang Yuchen Information Technology and Equipment Co., Ltd., Qingdao, China). The specific parameters of the drone and onboard equipment are detailed in Table 1. The total area of the data-collection zone was approximately 5000 mu (about 333 hectares), with primary crops including rice, soybean, corn, wheat, and adzuki bean. Poplar trees were planted as buffer strips between different crops, while poplar and coniferous trees were planted around the perimeter as windbreaks.
Visible and multispectral data were collected on 11 July 2023, when the UAV was flying at an altitude of 160 m. The phenological period in which each crop was located is shown in Figure 2. The flight parameters included a sidelap rate of 70%, a frontlap rate of 80%, and a flight speed of 17 m/s. Data collection took place under clear weather conditions, with ample sunlight and low wind speeds, ensuring optimal data quality.

2.3. Data Preprocessing

The drone was equipped with a DJI Zenmuse P1 camera and a Changguang Yuchen MS600 Pro multispectral sensor to capture raw images in both visible-light and multispectral formats. The multispectral images underwent reflectance calibration using YusenseRef V3.0 software. Subsequently, the calibrated multispectral images and the raw visible-light images were imported into Pix4Dmapper 4.5.6 software for orthorectification, resulting in orthophotos. Using ArcMap 10.8, the excess portions of each orthophoto were cropped. The cropped visible-light image served as a reference, and 20 control points were manually marked using the image-registration feature in ENVI 5.6. The multispectral orthophoto was then aligned with the visible-light orthophoto to create the final image, as shown in Figure 3.

2.4. Research Methodology

Various features are extracted from the visible-light and multispectral data, including spectral, shape, and texture features. Spectral features are derived from the reflectance values of different bands in the multispectral data; shape features are extracted based on the geometric characteristics of the target area, such as area and perimeter; and texture features are calculated using methods such as the gray-level co-occurrence matrix (GLCM). These different types of features are merged into a comprehensive feature vector to serve as input for the classification model. The fused features are then classified using a random forest classifier. This classifier constructs multiple decision trees and employs a voting mechanism for classification, effectively handling high-dimensional data and reducing overfitting issues. The parameters of the classifier are adjusted through cross-validation to optimize model performance. Finally, crop-classification maps are generated based on predictions from the best-performing model, as illustrated in Figure 4.

2.4.1. Image Segmentation

Using eCognition 9.0 software, multi-scale segmentation was performed on the images. For the visible-light images, different segmentation scales were set within the range of 25 to 175, with intervals of 25. For the multispectral images, segmentation scales were established within the range from 500 to 1500, with intervals of 50. Corresponding shape factors and compactness parameters were also adjusted for optimal results, after which the images were segmented individually based on these configurations.
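The multiresolution segmentation in this paper is performed in eCognition, which is proprietary. As an open-source illustration of the same idea, namely a scale parameter that controls object size and the trade-off between over- and under-segmentation, the following hedged sketch sweeps the scale parameter of scikit-image's Felzenszwalb segmentation on a stand-in RGB image; this is not the authors' workflow.
```python
# Illustrative only: Felzenszwalb segmentation as an open-source analogue of
# scale-dependent object-based segmentation (not eCognition's algorithm).
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import felzenszwalb

rgb = astronaut()  # placeholder for a visible-light orthophoto tile
for scale in (25, 75, 175):  # analogous to sweeping the segmentation scale
    segments = felzenszwalb(rgb, scale=scale, sigma=0.8, min_size=50)
    print(f"scale={scale:3d} -> {segments.max() + 1} objects")
```
Larger scale values produce fewer, larger objects, mirroring the under-/over-segmentation behaviour described in Section 3.1.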

2.4.2. Feature Extraction

Using eCognition 9.0 software, three major categories of features were extracted from the samples for model training: spectral features, shape features, and texture features.
(1) Spectral Features (SFs)
The spectral features are composed of two parts. ① SPEC: This includes the standard deviation, mean, maximum difference between bands (max difference), and overall brightness of the pixel values in the visible-light image’s R, G, and B bands, as well as the multispectral image’s R, G, B, RE, and NIR bands. ② Vegetation Indices (INDEs): These indices assess the vegetation growth status by analyzing the reflectance or absorption characteristics of vegetation across different spectral bands. By combining multiple spectral bands, they provide a more comprehensive, stable, and intuitive evaluation of vegetation conditions, overcoming the limitations of single-band analysis. In this study, 24 vegetation indices suitable for the data were selected, including 10 indices derived from the R, G, and B bands and 14 indices that incorporate the RE or NIR bands. Detailed information on the selected indices is shown in Table 2.
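As an illustration of how such indices are computed from per-band reflectance, the minimal Python sketch below derives a few representative indices using their standard formulas (NDVI, NDRE, NGRDI, and VARI); the function name, array names, and synthetic values are placeholders, and the paper's full set in Table 2 contains 24 indices.
```python
# Minimal sketch: a handful of standard vegetation indices from band arrays.
import numpy as np

def vegetation_indices(R, G, B, RE, NIR, eps=1e-10):
    """Return a dict of example indices; eps guards against division by zero."""
    return {
        "NDVI":  (NIR - R) / (NIR + R + eps),    # near-infrared based
        "NDRE":  (NIR - RE) / (NIR + RE + eps),  # red-edge based
        "NGRDI": (G - R) / (G + R + eps),        # RGB-only index
        "VARI":  (G - R) / (G + R - B + eps),    # RGB-only index
    }

# Per-object mean reflectance would be fed in after segmentation; random
# values stand in for real band statistics here.
rng = np.random.default_rng(1)
R, G, B, RE, NIR = (rng.uniform(0.01, 0.6, size=100) for _ in range(5))
idx = vegetation_indices(R, G, B, RE, NIR)
print({k: round(float(v.mean()), 3) for k, v in idx.items()})
```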
(2) Geometric Features (GEOM)
In selecting geometric features, we excluded range-related features such as perimeter, area, volume, thickness, and width, as these can reduce classification accuracy [53]. These features are more applicable to objects with regular shapes, like buildings [54]. Instead, we focused on shape features, which are commonly used in image processing, computer vision, and pattern recognition to describe and analyze the geometric shapes of objects, aiding in tasks such as object recognition, classification, and analysis. Compared with color and texture features, shape features are more robust, less affected by external factors such as lighting and color, and offer high stability and invariance. The shape features extracted in this study include asymmetry, border index, compactness, density, elliptic fit, main direction, radius of the largest enclosed ellipse, radius of the smallest enclosing ellipse, rectangular fit, roundness, and shape index.
(3) Texture Features (GLCM)
Texture features describe the spatial patterns and local structural information of pixel distributions within an image. They can reveal the physical properties, surface structures, and organization of various objects in the image. Texture features extracted from both multispectral and visible-light images enhance the ability to capture image details, aiding in more accurate object recognition and classification. In this study, we employed the gray-level co-occurrence matrix (GLCM) [55] to extract comprehensive texture features from the R, G, and B bands of visible-light images and the R1, G1, B1, R_E, and NIR bands of multispectral images. The extracted features include homogeneity, contrast, dissimilarity, entropy, angular second moment, mean, standard deviation (StdDev), and correlation.
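A hedged sketch of GLCM texture extraction with scikit-image follows; the paper computes these measures per segmented object and per band in eCognition, whereas this example uses a single synthetic grayscale patch, and entropy is computed manually because graycoprops does not provide it.
```python
# Sketch of GLCM texture extraction (illustrative, not the eCognition output).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(2)
patch = rng.integers(0, 32, size=(64, 64), dtype=np.uint8)  # quantised band values

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=32, symmetric=True, normed=True)

features = {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("homogeneity", "contrast", "dissimilarity",
                         "correlation", "ASM")}

# Entropy per (distance, angle) matrix, then averaged.
p = glcm.reshape(32 * 32, -1)
logp = np.log2(p, out=np.zeros_like(p), where=p > 0)
features["entropy"] = float(-(p * logp).sum(axis=0).mean())
print(features)
```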

2.4.3. Experimental Protocol

To investigate the impact of different feature types (spectral, geometric, and texture features) on classification accuracy based on UAV imagery, and to compare the differences between visible-light and multispectral images, this study designed 21 classification schemes, as shown in Table 3. The vegetation indices extracted in schemes P1–P7 are the 10 RGB-based indices in Table 2, while those extracted in schemes P8–P14 are all of the vegetation indices in Table 2, computed from the multispectral bands.
In schemes P15–P21, after extracting features for visible and multispectral images separately, we combined the feature vectors of the two via simple concatenation and then input them into the classification model for classification. This approach can combine information such as spectral features, texture features, and shape features from different data sources and make full use of the complementary properties of the two types of images: visible images provide rich spatial details and texture information, while multispectral images capture a wide range of spectral information, which helps to identify the physiological characteristics of crops. By fusing these different features together, the model can provide a more comprehensive understanding of the scene and improve the accuracy of classification and recognition.
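The concatenation step can be illustrated with a short sketch; the block names and dimensions below are placeholders rather than the exact composition of the schemes in Table 3.
```python
# Hedged sketch of feature-level fusion by simple concatenation of blocks.
import numpy as np

rng = np.random.default_rng(3)
n_objects = 300
blocks = {
    "rgb_spec": rng.normal(size=(n_objects, 14)),  # RGB band stats + RGB indices
    "rgb_glcm": rng.normal(size=(n_objects, 24)),  # GLCM features, RGB bands
    "ms_spec":  rng.normal(size=(n_objects, 44)),  # MS band stats + indices
    "ms_glcm":  rng.normal(size=(n_objects, 40)),  # GLCM features, MS bands
    "geom":     rng.normal(size=(n_objects, 11)),  # shape features
}

def build_scheme(names):
    """Concatenate the named feature blocks column-wise into one matrix."""
    return np.hstack([blocks[n] for n in names])

# e.g. a fused scheme combining all visible-light and multispectral features
X_fused = build_scheme(["rgb_spec", "rgb_glcm", "ms_spec", "ms_glcm", "geom"])
print(X_fused.shape)  # (300, 133)
```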
Through field surveys and manual visual interpretation of the UAV visible-light images, eight categories were identified: rice, wheat, soybean, red bean, corn, poplar, roads, and background. Based on the multi-scale segmentation results, training samples were randomly selected, as illustrated in Figure 5; the number of samples, used only for training, is shown in Table 4. Subsequently, a total of 5146 pixel points were randomly selected with ArcGIS 3.0.1 software from the regions occupied by each category for accuracy evaluation and confusion matrix generation. The random point locations are shown in Figure 6, and the number of random points for each category is shown in Table 4.

2.4.4. Machine-Learning Modeling

Among the various machine-learning classification algorithms used for UAV remote-sensing data processing, the artificial neural network (ANN) [56] is the most commonly used method, and the random forest (RF) [57] algorithm is considered one of the most accurate classifiers. The support vector machine (SVM) [58] has become a common choice for processing UAV remote-sensing data owing to its efficiency with high-dimensional data, its ability to adapt to nonlinear features, and its high accuracy under small-sample conditions. The K-nearest neighbor (KNN) [59] classification algorithm is widely used for vegetation classification of UAV remote-sensing data because of its simple, intuitive mechanism and its ability to handle multi-class classification problems efficiently. Classification and regression tree (CART) [60] decision trees are favored for their efficient classification, ease of interpretation, and ability to handle nonlinear data, making them particularly suitable for fast, preliminary classification and analysis tasks. To compare these approaches and identify the model best suited to the data in this paper, all of the above methods were used to explore crop-classification performance based on UAV remote-sensing data. All algorithms were implemented using R packages, and the key parameters identified through cross-validation are shown in Table 5.
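The classifiers in this paper were implemented with R packages; as an illustrative analogue, the following scikit-learn sketch scores the same five model families with a common cross-validation split on placeholder fused features (all parameter values here are assumptions, not the tuned values in Table 5).
```python
# Illustrative scikit-learn analogue of the five-classifier comparison.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 50))    # placeholder fused feature vectors
y = rng.integers(0, 8, size=400)  # 8 land-cover classes

models = {
    "RF":   RandomForestClassifier(n_estimators=500, random_state=0),
    "SVM":  make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10)),
    "KNN":  make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "CART": DecisionTreeClassifier(random_state=0),
    "ANN":  make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                                        random_state=0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:4s} mean CV accuracy = {scores.mean():.3f}")
```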

2.4.5. Accuracy Evaluation

To thoroughly evaluate the classification performance based on the random forest algorithm, this study utilized several evaluation metrics: user’s accuracy (UA), producer’s accuracy (PA), overall accuracy (OA), and Kappa Index of Agreement (KIA). These metrics assess the performance of the classification model from different perspectives, ensuring the reliability of the classification results. The accuracy evaluation methods are typically based on a confusion matrix, a common tool used to represent the relationship between the predicted results of a classification model and the actual categories.
User's accuracy indicates the proportion of pixels predicted by the model to belong to a class that actually belong to that class. It reflects the reliability of the classification result, i.e., how probable it is that the model's prediction of this class is true. The formula is as follows:
\[ UA_i = \frac{n_{ii}}{n_{\mathrm{pred},i}} \]
where
  • $UA_i$, the user's accuracy for class i;
  • $n_{ii}$, the number of pixels in the confusion matrix that actually belong to class i and are correctly classified as class i (diagonal elements);
  • $n_{\mathrm{pred},i}$, the number of all pixels predicted to be of class i (i.e., the sum of all elements in column i of the confusion matrix).
Producer's accuracy indicates the proportion of pixels that actually belong to a class that are correctly classified as belonging to that class. It reflects the completeness of the classification model, i.e., what proportion of the samples in that class are correctly identified by the model. The formula is as follows:
\[ PA_i = \frac{n_{ii}}{n_{\mathrm{true},i}} \]
where
  • $PA_i$, the producer's accuracy for class i;
  • $n_{ii}$, the number of pixels in the confusion matrix that actually belong to class i and are correctly classified as class i (diagonal elements);
  • $n_{\mathrm{true},i}$, the number of all pixels that actually belong to class i (i.e., the sum of all elements in row i of the confusion matrix).
The overall classification accuracy (OA) indicates the proportion of correctly classified pixels or samples out of the total pixels or samples in all categories. It reflects the accuracy of the model for the entire dataset. The formula is as follows:
\[ OA = \frac{\sum_{i=1}^{n} n_{ii}}{N} \times 100\% \]
where
  • $n_{ii}$, the number of samples in the confusion matrix that actually belong to class i and are correctly classified as class i (diagonal elements of the confusion matrix);
  • $n$, the total number of categories;
  • $N$, the total number of samples, i.e., the sum of the number of samples from all categories (the sum of all elements in the confusion matrix);
  • $\sum_{i=1}^{n} n_{ii}$, the sum of the number of correctly classified samples in all categories.
The Kappa Index of Agreement (KIA) measures the consistency between the observed classification results and random classification results, serving as a further supplement to overall accuracy by considering the impact of random guessing. The calculation formula for the Kappa coefficient is as follows:
\[ Kappa = \frac{P_o - P_e}{1 - P_e} \]
where $P_o$ is the overall accuracy, defined as follows:
\[ P_o = \frac{\sum_{i=1}^{n} n_{ii}}{\sum_{i=1}^{n} \sum_{j=1}^{n} n_{ij}} \]
and $P_e$ is the expected accuracy of the classifier under random conditions, calculated as follows:
\[ P_e = \frac{\sum_{i=1}^{n} \left( \sum_{j=1}^{n} n_{ij} \cdot \sum_{j=1}^{n} n_{ji} \right)}{\left( \sum_{i=1}^{n} \sum_{j=1}^{n} n_{ij} \right)^{2}} \]
The Kappa coefficient is in the range of [−1, 1], where 1 indicates perfect agreement, 0 indicates agreement with random guessing, and negative values suggest classification performance worse than random guessing.
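The four metrics follow directly from the confusion matrix; the numpy sketch below is a direct translation of the formulas above, applied to a small illustrative matrix with rows as actual classes and columns as predicted classes (the matrix values are made up for the example).
```python
# Direct numpy translation of the UA, PA, OA, and Kappa formulas above.
import numpy as np

cm = np.array([[50,  2,  1],   # rows: actual class, columns: predicted class
               [ 3, 45,  4],
               [ 0,  5, 40]], dtype=float)

diag = np.diag(cm)
UA = diag / cm.sum(axis=0)   # per-class user's accuracy (column sums)
PA = diag / cm.sum(axis=1)   # per-class producer's accuracy (row sums)
OA = diag.sum() / cm.sum()   # overall accuracy

p_o = OA
p_e = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / cm.sum() ** 2
kappa = (p_o - p_e) / (1 - p_e)

print("UA:", UA.round(3), "PA:", PA.round(3))
print("OA:", round(OA, 3), "Kappa:", round(kappa, 3))
```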

3. Results

3.1. Optimal Split Ratio

The segmentation results for the visible and multispectral images are shown in Figure 7 and Figure 8. Our comparative analysis found that the choice of segmentation scale is crucial to the quality of image segmentation. When the segmentation scale is too large, the image shows obvious under-segmentation: each segmented object contains multiple land-cover types, which lowers object purity and makes it difficult to represent the characteristics of each class accurately. In contrast, when the segmentation scale is set too small, the image objects become fragmented, making it difficult to match the actual boundaries of ground features while also increasing the difficulty of recognition and the computational burden of subsequent processing. A reasonable segmentation scale therefore not only improves segmentation accuracy but also effectively controls computational complexity.
For visible-light images, when the segmentation scale is in the range of 50 to 125, the objects generated after segmentation are more consistent with the actual boundaries of ground features, presenting a better segmentation effect. After further visual comparison and analysis of the multi-scale segmentation results, 75 was finally selected as the optimal segmentation scale, which preserves image detail while effectively avoiding object fragmentation. To balance the shape and compactness of the segmented objects, the compactness and shape factors were set to 0.4 and 0.5, respectively.
For multispectral images, the optimal segmentation scale range is significantly larger than that of visible-light images. When the segmentation scale is between 700 and 1200, the segmented objects coincide with the actual boundaries of ground features and better reflect the characteristics of different classes. Notably, when the segmentation scale exceeds 1000, the size of the segmented objects remains essentially unchanged, the object boundaries become clearer, and the heterogeneity between different objects increases, which effectively distinguishes different categories of ground features. Based on the comparative analysis of the multi-scale segmentation results, 1000 was determined to be the optimal segmentation scale for multispectral images, with the compactness and shape factors again set to 0.4 and 0.5 to ensure that the segmented objects have regular shapes and accurate boundaries.

3.2. Classification Accuracy

In this study, by comparing the experimental results of multiple classification algorithms (Table 6), it was found that random forest (RF) and artificial neural network (ANN) had the best overall performance. Among them, RF achieved the highest overall accuracy (OA) in almost all the experiments, and its Kappa coefficient was similarly maintained at a high level. The ensemble nature of the RF algorithm effectively reduces overfitting and improves the robustness of the model, resulting in excellent performance. The ANN also performs well, especially when dealing with complex nonlinear relationships, and its strong adaptive capability ensures that the model captures important information in the high-dimensional feature space.
The experimental results also reveal how differently the models respond to a high-dimensional feature space. Specifically, for RF, ANN, and CART, the results (e.g., P1, P4, P5, P7, P8, P11, P12, P14, P15, P18, P19, and P21) show that, as the feature dimensionality increases, these models capture the potential correlations among multidimensional features more adequately, which in turn improves classification accuracy. This is mainly attributable to the advantages of RF, ANN, and CART in coping with high-dimensional data. RF, as an ensemble-learning method based on random feature selection, dynamically filters the important features during the construction of each decision tree, thus attenuating the effects of redundant features and making the model more adaptive and robust in high-dimensional spaces. Similarly, the ANN constructs highly nonlinear decision boundaries in the feature space through its multilayer neuron structure and weight adjustment, capturing complex patterns and inter-feature relationships. The hierarchical decision structure of CART gradually reduces the influence of noise and makes effective use of multidimensional features. The performance of these models confirms that, for these machine-learning methods, increasing the feature dimensionality has a positive effect on classification performance.
However, KNN and SVM adapt relatively poorly when the feature dimensionality increases, and their classification performance decreases significantly. In experiments P2, P3, P5, P9, P10, P13, P16, P17, and P20, the increase in feature dimensionality does not improve the classification ability of KNN and SVM; instead, performance degrades owing to the curse of dimensionality. For KNN, the Euclidean distance it relies on is no longer discriminative in high-dimensional space: the effectiveness of the distance computation decreases dramatically, blurring the boundaries between categories and making classification accuracy difficult to guarantee. For SVM, the construction of hyperplanes is more susceptible to noise in high-dimensional space, mainly because redundant and noisy features make it difficult to select support vectors, thus affecting the accuracy of the classification boundaries. This phenomenon highlights the limitations of KNN and SVM on high-dimensional data.
The experimental groups that performed poorly in the experiments (e.g., P2, P3, P5, P9, P10, P13, P16, P17, and P20) were mainly dominated by shape features and texture features. These features are prone to generate noisy data during feature extraction due to their high redundancy and potential correlation, thus increasing the complexity of learning and reducing the learning efficiency of the model. Especially in models without feature-selection mechanisms, such as KNN and SVM, this kind of redundant data is more likely to affect the quality of classification decisions, whereas RF, ANN, and CART are more resistant to interference in the face of redundant features by virtue of their intrinsic feature screening or conditioning mechanisms. Overall, this study reveals that increasing feature dimensionality does not always lead to an improved classification performance in crop-classification applications, and that different models exhibit significant performance differences in coping with high-dimensional feature spaces. This finding provides practical guidance for the rational selection and optimization of high-dimensional feature data, emphasizing the importance of selecting the applicable model and controlling the dimensionality during the feature-fusion process to achieve more efficient and robust crop classification in high-dimensional feature space.
We excluded P2, P6, P9, P13, P16, and P20, which had lower accuracy, and comprehensively evaluated the performance of the crop-classification model by analyzing the producer's and user's accuracies under the remaining experimental scenarios (Figure 9 and Figure 10). Producer's accuracy reflects the proportion of each category that is actually correctly classified; a higher producer's accuracy indicates that the model performs better at recognizing a specific category. Across multiple experimental scenarios, producer's accuracies remained high for most categories, especially for corn, rice, and soybean, where they were close to or reached 1, showing the efficiency of the model for these three categories. However, the producer's accuracies for wheat in the P3 and P10 scenarios were too low, with large fluctuations, which was caused by the small wheat sample size.
User's accuracy, on the other hand, is another important measure of the reliability of the model's classification results; it indicates the proportion of samples predicted to belong to a particular category that actually belong to that category. The user's accuracy of most categories was high across the experimental scenarios, showing that the classification results for these categories were reliable and stable. However, the user's accuracies of soybean, red bean, and background fluctuated considerably, especially for soybean, which was lower than expected in scenario P3, implying some confusion in the model's predictions for these three categories.
Overall, although the model suffers from accuracy fluctuations in regard to some categories, the classification strategy combining multispectral and visible features effectively improves the classification performance. The combined analyses of user accuracy and producer accuracy revealed the strengths and weaknesses of the model in regard to different categories, providing an important theoretical basis for future research and applications. These results highlight the significant influence of data features and model selection on crop-classification results and point the way for the further improvement of classification accuracy.

3.3. Feature Importance

In this paper, feature importance is calculated using the Mean Decrease in Impurity (MDI) [61], based on the random forest classification algorithm. In a random forest, each split node of a decision tree has an associated impurity (e.g., Gini impurity or information gain). Feature importance is quantified by accumulating the impurity reduction contributed by each feature across all nodes in all trees. The formula is as follows:
\[ \text{Feature Importance}_j = \frac{1}{T} \sum_{t=1}^{T} \sum_{m=1}^{M_t} \Delta Gini_{t,m} \]
where $T$ is the number of trees in the forest, $M_t$ is the number of nodes in the $t$-th tree, and $\Delta Gini_{t,m}$ is the Gini impurity reduction attributable to feature $j$ at the $m$-th node of the $t$-th tree.
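This MDI measure is what scikit-learn's random forest exposes as feature_importances_; the sketch below, run on synthetic data with placeholder feature names, shows how such a ranking is obtained in practice (it is illustrative and not the paper's R implementation).
```python
# MDI feature importance via scikit-learn's RandomForestClassifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           n_classes=4, random_state=0)
feature_names = [f"feat_{i}" for i in range(X.shape[1])]  # placeholder names

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
ranking = sorted(zip(feature_names, rf.feature_importances_),
                 key=lambda t: t[1], reverse=True)
for name, imp in ranking[:5]:
    print(f"{name:8s} MDI importance = {imp:.3f}")
```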
In the feature importance analysis of this study, different features exhibited significant variations in their impact on crop classification, highlighting the unique contributions of visible and multispectral images (Figure 11). Through our comparative analysis, we found that the overall classification accuracy was significantly enhanced after the integration of features based on visible and multispectral images.
Firstly, regarding vegetation indices, indices such as NPCI, VARI, NDGI, and NGBDI were found to be highly important across multiple experiments. Features generated from visible-light bands, particularly NPCI, performed exceptionally well in experiments P1 and P4. These indices are closely tied to the health status, greenness, and variations in vegetation cover of crops, thus providing rich information for crop classification. The high importance of these features arises from their ability to directly capture the physiological condition of plants without relying on complex postprocessing.
In contrast, shape features like “asymmetry”, “density”, and “shape index” showed limited impact on crop classification in experimental schemes P2 and P9. Although these features are beneficial in object-recognition and -segmentation tasks, their information content is limited in fine-scale crop-classification tasks, particularly in large agricultural plots, where the shape differences among crops may not be distinctive enough, thereby reducing their effectiveness in distinguishing between crop types.
The performance of texture features was notably strong, especially among those features based on the gray-level co-occurrence matrix (GLCM), such as “GLCM standard deviation”, “GLCM homogeneity”, and “GLCM mean”, which excelled in experimental schemes P3, P6, and P10. Texture features effectively capture the microscopic structural differences of crop canopies, particularly in multispectral images, where they reveal differences in surface reflectance characteristics among different crops, further enhancing classification accuracy. This indicates that GLCM-based texture features can detect subtle differences in crops, significantly improving the performance of classification models, especially in data derived from multispectral images.
The introduction of multispectral images was one of the key factors in improving classification accuracy. For instance, in experimental schemes P8 to P14, several vegetation indices derived from multispectral bands, such as SIPI, CIRE, and NDVI (re), demonstrated very high importance. These multispectral features capture physiological information about plants that visible light cannot acquire, such as how near-infrared spectra can provide insights into crop moisture content and photosynthetic efficiency. Therefore, these features excel in accurately identifying different crop types. Compared to features based solely on visible-light bands, multispectral features provide a more comprehensive reflection of the optical characteristics of crops, endowing the classification model with stronger generalization capabilities.
The improvement in classification accuracy is particularly significant in the experiments after fusion of RGB and multispectral image features. The fused features, such as “GLCM homogeneity” and “GLCM dissimilarity”, combined information from both the visible and multispectral bands, enabling a thorough analysis across different spectral ranges. The visible-light bands provided fundamental morphological and color information about the crops, while the multispectral bands supplemented additional details regarding crop health and chlorophyll content. This complementary information greatly enhanced the model’s ability to differentiate among crops.
In summary, 15 key features were identified in this study, including multiple vegetation indices based on visible and multispectral bands, GLCM texture features, and certain shape features. Of these, the multispectral band features and texture features are particularly important for improving classification accuracy, as they provide a comprehensive display of multidimensional information about the crop. Feature fusion of RGB and multispectral images provides a more comprehensive basis for accurate classification. With this fusion technique, the model can capture deeper details of the crop and achieve high-accuracy classification of agricultural products.

3.4. Crop-Distribution Map and Confusion Matrix

The category-distribution maps and confusion matrices generated using the optimal scheme (P21) demonstrate its strong overall classification performance while also exposing some classification errors (Figure 12 and Figure 13). Overall, the scheme accurately reflects the category distribution in most areas, validating its ability to classify with high accuracy. However, the distribution of errors in the map also highlights the challenges faced in classifying certain categories.
Firstly, in the corn-category area, some regions labeled as corn were incorrectly classified as red beans. This error may arise from the spectral similarity between corn and red beans, particularly during the feature-fusion process, where overlaps in certain spectral bands can make it difficult for the classifier to distinguish subtle differences between the two crops. To improve classification accuracy, future research could consider enhancing the differential analysis of features for corn and red beans, or adjusting feature weights to reduce such misclassification occurrences.
Secondly, some red bean areas were misclassified as soybeans. This error might be related to the similarities in vegetation indices and shape features between red beans and soybeans. Although multispectral data provide abundant information, under certain conditions, the features between red beans and soybeans may overlap, affecting classification accuracy. Increasing the detailed analysis of features for these two crops and conducting more precise feature extraction and fusion could help reduce such errors.
Lastly, a small section of rice was misclassified as soybean, indicating some challenges in classifying the spectral features of rice and soybeans. While rice and soybean classification generally performed well, the misclassification in localized areas reveals issues related to spectral overlap between the two. To address this situation, further optimization of data preprocessing and feature selection—especially in distinguishing vegetation indices and texture features—could enhance classification accuracy.
In summary, while the optimal scheme (P21) demonstrates an excellent overall classification performance, the misclassification of certain categories highlights directions for further improvement. By conducting detailed analyses of these misclassified areas and optimizing for specific issues, the overall accuracy and reliability of the classification model can be enhanced. These analytical results provide a valuable direction for future research, aiding in achieving more efficient classification and analysis in more complex application scenarios.

4. Discussion

In this study, by fusing visible images and multispectral data acquired by UAVs and applying classical classifiers such as random forest (RF), artificial neural networks (ANNs), CART decision trees, KNN, and support vector machines (SVMs), we evaluated the effects of different feature combinations on crop-classification performance. In contrast, most existing studies have used classification methods based on a single data source (visible or multispectral) and achieved lower accuracy. For example, object-oriented approaches applied to UAV visible-light data achieved overall accuracies of only 86.4% and 89.23% [62,63]. Moreover, the classification accuracies based on RGB images alone in this paper range from 54% to 93%, which is comparatively poor. However, the classification accuracy was significantly improved by using multispectral data with near-infrared and red-edge bands. Deng et al. evaluated a method integrating an object-oriented approach and the random forest (RF) algorithm based on UAV multispectral data, achieving an overall accuracy of 92.76% [64]. Yang et al. [65] used the ReliefF algorithm to select spectral and texture features from UAV multispectral images and combined them with an SVM classifier, achieving an overall accuracy of 92.01%. This suggests that the use of multispectral data can effectively improve the accuracy of crop classification. However, these studies tend to be limited to a single data source and do not explore the potential of multi-feature fusion. In addition, our findings show that RGB images are prone to the salt-and-pepper effect during classification, which can be mitigated by the introduction of multispectral images. This is because the lack of near-infrared (NIR) and infrared (IR) band data in RGB images limits the separability of different land-cover types. These findings are consistent with those of previous studies [66,67,68,69].
In the visible and multispectral image feature fusion experiments, the performance of scheme P21 is particularly outstanding, with an accuracy of 97.77%, indicating that the fused features can significantly improve classification accuracy. The fused features not only enhance the ability to differentiate crop categories but also effectively compensate for the deficiencies of single data sources in spectral coverage and feature extraction. This result supports the strategy of using multi-source data fusion in crop classification, especially when confronted with crop classes with similar spectral features, as the fused features can provide more comprehensive information. Similarly, Ayyappa et al. combined Sentinel-2A satellite imagery with UAV imagery, fused the images using principal component analysis (PCA) pan-sharpening, and achieved the highest classification accuracy of 93.14% with the RF model [70]. However, when a large number of features are extracted from multiple data sources, the dimensionality of the feature vectors can become very high, which increases computational cost and may cause a curse-of-dimensionality problem. Features from different sources may also contain duplicate or correlated information, degrading model performance; this needs to be addressed by feature selection or dimensionality-reduction techniques.
As a representative ensemble-learning algorithm, the RF classifier has achieved good results in the automatic extraction of remote-sensing information [71,72]. The RF classifier is better suited to large samples and high-dimensional data and therefore requires a sufficient number of samples [73], whereas the SVM classifier is specifically designed for analyzing small numbers of samples [74]. According to the experimental results, compared with KNN, the CART decision tree, SVM, and ANN, RF is able to generate more accurate classifications. This result is consistent with other studies, which have also demonstrated the superiority of RF in crop classification [75,76,77].
This study relies on images captured within a specific time frame. Given that various crops exhibit variations in spectral and morphological characteristics at different growth stages [78,79], relying on results from a single imaging cycle alone is not sufficiently representative of conditions at other times.

5. Conclusions

This paper presents an advanced crop-classification method based on the fusion of drone-acquired visible-light and multispectral data, aiming to enhance the accuracy of crop classification and evaluate the impact of data fusion on improving classification precision. Using the random forest (RF) classification algorithm, we conducted a systematic analysis of different feature combinations and their contribution to classification outcomes. The experimental results demonstrate that integrating visible-light and multispectral data significantly improves classification accuracy, achieving a high overall accuracy of 97.77%. This not only highlights the importance of data fusion in enhancing the stability and generalization ability of the model but also allows for a more comprehensive utilization of diverse image information. In terms of feature selection, band values, vegetation indices, texture features, and shape features were identified as key factors influencing classification outcomes. The effective combination of these features provides strong support for the accurate identification of crops in complex terrains and diverse agricultural settings. The findings of this study provide theoretical insights and practical guidance for effective implementation of precision agriculture, emphasizing the potential of remote-sensing technology in agricultural monitoring.
The findings also highlight the importance of multi-source data fusion, particularly when dealing with crops that exhibit similar spectral characteristics, as it provides a wider spectrum of information for more reliable classifications. The use of RF as an ensemble-learning algorithm has demonstrated its superior performance in this context, proving its suitability for large datasets and high-dimensional feature spaces commonly encountered in crop-classification tasks. Looking ahead, we recommend further research in feature selection and model optimization to improve classification accuracy. Additionally, expanding the scope of study to account for the influences of different climatic and geographical conditions on crop growth will enhance the model’s robustness and ensure its broader applicability. Including temporal data from different crop-growth stages may also provide deeper insights into seasonal variations, potentially resulting in more adaptable classification models.
In summary, this study not only demonstrates the effectiveness of drone-based remote-sensing technology in crop classification but also provides a scientific basis for future agricultural management and decision-making. The results encourage the integration of advanced remote-sensing methods into sustainable agricultural practices, offering a path forward for improving precision agriculture and supporting the long-term development of the agricultural sector.

Author Contributions

Conceptualization, W.Y.; methodology, Q.L.; software, Z.Z.; validation, Z.Z.; formal analysis, P.K.; investigation, J.Y.; resources, H.Y.; data curation, L.G.; writing—original draft preparation, Z.Z.; writing—review and editing, L.G.; visualization, Z.Z.; supervision, L.G.; project administration, H.Y.; funding acquisition, L.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Science and Technology Innovation Program of AII-CAAS (CAAS-ASTIP-2024-AII) and by the National Key R&D Program of China (2021ZD0110901).

Data Availability Statement

The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy and ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kwak, G.-H.; Park, N.-W. Impact of Texture Information on Crop Classification with Machine Learning and UAV Images. Appl. Sci. 2019, 9, 643. [Google Scholar] [CrossRef]
  2. Kim, Y.; Park, N.-W.; Lee, K.-D. Self-Learning Based Land-Cover Classification Using Sequential Class Patterns from Past Land-Cover Maps. Remote Sens. 2017, 9, 921. [Google Scholar] [CrossRef]
  3. Al-Awar, B.; Awad, M.M.; Jarlan, L.; Courault, D. Evaluation of Nonparametric Machine-Learning Algorithms for an Optimal Crop Classification Using Big Data Reduction Strategy. Remote Sens. Earth Syst. Sci. 2022, 5, 141–153. [Google Scholar] [CrossRef]
  4. Song, T.Q.; Zhang, X.Y.; Li, J.X.; Fan, H.S.; Sun, Y.Y.; Zong, D.; Liu, T.X. Research on application of deep learning in multi-temporal greenhouse extraction. Comput. Eng. Appl. 2020, 5, 12. [Google Scholar]
  5. Navalgund, R.R.; Jayaraman, V.; Roy, P.S. Remote Sensing Applications: An Overview. Curr. Sci. 2007, 93, 1747–1766. [Google Scholar]
  6. Seelan, S.K.; Laguette, S.; Casady, G.M.; Seielstad, G.A. Remote sensing applications for precision agriculture: A learning community approach. Remote Sens. Environ. 2003, 88, 157–169. [Google Scholar] [CrossRef]
  7. Hufkens, K.; Melaas, E.K.; Mann, M.L.; Foster, T.; Ceballos, F.; Robles, M.; Kramer, B. Monitoring crop phenology using a smartphone based near-surface remote sensing approach. Agric. For. Meteorol. 2019, 265, 327–337. [Google Scholar] [CrossRef]
  8. Sivakumar, M.V.K.; Roy, P.S.; Harmsen, K.; Saha, S.K. Satellite remote sensing and gis applications in agricultural meteorology. In Proceedings of the Training Workshop, Dehradun, India, 7–11 July 2003. [Google Scholar]
  9. Li, Z.M.; Zhao, J.; Lan, Y.B.; Cui, X.; Yang, H.B. Crop classification based on UAV visible image. J. Northwest A F Univ. (Nat. Sci. Ed.) 2019, 11, 27. [Google Scholar]
  10. Malamiri, H.R.G.; Aliabad, F.A.; Shojaei, S.; Morad, M.; Band, S.S. A study on the use of UAV images to improve the separation accuracy of agricultural land areas. Comput. Electron. Agric. 2021, 184, 106079. [Google Scholar] [CrossRef]
  11. Rodríguez, J.; Lizarazo, I.; Prieto, F.A.; Morales, V.D.A. Assessment of potato late blight from UAV-based multispectral imagery. Comput. Electron. Agric. 2021, 184, 106061. [Google Scholar] [CrossRef]
  12. Candiago, S.; Remondino, F.; De Giglio, M.; Dubbini, M.; Gattelli, M. Evaluating Multispectral Images and Vegetation Indices for Precision Farming Applications from UAV Images. Remote Sens. 2015, 7, 4026–4047. [Google Scholar] [CrossRef]
  13. Wan, L.; Li, Y.; Cen, H.; Zhu, J.; Yin, W.; Wu, W.; Zhu, H.; Sun, D.; Zhou, W.; He, Y. Combining UAV-based vegetation indices and image classification to estimate flower number in oilseed rape. Remote Sens. 2018, 10, 1484. [Google Scholar] [CrossRef]
  14. Li, L.; Mu, X.; Macfarlane, C.; Song, W.; Chen, J.; Yan, K.; Yan, G. A half-Gaussian fitting method for estimating fractional vegetation cover of corn crops using unmanned aerial vehicle images. Agric. For. Meteorol. 2018, 262, 379–390. [Google Scholar] [CrossRef]
  15. Marcaccio, J.V.; Markle, C.E.; Chow-Fraser, P. Unmanned Aerial Vehicles Produce High-Resolution, Seasonally-Relevant Imagery for Classifying Wetland Vegetation. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2015, XL-1/W4, 249–256. [Google Scholar] [CrossRef]
  16. Yu, J.; Shan, L.; Li, F. Application of multi-source image fusion technology in UAV. Radio Eng. J. 2019, 49, 581–586. [Google Scholar]
  17. Torres-Sánchez, J.; López-Granados, F.; Peña, J.M. An automatic object-based method for optimal thresholding in UAV images: Application for vegetation detection in herbaceous crops. Comput. Electron. Agric. 2015, 114, 43–52. [Google Scholar] [CrossRef]
  18. Shackelford, A.K.; Davis, C.H. A combined fuzzy pixel-based and object-based approach for classification of high-resolution multispectral data over urban areas. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2354–2363. [Google Scholar] [CrossRef]
  19. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sánchez, J.P. An assessment of the effectiveness of a random forest classifier for land-cover classification. ISPRS J. Photogramm. Remote Sens. 2012, 67, 93–104. [Google Scholar] [CrossRef]
  20. Du, P.; Samat, A.; Waske, B.; Liu, S.; Li, Z. Random Forest and Rotation Forest for fully polarized SAR image classification using polarimetric and spatial features. ISPRS J. Photogramm. Remote Sens. 2015, 105, 38–53. [Google Scholar] [CrossRef]
  21. Radočaj, D.; Jurišić, M.; Gašparović, M.; Plaščak, I.; Antonić, O. Cropland Suitability Assessment Using Satellite-Based Biophysical Vegetation Properties and Machine Learning. Agronomy 2021, 11, 1620. [Google Scholar] [CrossRef]
  22. Li, M.; Ma, L.; Blaschke, T.; Cheng, L.; Tiede, D. A systematic comparison of different object-based classification techniques using high spatial resolution imagery in agricultural environments. Int. J. Appl. Earth Obs. Geoinf. 2016, 49, 87–98. [Google Scholar] [CrossRef]
  23. Lottes, P.; Khanna, R.; Pfeifer, J.; Siegwart, R.; Stachniss, C. UAV-based crop and weed classification for smart farming. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 3024–3031. [Google Scholar] [CrossRef]
24. Peña-Barragán, J.M.; Ngugi, M.K.; Plant, R.E.; Six, J. Object-based crop identification using multiple vegetation indices, textural features and crop phenology. Remote Sens. Environ. 2011, 115, 1301–1316. [Google Scholar] [CrossRef]
  25. Duke, O.P.; Alabi, T.; Neeti, N.; Adewopo, J.B. Comparison of UAV and SAR performance for Crop type classification using machine learning algorithms: A case study of humid forest ecology experimental research site of West Africa. Int. J. Remote Sens. 2022, 43, 4259–4286. [Google Scholar] [CrossRef]
  26. Zhang, X.; Zhang, F.; Qi, Y.; Deng, L.; Wang, X.; Yang, S. New research methods for vegetation information extraction based on visible light remote sensing images from an unmanned aerial vehicle (UAV). Int. J. Appl. Earth Obs. Geoinf. 2019, 78, 215–226. [Google Scholar] [CrossRef]
  27. Barrero, O.; Perdomo, S.A. RGB and multispectral UAV image fusion for Gramineae weed detection in rice fields. Precis. Agric. 2018, 19, 809–822. [Google Scholar] [CrossRef]
  28. Cimtay, Y.; Özbay, B.; Yilmaz, G.; Bozdemir, E. A New Vegetation Index in Short-Wave Infrared Region of Electromagnetic Spectrum. IEEE Access 2021, 9, 148535–148545. [Google Scholar] [CrossRef]
  29. Fan, Y.-G.; Feng, H.-K.; Liu, Y.; Bian, M.-B.; Zhao, Y.; Yang, G.-J.; Qian, J.-G. Estimation of potato plant nitrogen content using UAV multi-source sensor information. Spectrosc. Spect. Anal. 2022, 42, 3217–3225. [Google Scholar]
  30. Ma, Y.; Bian, M.; Fan, Y.; Chen, Z.; Yang, G.; Feng, H. Estimation of potassium content of potato plants based on UAV RGB images. Trans. Chin. Soc. Agric. Mach. 2023, 54, 196–203+233. [Google Scholar] [CrossRef]
  31. Bauer, S.D.; Korč, F.; Förstner, W. The potential of automatic methods of classification to identify leaf diseases from multispectral images. Precis. Agric. 2011, 12, 361–377. [Google Scholar] [CrossRef]
32. Huete, A.R.; Liu, H.; de Lira, G.R.; Batchily, K.; Escadafal, R. A soil color index to adjust for soil and litter noise in vegetation index imagery of arid regions. In Proceedings of the IGARSS’94—1994 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 8–12 August 1994; Volume 2, pp. 1042–1043. [Google Scholar] [CrossRef]
  33. Miura, T.; Huete, A.R.; Yoshioka, H. Evaluation of sensor calibration uncertainties on vegetation indices for MODIS. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1399–1409. [Google Scholar] [CrossRef]
  34. Meyer, G.E.; Mehta, T.; Kocher, M.F.; Mortensen, D.A.; Samal, A. Textural imaging and discriminant analysis for distinguishing weeds for spot spraying. Trans. ASABE 1998, 41, 1189–1197. [Google Scholar] [CrossRef]
  35. Meyer, G.E.; Neto, J.C. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008, 63, 282–293. [Google Scholar] [CrossRef]
  36. Meyer, G.E.; Hindman, T.W.; Laksmi, K. Machine vision detection parameters for plant species identification. In Proceedings of the Precision Agriculture and Biological Quality, Boston, MA, USA, 3–4 November 1998. [Google Scholar]
  37. Daughtry, C.S.T.; Gallo, K.; Goward, S.N.; Prince, S.D.; Kustas, W.P. Spectral estimates of absorbed radiation and phytomass production in corn and soybean canopies. Remote Sens. Environ. 1992, 39, 141–152. [Google Scholar] [CrossRef]
38. Chen, J.M.; Cihlar, J. Retrieving leaf area index of boreal conifer forests using Landsat TM images. Remote Sens. Environ. 1996, 55, 153–162. [Google Scholar]
  39. Lyon, J.G.; Yuan, D.; Lunetta, R.; Elvidge, C.D. A change detection experiment using vegetation indices. Photogramm. Eng. Remote Sens. 1998, 64, 143–150. [Google Scholar]
  40. Rouse, J.W.; Haas, R.H.; Deering, D.W.; Schell, J.A.; Harlan, J.C. Monitoring the Vernal Advancement and Retrogradation (Green Wave Effect) of Natural Vegetation; Great Plains Corridor; NTRS-NASA Technical Reports Server: Washington, DC, USA, 1973. [Google Scholar]
  41. Verrelst, J.; Schaepman, M.E.; Koetz, B.; Kneubühler, M. Angular sensitivity analysis of vegetation indices derived from CHRIS/PROBA data. Remote Sens. Environ. 2008, 112, 2341–2353. [Google Scholar] [CrossRef]
  42. Clay, D.E.; Kim, K.-I.; Chang, J.; Clay, S.A.; Dalsted, K. Characterizing Water and Nitrogen Stress in Corn Using Remote Sensing. Agron. J. 2006, 98, 579–587. [Google Scholar] [CrossRef]
  43. Kross, A.; McNairn, H.; Lapen, D.; Sunohara, M.; Champagne, C. Assessment of Rapid Eye vegetation indices for estimation of leaf area index and biomass in corn and soybean crops. Int. J. Appl. Earth Obs. Geoinf. 2015, 34, 235–248. [Google Scholar]
  44. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  45. Baloloy, A.B.; Blanco, A.C.; Candido, C.G.; Argamosa, R.J.L.; Dumalag, J.B.L.C.; Dimapilis, L.L.C.; Paringit, E.C. Estimation of mangrove forest aboveground biomass using multispectral bands, vegetation indices and biophysical variables derived from optical satellite imageries: Rapid eye, planet scope and sentinel-2. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 4, 29–36. [Google Scholar] [CrossRef]
  46. Penuelas, J.; Frederic, B.; Filella, I. Semi-Empirical Indices to Assess Carotenoids/Chlorophyll-a Ratio from Leaf Spectral Reflectance. Photosynthetica 1995, 31, 221–230. [Google Scholar]
47. Haboudane, D.; Miller, J.R.; Pattey, E.; Zarco-Tejada, P.J.; Strachan, I.B. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: Modeling and validation in the context of precision agriculture. Remote Sens. Environ. 2004, 90, 337–352. [Google Scholar] [CrossRef]
  48. Tomás, A.d.; Nieto, H.; Guzinski, R.; Mendiguren, G.; Sandholt, I.; Berline, P. Multi-scale approach of the surface temperature/vegetation index triangle method for estimating evapotranspiration over heterogeneous landscapes. Geophys. Res. Abstr. 2012, 14, EGU2012-697. [Google Scholar]
  49. Gitelson, A.A.; Kaufman, Y.J.; Stark, R.; Rundquist, D. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87. [Google Scholar] [CrossRef]
  50. Ling, C.; Liu, H.; Ji, P.; Hu, H.; Wang, X.; Hou, R. Estimation of Vegetation Coverage Based on VDVI Index of UAV Visible Image—Using the Shelterbelt Research Area as An Example. For. Eng. 2021, 2, 57–66. [Google Scholar]
  51. Wen, L.; Yang, B.; Cui, C.; You, L.; Zhao, M. Ultrasound-Assisted Extraction of Phenolics from Longan (Dimocarpus longan Lour.) Fruit Seed with Artificial Neural Network and Their Antioxidant Activity. Food Anal. Methods 2012, 5, 1244–1251. [Google Scholar] [CrossRef]
  52. Gitelson, A.A. Wide Dynamic Range Vegetation Index for remote quantification of biophysical characteristics of vegetation. J. Plant Physiol. 2004, 161, 165–173. [Google Scholar] [CrossRef]
  53. Guo, Q.; Zhang, J.; Guo, S.; Ye, Z.; Deng, H.; Hou, X.; Zhang, H. Urban Tree Classification Based on Object-oriented Approach and Random Forest Algorithm Using Unmanned Aerial Vehicle (UAV) Multispectral Imagery. Remote Sens. 2022, 14, 3885. [Google Scholar] [CrossRef]
  54. Garg, R.; Kumar, A.; Prateek, M.; Pandey, K.; Kumar, S. Land Cover Classification of Spaceborne Multifrequency SAR and Optical Multispectral Data Using Machine Learning. Adv. Space Res. 2022, 69, 1726–1742. [Google Scholar] [CrossRef]
  55. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
  56. Ajayi, O.G.; Opaluwa, Y.D.; Ashi, J.; Zikirullahi, W.M. Applicability of artificial neural network for automatic crop type classification on UAV-based images. Environ. Technol. Sci. J. 2022, 13. [Google Scholar] [CrossRef]
  57. Antoniadis, A.; Cugliari, J.; Fasiolo, M.; Goude, Y.; Poggi, J.M. Random Forests. In Statistical Learning Tools for Electricity Load Forecasting. Statistics for Industry, Technology, and Engineering; Birkhäuser: Cham, Switzerland, 2024. [Google Scholar]
  58. Liu, B.; Shi, Y.; Duan, Y.; Wu, W. UAV-Based Crops Classification with Joint Features from Orthoimage and DSM Data. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, XLII-3, 1023–1028. [Google Scholar] [CrossRef]
  59. Guo, W.; Gong, Z.; Gao, C.; Yue, J.; Fu, Y.; Sun, H.; Zhang, H.; Zhou, L. An accurate monitoring method of peanut southern blight using unmanned aerial vehicle remote sensing. Precis. Agric. 2024, 25, 1857–1876. [Google Scholar] [CrossRef]
  60. Lan, Y.; Huang, Z.; Deng, X.; Zhu, Z.; Huang, H.; Zheng, Z.; Lian, B.; Zeng, G.; Tong, Z. Comparison of machine learning methods for citrus greening detection on UAV multispectral images. Comput. Electron. Agric. 2020, 171, 105234. [Google Scholar] [CrossRef]
  61. Krzywinski, M.; Altman, N. Classification and regression trees. Nat. Methods 2017, 14, 757–758. [Google Scholar] [CrossRef]
  62. Xu, W.C.; Lan, Y.B.; Li, Y.H.; Luo, Y.F.; He, Z.Y. Classification method of cultivated land based on UAV visible light remote sensing. Int. J. Agric. Biol. Eng. 2019, 12, 103–109. [Google Scholar] [CrossRef]
  63. Peng, X.; Xue, W.; Luo, Y.; Zhao, Z. Precise classification of cultivated land based on visible remote sensing image of UAV. Int. J. Agric. Biol. Eng. 2019, 21, 79–86. [Google Scholar]
  64. Deng, H.; Zhang, W.; Zheng, X.; Zhang, H. Crop Classification Combining Object-Oriented Method and Random Forest Model Using Unmanned Aerial Vehicle (UAV) Multispectral Image. Agriculture 2024, 14, 548. [Google Scholar] [CrossRef]
  65. Yang, D.J.; Zhao, J.; Lan, Y.B.; Wen, Y.T.; Pan, F.J.; Cao, D.L.; Hu, C.X.; Guo, J.K. Research on farmland crop classification based on UAV multispectral remote sensing images. Int. J. Precis Agric. Aviat. 2021, 4, 29–35. [Google Scholar] [CrossRef]
  66. Yang, M.D.; Huang, K.S.; Kuo, Y.H.; Hui, T.; Lin, L.M. Spatial and spectral hybrid image classification for rice lodging assessment through UAV imagery. Remote Sens. 2017, 9, 583. [Google Scholar] [CrossRef]
  67. Hunt, E.R., Jr.; Hively, W.D.; Fujikawa, S.J.; Linden, D.S.; Daughtry, C.S.T.; McCarty, G.W. Acquisition of NIR-green–blue digital photographs from unmanned aircraft for crop monitoring. Remote Sens. 2010, 2, 290–305. [Google Scholar] [CrossRef]
  68. Liu, T.; Abd-Elrahman, A. Multi-view object-based classification of wetland land covers using unmanned aircraft system images. Remote Sens. Environ. 2018, 216, 122–138. [Google Scholar] [CrossRef]
  69. Feng, Q.; Liu, J.; Gong, J. UAV Remote sensing for urban vegetation mapping using random forest and texture analysis. Remote Sens. 2015, 7, 1074–1094. [Google Scholar] [CrossRef]
  70. Allu, A.R.; Mesapam, S. Fusion of different multispectral band combinations of Sentinel-2A with UAV imagery for crop classification. J. Appl. Remote Sens. 2024, 18, 016511. [Google Scholar] [CrossRef]
  71. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  72. Biau, G.; Scornet, E. A random forest guided tour. Test 2016, 25, 197–227. [Google Scholar] [CrossRef]
  73. Fradkin, D.; Muchnik, I. Support vector machines for classification. DIMACS Ser. Discrete. Math. Theor. Comput. Sci. 2006, 70, 13–20. [Google Scholar]
  74. Jannoura, R.; Brinkmann, K.; Uteau, D.; Bruns, C.; Joergensen, R.G. Monitoring of crop biomass using true colour aerial photographs taken from a remote controlled hexacopter. Biosyst. Eng. 2019, 129, 341–351. [Google Scholar] [CrossRef]
  75. Zhu, J.; Pan, Z.; Wang, H.; Huang, P.; Sun, J.; Qin, F.; Liu, Z. An Improved Multi-temporal and Multi-feature Tea Plantation Identification Method Using Sentinel-2 Imagery. Sensors 2019, 19, 2087. [Google Scholar] [CrossRef]
  76. Huang, X. High Resolution Remote Sensing Image Classification Based on Deep Transfer Learning and Multi Feature Network. IEEE Access 2023, 11, 110075–110085. [Google Scholar] [CrossRef]
  77. Thakur, R.; Panse, P. Classification Performance of Land Use from Multispectral Remote Sensing Images Using Decision Tree, K-Nearest Neighbor, Random Forest and Support Vector Machine Using EuroSAT Data. Int. J. Intell. Syst. Appl. Eng. 2022, 10, 67–77. [Google Scholar]
  78. Wang, R.; Shi, W.; Kronzucker, H.; Li, Y. Oxygenation Promotes Vegetable Growth by Enhancing P Nutrient Availability and Facilitating a Stable Soil Bacterial Community in Compacted Soil. Soil. Tillage Res. 2023, 230, 105686. [Google Scholar] [CrossRef]
  79. Sharma, R.C. Dominant Species-Physiognomy-Ecological (DSPE) System for the Classification of Plant Ecological Communities from Remote Sensing Images. Ecologies 2022, 3, 25. [Google Scholar] [CrossRef]
Figure 1. Schematic of the location of the study area. (Yellow stars are specific locations of data collection sites.)
Figure 2. Phenological periods of crops.
Figure 3. Orthophotos of visible-light and multispectral images.
Figure 4. Technology roadmap.
Figure 5. Schematic diagram of visible and multispectral image samples.
Figure 6. Schematic representation of random point locations used for accuracy assessment.
Figure 7. Visible light multi-scale segmentation results.
Figure 8. Multiscale segmentation results of multispectral images.
Figure 9. Producer accuracy for different categories and classification schemes.
Figure 10. User accuracy for different categories and classification schemes.
Figure 11. The 15 most important features of the random forest model for schemes P1–P21 (P2, P9, and P16 have only 11 features). The vertical axis lists the features; the horizontal axis shows each feature's contribution to the reduction of impurity in the random forest model, with larger values indicating greater importance to the model's decisions. IFF denotes a fused feature derived from the RGB and multispectral bands.
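The impurity-based importance ranking summarised in Figure 11 can be reproduced, in outline, with the randomForest package. The sketch below is an assumed workflow rather than the authors' released code: the data frame train_df, its three feature columns, and the crop labels are hypothetical placeholders standing in for the real feature sets.

```r
# Minimal sketch: extracting mean-decrease-in-impurity (Gini) importance
# from a fitted random forest, as visualised in Figure 11.
library(randomForest)

set.seed(42)
# Hypothetical training frame: a few feature columns plus a crop-class factor.
train_df <- data.frame(
  NDVI      = runif(60),
  EXG       = runif(60),
  GLCM_mean = runif(60),
  class     = factor(sample(c("rice", "soybean", "corn"), 60, replace = TRUE))
)

rf_model <- randomForest(class ~ ., data = train_df, ntree = 500, mtry = 3)

# type = 2 returns the mean decrease in Gini impurity per feature;
# larger values indicate a greater contribution to the model's decisions.
imp <- importance(rf_model, type = 2)
imp[order(imp[, 1], decreasing = TRUE), , drop = FALSE]
```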
Figure 12. Classification results of the optimal scheme based on the random forest model.
Figure 13. Confusion matrix generated based on the optimal scheme.
Table 1. Selected parameters of the drone and its sensors.

| UAV and Sensors | Technical Specifications | Specific Values |
| --- | --- | --- |
| UAV | Total weight (including battery and lens) | 7.1 kg |
| | Maximum horizontal flight speed (automatic mode) | 17 m/s |
| | Maximum flight time | 55 min |
| | Symmetrical motor axial distance | 895 mm |
| DJI Zenmuse P1 | Lens | DJI Zenmuse P1 \ 35 mm \ FOV 63.5° |
| | Image size | 3:2 (8192 × 5460) |
| | Effective pixels | 45 million |
| | Aperture range | f/2.8–f/16 |
| | Spectral range | R: 620–750 nm; G: 490–570 nm; B: 450–490 nm |
| MS600 Pro | Effective pixels | 1.2 Mpx |
| | Spectral channels | 450 nm @ 35 nm, 555 nm @ 27 nm, 660 nm @ 22 nm, 720 nm @ 10 nm, 750 nm @ 10 nm, 840 nm @ 30 nm |
| | Image format | 16-bit RAW TIFF & 8-bit reflectance JPEG |
| | Coverage width | 110 m × 83 m @ h = 120 m |
Table 2. Details of the vegetation indices used in this study.

| Vegetation Index | Abbreviation | Formula | Brief Description | Source |
| --- | --- | --- | --- | --- |
| Chlorophyll Index–Green Light | CIg | NIR/G − 1 | Evaluates chlorophyll content in leaves | * |
| Chlorophyll Index–Red Edge | CIre | NIR/RE − 1 | Evaluates chlorophyll content in leaves | * |
| Vegetation Color Index | CIVE | 0.441R − 0.881G + 0.385B + 18.78745 | Reflects the color characteristics of vegetation; can be used to identify vegetation types and estimate biomass | [32] |
| Enhanced Vegetation Index | EVI | 2.5(NIR − R)/(NIR + 6R − 7.5B + 1) | Reduces the influence of atmospheric and soil noise, providing a stable response to the vegetation condition in the measured area | [33] |
| Excess Green Index | EXG | 2G − R − B | Used for detecting vegetation | [34] |
| Excess Green Minus Red Index | EXGR | 2G − 2.4R | Can effectively distinguish green vegetation from non-vegetated areas in complex backgrounds | [35] |
| Excess Red Index | EXR | 1.4R − B | Used for detecting non-vegetated areas | [36] |
| Normalized Green Difference Vegetation Index | GNDVI | (NIR − G)/(NIR + G) | Used to enhance the detection and analysis of green vegetation | [37] |
| Modified Second Ratio Index | MSRI | (NIR/R − 1)/(√(NIR/R) + 1) | Enhances vegetation signals while reducing atmospheric interference and soil background noise | [38] |
| Normalized Difference Green Degree Index | NDGI | (G − R)/(G + R) | Evaluates the greenness of vegetation | [39] |
| Normalized Difference Vegetation Index | NDVI | (NIR − R)/(NIR + R) | Used to measure vegetation coverage and health status | [40] |
| Red-Edge Normalized Difference Vegetation Index | NDVIre | (NIR − RE)/(NIR + RE) | Evaluates the health status of vegetation | * |
| Normalized Green–Blue Difference Index | NGBDI | (G − B)/(G + B) | Identifies vegetated areas and reflects their health status | [41] |
| Chlorophyll Normalized Vegetation Index | NPCI | (R − B)/(R + B) | Assesses chlorophyll content and photosynthetic capacity of plants | [42] |
| Red Edge Triangle Vegetation Index | RTVICore | 100(NIR − RE) − 10(NIR − G) | Evaluates leaf area index and biomass | [43] |
| Soil-Adjusted Vegetation Index | SAVI | 1.5(NIR − R)/(NIR + R + 0.5) | Reduces soil background influence | [44] |
| Structure-Independent Pigment Index | SIPI | (NIR − B)/(NIR + B) | Monitors vegetation health, detects plant physiological stress, and analyzes crop yield | [45] |
| Simple Ratio | SR | NIR/R | Common vegetation index for assessing vegetation quantity | [46] |
| Transformed Chlorophyll Absorption Index | TCARI | 3[(RE − R) − 0.2(RE − G)(RE/R)] | Evaluates chlorophyll content and plant health | [47] |
| Triangle Vegetation Index | TVI | 60(NIR − G) − 100(R − G) | Indicates the relationship between the radiant energy absorbed by vegetation and the reflectance in the red, green, and near-infrared bands; useful for monitoring crop biomass | [48] |
| Visible Atmospherically Resistant Index | VARI | (G − R)/(G + R − B) | Reduces the impact of atmospheric conditions on vegetation index calculations | [49] |
| Visible-Band Difference Vegetation Index | VDVI | [2G − (R + B)]/[2G + (R + B)] | Utilizes differences in the visible spectrum to assess vegetation health and coverage | [50] |
| Vegetative Index | VEG | G/(R^α · B^(1−α)), α = 0.667 | Vegetation index based on RGB channel information, used to estimate grassland vegetation coverage | [51] |
| Wide Dynamic Range Vegetation Index | WDRVI | (NIR − R)/(NIR + R) | Overcomes the sensitivity reduction of NDVI under medium-to-high-density green biomass | [52] |
* Source: Official Documentation from ArcGIS Pro 3.3 Help (https://pro.arcgis.com/zh-cn/pro-app/latest/arcpy/image-analyst/bai.htm (accessed on 7 November 2024)).
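As a worked illustration of the formulas in Table 2, the sketch below evaluates a handful of the listed indices from per-band reflectance values in R, the language used for the study's models. The band matrices and their values are hypothetical; in practice the bands would come from co-registered orthomosaic layers.

```r
# Minimal sketch: computing several Table 2 indices from per-band reflectance.
# R, G, B, NIR, and RE are assumed to be numeric matrices (or raster layers)
# that are co-registered and scaled to reflectance in [0, 1].
compute_indices <- function(R, G, B, NIR, RE) {
  list(
    NDVI  = (NIR - R) / (NIR + R),
    GNDVI = (NIR - G) / (NIR + G),
    SAVI  = 1.5 * (NIR - R) / (NIR + R + 0.5),
    EXG   = 2 * G - R - B,
    VDVI  = (2 * G - (R + B)) / (2 * G + (R + B)),
    CIre  = NIR / RE - 1
  )
}

# Dummy 2 x 2 reflectance matrices for demonstration only.
R   <- matrix(c(0.10, 0.12, 0.08, 0.11), nrow = 2)
G   <- matrix(c(0.15, 0.18, 0.14, 0.16), nrow = 2)
B   <- matrix(c(0.05, 0.06, 0.04, 0.05), nrow = 2)
NIR <- matrix(c(0.45, 0.50, 0.40, 0.48), nrow = 2)
RE  <- matrix(c(0.30, 0.33, 0.28, 0.31), nrow = 2)

vi <- compute_indices(R, G, B, NIR, RE)
vi$NDVI
```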
Table 3. Experimental protocol, including 21 classification schemes.

| Number | Features | Source of Features |
| --- | --- | --- |
| P1 | SF | Visible light |
| P2 | GEOM | Visible light |
| P3 | GLCM | Visible light |
| P4 | SF + GEOM | Visible light |
| P5 | SF + GLCM | Visible light |
| P6 | GEOM + GLCM | Visible light |
| P7 | ALL | Visible light |
| P8 | SF | Multispectral |
| P9 | GEOM | Multispectral |
| P10 | GLCM | Multispectral |
| P11 | SF + GEOM | Multispectral |
| P12 | SF + GLCM | Multispectral |
| P13 | GEOM + GLCM | Multispectral |
| P14 | ALL | Multispectral |
| P15 | SF | Visible light + multispectral |
| P16 | GEOM | Visible light + multispectral |
| P17 | GLCM | Visible light + multispectral |
| P18 | SF + GEOM | Visible light + multispectral |
| P19 | SF + GLCM | Visible light + multispectral |
| P20 | GEOM + GLCM | Visible light + multispectral |
| P21 | ALL | Visible light + multispectral |
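The 21 schemes in Table 3 are simply the cross of seven feature combinations with three data sources. The short R sketch below enumerates them in the same order; it is illustrative only and not the authors' code.

```r
# Enumerate the P1-P21 schemes of Table 3 as feature-set x source combinations.
feature_sets <- c("SF", "GEOM", "GLCM", "SF + GEOM", "SF + GLCM",
                  "GEOM + GLCM", "ALL")
sources <- c("Visible light", "Multispectral", "Visible light + multispectral")

schemes <- expand.grid(Features = feature_sets, Source = sources,
                       stringsAsFactors = FALSE)
schemes$Number <- paste0("P", seq_len(nrow(schemes)))
schemes[, c("Number", "Features", "Source")]
```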
Table 4. Number of plots per category training sample and number of random points used for accuracy evaluation.

| Class | Number of Training Sample Plots | Number of Random Points |
| --- | --- | --- |
| Background | 159 | 610 |
| Corn | 46 | 435 |
| Red bean | 106 | 1549 |
| Poplar | 48 | 357 |
| Soybean | 66 | 518 |
| Wheat | 14 | 109 |
| Rice | 66 | 1329 |
| Road | 30 | 239 |
Table 5. Key parameters of the models used in this paper.

| Models | Package (R 4.2.1) | Key Parameters |
| --- | --- | --- |
| RF | randomForest | Max tree number = 500; min sample count = 1; max categories = 16; mtry = 3 |
| SVM | e1071 | Kernel = linear; C = 2; gamma = 0.1 |
| KNN | class | K = 1 |
| CART | rpart | Max categories = 16; minsplit = 20; maxdepth = 10; complexity parameter = 0.01 |
| ANN | nnet | size = 10; maxit = 500; decay = 0.01; learningrate = 0.01 |
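For readers who want to reproduce a comparable setup, the sketch below shows how classifiers with the Table 5 settings might be fitted using the named R packages. The training frame train_df is a hypothetical placeholder; the maximum-category settings and the ANN learning rate listed in Table 5 have no direct arguments in these packages and are therefore not passed here.

```r
# Illustrative sketch of fitting the five Table 5 models in R 4.2.1.
library(randomForest)  # RF
library(e1071)         # SVM
library(class)         # KNN
library(rpart)         # CART
library(nnet)          # ANN

set.seed(1)
# Hypothetical training frame: feature columns plus a crop-class factor.
train_df <- data.frame(
  NDVI      = runif(100),
  EXG       = runif(100),
  GLCM_mean = runif(100),
  class     = factor(sample(c("rice", "soybean", "red bean", "wheat", "corn"),
                            100, replace = TRUE))
)
features <- train_df[, setdiff(names(train_df), "class")]

rf_fit   <- randomForest(class ~ ., data = train_df,
                         ntree = 500, mtry = 3, nodesize = 1)
svm_fit  <- svm(class ~ ., data = train_df,
                kernel = "linear", cost = 2, gamma = 0.1)
knn_pred <- knn(train = features, test = features,
                cl = train_df$class, k = 1)
cart_fit <- rpart(class ~ ., data = train_df, method = "class",
                  control = rpart.control(minsplit = 20, maxdepth = 10,
                                          cp = 0.01))
ann_fit  <- nnet(class ~ ., data = train_df,
                 size = 10, maxit = 500, decay = 0.01, trace = FALSE)
```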
Table 6. Classification accuracy of different schemes.

| Experimental Protocol | RF OA | RF Kappa | SVM OA | SVM Kappa | KNN OA | KNN Kappa | CART OA | CART Kappa | ANN OA | ANN Kappa |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| P1 | 92.48% | 0.91 | 80.96% | 0.78 | 89.66% | 0.88 | 85.90% | 0.84 | 92.36% | 0.90 |
| P2 | 54.17% | 0.47 | 46.52% | 0.39 | 44.30% | 0.36 | 50.06% | 0.43 | 55.17% | 0.47 |
| P3 | 81.08% | 0.78 | 37.18% | 0.21 | 75.44% | 0.72 | 67.33% | 0.63 | 80.59% | 0.78 |
| P4 | 93.54% | 0.93 | 83.78% | 0.78 | 93.74% | 0.91 | 88.13% | 0.86 | 92.89% | 0.92 |
| P5 | 92.60% | 0.91 | 74.15% | 0.67 | 79.08% | 0.76 | 88.37% | 0.87 | 91.24% | 0.91 |
| P6 | 82.37% | 0.80 | 38.76% | 0.25 | 72.50% | 0.68 | 76.73% | 0.73 | 83.76% | 0.81 |
| P7 | 93.07% | 0.92 | 73.13% | 0.67 | 78.65% | 0.74 | 88.25% | 0.86 | 92.17% | 0.91 |
| P8 | 93.42% | 0.92 | 58.34% | 0.55 | 68.50% | 0.64 | 87.66% | 0.86 | 91.25% | 0.90 |
| P9 | 65.10% | 0.60 | 58.34% | 0.49 | 55.58% | 0.49 | 53.11% | 0.46 | 66.28% | 0.61 |
| P10 | 87.90% | 0.86 | 47.85% | 0.39 | 61.27% | 0.59 | 77.44% | 0.74 | 85.49% | 0.83 |
| P11 | 94.60% | 0.94 | 57.56% | 0.51 | 68.73% | 0.59 | 88.25% | 0.86 | 93.66% | 0.92 |
| P12 | 96.83% | 0.96 | 39.86% | 0.29 | 50.13% | 0.47 | 93.42% | 0.92 | 97.23% | 0.96 |
| P13 | 88.01% | 0.86 | 60.76% | 0.51 | 70.11% | 0.68 | 76.26% | 0.73 | 89.23% | 0.88 |
| P14 | 96.83% | 0.96 | 33.86% | 0.22 | 45.27% | 0.43 | 94.71% | 0.94 | 96.25% | 0.95 |
| P15 | 95.30% | 0.95 | 64.50% | 0.58 | 74.38% | 0.71 | 85.31% | 0.83 | 94.56% | 0.93 |
| P16 | 66.16% | 0.61 | 59.72% | 0.51 | 63.16% | 0.61 | 54.17% | 0.47 | 67.25% | 0.62 |
| P17 | 92.60% | 0.91 | 54.43% | 0.56 | 53.25% | 0.51 | 80.14% | 0.77 | 93.05% | 0.91 |
| P18 | 96.36% | 0.96 | 64.77% | 0.61 | 71.26% | 0.69 | 94.36% | 0.93 | 94.89% | 0.93 |
| P19 | 97.06% | 0.97 | 36.69% | 0.25 | 46.28% | 0.44 | 89.42% | 0.88 | 96.85% | 0.95 |
| P20 | 93.65% | 0.93 | 64.07% | 0.55 | 69.27% | 0.63 | 85.08% | 0.83 | 94.25% | 0.93 |
| P21 | 97.77% | 0.97 | 42.80% | 0.31 | 51.75% | 0.49 | 93.30% | 0.92 | 96.28% | 0.95 |
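The OA and Kappa values in Table 6 follow the standard confusion-matrix definitions: OA is the proportion of correctly classified validation points, and Kappa corrects that proportion for chance agreement. A minimal R sketch of the computation, using hypothetical reference and predicted labels, is given below.

```r
# Compute overall accuracy (OA) and Cohen's Kappa from reference/predicted labels.
overall_accuracy_kappa <- function(reference, predicted) {
  cm <- table(reference, predicted)
  n  <- sum(cm)
  po <- sum(diag(cm)) / n                     # observed agreement (OA)
  pe <- sum(rowSums(cm) * colSums(cm)) / n^2  # expected chance agreement
  c(OA = po, Kappa = (po - pe) / (1 - pe))
}

set.seed(7)
labels    <- c("rice", "soybean", "red bean", "wheat", "corn")
reference <- factor(sample(labels, 200, replace = TRUE), levels = labels)
predicted <- reference
flip      <- sample(200, 20)                  # introduce ~10% random errors
predicted[flip] <- sample(labels, 20, replace = TRUE)
overall_accuracy_kappa(reference, predicted)
```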
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
