Article

Early Crop Identification Study Based on Sentinel-1/2 Images with Feature Optimization Strategy

1 College of Agronomy, Inner Mongolia Agricultural University, Huhhot 010018, China
2 College of Agronomy, Henan Agricultural University, Zhengzhou 450046, China
* Author to whom correspondence should be addressed.
Agriculture 2024, 14(7), 990; https://doi.org/10.3390/agriculture14070990
Submission received: 20 May 2024 / Revised: 20 June 2024 / Accepted: 21 June 2024 / Published: 25 June 2024
(This article belongs to the Section Digital Agriculture)

Abstract
The timely and accurate mapping of crop types is crucial for agricultural insurance, futures, and assessments of food security risks. However, crop mapping currently focuses on the post-harvest period, and less attention has been paid to early crop mapping. In this study, the feasibility of using Sentinel-1 (S1) and Sentinel-2 (S2) data to determine the earliest identifiable time (EIT) of major crops (sunflower, maize, spring wheat, and melon) was explored in the Hetao Irrigation District (HID) of China, based on the Google Earth Engine (GEE) platform. An early crop identification strategy based on the Random Forest (RF) model for the HID was proposed, and the performance of model transfer was evaluated. First, median synthesis, linear shift interpolation, and Savitzky–Golay (SG) filtering were used to reconstruct the S1 and S2 time series. Subsequently, the sensitivity of early identification of the different crops to input features, time intervals, and data integration was evaluated based on the RF model. Finally, the model with optimal parameters was evaluated for its transfer capacity and used for early crop mapping in the HID area. The results showed that features extracted from S2 images synthesized at 10-day intervals performed well in obtaining crop EITs. Sunflower, maize, spring wheat, and melon could be identified 90, 90, 70, and 40 days earlier than the harvest date, with identification accuracies, measured by the F1-score, of 0.97, 0.95, 0.98, and 0.90, respectively. The model transfer performed well, with the F1-score decreasing by only 0–0.04 and no change in the EIT for the different crops. It was also found that the crop EITs obtained using S1 data alone were 50–90 days later than those obtained using S2 data alone. Additionally, when S1 and S2 were used jointly, S1 data contributed little to early crop identification. This study highlights the potential of early crop mapping using satellite data, providing a feasible solution for the early identification of crops in the HID area and valuable information for food security assurance in the region.

1. Introduction

With continued population and consumption growth, the timely and accurate mapping of crop types is fundamental for achieving sustainable agricultural development and improving food security [1,2]. In particular, early or in-season information on crop acreage and distribution will help in the timely detection of famines [3] and rapid response to floods [4], droughts [5], pests and diseases [6], and other potential risks to promote the effective management of water and fertilizer use in agriculture [7,8] and assist in predicting crop yields [9]. In addition, timely crop information can help agricultural insurance companies assess disaster losses and compensate farmers without relying on traditional, labor-intensive field visits [10,11]. Therefore, early crop mapping research is essential for meeting the needs of these diverse applications.
Early crop identification refers to the monitoring and identification of crops during the growth stage, i.e., before harvest, using various technological means, while maintaining an acceptable level of accuracy [12,13,14]. Satellite remote sensing is widely used in early crop identification because of its ability to repeatedly and consistently provide accurate and objective crop information in space and time [2,14,15,16,17,18]. With the development of remote sensing satellites, early crop mapping has shifted from Moderate Resolution Imaging Spectroradiometer (MODIS) data, mainly used for highly aggregated large-scale mapping, to fine-grained crop mapping dominated by satellites with high spatial resolution [2,14]. S1 and S2, developed by the European Space Agency (ESA) with a higher spatial resolution (10 m), have opened new opportunities for large-scale, field-level early crop identification [12,18,19,20], especially in Asia, and particularly in South Asia and China, where field sizes are smaller than 5 hectares and crop species are diverse [18].
Significant progress has been made in early crop identification based on S1 or S2 data. Haseeb et al. [18] successfully achieved early crop identification in a small-scale farming system in Pakistan by combining S2 data with Long Short-Term Memory (LSTM) networks; major crops, including rice, wheat, and sugarcane, were classified within the first four weeks after sowing with up to 93.77% accuracy. According to Subir et al. [21], based on multi-temporal S1 data and a 2D Convolutional Neural Network (2D CNN), early crop mapping of soybean, jowar, cotton, and sugarcane can be performed 45 days before harvest in central India. China is a large agricultural country with a wide variety and distribution of crops, and different scholars have developed a variety of methods for crops in different regions. Hao et al. [22] used Landsat and S2 data to improve the reference-based method (RBM) with the Enhanced Vegetation Index (EVI)/Normalized Difference Vegetation Index (NDVI) to achieve early identification of various crops, such as cotton, spring maize, summer maize, and winter wheat, in Hengshui, Hebei Province, China. Tian et al. [23] conducted an early crop identification study based on NDVI and the Normalized Difference Yellow Index (NDYI) in Henan Province, China, using S2 data and a decision tree classification model; the results showed that winter garlic and winter wheat could be distinguished four months before harvest, while winter oilseed rape could be distinguished two months before harvest. Huang et al. [17], utilizing S2 data and an RF classifier, successfully obtained the distribution of winter wheat in Henan Province, China, five months before harvest based on selected spectral temporal information features. Wei et al. [12] explored the potential of early crop identification for maize, soybean, and rice at three locations in Heilongjiang Province using S2 data and four machine learning classifiers, based on ten spectral bands and 16 spectral indices.
However, there are still some gaps that need to be filled and challenges that need to be solved. Firstly, most current studies mainly focus on grain crops in China's northeast and central regions. The growth status of the same crop type may vary across areas due to complex environmental and climatic conditions [24], and the characteristics of different early crop identification methods also vary. Therefore, studying the early identification of different crops in other regions is necessary [12], especially for economically significant crops such as rapeseed [25] and sunflower [26]. Secondly, relatively few studies on early crop identification use S1 data. S1 has the advantage of penetrating cloud cover, which can overcome the limitation of clouds in optical remote sensing [27]. Hajar et al. [20] demonstrated that the combination of S1 and S2 can improve early crop identification, with a 2.94% increase in overall classification accuracy over S2 data alone. The efficacy of using S1 for early crop identification in China needs to be further investigated. Thirdly, because crop samples are acquired with a delay, early crop identification without such samples is a challenge. Model transfer is a potential way to address this challenge. A significant amount of research focuses on model transfer in image classification studies, but there is limited research on its application to the early identification of crops. Therefore, it is necessary to explore the potential of model transfer for early crop identification when sample acquisition is delayed or sample quantity is insufficient, to provide more robust support for early crop identification.
The HID, located in the northwest of China, is one of China's three major irrigation districts, with a diversified crop cultivation structure including both grain crops and cash crops [28,29]. Early crop mapping in this area is still lacking. In this study, based on the GEE platform, an early crop identification strategy for different crops (sunflower, maize, spring wheat, and melon) in the HID area was explored, using RF classifiers with 2021 and 2022 S1 and S2 time-series imagery. Specifically, we explored (1) the optimal input features for the early identification of different crops in the HID area, (2) the EIT for different crops in the HID area, (3) the effectiveness of S1 and S2 data for early crop identification in the HID area, and (4) the stability of model transfer for early crop identification in the HID area.

2. Study Area and Datasets

2.1. Study Area

The HID is located in the southern part of Bayannur City, Inner Mongolia Autonomous Region, China, between longitudes 106°20′ and 109°19′ east and latitudes 40°19′ and 41°18′ north, with an existing irrigated area of 558,300 hectares. It is not only one of China's largest irrigation districts but also the largest gravity-fed irrigation district in Asia. The land cover is complex, including water, trees, flooded vegetation, crops, built areas, bare ground, and rangeland (Figure 1) [30]. The region has a typical temperate continental climate with hot summers and cold, dry winters, with average annual temperatures ranging from 6–12 °C [31]. Precipitation is scarce, with annual rainfall not exceeding 250 mm [32]. The terrain is relatively flat, with altitudes ranging from 990 to 1060 m above sea level [33]. Sunflower, maize, spring wheat, and melon are the main crops in the HID, accounting for more than 95% of the total cultivated area in the irrigation district (source: https://tj.nmg.gov.cn/tjyw/ (accessed on 16 March 2024)). Therefore, these four crops were taken as the primary research subjects of this study.
In HID, there are significant differences in the growing seasons of different crops. Specifically, the growing season for sunflowers usually starts in June and lasts until October. Spring wheat has an earlier growing season that begins in March and lasts until July. Meanwhile, the growing seasons for maize and melon crops cover May to October. Figure 2 shows the crop calendars for the main crops in the HID.

2.2. Datasets

2.2.1. Sentinel-2

The Sentinel-2 satellite system, consisting of two satellites, Sentinel-2A and Sentinel-2B, is a high-resolution Earth observation satellite developed by the ESA. The two satellites complement each other, shortening the revisit period from 10 to 5 days and significantly increasing the monitoring frequency. S2 carries advanced multispectral sensors covering 13 bands, including the visible, near-infrared, and short-wave infrared spectral ranges, with a spatial resolution of 10 to 60 m, capable of capturing subtle changes in the Earth’s surface and providing valuable data support for monitoring the Earth’s environment and resources [34].
Currently, the data products of S2 include the L1C and L2A levels. L1C data are top-of-atmosphere (apparent) reflectance products with orthorectification and refined geometric corrections, while L2A data are bottom-of-atmosphere (surface) reflectance products obtained after atmospheric correction. In this study, we used the S2 Level-2A product provided by GEE.

2.2.2. Sentinel-1

The Sentinel-1 satellite system, consisting of two satellites, Sentinel-1A and Sentinel-1B, is one of the most widely used radar satellites today. The revisit period is 6 or 12 days [35]. The spatial resolution is 5–40 m [36]. The interferometric wide-scan (IW) mode is mainly used for ground-based observations and provides dual-polarized images, including vertical transmit, vertical receive (VV), and vertical transmit, horizontal receive (VH). VV and VH band images with 10 m resolutions have been widely used to classify crop types [35,37]. Therefore, the VV and VH bands with a 10 m spatial resolution from the Sentinel-1A/B Ground Range Detection (GRD) Level 1 product were used in this study [38]. These data have been available from the GEE since 2014 [39]. The S1 data provided by the GEE platform have been subjected to pre-processing steps, such as orbital correction, thermal noise removal, radiometric calibration, and terrain correction by the S1 Toolbox [35,36].

2.3. Reference Samples

The sample points used in this study, for 2021, were divided into two parts: samples collected in the field and samples obtained through visual interpretation. The Statistics Bureau of Inner Mongolia Autonomous Region provided the field-collected samples, totaling 155,678. Although the number of samples is large, they are unevenly distributed. To ensure that the sample points were uniformly distributed in the HID, we used the method proposed by Liu et al. [40] to divide the HID area into 46 hexagonal units of equal size. Subsequently, we utilized images from the key growth stages of sunflower, maize, spring wheat, and melon to select additional samples via visual interpretation. Specifically, images from 1 June 2021 to 15 June 2021 were selected for spring wheat, corresponding to samples collected 45–60 days before harvest; images from 15 July 2021 to 30 July 2021 were chosen for sunflower and maize, corresponding to samples collected 60–75 days before harvest for sunflower and 70–85 days before harvest for maize; and images from 15 August 2021 to 30 August 2021 were selected for melon, corresponding to samples collected 5–15 days after harvest. Five hundred sample points were chosen for each crop type, evenly distributed across the hexagonal subzones, totaling 2000 samples. The sample distribution was as follows: sunflower, 11 field-collected and 489 visually interpreted samples; maize, 35 field-collected and 465 visually interpreted samples; spring wheat, 73 field-collected and 427 visually interpreted samples; and melon, 29 field-collected and 471 visually interpreted samples. This approach ensured that each selected sample was representative and controlled for spatial autocorrelation to some extent. Figure 3 illustrates the spatial distribution of the samples in 2021.
The samples used for model transfer validation were obtained via visual interpretation, using images from the critical growth period of each crop in 2022. Specifically, images from 1 June 2022 to 15 June 2022 were selected for spring wheat, 1 July 2022 to 30 July 2022 for sunflower and maize, and 15 August 2022 to 30 August 2022 for melon. The number of samples for each crop is 500. The spatial distribution of the 2022 samples is shown in Figure A1 in Appendix A.

2.4. Land Use and Land Cover Data

The global LULC map, based on 10 m resolution images from the ESA S2 satellite, was used to mask the non-cultivated areas (e.g., grasslands, trees, water, built areas, and other areas). The map was generated by Impact Observatory using a deep-learning land classification model. The model was developed by training on billions of manually labeled image pixels [30], ultimately generating a map containing nine different categories. The LULC data is available on the GEE platform. In this study, we selected the cropland layer in the LULC data as a mask. Then, we applied this created cropland mask image to the subsequent satellite data to focus only on cropland in the following analysis and research.

3. Methods

The flowchart of this study is shown in Figure 4. First, data processing and feature preparation are performed on S1 and S2 data. Subsequently, the sensitivity of different scenarios to different early crop identification is evaluated. Finally, the optimal parameters are utilized for early crop mapping. The details of these steps are described in the following sections.

3.1. Data Processing and Feature Preparation

3.1.1. Data Processing

(1)
Sentinel-2 data processing
Due to differences in observation dates across orbits and the effects of cloud cover, spectral band values in the observed data may differ significantly. Additionally, since the spatial resolutions of the bands we used differ, resampling is necessary to standardize them. To address these issues, our data processing consists of three main steps: resampling, cloud removal, and generating images at standard time intervals.
The first step involved applying the nearest neighbor resampling method to resample Band 5 (RE1), Band 6 (RE2), Band 7 (RE3), Band 11 (SW1), and Band 12 (SW2) from a spatial resolution of 20 m to 10 m.
The second step is to remove clouds. Using the cloud probability map (“MSK_CLDPRB”), snow probability map (“MSK_SNWPRB”), and scene classification layer (SCL) information provided in the S2 data, we can effectively remove cloud, shadow, and snow pixels [13], thus reducing the interference of these factors on the data quality. Initially, we filtered images with cloud cover less than 30% using the “CLOUDY_PIXEL_PERCENTAGE” band. Then, we further refined cloud and snow detection by analyzing the “MSK_CLDPRB” and “MSK_SNWPRB” bands. Specifically, pixels with cloud probability (“MSK_CLDPRB”) less than 5% were considered clear of clouds, pixels with snow probability (“MSK_SNWPRB”) less than 5% were considered clear of snow, pixels with “SCL” equal to 3 were considered cloud shadows, and pixels with “SCL” equal to 10 were considered cirrus clouds. A mask was then created to remove cloud, snow, shadow, and cirrus pixels from the images. We conducted a quality assessment on the processed images to ensure the effectiveness of the cloud removal process and to verify data integrity.
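The thresholding logic described above can be sketched with plain NumPy arrays. The band names and the 5% thresholds follow the text, but the function name and the array-based framing are illustrative; in practice this masking is applied to ee.Image objects on the GEE platform.

```python
import numpy as np

def s2_clear_mask(cldprb, snwprb, scl):
    """Return a boolean mask that is True for usable (clear) pixels."""
    cloudy = cldprb >= 5   # cloud probability of 5% or more -> treat as cloud
    snowy = snwprb >= 5    # snow probability of 5% or more -> treat as snow
    shadow = scl == 3      # SCL class 3 = cloud shadow
    cirrus = scl == 10     # SCL class 10 = cirrus
    return ~(cloudy | snowy | shadow | cirrus)
```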
In the third step, images are generated at standard time intervals. Due to cloud cover, the above features are discontinuous, which increases the difficulty of crop differentiation. To solve this problem, the median synthesis method [11], linear shift interpolation [41,42], and SG method [43] were used in this study. These methods help improve the data’s continuity when dealing with discontinuous features, thus better supporting subsequent crop classification and analysis.
(i)
Median synthesis was used to investigate the effect of different time intervals on early crop identification. In this paper, we synthesize the S2 feature time series with fixed time intervals of 10, 15, 20, and 30 days, covering a range from the DOY of 90 to the DOY of 300. The median synthesis method is easy to apply, and many previous studies have demonstrated its superiority over the mean synthesis method [1,41,44].
(ii)
Linear shift interpolation: We fill the gaps in the S2 composite time series by linear shift interpolation, which smoothes the time series of images and fills the gaps by synthetically replacing the target image with the median of three neighboring images [11,41]. This method is particularly suitable for constructing image sequences in near real-time. After analyzing the length of data gaps, we determine that the maximum missing data length is 30 days, so the window size is set to 30, and the specific formula is shown in Equation (1).
Index_i = Index_k + (Index_j − Index_k) × (d_i − d_k) / (d_j − d_k)
Index_i represents the value at the existing gap, with Index_k and Index_j denoting the valid observations preceding and succeeding Index_i, respectively. d_k, d_i, and d_j correspond to the days of the year of the kth, ith, and jth observations.
(iii)
SG filter. To further smooth the noise in the time series, we use an SG filter with polynomial order two and window size 3. These parameters balance the noise sensitivity and smoothing effect when processing time series data.
Figure 5 illustrates the median synthesis, linear shift interpolation, and SG filtering steps applied to the melon NDVI time series.
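The first two reconstruction steps, median compositing into fixed time bins and the gap interpolation of Equation (1), can be sketched as follows. The function names and the toy series are illustrative; an SG filtering step (polynomial order 2, window size 3) would follow in practice.

```python
import numpy as np

def median_composite(doys, values, start=90, end=300, step=10):
    """Median-composite an irregular series into fixed bins (e.g., 10-day)."""
    bins = np.arange(start, end + 1, step)
    out = np.full(len(bins) - 1, np.nan)
    for i, (lo, hi) in enumerate(zip(bins[:-1], bins[1:])):
        sel = values[(doys >= lo) & (doys < hi)]
        if sel.size:
            out[i] = np.median(sel)
    return bins[:-1], out

def fill_gaps_linear(doys, series):
    """Equation (1): fill each gap linearly from the nearest valid
    observations before (k) and after (j) the gap (i)."""
    filled = series.copy()
    valid = np.flatnonzero(~np.isnan(series))
    for i in np.flatnonzero(np.isnan(series)):
        before, after = valid[valid < i], valid[valid > i]
        if before.size and after.size:
            k, j = before[-1], after[0]
            filled[i] = series[k] + (series[j] - series[k]) * \
                (doys[i] - doys[k]) / (doys[j] - doys[k])
    return filled
```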
(2)
Sentinel-1 data processing
The S1 data provided by the GEE platform have completed the preprocessing steps, such as orbit correction, thermal noise removal, radiometric calibration, and terrain correction. To improve the data quality, we utilize the GEE platform to remove observations with higher incidence angles in the overlap region [38]. In addition, we used the Refined Lee filter to reduce the noise in VV and VH [45].
Following the same methodology described in the previous section, median synthesis, linear shift interpolation, and SG filtering were applied to the S1 data to generate time series of VV, VH, VH/VV, and VH-VV at fixed intervals of 10, 15, 20, and 30 days, covering DOY 90 to 300.

3.1.2. Feature Preparation

(1)
Optical remote sensing features
(i)
Spectral bands. Among all 13 spectral bands, 9 spectral bands with a spatial resolution of 10 m and 20 m were used in this study as candidate features for crop identification, as shown in Table 1.
(ii)
Spectral indices. This study selected six spectral indices as candidate features for crop identification to improve the sensitivity of HID crop identification. NDVI is a widely used metric for identifying the distribution of crop types [46]; EVI has a solid ability to overcome the growing season saturation phenomenon [46]; LSWI is a good indicator of vegetation water content [47]; REP is a red-edge based index that is sensitive to crop conditions [48]; GCVI index has been shown to have a solid ability to separate crops [49]; SAVI is used to minimize soil background effects [46]. The specific equations used to calculate these spectral indices are shown in Table 2.
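For reference, widely used formulations of most of these indices can be written as below. These are the standard definitions and the paper's Table 2 remains authoritative; REP is omitted here because several variants of its formula exist.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue):
    """Enhanced Vegetation Index (standard coefficients)."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def lswi(nir, swir1):
    """Land Surface Water Index."""
    return (nir - swir1) / (nir + swir1)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil factor L."""
    return (1.0 + L) * (nir - red) / (nir + red + L)

def gcvi(nir, green):
    """Green Chlorophyll Vegetation Index."""
    return nir / green - 1.0
```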
(2)
Microwave features
To investigate the sensitivity of S1 data to HID crop identification, two bands and two indices were selected as microwave candidate features for crop identification in this study. It was shown that VV polarization can effectively classify summer crops, such as maize, soybean, and sunflower at the topping/flowering stage of the crop [50]. The study also emphasized the utility and stability of the VH/VV ratio and the difference between VH and VV (VH-VV) as crop differentiation metrics [36], which may be more effective than using VV and VH alone [38]. Therefore, this paper selects VV, VH, VH/VV, and VH-VV as candidate characteristics for microwaves.

3.2. Scenario Construction

In this study, we designed 16 different schemes for early crop identification by combining four input feature sets (15 optical features, 4 microwave features, optically optimal features, and optically optimal features + microwave features) with four time intervals (10, 15, 20, and 30 days), as shown in Table 3. By considering these scenarios, the study aims to explore the specific effects of different parameter configurations on early crop identification and to propose accurate and robust strategies for early crop identification in the HID area.
The importance of each feature to correct prediction is calculated using the explain method of the RF classifier in GEE, and all features are ranked by their importance scores. To ensure the interpretability of the results, the scores are normalized so that the total importance score of the features in each category equals 100. The top 10 features are then selected for each category, from highest score to lowest. This step ensures that the selected features not only have high predictive power but also effectively reduce the number of features and improve the operational efficiency of the classification algorithm.
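The normalize-and-rank step reduces to a small helper; the function name and the toy scores below are illustrative.

```python
def top_features(importances, k=10):
    """Normalize importance scores so they sum to 100, then keep the top k."""
    total = sum(importances.values())
    scaled = {name: 100.0 * score / total for name, score in importances.items()}
    return sorted(scaled.items(), key=lambda item: item[1], reverse=True)[:k]
```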

3.3. Early Crop Identification

3.3.1. Random Forest Classification

Random Forest (RF) is a powerful classification method based on decision trees, initially proposed by Breiman in 2001 [51]. It achieves ensemble learning by combining multiple decision trees into a unified model. RF classifiers are less sensitive to sample quantity, quality, and category imbalance, making them suitable for analyzing remote sensing data. Compared with traditional classifiers, RF classifiers provide faster and more reliable classification results without significantly increasing the computational complexity [52,53,54]. Therefore, RF classifiers are widely used in crop identification [33,35], land cover classification [55], and other fields.
To ensure better generalization of the individual decision trees and to prevent overfitting, we set the Min Leaf Population (MLP) of terminal nodes in GEE to 10. Considering the balance between accuracy and computation time, we set the Number Of Trees to 100 [56]. The other four parameters, Variables Per Split, Bag Fraction per tree, whether the classifier should be run in Out of Bag Mode, and Random Seed, are left at their GEE defaults. The default value of Variables Per Split is the square root of the number of features, as recommended by much of the literature [11,56], and is effective in avoiding overfitting.
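As a rough analogue of the stated settings (not the GEE implementation itself), a scikit-learn random forest could be configured as follows; the toy data and all names here are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))            # toy feature matrix, 16 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy binary crop / non-crop labels

clf = RandomForestClassifier(
    n_estimators=100,      # Number Of Trees = 100
    min_samples_leaf=10,   # analogue of GEE's Min Leaf Population = 10
    max_features="sqrt",   # Variables Per Split = sqrt(number of features)
    random_state=0,
)
clf.fit(X, y)
```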

3.3.2. Precision Assessment Methods

Confusion matrices were used to assess the accuracy of crop type classification, including user accuracy (UA), producer accuracy (PA), overall accuracy (OA), and F1-score [40,57]. The detailed formulas are given below.
UA = X_ij / (Σ_{k=1}^{n} X_kj) × 100%
PA = X_ij / (Σ_{k=1}^{n} X_ik) × 100%
OA = (Σ_{i=1}^{n} X_ii) / (Σ_{i=1}^{n} Σ_{j=1}^{n} X_ij) × 100%
F1-score = 2 × UA × PA / (UA + PA)
X_ij refers to the value at the intersection of row i and column j of the confusion matrix, i.e., the count of instances belonging to class A_i but classified into class A_j; n is the number of classes.
When establishing the HID early crop identification strategy, 70% of the total samples for each crop category in 2021 are randomly selected for training, with the remaining 30% reserved for assessing classification accuracy.
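The formulas above can be checked with a small NumPy helper, assuming rows of the confusion matrix index the reference class and columns the predicted class; the function name is illustrative.

```python
import numpy as np

def accuracy_metrics(cm):
    """UA, PA, OA, and per-class F1-score from a confusion matrix
    (rows = reference class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    diag = np.diag(cm)
    ua = diag / cm.sum(axis=0)   # user's accuracy, per predicted class
    pa = diag / cm.sum(axis=1)   # producer's accuracy, per reference class
    oa = diag.sum() / cm.sum()   # overall accuracy
    f1 = 2.0 * ua * pa / (ua + pa)
    return ua, pa, oa, f1
```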

3.3.3. Determination of the Earliest Identifiable Time for Crops

In this study, the EIT for sunflower, maize, spring wheat, and melon was determined using corresponding binary classifiers, i.e., sunflower vs. non-sunflower, maize vs. non-maize, spring wheat vs. non-spring wheat, and melon vs. non-melon. This approach allows the classification model to be optimized individually for each crop type, thus improving the accuracy of early identification.
The image time series spans the beginning of April to the end of October, i.e., DOY 90 to 300. This period captures the growth stages of the various HID crops: early April marks the beginning of the spring wheat growing season and field preparation for the other crops, and harvest dates for the different crops fall successively from July through October.
Previous research found that classification accuracy rises as the length of the time series increases [17]. To balance earliness against classification accuracy, the EIT of each crop was defined as the point in time when its F1-score first reached 0.9. This approach not only takes into account the growth characteristics of different crops but also makes full use of the length of the time-series data to ensure that crop types are identified at the earliest possible stage with high accuracy, thus providing timely and practical information to support crop management and monitoring.
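The EIT rule (the first time the F1-score reaches 0.9) reduces to a simple scan over the time series; the function name is illustrative.

```python
def earliest_identifiable_time(doys, f1_scores, threshold=0.9):
    """Return the first day of year at which the F1-score reaches the
    threshold, or None if it never does."""
    for doy, f1 in zip(doys, f1_scores):
        if f1 >= threshold:
            return doy
    return None
```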

3.3.4. Model Transfer

To further validate the stability of the model, we utilize the proposed HID early crop identification strategy to train the RF-based classifier, based on the early crop image time series collected in 2021 (70% of the total samples). Then, the 2000 samples acquired in 2022 were used to validate the model performance.

4. Results

4.1. Optimal Input Characterization

This section focuses on analyzing the best input features for early crop identification. Regarding feature selection, we investigate two cases: (1) Scenarios SI, SV, SIX, and SXIII use optical features. (2) Scenarios SIV, SVIII, SXII, and SXVI use optically optimal features.
Table 4 shows the optically optimal features for different scenarios and their total importance score values. We can see that for different crops, the cumulative importance scores of the top 10 features are more than 75, indicating that these features contribute decisively to the classification results. The table shows that the optimal features RE2, RE1, NIR, RE3, and REP were utilized for sunflowers at different time intervals. The optimal maize features used at different time intervals were RE1, SW1, R, SW2, G, REP, and GCVI. For the spring wheat, the optimal features used at different time intervals were NDVI, EVI, SAVI, GCVI, and LSWI. As for melon, the optimal features used at different time intervals were REP, RE2, EVI, and RE3.
Figure 6 shows the variation of the F1-score for sunflower, maize, spring wheat, and melon in the different scenarios. Each F1-score is the average of 10 repetitions of the model, which reduces the uncertainty caused by randomness. From the figure, we can see the following: (1) EIT analysis: In the 10-day time series (SI and SIV), the EIT for melon based on optical features (SI) is DOY 180, while that based on optically optimal features (SIV) is DOY 170, ten days earlier. The EIT for sunflower, maize, and spring wheat remains the same. In the other cases (15, 20, and 30 days), the EIT was the same for sunflower, maize, spring wheat, and melon. (2) Accuracy analysis at the EIT: In the 10-day time series (SI and SIV), for sunflower identification, the F1-score based on optically optimal features (SIV) is 0.006 higher than that based on optical features (SI); for maize identification, the F1-score based on optically optimal features (SIV) is 0.015 higher than that based on optical features (SI); and for spring wheat identification, the F1-scores based on optically optimal features (SIV) and optical features (SI) differed little. In the 15-day (SV and SVIII), 20-day (SIX and SXII), and 30-day (SXIII and SXVI) time series, there was no significant difference between the F1-scores based on optically optimal features and those based on optical features. (3) Analysis of the highest F1-score values: the highest F1-scores for sunflower, maize, spring wheat, and melon did not differ significantly between optically optimal features and optical features across the different time intervals (10, 15, 20, and 30 days).
Summarizing the above analysis, we conclude that optically optimal features perform better than the full set of optical features in early crop identification. Optimal features shorten the time required for early crop identification and improve the accuracy and timeliness of crop extraction. This is consistent with previous studies [47,58,59], further confirming the critical role of feature optimization in improving crop identification performance.

4.2. Optimal Time Intervals

This section focuses on analyzing the optimal time interval for early crop identification. Based on the time interval, we categorize the scenarios into four groups: (1) Scenarios SI, SII, SIII, and SIV, with a time interval of 10 days. (2) Scenarios SV, SVI, SVII, and SVIII, with a time interval of 15 days. (3) Scenarios SIX, SX, SXI, and SXII, with a time interval of 20 days. (4) Scenarios SXIII, SXIV, SXV, and SXVI, with a time interval of 30 days. In the previous section, we concluded that optically optimal features outperform the full optical feature set. Therefore, when analyzing the effects of different time intervals, we selected the optimal features as input features for comparison; that is, the 10-, 15-, 20-, and 30-day intervals were analyzed using schemes SIV, SVIII, SXII, and SXVI, respectively.
Figure 6 illustrates the F1-score changes for sunflower, maize, spring wheat, and melon under the four time-interval scenarios. (1) EIT analysis: The 10-day time series achieved the earliest EIT; crops could be identified up to 60 days earlier than with the other intervals. For example, for sunflower identification, the EITs for SIV, SVIII, SXII, and SXVI were the DOY of 160, 165, 170, and 180, respectively. Because the moving median synthesis delays the actual end date of the image time series, the actual identifiable timings were the DOY of 170, 180, 190, and 210, respectively; the EIT at the 10-day interval is thus 10, 20, and 40 days earlier than at the 15-day, 20-day, and 30-day intervals. Similarly, compared with the 15-day, 20-day, and 30-day intervals, the EIT at the 10-day interval is 15, 30, and 60 days earlier for maize; 20, 20, and 50 days earlier for spring wheat; and 15, 30, and 60 days earlier for melon, respectively. (2) Highest F1-score analysis: the highest F1-scores of the four scenarios SIV, SVIII, SXII, and SXVI did not differ much, so the effect of the time interval on classification accuracy is relatively limited.
In summary, this study revealed the significant effect of the time interval on the effectiveness of early crop identification. Based on the comparative analysis, we determined that the 10-day time series provides the earliest EIT, confirming it as the best choice for early crop identification; as the time interval increases, the EIT is delayed, which is consistent with the findings of You and Dong (2020) [11] and Huang et al. (2022) [17]. In addition, the study by Tuvdendorj et al. (2022) further supports our conclusion: for wheat extraction in northern Mongolia, S1 and S2 images reconstructed at 10-day intervals yielded significantly better crop classification accuracy than images synthesized with a percentile approach [35]. This further confirms the validity and reliability of 10-day interval time series for accurate crop identification. We therefore recommend a 10-day observation interval for early crop identification and classification tasks to obtain the best timeliness and accuracy.
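The interval compositing underlying these scenarios can be sketched as follows. This is a simplified numpy illustration of median compositing at a fixed step with linear gap-filling, not the study's GEE implementation; variable names and the gap-filling detail are our assumptions:

```python
import numpy as np

def interval_median_composite(doys, values, start, end, step=10):
    """Median-composite an irregular time series into fixed intervals.
    doys: observation day-of-year; values: e.g., an NDVI series where
    NaN marks cloud-contaminated observations. Assumes at least one
    interval contains a valid observation."""
    doys = np.asarray(doys, float)
    values = np.asarray(values, float)
    edges = np.arange(start, end + step, step)
    composite = np.full(len(edges) - 1, np.nan)
    for i in range(len(edges) - 1):
        mask = (doys >= edges[i]) & (doys < edges[i + 1]) & ~np.isnan(values)
        if mask.any():
            composite[i] = np.median(values[mask])
    # linear interpolation fills intervals left empty by clouds/gaps
    idx = np.arange(composite.size)
    good = ~np.isnan(composite)
    composite[~good] = np.interp(idx[~good], idx[good], composite[good])
    return composite
```

A shorter step yields more composite points per season (hence the earlier EIT), at the cost of more empty intervals that must be filled by interpolation.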

4.3. Optimal Data Combination

Based on the results for the optimal features and optimal time intervals, three scenarios (SII, SIII, and SIV) were analyzed to explore the relative contribution of S1 data to early crop identification.
From Figure 6, we can see the following: (1) Using S1 data alone (SII), the EITs of sunflower, maize, and spring wheat were the DOY of 230, 220, and 170, respectively, with actual identifiable times on the DOY of 240, 230, and 180. For melon, early identification could not be achieved: the EIT was the DOY of 260, with an actual identifiable time on the DOY of 270, by which point melon had already been harvested. Compared with using S2 data alone (SIV), using S1 data alone (SII) delayed the EIT by 70, 50, 50, and 90 days for the four crops, respectively, and the identification accuracy was significantly lower. (2) When S1 and S2 data were used jointly (SIII), the EIT of each crop did not change relative to using S2 data alone (SIV): sunflower, maize, spring wheat, and melon could be identified early on the DOY of 160, 170, 120, and 170, respectively, with actual identifiable times on the DOY of 170, 180, 130, and 180. (3) Joint use of S1 and S2 data (SIII) also brought no significant improvement in the F1-score at the EIT.
The analysis revealed that, in the HID region, the EIT obtained using only S1 data is delayed by 50 to 90 days compared with using only S2 data, and that S1 data contribute little when used in conjunction with S2 data; the contribution of S1 data to early crop mapping is therefore very limited.
Many previous studies have shown that combining optical and SAR data can distinguish crops better than optical data alone [15,60,61,62]. For example, the study by Adrian et al. (2021) classified multi-temporal S1 radar and S2 optical images by combining their textural and spectral features [60] and found that crop-type mapping from the fused data showed a significant improvement in overall accuracy compared with using radar or optical data alone.
However, our study found that the C-band synthetic aperture radar S1 data had little impact on early crop identification in the HID area. Several factors may explain this. First, during the early growth stages, plants are typically short with sparse canopy cover, so the backscatter contributed by the crop itself is weak, which limits the ability of radar data to separate crop types. This is particularly true for shorter crops such as spring wheat and melon, which may lack sufficient structural features in their early growth stages to produce a distinct signature in the radar data. Furthermore, although S1 provides high spatiotemporal resolution, its sensitivity may be insufficient for rapidly growing crops such as maize and sunflower. We therefore infer that, in the HID region, the structural complexity or biomass of these crops during early growth does not generate sufficient change in the C-band radar backscatter signal, making accurate early identification challenging.

4.4. Hetao Irrigation District’s Earliest Identifiable Time for Crops

After comparing the 16 scenarios, the strategy of S2 images synthesized at 10-day intervals with optimal features (SIV) was chosen to determine the final EIT and to perform refined early crop mapping in the HID area.
Based on SIV, the EITs of sunflower, maize, spring wheat, and melon are the DOY of 170, 180, 130, and 180, respectively. These times correspond to the seedling, jointing, tillering, and fruit development stages of the respective crops, which are 90, 90, 70, and 40 days earlier than the maturity stage.
The method proposed in this study demonstrated significant advantages in early crop identification, particularly in identification time and accuracy. Our strategy substantially advances the identification timing compared with previous studies. For example, Li et al. (2022), using S2 time series images and deep learning techniques, identified the critical periods for early identification of sunflower, maize, spring wheat, and melon in the HID region as early July, late June, mid-May, and late June, respectively [16]; in comparison, our method identified sunflower and spring wheat 20 and 10 days earlier, respectively. Our method also shows a marked improvement in accuracy. For example, Hu et al. (2022) used S1 and S2 data with combined percentile and monthly composite methods and an RF classifier to identify crop types in the HID, with F1-scores of 0.91, 0.85, and 0.99 for sunflower, maize, and spring wheat, respectively [33]. In our study, the F1-scores of all three crops reached 0.99, an improvement of 0.08 and 0.14 for sunflower and maize, respectively. Furthermore, compared with Li et al.'s (2022) study [16], our method improved the accuracy for sunflower, maize, spring wheat, and melon by 0.21, 0.26, 0.30, and 0.28, respectively, a significant gain.

4.5. Model Transfer

To further validate the stability of our proposed HID early crop identification strategy under model transfer, we trained the model on the 2021 early-season crop image time series using 70% of the 2021 samples, and validated its accuracy with all samples from 2022. Figure 7 shows the transfer performance for each crop.
From Figure 7, we can see that the EITs of sunflower, maize, spring wheat, and melon were the DOY of 160, 170, 120, and 170, respectively, with actual identifiable times on the DOY of 170, 180, 130, and 180. These dates correspond to the seedling stage of sunflower, the jointing stage of maize, the tillering stage of spring wheat, and the fruit development stage of melon.
After model transfer, the EIT of each crop did not change. At the EIT, the F1-scores of maize, spring wheat, and melon decreased by 0.04, 0.02, and 0.01, respectively, while that of sunflower did not change. As for the highest identification accuracy, the accuracy of each crop decreased; the decrease was most pronounced for melon, whose stable F1-score dropped by 0.05–0.06.
These results further demonstrate the effectiveness of our proposed early crop identification strategy and validate the stability of the model under transfer, providing a reliable and feasible solution for crop identification and planting management and demonstrating its potential for further development and application.
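The transfer experiment amounts to training on one season's samples and scoring on the next season's. A minimal sketch using scikit-learn's RandomForestClassifier as a stand-in for the study's GEE-based RF (function and variable names are ours; it assumes the feature columns are built identically for both years, i.e., same composites and same band/index order):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def evaluate_transfer(X_train_year, y_train_year, X_test_year, y_test_year,
                      n_trees=100, seed=0):
    """Train an RF on one year's labeled samples and report per-class
    F1-scores on another year's samples (interannual model transfer)."""
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    rf.fit(X_train_year, y_train_year)
    pred = rf.predict(X_test_year)
    return f1_score(y_test_year, pred, average=None)  # one F1 per class
```

The per-class drop between the within-year score and this cross-year score is exactly the 0–0.04 degradation reported above.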

4.6. Early Season Crop Map for Hetao Irrigation District

Based on the proposed HID early crop identification strategy, we used remote sensing data from the sunflower seedling stage, the maize jointing stage, the spring wheat tillering stage, and the melon fruit development stage in 2021 and 2022 to generate binary crop maps (sunflower/non-sunflower, maize/non-maize, spring wheat/non-spring wheat, and melon/non-melon) for the four main crops in the HID. Figure 8a–h visually depict the planting distribution of each crop. After statistical analysis, the planting areas and ratios of sunflower, maize, spring wheat, and melon in 2021 and 2022 are presented in Table 5.
As can be seen in Figure 8 and Table 5, sunflower was the most important crop grown in the HID in 2021 and 2022, followed by maize, spring wheat, and melon. Over these two years, the planted area of each crop was relatively stable: the sunflower area increased by 13.3 thousand hectares, while the maize, spring wheat, and melon areas decreased slightly by 0.75, 0.15, and 0.24 thousand hectares, respectively. Further observation of the main planting locations revealed that the latitudinal and longitudinal area-statistics curves of sunflower and maize were similar and changed little between adjacent years. Sunflower was mainly concentrated around 107° and 108°40′ longitude and 41° latitude, while maize was distributed primarily around 107°10′ longitude and 40°40′ and 41°15′ latitude, indicating relatively stable planting areas.
However, the latitudinal and longitudinal area curves of spring wheat and melon were more variable. Spring wheat was concentrated at 107°10′ longitude in 2021 but near 106°30′ and 106°55′ longitude in 2022. Melon was concentrated near 108°35′ longitude, with a markedly smaller area in 2022 than in 2021. Changes in the planting distribution of spring wheat and melon may be related to factors such as the HID's planting strategy and crop rotation.
In Figure 8, we further reveal the concentration of each crop by labeling specific areas: the main planting areas of sunflower are labeled A and B, the intensive planting areas of maize C and D, the concentrated areas of spring wheat E and F, and the main planting areas of melon G and H. To further characterize these concentrated planting areas, Figure 9 provides more detailed views of the HID for 2022. It can be seen from Figure 9 that the concentrated planting areas of sunflower, maize, and spring wheat exhibit large-scale cultivation with a large, continuous distribution, whereas melon shows scattered, sporadic planting, primarily by smallholders. These detailed views provide valuable data support for agricultural planning and resource management and deepen our understanding of the spatial distribution pattern of crop cultivation in the HID area.

5. Discussion

5.1. Effect of Spectral Properties

This paper further investigates the impact of optical and polarization features on crop separability. We randomly selected 100 samples per class of sunflower, maize, spring wheat, and melon from the 500 samples per class collected in 2021 for optical feature-curve analysis. Figure 10 shows the curves of the spectral bands B, G, R, RE1, RE2, RE3, NIR, SW1, and SW2 and of the spectral indices NDVI, EVI, LSWI, REP, GCVI, and SAVI, at 10-day intervals, for sunflower, maize, spring wheat, and melon.
Figure 10 shows that the peak growth periods of sunflower, maize, spring wheat, and melon are around the DOY of 220 (early August), the DOY of 200 (mid-July), the DOY of 160 (early June), and the DOY of 200 (mid-July), respectively. In the separability analysis of sunflower, G, RE1, RE2, and NIR separate sunflower and non-sunflower well, while the LSWI and SW1 curves are similar and difficult to separate. G, RE1, RE2, and NIR are typically effective in distinguishing sunflower from other vegetation in remote sensing images because they reflect the unique leaf structure and physiological status of sunflower; conversely, LSWI and SW1 may be less effective because they are more influenced by soil and moisture levels, factors that have minimal impact on the characteristics of sunflower leaves [63]. In the separability analysis of maize, G, RE1, SW1, and GCVI separate maize and non-maize well, while the NIR and NDVI curves are similar and difficult to separate. GCVI provides information on the leaf area of vegetation, and maize, a crop with a high leaf area, may exhibit distinct features in this index [64,65]. In the separability analysis of spring wheat, NDVI, EVI, LSWI, GCVI, and SAVI separate spring wheat and non-spring wheat well. As green vegetation, spring wheat typically exhibits high NDVI and EVI values during the growing season, and these indices have been widely utilized for discriminating between wheat and non-wheat areas [66,67,68,69], in line with our findings. At the same time, the RE2 and SW1 curves are similar and difficult to separate. In the separability analysis of melon, RE2, RE3, NIR, and EVI separate melon and non-melon well, while the RE1 curves are similar and difficult to separate. RE2, RE3, and NIR can effectively distinguish melon from other crops because they capture its unique spectral characteristics in the red-edge and near-infrared regions.
However, the RE1 band, whose spectral response is relatively similar across crops, cannot significantly distinguish melon from other crops [16].
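The six indices analyzed here have standard formulations in the literature; the sketch below assumes the common definitions (e.g., the linear-interpolation red-edge position of Guyot and Baret for REP), which may differ in detail from the exact implementation used in this study. Band inputs are reflectances scaled to 0–1:

```python
import numpy as np

def spectral_indices(B, G, R, RE1, RE2, RE3, NIR, SW1):
    """Common formulations of the six indices used in this study
    (assumed, not copied from the study's code)."""
    idx = {}
    idx["NDVI"] = (NIR - R) / (NIR + R)
    idx["EVI"] = 2.5 * (NIR - R) / (NIR + 6 * R - 7.5 * B + 1)
    idx["LSWI"] = (NIR - SW1) / (NIR + SW1)
    idx["GCVI"] = NIR / G - 1
    idx["SAVI"] = 1.5 * (NIR - R) / (NIR + R + 0.5)
    # Red-edge position by linear interpolation between RE1 (705 nm)
    # and RE2 (740 nm), using the mean of R and RE3 as the inflection level
    idx["REP"] = 705 + 35 * ((R + RE3) / 2 - RE1) / (RE2 - RE1)
    return idx
```

Each index emphasizes a different part of the signal (greenness, canopy water, soil background, red-edge shift), which is why their separability differs by crop as discussed above.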
In addition, we randomly selected 100 samples per class of sunflower, maize, spring wheat, and melon from the 500 samples per class collected in 2021 for analysis of the polarization feature curves. Figure 11 shows the polarization features VV and VH and their combinations (VV/VH and VV-VH) at 10-day intervals for sunflower, maize, spring wheat, and melon.
Figure 11 shows that the backscattering coefficient of the cross-polarization channel (VH) is lower than that of the co-polarization channel (VV). The separability analysis for the different crops is as follows: VH and VV-VH have the potential to distinguish sunflower from non-sunflower; VV/VH has the potential to distinguish maize from non-maize; and VH has the potential to distinguish spring wheat from non-spring wheat and melon from non-melon.
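Note that, for backscatter expressed in dB, the VV-VH difference is equivalent to the linear-domain VV/VH ratio. A small illustrative sketch (ours, not the study's code) of how the two S1 combinations can be derived from calibrated backscatter:

```python
import numpy as np

def sar_features(vv_db, vh_db):
    """Derive the VV-VH (dB) and VV/VH (linear) combinations from
    Sentinel-1 backscatter given in dB. In dB, subtraction corresponds
    to division in the linear domain, so the two features carry the
    same information in different units."""
    vv_db = np.asarray(vv_db, float)
    vh_db = np.asarray(vh_db, float)
    diff_db = vv_db - vh_db            # VV-VH (dB)
    vv_lin = 10 ** (vv_db / 10)        # invert sigma0_dB = 10*log10(sigma0)
    vh_lin = 10 ** (vh_db / 10)
    ratio = vv_lin / vh_lin            # VV/VH (linear)
    return diff_db, ratio
```

Because VH is driven mainly by volume scattering from the canopy, these combinations tend to track crop structure development over the season.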
These findings highlight the importance of spectral and polarization features in crop classification, and their respective strengths and limitations. By analyzing these features in-depth, researchers can improve the accuracy of crop identification and monitoring, which in turn supports agricultural development and resource management in HID areas.

5.2. Key Identifying Features

In this study, we investigated the key identification features of the different crops in the HID area using the feature-importance output of the RF classifier on the GEE platform. The analysis in Figure 12 reveals the following key findings:
In sunflower identification, the top ten features by contribution rate were RE2, G, R, B, NIR, EVI, RE3, SAVI, RE1, and REP, with individual contributions ranging from 12.24% down to 7.15%; RE2, G, and R play the most significant roles. The composite window starting on the DOY of 180 and ending on the DOY of 190 had the most crucial impact on sunflower classification, at which point sunflower is in the late germination stage. In maize identification, the top ten features were G, RE1, SW2, GCVI, SW1, LSWI, SAVI, R, REP, and RE2, with contributions ranging from 20.73% down to 3.8%; G, RE1, and SW2 play the most significant roles. The window from the DOY of 210 to the DOY of 220 had the greatest impact on maize classification, at which point maize is in the heading stage. In spring wheat identification, the top ten features were GCVI, NDVI, EVI, LSWI, SAVI, G, RE3, NIR, R, and RE1, with contributions ranging from 16.49% down to 4.64%; GCVI, NDVI, and EVI are crucial for spring wheat classification. The window from the DOY of 160 to the DOY of 170 plays the critical role, at which point spring wheat is in the flowering stage. In melon identification, the top ten features were RE2, RE3, EVI, SAVI, G, REP, SW1, NIR, NDVI, and B, with contributions ranging from 14.93% down to 7.36%; RE2, RE3, and EVI play the most significant roles. The window from the DOY of 180 to the DOY of 190 had the greatest impact on melon classification, at which point melon is in the fruit development stage.
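The percentage rankings above can be reproduced from any vector of RF feature importances; a small helper (ours, not the study's code) sketches the bookkeeping:

```python
import numpy as np

def top_contributions(names, importances, k=10):
    """Rank features by RF importance and report each one's percentage
    share of the total, mirroring the per-feature contribution rates
    discussed above."""
    imp = np.asarray(importances, float)
    pct = 100 * imp / imp.sum()              # normalize to percentages
    order = np.argsort(pct)[::-1][:k]        # indices of the top-k features
    return [(names[i], round(float(pct[i]), 2)) for i in order]
```

With a fitted scikit-learn RF, for example, `importances` would be `rf.feature_importances_` and `names` the corresponding composite-window/band labels.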
These results reveal which specific features are most critical in identifying each crop, providing valuable guidance for optimizing remote sensing data processing and improving crop classification accuracy [35]. A deeper understanding of these key classification features can help us utilize remote sensing data more effectively for early crop identification, and thus improve agricultural monitoring and management in HID areas [33].

5.3. Research Uncertainty

In the current study, we used a pixel-based method to classify crops. However, spatial information was not fully utilized, which may lead to less-than-ideal classification results, known as the "salt-and-pepper" effect. We recommend an object-based approach to address this limitation and improve mapping accuracy, as it can integrate features from neighboring pixels and reduce the uncertainties associated with pixel-based classification (Yin et al., 2018) [70]. Traditional machine learning algorithms, especially the RF classifier, depend largely on sample data: sample quality, spatial distribution, and the number of samples per class all affect classification uncertainty. In contrast, deep learning methods, with their convolutional structure, effectively integrate features from neighboring pixels and offer vast potential to improve objectivity and classification precision. In future research, we intend to explore the application of deep learning methods to early crop identification. By leveraging these technologies, we aim to enhance the objectivity and accuracy of classification, thereby improving our understanding of crop dynamics and supporting agricultural decision-making.
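A common lightweight remedy for the salt-and-pepper effect, short of a full object-based workflow, is a post-classification modal (majority) filter. This naive numpy sketch illustrates the idea on a small label map (production code would use a windowed library routine or segmentation):

```python
import numpy as np

def majority_filter(label_map, size=3):
    """Replace each interior pixel of a classified map with the most
    frequent label in its size x size neighborhood, suppressing isolated
    'salt-and-pepper' pixels. Edge pixels keep their original label."""
    arr = np.asarray(label_map)
    out = arr.copy()
    r = size // 2
    for i in range(r, arr.shape[0] - r):
        for j in range(r, arr.shape[1] - r):
            window = arr[i - r:i + r + 1, j - r:j + r + 1].ravel()
            vals, counts = np.unique(window, return_counts=True)
            out[i, j] = vals[np.argmax(counts)]  # modal label of the window
    return out
```

Unlike an object-based approach, this smoothing cannot recover genuinely small fields, which is why the text recommends object-based methods as the more principled fix.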
It is also important to highlight that while the model adjusted to early crop separation shows promising results, its applicability in other regions and years depends on maintaining similar production characteristics [71,72,73]. In regions where crops are not irrigated but are rainfed and depend exclusively on rainfall, the phenological behavior of crops and the management decisions of farmers can vary significantly due to climatic variability [74]. These variations can alter the spectral and phenological signatures of the crops [75], posing challenges for the model’s effectiveness in different temporal and spatial contexts. For instance, rainfed crops may experience more pronounced fluctuations in growth patterns and stress responses, which are reflected in their spectral data [76]. This variability makes it more challenging to calibrate early detection models for consistent application across varying temporal and spatial contexts. Therefore, while the model has strong potential, the consideration of local climatic conditions and adaptive strategies to address these dynamic changes is essential. This underscores the necessity for continuous model refinement and validation to ensure its robustness and reliability in diverse agricultural environments.
In addition, only six commonly used spectral indices were selected for this study: NDVI, EVI, LSWI, REP, GCVI, and SAVI. This choice was made mainly because they are widely used and validated in the literature and in practice. However, given that many existing spectral indices have their own characteristics and application contexts, our choice may have limitations. In particular, previous work [11] has shown that NDVI and GCVI track crop life cycles efficiently [77,78], and these two indices are highly correlated with EVI; similarly, NDWI and NDSI share similar information characteristics with LSWI [2]. Such similarity between indices may lead to information redundancy. Moreover, some studies indicate that indices such as the Normalized Difference Water Index (NDWI), Land Surface Temperature (LST), Improved Built-up Index (IBI), and Normalized Difference Built-up Index (NDBI) also have a significant impact on early crop identification [79,80]. It is therefore necessary to consider and test more indices for early crop identification in future studies.
While our current research focuses on a specific region, expanding the study area to the provincial and national scales in future studies would have multiple benefits: a broader understanding of crop dynamics, more reliable information for decision-making, and a contribution to the advancement of agricultural science. By expanding the scope of our research, we can gain more comprehensive and in-depth insights, ensuring that our findings are more generalizable and applicable to different agricultural environments.

6. Conclusions

This study utilized all Sentinel-1/2 time series images from 2021 available on the GEE platform to implement 16 schemes for studying the EIT of four crops (sunflower, maize, spring wheat, and melon) in the HID region. Through a comprehensive comparison of the schemes, the following conclusions are drawn. The strategy of synthesizing S2 images at 10-day intervals with feature selection can effectively identify crops early in the HID area. SAR imagery has limited effectiveness for early crop identification in the HID region. The EITs of sunflower, maize, spring wheat, and melon correspond to the seedling, jointing, tillering, and fruit development stages, respectively. The model transfer experiment showed that the proposed early crop identification strategy has good stability, with F1-scores decreasing by only 0–0.04.
Determining the EIT is of significant importance for agricultural insurance, futures trading, and policy-making, and is crucial for predicting grain production and ensuring food security. These encouraging results motivate us to apply our method to build a long-term EIT dataset for the HID region and to analyze its changes and driving factors. Furthermore, future work will aim to improve the robustness of the method for different areas and crops, and to explore the potential application of emerging technologies, such as deep learning, in early crop identification.

Author Contributions

J.L. (Jiansong Luo): Conceptualization, Methodology, Software, Writing—original draft. M.X.: Methodology, Writing—original draft. Q.W.: Methodology, Investigation. J.L. (Jun Luo): Software, Investigation. Q.G.: Methodology, Investigation. X.S.: Investigation. Y.Z.: Writing—review and editing, Funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Inner Mongolia "Science and Technology Action Key Specialties" project (NMKJXM202201-4) and the Inner Mongolia Basic Research Operating Expenses for Colleges and Universities Program (Inner Mongolia Agricultural University Cross Discipline Fund Program, BR231509).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data preprocessing and early crop identification scripts are openly available from: https://github.com/imaujiansongluo/HID_Early-crop-identification (accessed on 6 June 2024).

Acknowledgments

We are particularly grateful to the Google team for building the powerful Google Earth Engine (GEE) platform and providing free access and services to help us realize our crop early identification study. Additionally, we acknowledge the support from the Inner Mongolia Autonomous Region Wheat Industry Technology Innovation and Promotion System Project for their contribution to this research.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Figure A1. Overview of the distribution of HID sample sites in 2022.
Agriculture 14 00990 g0a1

References

  1. Teluguntla, P.; Thenkabail, P.S.; Oliphant, A.; Xiong, J.; Gumma, M.K.; Congalton, R.G.; Yadav, K.; Huete, A. A 30-m landsat-derived cropland extent product of Australia and China using random forest machine learning algorithm on Google Earth Engine cloud computing platform. ISPRS J. Photogramm. Remote Sens. 2018, 144, 325–340. [Google Scholar] [CrossRef]
  2. Dong, J.; Fu, Y.; Wang, J.; Tian, H.; Fu, S.; Niu, Z.; Han, W.; Zheng, Y.; Huang, J.; Yuan, W. Early-season mapping of winter wheat in China based on Landsat and Sentinel images. Earth Syst. Sci. Data 2020, 12, 3081–3095. [Google Scholar] [CrossRef]
  3. Thenkabail, P.S.; Hanjra, M.A.; Dheeravath, V.; Gumma, M.K. A Holistic View of Global Croplands and Their Water Use for Ensuring Global Food Security in the 21st Century through Advanced Remote Sensing and Non-remote Sensing Approaches. Remote Sens. 2010, 2, 211–261. [Google Scholar] [CrossRef]
  4. Yi, Z.; Jia, L.; Chen, Q.; Jiang, M.; Zhou, D.; Zeng, Y. Early-Season Crop Identification in the Shiyang River Basin Using a Deep Learning Algorithm and Time-Series Sentinel-2 Data. Remote Sens. 2022, 14, 5625. [Google Scholar] [CrossRef]
  5. Skakun, S.; Kussul, N.; Shelestov, A.; Kussul, O. The use of satellite data for agriculture drought risk quantification in Ukraine. Geomat. Nat. Hazards Risk 2015, 7, 901–917. [Google Scholar] [CrossRef]
  6. Zhang, J.; Huang, Y.; Pu, R.; Gonzalez-Moreno, P.; Yuan, L.; Wu, K.; Huang, W. Monitoring plant diseases and pests through remote sensing technology: A review. Comput. Electron. Agric. 2019, 165, 104943. [Google Scholar] [CrossRef]
  7. Wardlow, B.D.; Callahan, K. A multi-scale accuracy assessment of the MODIS irrigated agriculture data-set (MIrAD) for the state of Nebraska, USA. Giscience Remote Sens. 2014, 51, 575–592. [Google Scholar] [CrossRef]
  8. Ozdogan, M.; Yang, Y.; Allez, G.; Cervantes, C. Remote Sensing of Irrigated Agriculture: Opportunities and Challenges. Remote Sens. 2010, 2, 2274–2304. [Google Scholar] [CrossRef]
  9. Karthikeyan, L.; Chawla, I.; Mishra, A.K. A review of remote sensing applications in agriculture for food security: Crop growth and yield, irrigation, and crop losses. J. Hydrol. 2020, 586, 124905. [Google Scholar] [CrossRef]
  10. Dell’acqua, F.; Iannelli, G.C.; Torres, M.A.; Martina, M.L. A Novel Strategy for Very-Large-Scale Cash-Crop Mapping in the Context of Weather-Related Risk Assessment, Combining Global Satellite Multispectral Datasets, Environmental Constraints, and In Situ Acquisition of Geospatial Data. Sensors 2018, 18, 591. [Google Scholar] [CrossRef]
  11. You, N.; Dong, J. Examining earliest identifiable timing of crops using all available Sentinel 1/2 imagery and Google Earth Engine. ISPRS J. Photogramm. Remote Sens. 2020, 161, 109–123. [Google Scholar] [CrossRef]
  12. Wei, M.; Wang, H.; Zhang, Y.; Li, Q.; Du, X.; Shi, G.; Ren, Y. Investigating the Potential of Sentinel-2 MSI in Early Crop Identification in Northeast China. Remote Sens. 2022, 14, 1928. [Google Scholar] [CrossRef]
  13. You, N.; Dong, J.; Li, J.; Huang, J.; Jin, Z. Rapid early-season maize mapping without crop labels. Remote Sens. Environ. 2023, 290, 113496. [Google Scholar] [CrossRef]
  14. Vorobiova, N.S.; Chernov, A.V. Curve fitting of MODIS NDVI time series in the task of early crops identification by satellite images. Procedia Eng. 2017, 201, 184–195. [Google Scholar] [CrossRef]
  15. Jordi, I.; Arthur, V.; Marcela, A.; Claire, M.-S. Improved Early Crop Type Identification By Joint Use of High Temporal Resolution SAR And Optical Image Time Series. Remote Sens. 2016, 8, 362. [Google Scholar] [CrossRef]
  16. Li, G.; Cui, J.; Han, W.; Zhang, H.; Huang, S.; Chen, H.; Ao, J. Crop type mapping using time-series Sentinel-2 imagery and U-Net in early growth periods in the Hetao irrigation district in China. Comput. Electron. Agric. 2022, 203, 107478. [Google Scholar] [CrossRef]
  17. Huang, X.; Huang, J.; Li, X.; Shen, Q.; Chen, Z. Early mapping of winter wheat in Henan province of China using time series of Sentinel-2 data. GIScience Remote Sens. 2022, 59, 1534–1549. [Google Scholar] [CrossRef]
  18. Khan, H.R.; Gillani, Z.; Jamal, M.H.; Athar, A.; Chaudhry, M.T.; Chao, H.; He, Y.; Chen, M. Early Identification of Crop Type for Smallholder Farming Systems Using Deep Learning on Time-Series Sentinel-2 Imagery. Sensors 2023, 23, 1779. [Google Scholar] [CrossRef]
  19. Valero, S.; Arnaud, L.; Planells, M.; Ceschia, E. Synergy of Sentinel-1 and Sentinel-2 Imagery for Early Seasonal Agricultural Crop Mapping. Remote Sens. 2021, 13, 4891. [Google Scholar] [CrossRef]
  20. El Imanni, H.S.; El Harti, A.; Hssaisoune, M.; Velastegui-Montoya, A.; Elbouzidi, A.; Addi, M.; El Iysaouy, L.; El Hachimi, J. Rapid and Automated Approach for Early Crop Mapping Using Sentinel-1 and Sentinel-2 on Google Earth Engine; A Case of a Highly Heterogeneous and Fragmented Agricultural Region. J. Imaging 2022, 8, 316. [Google Scholar] [CrossRef]
  21. Paul, S.; Kumari, M.; Murthy, C.S.; Kumar, D.N. Generating pre-harvest crop maps by applying convolutional neural network on multi-temporal Sentinel-1 data. Int. J. Remote Sens. 2022, 43, 6078–6101. [Google Scholar] [CrossRef]
  22. Hao, P.-Y.; Tang, H.-J.; Chen, Z.-X.; Meng, Q.-Y.; Kang, Y.-P. Early-season crop type mapping using 30-m reference time series. J. Integr. Agric. 2020, 19, 1897–1911. [Google Scholar] [CrossRef]
  23. Tian, H.; Wang, Y.; Chen, T.; Zhang, L.; Qin, Y. Early-Season Mapping of Winter Crops Using Sentinel-2 Optical Imagery. Remote Sens. 2021, 13, 3822. [Google Scholar] [CrossRef]
  24. Li, G.; Han, W.; Dong, Y.; Zhai, X.; Huang, S.; Ma, W.; Cui, X.; Wang, Y. Multi-Year Crop Type Mapping Using Sentinel-2 Imagery and Deep Semantic Segmentation Algorithm in the Hetao Irrigation District in China. Remote Sens. 2023, 15, 875. [Google Scholar] [CrossRef]
  25. Domínguez, J.; Kumhálová, J.; Novák, P. Winter oilseed rape and winter wheat growth prediction using remote sensing methods. Plant Soil Environ. 2015, 61, 410–416. [Google Scholar] [CrossRef]
  26. Narin, O.G.; Abdikan, S. Monitoring of phenological stage and yield estimation of sunflower plant using Sentinel-2 satellite images. Geocarto Int. 2020, 37, 1378–1392. [Google Scholar] [CrossRef]
  27. Qiu, B.; Hu, X.; Yang, P.; Tang, Z.; Wu, W.; Li, Z. A robust approach for large-scale cropping intensity mapping in smallholder farms from vegetation, brownness indices and SAR time series. ISPRS J. Photogramm. Remote Sens. 2023, 203, 328–344. [Google Scholar] [CrossRef]
  28. Yu, B.; Shang, S. Multi-Year Mapping of Maize and Sunflower in Hetao Irrigation District of China with High Spatial and Temporal Resolution Vegetation Index Series. Remote Sens. 2017, 9, 855. [Google Scholar] [CrossRef]
  29. Bing, Y.; Songhao, S.; Wenxiang, Z.; Pierre, G.; Yizong, C. Mapping daily evapotranspiration over a large irrigation district from MODIS data using a novel hybrid dual-source coupling model. Agric. For. Meteorol. 2019, 276–277, 107612. [Google Scholar] [CrossRef]
  30. Karra, K.; Kontgis, C.; Statman-Weil, Z.; Mazzariello, J.C.; Mathis, M.; Brumby, S.P. Global land use/land cover with Sentinel 2 and deep learning. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021. [Google Scholar] [CrossRef]
  31. Liu, J.; Sun, S.; Wu, P.; Wang, Y.; Zhao, X. Inter-county virtual water flows of the Hetao irrigation district, China: A new perspective for water scarcity. J. Arid. Environ. 2015, 119, 31–40. [Google Scholar] [CrossRef]
  32. Zhang, X.; Guo, P.; Zhang, F.; Liu, X.; Yue, Q.; Wang, Y. Optimal irrigation water allocation in Hetao Irrigation District considering decision makers’ preference under uncertainties. Agric. Water Manag. 2020, 246, 106670. [Google Scholar] [CrossRef]
  33. Hu, Y.; Zeng, H.; Tian, F.; Zhang, M.; Wu, B.; Gilliams, S.; Li, S.; Li, Y.; Lu, Y.; Yang, H. An Interannual Transfer Learning Approach for Crop Classification in the Hetao Irrigation District, China. Remote Sens. 2022, 14, 1208. [Google Scholar] [CrossRef]
  34. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services. Remote Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
  35. Tuvdendorj, B.; Zeng, H.; Wu, B.; Elnashar, A.; Zhang, M.; Tian, F.; Nabil, M.; Nanzad, L.; Bulkhbai, A.; Natsagdorj, N. Performance and the Optimal Integration of Sentinel-1/2 Time-Series Features for Crop Classification in Northern Mongolia. Remote Sens. 2022, 14, 1830. [Google Scholar] [CrossRef]
  36. Xun, L.; Zhang, J.; Cao, D.; Yang, S.; Yao, F. A novel cotton mapping index combining Sentinel-1 SAR and Sentinel-2 multispectral imagery. ISPRS J. Photogramm. Remote Sens. 2021, 181, 148–166. [Google Scholar] [CrossRef]
  37. D’andrimont, R.; Verhegghen, A.; Lemoine, G.; Kempeneers, P.; Meroni, M.; van der Velde, M. From parcel to continental scale—A first European crop type map based on Sentinel-1 and LUCAS Copernicus in-situ observations. Remote Sens. Environ. 2021, 266, 112708. [Google Scholar] [CrossRef]
  38. Zhang, X.; Wu, B.; Ponce-Campos, G.E.; Zhang, M.; Chang, S.; Tian, F. Mapping up-to-Date Paddy Rice Extent at 10 M Resolution in China through the Integration of Optical and Synthetic Aperture Radar Images. Remote Sens. 2018, 10, 1200. [Google Scholar] [CrossRef]
  39. Xiao, W.; Xu, S.; He, T. Mapping Paddy Rice with Sentinel-1/2 and Phenology-, Object-Based Algorithm—A Implementation in Hangjiahu Plain in China Using GEE Platform. Remote Sens. 2021, 13, 990. [Google Scholar] [CrossRef]
  40. Liu, W.; Zhang, H. Mapping annual 10 m rapeseed extent using multisource data in the Yangtze River Economic Belt of China (2017–2021) on Google Earth Engine. Int. J. Appl. Earth Obs. Geoinf. 2023, 117, 103198. [Google Scholar] [CrossRef]
  41. Griffiths, P.; Nendel, C.; Hostert, P. Intra-annual reflectance composites from Sentinel-2 and Landsat for national-scale crop and land cover mapping. Remote Sens. Environ. 2019, 220, 135–151. [Google Scholar] [CrossRef]
  42. Tran, K.H.; Zhang, H.K.; McMaine, J.T.; Zhang, X.; Luo, D. 10 m crop type mapping using Sentinel-2 reflectance and 30 m cropland data layer product. Int. J. Appl. Earth Obs. Geoinf. 2022, 107, 102692. [Google Scholar] [CrossRef]
  43. Chen, J.; Jönsson, P.; Tamura, M.; Gu, Z.; Matsushita, B.; Eklundh, L. A simple method for reconstructing a high-quality NDVI time-series data set based on the Savitzky–Golay filter. Remote Sens. Environ. 2004, 91, 332–344. [Google Scholar] [CrossRef]
  44. Roy, D.P.; Wulder, M.A.; Loveland, T.R.; Woodcock, C.E.; Allen, R.G.; Anderson, M.C.; Helder, D.; Irons, J.R.; Johnson, D.M.; Kennedy, R.; et al. Landsat-8: Science and product vision for terrestrial global change research. Remote Sens. Environ. 2014, 145, 154–172. [Google Scholar] [CrossRef]
  45. Lee, J.-S. Refined filtering of image noise using local statistics. Comput. Graph. Image Process. 1981, 15, 380–389. [Google Scholar] [CrossRef]
  46. Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the radiometric and biophysical performance of the MODIS vegetation indices. Remote Sens. Environ. 2002, 83, 195–213. [Google Scholar] [CrossRef]
  47. Yin, L.; You, N.; Zhang, G.; Huang, J.; Dong, J. Optimizing Feature Selection of Individual Crop Types for Improved Crop Mapping. Remote Sens. 2020, 12, 162. [Google Scholar] [CrossRef]
  48. Frampton, W.J.; Dash, J.; Watmough, G.; Milton, E.J. Evaluating the capabilities of Sentinel-2 for quantitative estimation of biophysical variables in vegetation. ISPRS J. Photogramm. Remote Sens. 2013, 82, 83–92. [Google Scholar] [CrossRef]
  49. Gitelson, A.A.; Viña, A.; Ciganda, V.; Rundquist, D.C.; Arkebauer, T.J. Remote estimation of canopy chlorophyll content in crops. Geophys. Res. Lett. 2005, 32, L08403. [Google Scholar] [CrossRef]
  50. Veloso, A.; Mermoz, S.; Bouvet, A.; Le Toan, T.; Planells, M.; Dejoux, J.-F.; Ceschia, E. Understanding the temporal behavior of crops using Sentinel-1 and Sentinel-2-like data for agricultural applications. Remote Sens. Environ. 2017, 199, 415–426. [Google Scholar] [CrossRef]
  51. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  52. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J.P. An assessment of the effectiveness of a random forest classifier for land-cover classification. ISPRS J. Photogramm. Remote Sens. 2012, 67, 93–104. [Google Scholar] [CrossRef]
  53. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef]
  54. Zhang, L.; Tang, H.; Shi, P.; Jia, W.; Dai, L. Geographically and Ontologically Oriented Scoping of a Dry Valley and Its Spatial Characteristics Analysis: The Case of the Three Parallel Rivers Region. Land 2023, 12, 1235. [Google Scholar] [CrossRef]
  55. Luo, J.; Ma, X.; Chu, Q.; Xie, M.; Cao, Y. Characterizing the Up-To-Date Land-Use and Land-Cover Change in Xiong’an New Area from 2017 to 2020 Using the Multi-Temporal Sentinel-2 Images on Google Earth Engine. ISPRS Int. J. Geo-Inf. 2021, 10, 464. [Google Scholar] [CrossRef]
  56. Pelletier, C.; Valero, S.; Inglada, J.; Champion, N.; Dedieu, G. Assessing the robustness of Random Forests to map land cover with high resolution satellite image time series over large areas. Remote Sens. Environ. 2016, 187, 156–168. [Google Scholar] [CrossRef]
  57. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar] [CrossRef]
  58. Carrão, H.; Gonçalves, P.; Caetano, M. Contribution of multispectral and multitemporal information from MODIS images to land cover classification. Remote Sens. Environ. 2008, 112, 986–997. [Google Scholar] [CrossRef]
  59. Löw, F.; Michel, U.; Dech, S.; Conrad, C. Impact of feature selection on the accuracy and spatial uncertainty of per-field crop classification using Support Vector Machines. ISPRS J. Photogramm. Remote Sens. 2013, 85, 102–119. [Google Scholar] [CrossRef]
  60. Adrian, J.; Sagan, V.; Maimaitijiang, M. Sentinel SAR-optical fusion for crop type mapping using deep learning and Google Earth Engine. ISPRS J. Photogramm. Remote Sens. 2021, 175, 215–235. [Google Scholar] [CrossRef]
  61. Van Tricht, K.; Gobin, A.; Gilliams, S.; Piccard, I. Synergistic Use of Radar Sentinel-1 and Optical Sentinel-2 Imagery for Crop Mapping: A Case Study for Belgium. Remote Sens. 2018, 10, 1642. [Google Scholar] [CrossRef]
  62. Forkuor, G.; Conrad, C.; Thiel, M.; Ullmann, T.; Zoungrana, E. Integration of Optical and Synthetic Aperture Radar Imagery for Improving Crop Mapping in Northwestern Benin, West Africa. Remote Sens. 2014, 6, 6472–6499. [Google Scholar] [CrossRef]
  63. Debaeke, P.; Attia, F.; Champolivier, L.; Dejoux, J.-F.; Micheneau, A.; Al Bitar, A.; Trépos, R. Forecasting sunflower grain yield using remote sensing data and statistical models. Eur. J. Agron. 2023, 142, 126677. [Google Scholar] [CrossRef]
  64. Nieto, L.; Schwalbert, R.; Prasad, P.V.V.; Olson, B.J.S.C.; Ciampitti, I.A. An integrated approach of field, weather, and satellite data for monitoring maize phenology. Sci. Rep. 2021, 11, 15711. [Google Scholar] [CrossRef]
  65. Zhang, L.; Zhang, Z.; Luo, Y.; Cao, J.; Xie, R.; Li, S. Integrating satellite-derived climatic and vegetation indices to predict smallholder maize yield using deep learning. Agric. For. Meteorol. 2021, 311, 108666. [Google Scholar] [CrossRef]
  66. Liu, S.; Peng, D.; Zhang, B.; Chen, Z.; Yu, L.; Chen, J.; Pan, Y.; Zheng, S.; Hu, J.; Lou, Z.; et al. The Accuracy of Winter Wheat Identification at Different Growth Stages Using Remote Sensing. Remote Sens. 2022, 14, 893. [Google Scholar] [CrossRef]
  67. Fan, L.; Yang, J.; Sun, X.; Zhao, F.; Liang, S.; Duan, D.; Chen, H.; Xia, L.; Sun, J.; Yang, P. The effects of Landsat image acquisition date on winter wheat classification in the North China Plain. ISPRS J. Photogramm. Remote Sens. 2022, 187, 1–13. [Google Scholar] [CrossRef]
  68. Li, S.; Li, F.; Gao, M.; Li, Z.; Leng, P.; Duan, S.; Ren, J. A New Method for Winter Wheat Mapping Based on Spectral Reconstruction Technology. Remote Sens. 2021, 13, 1810. [Google Scholar] [CrossRef]
  69. Pan, L.; Xia, H.; Zhao, X.; Guo, Y.; Qin, Y. Mapping Winter Crops Using a Phenology Algorithm, Time-Series Sentinel-2 and Landsat-7/8 Images, and Google Earth Engine. Remote Sens. 2021, 13, 2510. [Google Scholar] [CrossRef]
  70. Yin, H.; Prishchepov, A.V.; Kuemmerle, T.; Bleyhl, B.; Buchner, J.; Radeloff, V.C. Mapping agricultural land abandonment from spatial and temporal segmentation of Landsat time series. Remote Sens. Environ. 2018, 210, 12–24. [Google Scholar] [CrossRef]
  71. Račič, M.; Oštir, K.; Zupanc, A.; Zajc, L.Č. Multi-Year Time Series Transfer Learning: Application of Early Crop Classification. Remote Sens. 2024, 16, 270. [Google Scholar] [CrossRef]
  72. Gadiraju, K.K.; Vatsavai, R.R. Remote Sensing Based Crop Type Classification Via Deep Transfer Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 4699–4712. [Google Scholar] [CrossRef]
  73. Munipalle, V.K.; Nelakuditi, U.R.; Nidamanuri, R.R. Agricultural Crop Hyperspectral Image Classification using Transfer Learning. In Proceedings of the 2023 International Conference on Machine Intelligence for GeoAnalytics and Remote Sensing (MIGARS), Hyderabad, India, 27–29 January 2023. [Google Scholar] [CrossRef]
  74. Osman, M.A.A.; Onono, J.O.; Olaka, L.A.; Elhag, M.M.; Abdel-Rahman, E.M. Climate Variability and Change Affect Crops Yield under Rainfed Conditions: A Case Study in Gedaref State, Sudan. Agronomy 2021, 11, 1680. [Google Scholar] [CrossRef]
  75. He, L.; Jin, N.; Yu, Q. Impacts of climate change and crop management practices on soybean phenology changes in China. Sci. Total. Environ. 2019, 707, 135638. [Google Scholar] [CrossRef]
  76. Ozelkan, E.; Chen, G.; Ustundag, B.B. Multiscale object-based drought monitoring and comparison in rainfed and irrigated agriculture from Landsat 8 OLI imagery. Int. J. Appl. Earth Obs. Geoinf. 2016, 44, 159–170. [Google Scholar] [CrossRef]
  77. Cai, Y.; Guan, K.; Peng, J.; Wang, S.; Seifert, C.; Wardlow, B.; Li, Z. A high-performance and in-season classification system of field-level crop types using time-series Landsat data and a machine learning approach. Remote Sens. Environ. 2018, 210, 35–47. [Google Scholar] [CrossRef]
  78. Wang, S.; Azzari, G.; Lobell, D.B. Crop type mapping without field-level labels: Random forest transfer and unsupervised clustering techniques. Remote Sens. Environ. 2019, 222, 303–317. [Google Scholar] [CrossRef]
  79. Das, T.; Jana, A.; Mandal, B.; Sutradhar, A. Spatio-temporal pattern of land use and land cover and its effects on land surface temperature using remote sensing and GIS techniques: A case study of Bhubaneswar city, Eastern India (1991–2021). GeoJournal 2021, 87, 765–795. [Google Scholar] [CrossRef]
  80. Fathololoumi, S.; Firozjaei, M.K.; Biswas, A. An Innovative Fusion-Based Scenario for Improving Land Crop Mapping Accuracy. Sensors 2022, 22, 7428. [Google Scholar] [CrossRef]
Figure 1. Geographic location and Land Use and Land Cover (LULC) of HID. (a) Geographic location. (b) LULC map for 2021, data from ESRI, with a spatial resolution of 10 m.
Figure 2. Crop calendars for the four main HID crops. Climatic information was obtained from field visits. “E”, “M”, and “L” refer to the first, middle, and last 10-day phases of the month.
Figure 3. An overview map of HID sample point distribution in 2021.
Figure 4. Flow chart of this study.
Figure 5. Schematic diagram of median synthesis, linear shift interpolation, and SG filtering processing.
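The reconstruction chain sketched in Figure 5 (per-window median compositing, gap interpolation, then Savitzky–Golay smoothing) can be illustrated as follows. This is a minimal sketch using `numpy`/`scipy` on a synthetic 10-day NDVI series, not the authors' GEE implementation; the cloud-gap positions, SG window length, and polynomial order are illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic 10-day median-composited NDVI series with cloud gaps (NaN).
ndvi = np.array([0.15, 0.18, np.nan, 0.35, 0.52, 0.68, 0.75,
                 0.78, np.nan, 0.70, 0.55, 0.38, 0.22, 0.16])

# Step 1: linear interpolation over gaps (a stand-in for the paper's
# linear shift interpolation of cloud-contaminated composites).
idx = np.arange(len(ndvi))
mask = np.isnan(ndvi)
ndvi_filled = ndvi.copy()
ndvi_filled[mask] = np.interp(idx[mask], idx[~mask], ndvi[~mask])

# Step 2: Savitzky-Golay smoothing; window and order are illustrative.
ndvi_smooth = savgol_filter(ndvi_filled, window_length=5, polyorder=2)
```

The same filter is applied band-wise to each optical feature before classification; only the smoothing parameters would change per time-interval scheme.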
Figure 6. Variation of F1-score under different classification schemes in Table 3. From left to right, the schemes use Optical features, Microwave features, High-importance features + Microwave, and High-importance features. From top to bottom, the schemes use features with 10-, 15-, 20-, and 30-day synthesis intervals. Vertical dashed lines indicate the earliest identifiable time (EIT) of different crops.
Figure 7. F1-score changes in maize, melon, spring wheat, and sunflower based on model transfer.
Figure 8. Distribution of different crops in different years over HID region. From left to right are the years 2021 and 2022. From top to bottom are the crops: sunflower, maize, spring wheat, and melon. The locations labeled A–H represent the selected areas for detailed information.
Figure 9. The detailed map of concentrated crop cultivation areas in the HID region in 2022.
Figure 10. The 10-day time series profiles of B, G, R, RE1, RE2, RE3, NIR, SW1, SW2, NDVI, EVI, LSWI, REP, GCVI, and SAVI for the different crops (sunflower, maize, spring wheat, and melon) in HID, 2021. Error bars denote ± one standard deviation.
Figure 11. The 10-day time series profiles of VV, VH, VVVH, and VV/VH for the different crops (sunflower, maize, spring wheat, and melon) in HID, 2021. Error bars denote ± one standard deviation.
Figure 12. Contribution of indicators to the classification of crop types.
Table 1. Main parameters of the S2 spectral band used in this study.
Band No | Band Type | Central Wavelength (nm) | Spatial Resolution (m) | Abbreviation Used in This Study
B2 | Blue | 490 | 10 | B
B3 | Green | 560 | 10 | G
B4 | Red | 665 | 10 | R
B5 | Vegetation Red Edge 1 | 705 | 20 | RE1
B6 | Vegetation Red Edge 2 | 740 | 20 | RE2
B7 | Vegetation Red Edge 3 | 783 | 20 | RE3
B8 | Near-infrared | 842 | 10 | NIR
B11 | Shortwave infrared 1 | 1610 | 20 | SW1
B12 | Shortwave infrared 2 | 2190 | 20 | SW2
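The band parameters in Table 1 can be captured as a small lookup that maps S2 band IDs to the paper's abbreviations; a minimal sketch of the kind of mapping that could drive band selection/renaming in a GEE or rasterio workflow (the dictionary names are this sketch's own, not from the paper).

```python
# S2 bands used in the study (Table 1): ID -> abbreviation and
# ID -> native spatial resolution in metres.
S2_BANDS = {
    "B2": "B", "B3": "G", "B4": "R",
    "B5": "RE1", "B6": "RE2", "B7": "RE3",
    "B8": "NIR", "B11": "SW1", "B12": "SW2",
}
RESOLUTION_M = {
    "B2": 10, "B3": 10, "B4": 10, "B8": 10,
    "B5": 20, "B6": 20, "B7": 20, "B11": 20, "B12": 20,
}

# 20 m bands (red edge and shortwave infrared) must be resampled to
# 10 m before stacking with the 10 m bands.
bands_needing_resample = [b for b, r in RESOLUTION_M.items() if r == 20]
```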
Table 2. Expressions and physical significance of S2 spectral indices used in this study.
Spectral Index | Expression Using Sentinel-2 Bands | Reference | Physical Meaning
NDVI | (B8 − B4)/(B8 + B4) | Huete et al. (2002) [46] | Reflects crop growth trends and health status under different vegetation cover
EVI | 2.5 × (B8 − B4)/(B8 + 6 × B4 − 7.5 × B2 + 1) | Huete et al. (2002) [46] | Uses the blue band to suppress atmospheric, background, and soil effects
LSWI | (B8 − B11)/(B8 + B11) | Yin et al. (2020) [47] | Sensitive to water content; effectively characterizes soil and canopy water content under different vegetation
REP | 705 + 35 × (((B4 + B7)/2) − B5)/(B6 − B5) | Frampton et al. (2013) [48] | Assesses the extent of vegetation cover
GCVI | (B8/B3) − 1 | Gitelson et al. (2005) [49] | Describes the photosynthetic activity of vegetation
SAVI | 1.5 × (B8 − B4)/(B8 + B4 + 0.5) | Huete et al. (2002) [46] | Corrects the sensitivity of NDVI to the soil background
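The six spectral indices can be computed directly from surface-reflectance band values; a minimal sketch, with GCVI written as (NIR/Green) − 1 following Gitelson et al. [49]. The sample reflectances below are illustrative, not measured values from the study area.

```python
def spectral_indices(b2, b3, b4, b5, b6, b7, b8, b11):
    """Compute the Table 2 indices from S2 surface reflectances (0-1)."""
    return {
        "NDVI": (b8 - b4) / (b8 + b4),
        "EVI": 2.5 * (b8 - b4) / (b8 + 6 * b4 - 7.5 * b2 + 1),
        "LSWI": (b8 - b11) / (b8 + b11),
        # Red-edge position in nm, interpolated between B5 (705) and B6 (740).
        "REP": 705 + 35 * (((b4 + b7) / 2) - b5) / (b6 - b5),
        "GCVI": b8 / b3 - 1,  # (NIR / Green) - 1 per Gitelson et al. [49]
        "SAVI": 1.5 * (b8 - b4) / (b8 + b4 + 0.5),
    }

# Illustrative reflectances for a vegetated pixel.
ix = spectral_indices(b2=0.04, b3=0.06, b4=0.05, b5=0.12,
                      b6=0.30, b7=0.35, b8=0.40, b11=0.20)
```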
Table 3. Scenario overview of different feature combination strategies for early crop identification.
Scenario | Interval (days) | Features
SI | 10 | Optical
SII | 10 | Microwave
SIII | 10 | High-importance features + Microwave
SIV | 10 | High-importance features
SV | 15 | Optical
SVI | 15 | Microwave
SVII | 15 | High-importance features + Microwave
SVIII | 15 | High-importance features
SIX | 20 | Optical
SX | 20 | Microwave
SXI | 20 | High-importance features + Microwave
SXII | 20 | High-importance features
SXIII | 30 | Optical
SXIV | 30 | Microwave
SXV | 30 | High-importance features + Microwave
SXVI | 30 | High-importance features
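Table 3 is the full cross of four synthesis intervals with four feature sets; the 16 scenarios can be generated rather than enumerated by hand. A small sketch (the scenario record layout is this sketch's own choice):

```python
from itertools import product

intervals = [10, 15, 20, 30]           # days per composite
feature_sets = [
    "Optical",
    "Microwave",
    "High-importance features + Microwave",
    "High-importance features",
]

# SI-SXVI in Table 3 order: interval varies slowest, feature set fastest.
scenarios = [
    {"id": i + 1, "interval": days, "features": feats}
    for i, (days, feats) in enumerate(product(intervals, feature_sets))
]
```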
Table 4. Optical preference features for different scenarios and their overall importance score values.
Time Interval (Days) | Crop Type | Feature Selection | Score
10 | Sunflower | RE2, NIR, RE1, G, EVI, R, RE3, SAVI, REP, and B | 79.84
10 | Maize | RE1, SW1, R, SW2, G, REP, GCVI, LSWI, SAVI, and RE2 | 87.49
10 | Spring Wheat | NDVI, EVI, SAVI, R, GCVI, RE2, G, LSWI, NIR, and RE1 | 89.11
10 | Melon | REP, RE2, EVI, SAVI, B, NIR, G, SW1, NDVI, and RE3 | 75.73
15 | Sunflower | RE3, REP, R, RE2, GCVI, G, NIR, RE1, SAVI, and SW2 | 75.03
15 | Maize | G, RE1, B, GCVI, SW1, R, SW2, NIR, NDVI, and REP | 82.59
15 | Spring Wheat | NDVI, SAVI, GCVI, LSWI, R, EVI, SW2, RE3, B, and SW1 | 87.57
15 | Melon | RE2, LSWI, REP, RE3, NIR, G, EVI, SW2, RE1, and GCVI | 77.32
20 | Sunflower | RE2, REP, RE3, RE1, NDVI, SW2, R, B, NIR, and EVI | 75.21
20 | Maize | RE1, G, SW2, SW1, B, LSWI, REP, R, GCVI, and EVI | 81.38
20 | Spring Wheat | NDVI, EVI, G, RE1, SAVI, LSWI, RE3, NIR, GCVI, and B | 81.41
20 | Melon | RE2, RE1, SAVI, NIR, SW2, EVI, REP, LSWI, RE3, and NDVI | 75.07
30 | Sunflower | RE2, NIR, RE3, EVI, NDVI, GCVI, REP, B, G, and RE1 | 75.66
30 | Maize | B, RE1, REP, G, EVI, GCVI, R, SW1, NIR, and SW2 | 81.85
30 | Spring Wheat | SAVI, LSWI, RE3, G, EVI, NDVI, B, NIR, SW2, and GCVI | 83.05
30 | Melon | RE2, G, SAVI, SW1, RE1, EVI, SW2, RE3, REP, and LSWI | 75.52
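Per-crop top-10 feature lists of the kind shown in Table 4 are typically obtained by ranking Random Forest feature importances. A minimal sketch with scikit-learn on synthetic data; the feature names match the paper's abbreviations, but the data, labels, and hyperparameters are illustrative only, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["B", "G", "R", "RE1", "RE2", "RE3", "NIR", "SW1", "SW2",
                 "NDVI", "EVI", "LSWI", "REP", "GCVI", "SAVI"]

# Synthetic samples; the binary label (crop vs. other) is driven mainly
# by NDVI (column 9) so it should rank among the top features.
X = rng.normal(size=(300, len(feature_names)))
y = (X[:, 9] + 0.5 * X[:, 4] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank features by importance and keep the top 10, with their summed
# importance expressed as a 0-100 score (analogous to Table 4's Score).
order = np.argsort(rf.feature_importances_)[::-1]
top10 = [feature_names[i] for i in order[:10]]
cum_score = 100 * rf.feature_importances_[order[:10]].sum()
```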
Table 5. HID dry season crop area statistics. Unit: ten thousand hectares.
Categories | 2021 | 2022
Sunflower | 43.73 (52.2%) | 45.06 (53.8%)
Maize | 26.63 (31.8%) | 25.88 (30.9%)
Spring Wheat | 3.33 (4.0%) | 3.18 (3.8%)
Melon | 3.76 (4.5%) | 3.52 (4.2%)
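The percentages in Table 5 are shares of a total cropland area larger than the four crops combined; assuming sunflower's 52.2% share is exact, the implied 2021 total and the remaining shares can be back-computed and checked against the table. A small arithmetic sketch:

```python
# 2021 areas from Table 5, in ten thousand hectares.
areas_2021 = {"Sunflower": 43.73, "Maize": 26.63,
              "Spring Wheat": 3.33, "Melon": 3.76}

# Back out the total cropland area from sunflower's stated 52.2% share.
total_2021 = areas_2021["Sunflower"] / 0.522  # ~83.77

# Recompute each crop's share of that total, rounded to one decimal.
share = {crop: round(100 * a / total_2021, 1)
         for crop, a in areas_2021.items()}
```

The recomputed shares (maize 31.8%, melon 4.5%) agree with Table 5, which supports the assumption that the denominator is total cropland rather than the four-crop sum.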

Luo, J.; Xie, M.; Wu, Q.; Luo, J.; Gao, Q.; Shao, X.; Zhang, Y. Early Crop Identification Study Based on Sentinel-1/2 Images with Feature Optimization Strategy. Agriculture 2024, 14, 990. https://doi.org/10.3390/agriculture14070990
