Article

3D-ResNet-BiLSTM Model: A Deep Learning Model for County-Level Soybean Yield Prediction with Time-Series Sentinel-1, Sentinel-2 Imagery, and Daymet Data

Mahdiyeh Fathi, Reza Shah-Hosseini and Armin Moghimi

1 School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran 14399-57131, Iran
2 Ludwig-Franzius-Institute for Hydraulic, Estuarine and Coastal Engineering, Leibniz University Hannover, Nienburger Str. 4, 30167 Hannover, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(23), 5551; https://doi.org/10.3390/rs15235551
Submission received: 11 September 2023 / Revised: 19 November 2023 / Accepted: 27 November 2023 / Published: 29 November 2023

Abstract

Ensuring food security in precision agriculture requires early prediction of soybean yield at various scales within the United States (U.S.), ranging from international to local levels. Accurate yield estimation is essential in preventing famine by providing insights into food availability during the growing season. Numerous deep learning (DL) algorithms have been developed to estimate soybean yield effectively using time-series remote sensing (RS) data. However, training data spanning short time periods can limit such models' ability to adapt to the dynamic and nuanced temporal changes in crop conditions. To address this challenge, we designed a 3D-ResNet-BiLSTM model to efficiently predict soybean yield at the county level across the U.S., even when using training data with shorter periods. We leveraged detailed Sentinel-2 imagery and Sentinel-1 SAR images to extract spectral bands, key vegetation indices (VIs), and VV and VH polarizations. Additionally, Daymet data were incorporated via Google Earth Engine (GEE) to enhance the model's input features. To process these inputs effectively, a dedicated 3D-ResNet architecture was designed to extract high-level features. These enriched features were then fed into a BiLSTM layer, enabling accurate prediction of soybean yield. To evaluate the efficacy of our model, its performance was compared with that of well-known models, including Linear Regression (LR), Random Forest (RF), the 1D/2D/3D-ResNet models, and a 2D-CNN-LSTM model. Data from a short period (2019 to 2020) were used to train all models, while their accuracy was assessed using data from 2021. The experimental results showed that the proposed 3D-ResNet-BiLSTM model outperformed the other models, achieving remarkable metrics (R2 = 0.791, RMSE = 5.56 Bu Ac−1, MAE = 4.35 Bu Ac−1, MAPE = 9%, and RRMSE = 10.49%). Furthermore, the 3D-ResNet-BiLSTM model showed a 7% higher R2 than the ResNet and RF models and improvements of 27% and 17% over the LR and 2D-CNN-LSTM models, respectively. These results highlight our model's potential for accurate soybean yield predictions, supporting sustainable agriculture and food security.

1. Introduction

A high oil and protein content makes soybeans a vital crop for food security, and the United States (U.S.) is the leading global producer of this valuable commodity [1,2]. In 2021, the nation achieved a historic soybean production of 4.44 billion bushels (https://www.farmprogress.com/crops/farm-futures-survey-finds-record-2021-corn-crop, accessed on 15 January 2022). However, the soybean industry grapples with diverse challenges, ranging from population growth to climate change [3]. Effectively addressing these challenges necessitates comprehensively evaluating crop type, soil quality, climate conditions, environment, diseases, fertilizers, and seeds [4]. The U.S. Department of Agriculture (USDA) does not provide crop yield predictions until the March following the season [5]. Therefore, early crop yield prediction becomes imperative for preventing famine by assessing food availability throughout the cultivation period. As such, timely and accurate crop yield prediction is paramount for evaluating trade balances, enhancing food security, formulating production, storage, and transportation strategies, and facilitating urbanization [6].
Accurate crop yield prediction relies on two primary techniques: traditional ground observations and advanced Remote Sensing (RS). Traditional methods are highly accurate but costly and time-consuming, limiting their feasibility for large-scale applications like state-level assessments [4]. In recent years, RS technology has gained popularity for crop yield prediction. Its advantages include large-scale coverage, continuous monitoring, multispectral capabilities, affordability, and long-term data archiving across various spatial, spectral, and temporal resolutions [4,7]. Furthermore, the rich multispectral data within RS images allow us to derive a wide range of valuable Vegetation Indices (VIs), such as the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Green Normalized Difference Vegetation Index (GNDVI), which can be used to monitor crop phenology and growth. Soil attributes such as pH, type, and moisture, coupled with features like Land Surface Temperature (LST), integrated drought indices, precipitation, vapor pressure, and humidity, have also been employed for crop yield prediction [8,9].
Crop yield can be predicted using two main categories of models: process-based biophysical (PB) and machine-learning (ML) models. PB models (e.g., the Agricultural Production Systems Simulator (APSIM) and the Decision Support System for Agrotechnology Transfer (DSSAT)) dynamically simulate crop yield using well-calibrated crop growth models. This framework often uses RS data to reinitialize, recalibrate, or update a model's state variables at a higher spatial resolution than the driving data. Nevertheless, calibrating process-based models at larger scales remains challenging and requires various field measurements [1,10]. Consequently, in scenarios demanding cost-effectiveness and flexible modeling of intricate patterns, ML-based algorithms are often preferred [11]. Traditional ML models such as the Support Vector Machine (SVM) and Random Forest (RF) have proven effective in crop yield prediction [12,13,14]. However, these algorithms can struggle to extract advanced features from the input data. This limitation has driven the adoption of Deep Learning (DL) methods, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and the Long Short-Term Memory (LSTM) architecture in particular. These models can derive intricate features from basic ones and effectively represent the complex correlations between input and output variables using multiple hidden layers [4,15]. Accordingly, they have been widely developed for soybean yield prediction in recent years. For example, You et al. [16] combined a Gaussian Process component with a CNN or LSTM to predict crop yield using MODIS Land Surface Temperature (LST) and Surface Reflectance (SR) data between 2003 and 2015 in the U.S. Sun et al. [5] introduced a deep CNN-LSTM to predict county-level soybean yield within the U.S. from 2003 to 2015 using weather data and MODIS LST and SR datasets. Similarly, Terliksiz et al. [17] designed a 3D-CNN model for soybean yield prediction in Lauderdale County using MODIS LST and SR data between 2003 and 2016. Khaki et al. [18] developed a CNN-RNN model that effectively captured the temporal relationships between environmental factors and the genetic enhancement of seeds without requiring access to genotype data; they used yield performance, management, weather, and soil variables to predict corn and soybean yield between 1980 and 2018 in the U.S. Khaki et al. [19] also developed the YieldNet model, which utilized MODIS products, including MOD09A1 and MYD11A2, from 2004 to 2018 to predict crop yields. Schwalbert et al. [20] designed an LSTM model to forecast soybean yield using VIs like NDVI and EVI, along with LST and precipitation, during southern Brazil's growing season between 2003 and 2016. Zhu et al. [21] introduced a DL-based Adaptive Crop Model (DACM) for accurate soybean yield prediction in the U.S. between 2003 and 2017 using MODIS LST and SR data.
While the previously mentioned studies have shown commendable progress and promising results in soybean yield estimation, specific challenges persist. MODIS imagery has been extensively employed for soybean yield prediction due to its high temporal resolution [5,7,9,17], but prediction accuracy is limited by its coarse spatial resolution. The potential of higher-resolution images, such as those from Sentinel-2, which provide rich spectral information including red-edge bands, warrants more attention. Additionally, the potential of combining Sentinel-2 and Sentinel-1 imagery with weather and climatology variables to improve prediction accuracy has received little attention. Furthermore, current approaches often employ 1D/2D-CNN-LSTM models to predict crop yield, which limits their ability to incorporate future time steps and demands substantial computational resources [5,22,23]. While these models have demonstrated robust predictive ability when trained on data spanning long periods, assessing their performance in scenarios where the data are limited to a shorter period is imperative.
In response to these challenges, we introduced the 3D-ResNet-BiLSTM model for early and accurate county-level soybean yield prediction in the U.S. during the growing season, using a short-period dataset derived from Sentinel-1, Sentinel-2, and Daymet data. The 3D-ResNet architecture allows our model to capture rich spatial features from the input data, facilitating enhanced feature extraction and improved yield prediction. A notable advantage of 3D-ResNets is their incorporation of residual blocks, enabling the network to learn residual functions that streamline deep network training [24]. The predictive component of our model is a Bidirectional LSTM (BiLSTM) module, which utilizes the data in both directions during calculations. This bidirectional processing is particularly advantageous for sequential data, incorporating both preceding and subsequent information and resulting in heightened prediction accuracy [22]. Moreover, our method evaluates soybean yield prediction specifically during the growing season, providing valuable insights into temporal variability and challenges, an aspect that has received comparatively little attention in the prior literature.
The remainder of this study is structured as follows: Section 2 provides in-depth details on the study area, datasets, methodology, the 3D-ResNet-BiLSTM model, and the evaluation metrics. Section 3 presents the experimental results, while Section 4 discusses and contextualizes them. Finally, Section 5 presents concluding remarks.

2. Materials and Methods

2.1. Study Area

The study area is located in the U.S. and includes eighteen states: North Dakota, South Dakota, Nebraska, Kansas, Oklahoma, Minnesota, Iowa, Missouri, Arkansas, Louisiana, Wisconsin, Illinois, Michigan, Indiana, Ohio, Kentucky, Tennessee, and Mississippi (see Figure 1). The research was carried out from 2019 to 2021, centering on soybeans, a key oilseed crop cultivated within the study area. Soybeans are commonly sown between May and early June and harvested in late September and October (https://www.ers.usda.gov/topics/crops/soybeans-and-oil-crops/oil-crops-sector-at-a-glance/, accessed on 1 September 2021).

2.2. Dataset

This study employed a variety of data sources to predict soybean yield, including Sentinel-1 SAR (COPERNICUS/S1_GRD), Sentinel-2 Surface Reflectance (S2_SR_HARMONIZED), Daymet weather (Daymet V4), USDA yield, Cropland Data Layer (CDL), and county boundary data.
Sentinel-1 collects data from a dual-polarization C-band Synthetic Aperture Radar (SAR) instrument at 5.405 GHz, with each scene including one or two polarization bands out of four possible options. The available combinations are single-band VV or HH and dual-band VV + VH or HH + HV, with a pixel size of 10 m [25].
Sentinel-2 provides high-resolution, multi-spectral imagery for monitoring vegetation, soil, water cover, and more, with pixel sizes of 10, 20, and 60 m [25].
The Daymet dataset provides accurate and detailed gridded estimates of daily weather parameters across continental North America, Hawaii, and Puerto Rico at a resolution of 1 km × 1 km, supporting precise planning and decision making in various fields [26]. The Cropland Data Layer (CDL), with a spatial resolution of 30 m, was retrieved from the USDA, which employs a Decision Tree approach to categorize agricultural areas using various sensors [27]. Non-soybean pixels were masked using the CDL.
The USDA publishes an annual report outlining crop acreage, yields, areas harvested, and other production information (https://quickstats.nass.usda.gov/, accessed on 15 January 2021). The Sentinel-1, Sentinel-2, and Daymet data were all retrieved via the Google Earth Engine (GEE) cloud-based platform [28]. Training and test data were gathered within the timeframe of 2019 to 2021. Cloud-covered and non-soybean pixels were excluded before computing the features, which were then used as inputs to the DL models for predicting soybean yield. Table 1 displays the statistical characteristics of the yield observations for both the training and test datasets.

2.3. Methodology

This methodology is designed to predict soybean yields at the county level in the U.S. during the in-season period, focusing on August and September and utilizing the 3D-ResNet-BiLSTM model. As depicted in Figure 2, the approach involves two fundamental steps. Initially, relevant features are extracted from the Sentinel-1, Sentinel-2, and Daymet data within the GEE platform, resulting in 23 distinct features spanning 2019, 2020, and 2021. These features serve as the independent variables for the model, while the corresponding USDA soybean yield data serve as the dependent variable, together forming the input data for constructing the 3D-ResNet-BiLSTM model.
The input dataset is divided into training, validation, and test datasets, and the model is trained using data from 2019 and 2020. The trained model is then used to predict soybean yields from the 2021 test feature vectors, and these predictions are evaluated against the USDA yield values for 2021. A detailed exposition of the feature extraction and the 3D-ResNet-BiLSTM model is provided in Section 2.3.1 and Section 2.3.2.

2.3.1. Feature Selection

In this study, crop yield estimation was facilitated using various RS features. Sentinel-2 SR data were employed to derive suitable VIs such as DVI, GNDVI, EVI, LSWI, RVI, SAVI, VARIgreen, WDRVI, and NDVI, drawing from established works (see Table 2). Furthermore, the predictive power was improved by tapping into the distinct spectral bands of Sentinel-2 data, including Blue, Green, Red, Near Infrared (NIR), narrow NIR (nNIR), Red Edge 1/2/3, and Shortwave Infrared (SWIR) 1/2. In addition, the study incorporated Sentinel-1 SAR VV and VH polarizations alongside weather-derived features from Daymet, such as precipitation and vapor pressure. This comprehensive feature set was generated within the GEE cloud-based platform. The feature generation process for each county within GEE included four key steps: (1) creating monthly composites; (2) masking out cloud-covered regions; (3) excluding non-soybean areas using the Cropland Data Layer (CDL); and (4) calculating the monthly feature averages for soybean fields within each county, delineated by county boundaries (a minimal sketch of these steps follows below). The temporal progression of the extracted features for soybean fields during the planting season is depicted in Figure 3.
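To make these steps concrete, the following is a minimal Earth Engine (Python API) sketch of steps (1)–(4) for one month of Sentinel-2 data. The dataset IDs match Section 2.2; the NDVI-only feature, the QA60 cloud mask, the TIGER county boundaries, and the CDL soybean class value (5) are illustrative assumptions, and the Sentinel-1 and Daymet features would follow the same composite-and-reduce pattern:

```python
# Minimal sketch of the county-level feature extraction in GEE (Python API).
import ee

ee.Initialize()

counties = ee.FeatureCollection('TIGER/2018/Counties')  # assumed county-boundary source
soy_mask = (ee.ImageCollection('USDA/NASS/CDL')
            .filterDate('2021-01-01', '2021-12-31')
            .first()
            .select('cropland')
            .eq(5))  # CDL class 5 = soybeans

def mask_clouds(img):
    # Step (2): mask opaque clouds and cirrus using the QA60 bitmask.
    qa = img.select('QA60')
    clear = qa.bitwiseAnd(1 << 10).eq(0).And(qa.bitwiseAnd(1 << 11).eq(0))
    return img.updateMask(clear)

def monthly_county_means(year, month):
    start = ee.Date.fromYMD(year, month, 1)
    # Step (1): monthly median composite of cloud-masked Sentinel-2 SR scenes.
    s2 = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
          .filterDate(start, start.advance(1, 'month'))
          .map(mask_clouds)
          .median())
    ndvi = s2.normalizedDifference(['B8', 'B4']).rename('NDVI')
    # Step (3): keep soybean pixels only.
    feats = s2.addBands(ndvi).updateMask(soy_mask)
    # Step (4): average each feature over the soybean pixels of every county.
    return feats.reduceRegions(collection=counties,
                               reducer=ee.Reducer.mean(),
                               scale=10)

august_2021 = monthly_county_means(2021, 8)
```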

2.3.2. 3D-ResNet-BiLSTM Model Architecture

The proposed 3D-ResNet-BiLSTM model is a hybrid architecture that combines a 3D-ResNet and a BiLSTM, as illustrated in Figure 4. The 3D-ResNet is first employed to extract high-level features from the input data generated from the selected features. The BiLSTM then predicts soybean yield based on these extracted features. By merging these two components, our model effectively captures the intricate relationships between the input RS data and the in situ crop yield, resulting in more accurate predictions.

3D-ResNet Component

Our 3D-ResNet component was designed to handle the spatial and temporal factors within the SAR, optical, and weather data, ensuring highly accurate soybean yield estimation. This design captures crop growth trends and their spatial distribution in fields, as illustrated in Figure 4. The 3D-ResNet consists of three layers, each comprising an Identity block and two Conv3D blocks, tailored to the dynamics of soybean crops. In this cascading design, each block's output serves as the input to the next.
The Identity block assumes a central role within this framework, featuring a sequence of 3D convolutional layers and a skip connection. The skip connections preserve the distinct attributes of the SAR, VI, and weather data and facilitate gradient flow through the multi-modal data, enhancing soybean yield estimation [24]. This aspect is crucial, enabling the model to learn and capture the intricate relationships between the input features and yield outcomes.
The Conv3D block includes a set of 3D CNNs, enhancing the model's capacity to analyze the spatial and temporal information within the SAR, optical VI, and weather features. Equipped with its own skip connection, the Conv3D block captures spatiotemporal dynamics essential for accurate crop yield estimation [24]. This capacity is particularly valuable, as it reveals the interplay between temporal trends and spatial arrangements, providing critical insights into crop development and eventual yield outcomes.
The input $X$ of the Identity block passes through a sequence of operations (3D convolutional layer → linear activation → 3D convolutional layer → linear activation → 3D convolutional layer), extracting the features $F_{IB}$. These features are added to $X$ through the skip connection and passed through a linear activation function, denoted $f_L$, and the result serves as the input for the subsequent Conv3D block (Equations (1) and (2)):
$F_{IB} \leftarrow F_{IB} + X$  (1)
$F_{IB} \leftarrow f_L(F_{IB})$  (2)
The input $F_{IB}$ is then processed by the Conv3D block, whose shortcut branch applies a single 3D convolutional layer to extract the features $F_X$, as given by:
$F_X = W \ast F_{IB} + b$  (3)
where $W$ represents the convolution weight tensor and $b$ is the bias term.
In parallel, within the Conv3D block, $F_{IB}$ undergoes the series of operations 3D convolutional layer → linear activation → 3D convolutional layer → linear activation → 3D convolutional layer to extract the features $F_{cB}$. These features are added to $F_X$ and passed through $f_L$ to construct the input for the next block, as described by:
$F_o = F_{cB} + F_X$  (4)
$F_o \leftarrow f_L(F_o)$  (5)
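To make the block structure concrete, the following is a minimal TensorFlow/Keras sketch of the two blocks described by Equations (1)–(5); the filter counts and kernel sizes are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch of the Identity and Conv3D blocks; 'linear' activation is the identity map f_L.
import tensorflow as tf
from tensorflow.keras import layers

def identity_block(x, filters, kernel=(3, 1, 1)):
    # Main branch: conv -> linear -> conv -> linear -> conv, extracting F_IB.
    shortcut = x  # requires x to already have `filters` channels
    y = layers.Conv3D(filters, kernel, padding='same', activation='linear')(x)
    y = layers.Conv3D(filters, kernel, padding='same', activation='linear')(y)
    y = layers.Conv3D(filters, kernel, padding='same')(y)
    y = layers.Add()([y, shortcut])        # F_IB + X, Eq. (1)
    return layers.Activation('linear')(y)  # f_L, Eq. (2)

def conv3d_block(x, filters, kernel=(3, 1, 1)):
    # Projection shortcut: F_X = W * F_IB + b, Eq. (3).
    shortcut = layers.Conv3D(filters, kernel, padding='same')(x)
    y = layers.Conv3D(filters, kernel, padding='same', activation='linear')(x)
    y = layers.Conv3D(filters, kernel, padding='same', activation='linear')(y)
    y = layers.Conv3D(filters, kernel, padding='same')(y)
    y = layers.Add()([y, shortcut])        # F_cB + F_X, Eq. (4)
    return layers.Activation('linear')(y)  # f_L, Eq. (5)
```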

Bi-LSTM Component

Following the feature extraction via the 3D-ResNet, the data undergoes Batch Normalization with linear activation and then enters the Bi-LSTM layer with ReLU activation. This configuration enables precise soybean yield prediction, benefiting from the reverse-order hidden state set for context capture [22].
In this way, a BiLSTM cell is initially fed with an input sequence $x = (x_1, x_2, \ldots, x_n)$, where $n$ represents the length of the sequence. Furthermore, $\overrightarrow{H}$ denotes the forward hidden sequence, $\overleftarrow{H}$ the backward hidden sequence, and $y = (y_1, y_2, \ldots, y_n)$ the output sequence. The final encoded output vector combines both the forward and backward information flows, i.e., $y_t = f(\overrightarrow{H_t}, \overleftarrow{H_t})$. The mathematical framework of the BiLSTM architecture is presented in Equations (6)–(8) [37]:
$\overrightarrow{H_t} = \sigma\left(w_{\overrightarrow{H}x} x_t + w_{\overrightarrow{H}\overrightarrow{H}} \overrightarrow{H}_{t-1} + b_{\overrightarrow{H}}\right)$  (6)
$\overleftarrow{H_t} = \sigma\left(w_{\overleftarrow{H}x} x_t + w_{\overleftarrow{H}\overleftarrow{H}} \overleftarrow{H}_{t+1} + b_{\overleftarrow{H}}\right)$  (7)
$y_t = w_{y\overrightarrow{H}} \overrightarrow{H_t} + w_{y\overleftarrow{H}} \overleftarrow{H_t} + b_y$  (8)
where $\sigma$ represents the sigmoid activation function, mapping values to the [0, 1] range. Finally, a dense layer with a linear activation function is applied to the BiLSTM output to predict soybean yield.
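Continuing the sketch above, the prediction head and overall model assembly might look as follows; the input shape matches the August tensor (8 × 1 × 1 × 23) described in Section 3.1, while the BiLSTM width, the single block pair, and the default Adam learning rate are assumptions:

```python
def yield_head(x, units=32):
    # Batch normalization, then a BiLSTM with ReLU activation, as described above.
    y = layers.BatchNormalization()(x)
    y = layers.Reshape((-1, y.shape[-1]))(y)  # (time, 1, 1, C) -> (time, C)
    y = layers.Bidirectional(layers.LSTM(units, activation='relu'))(y)
    return layers.Dense(1, activation='linear')(y)  # final yield estimate

inputs = tf.keras.Input(shape=(8, 1, 1, 23))  # time steps x 1 x 1 x features (August)
x = identity_block(inputs, filters=23)
x = conv3d_block(x, filters=23)
model = tf.keras.Model(inputs, yield_head(x))
model.compile(optimizer='adam', loss='mape')  # MAPE loss with Adam, per Section 3.1
```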

2.4. Evaluation Metrics

The performance of the proposed and considered models was evaluated using several metrics, including the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Relative Root Mean Squared Error (RRMSE), and coefficient of determination (R2), calculated as follows [38,39]:
$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{N}\left(y_{pred}^{\,i} - y_{obs}^{\,i}\right)^2}{N}}$  (9)
$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|y_{pred}^{\,i} - y_{obs}^{\,i}\right|$  (10)
$\mathrm{MAPE} = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{y_{pred}^{\,i} - y_{obs}^{\,i}}{y_{obs}^{\,i}}\right|$  (11)
$R^2 = 1 - \frac{\sum_{i=1}^{N}\left(y_{pred}^{\,i} - y_{obs}^{\,i}\right)^2}{\sum_{i=1}^{N}\left(y_{obs}^{\,i} - y_{mean}\right)^2}$  (12)
$\mathrm{RRMSE} = \frac{\mathrm{RMSE}}{y_{mean}} \times 100$  (13)
where $N$ is the number of test samples, $y_{obs}^{\,i}$ and $y_{pred}^{\,i}$ are, respectively, the observed and predicted values of the $i$-th test sample, and $y_{mean}$ represents the average of the observed data.
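For reference, a minimal NumPy sketch of these five metrics, assuming y_obs and y_pred are 1-D arrays of observed and predicted county yields:

```python
import numpy as np

def evaluate(y_obs, y_pred):
    err = y_pred - y_obs
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / y_obs)) * 100              # reported as a percentage
    r2 = 1 - np.sum(err ** 2) / np.sum((y_obs - y_obs.mean()) ** 2)
    rrmse = rmse / y_obs.mean() * 100                      # reported as a percentage
    return {'RMSE': rmse, 'MAE': mae, 'MAPE': mape, 'R2': r2, 'RRMSE': rrmse}
```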

3. Experimental Results

3.1. Experimental Setup

All the experiments were conducted using the RS data extracted and prepared within GEE. The experiments were implemented in Python on Google Colaboratory (Colab), utilizing a TPU and 12 GB of RAM. As previously discussed, the proposed model architecture incorporated 23 features extracted from the Sentinel-1/2 and Daymet data as inputs. Accordingly, our model utilized input tensors with dimensions of 8 × 1 × 1 × 23 and 9 × 1 × 1 × 23 (time steps × height × width × features) for August and September, respectively, within the in-season growth period. To compare the proposed model's performance, we evaluated it against the 1D/2D/3D-ResNet, ResNet, and 2D-CNN-LSTM [5] models, as well as RF and LR. The 3D-ResNet architecture was obtained by removing the BiLSTM layer from the proposed architecture; the 1D/2D-ResNet architectures were implemented by replacing the Conv3D layers of the 3D-ResNet with Conv1D/Conv2D layers; and the plain ResNet architecture was formed by replacing the Conv3D layers with dense layers. The training phase of all models used MAPE as the loss function, coupled with the Adam optimizer set at a uniform learning rate of 1.10. The number of parameters used in the models under consideration is listed in Table 3.
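A hypothetical training call consistent with this setup is sketched below; the sample count follows Table 1 (437 counties in 2019 plus 682 in 2020) and the epoch range follows Figure 5, while the batch size, validation split, and random placeholder arrays are assumptions standing in for the real GEE-derived features:

```python
import numpy as np

# Placeholder arrays with the August tensor shape; real features come from GEE.
X_train = np.random.rand(1119, 8, 1, 1, 23).astype('float32')    # 437 + 682 county samples
y_train = (np.random.rand(1119, 1) * 50 + 20).astype('float32')  # yields in a Bu Ac-1 range

model.fit(X_train, y_train, validation_split=0.2, epochs=1000, batch_size=64)
```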
As demonstrated in Table 3, the ResNet-based models outperformed the 2D-CNN-LSTM model in terms of computational efficiency, making them a more efficient choice for crop yield estimation. The loss curves of the proposed method and all considered models on the training and validation datasets are shown in Figure 5.
As seen in Figure 5, the proposed 3D-ResNet-BiLSTM model exhibited a rapid and substantial reduction in its loss values, markedly diverging from the ResNet model, which displayed a more gradual decline. This difference highlights the impact of integrating Conv3D and BiLSTM networks into our architectural framework. Examining the validation loss curves revealed higher fluctuations in August than in September; moreover, these curves illustrated that including the features extracted in subsequent months reduced the range of fluctuations. The validation loss curves of the 2D-CNN-LSTM did not indicate over- or underfitting, and a marginal improvement in results might have been possible by extending the number of epochs. However, for the 2D-ResNet, overfitting was evident between epochs 800 and 1000, marked by a substantial increase in the gap between the validation and training losses. In a broader context, preserving the best model based on the validation loss may be more prudent for yield prediction, considering the potential scenario where the model fails to converge properly with the inclusion of validation data [40]. Overall, our architecture demonstrated superior performance to the evaluated models in scenarios with short-period training data.

3.2. Comparative Results of the Soybean Yield Prediction

In this subsection, we present comparative results for our model and the other models under consideration. The results of both the proposed and considered models for predicting soybean yield are displayed in Table 4, covering the growth period during August and September 2021.
The results from Table 4 indicated that the 3D-ResNet-BiLSTM model achieved the best performance, demonstrating its capability for predicting soybean yield using multi-sensor RS data. This model accurately forecasted soybean yield, especially in August, before the harvest season. For instance, the RMSE of the proposed model was 5.53 Bu Ac−1 in August and 5.60 Bu Ac−1 in September, marking an improvement of about 3% and 30% compared to the 3D-ResNet (the second-best model) and the LR (the worst model), respectively. Moreover, the 3D-ResNet-BiLSTM model, with an RRMSE of approximately 10.5% and an R2 of 0.79, emerged as the most accurate soybean yield predictor, closely followed by the 3D-ResNet. This improvement could be attributed to the incorporation of temporal insights complementing the spatial information provided via the 3D architecture.
To better understand our proposed model’s effectiveness, we generated error maps for August and September using our model and the models under consideration, as depicted in Figure 6 and Figure 7.
As observed in Figure 6 and Figure 7, the combined operation of the ResNet, Conv3D, and BiLSTM networks simultaneously reduced errors and rendered the error maps brighter. The error maps also revealed that counties with lower yields tended to have higher percentage errors, represented by darker colors on the maps. Several factors can reduce soybean yield, including climate change, fertilization, irrigation, drought, soil characteristics, disease, and pests. Notably, Oklahoma had the highest MAPE (132.51%) due to a lack of training data for that state in this study. The 2D-CNN-LSTM and LR models exhibited poor alignment between the predicted and observed yield values.
Figure 8 depicts scatter plots between the predicted and observed yields for the proposed and considered models. These scatter plots confirmed the superior performance of the 3D-ResNet-BiLSTM model in yield prediction when using a combination of the Sentinel-1, Sentinel-2, and Daymet data as inputs.
The scatter plots clearly showed lower RMSE and RRMSE values and higher R2 values, indicating a stronger and more accurate relationship between the predicted and observed soybean yield values. Furthermore, the 3D-ResNet-BiLSTM architecture was notably more effective at improving the accuracy of soybean yield prediction than the 1D/2D/3D-ResNet, ResNet, 2D-CNN-LSTM, RF, and LR models. This highlights the advantage of combining feature extraction with the 3D-ResNet and yield prediction with the BiLSTM, particularly when dealing with limited training samples.
It is important to note that our analysis primarily focused on lower-level features from Sentinel-1 images, which may have influenced the results. Additionally, the quantity of the data used in the analysis can also affect the model’s performance. Nonetheless, these values confirm the robust performance and validity of the proposed method throughout the soybean-growing season.
The R2 values of the proposed method reached 0.794 and 0.788 in August and September, respectively. In Figure 8, when using 3D-ResNet-BiLSTM, the fit line (depicted in blue, representing the regression line between the predicted and observed yield values) is closely aligned with the diagonal line (shown in black, signifying perfect agreement between the predicted and actual yield values), and predictions were clustered reasonably around the diagonal line. This proximity to the diagonal line indicates a stronger correlation between the predicted and actual yield values when using the proposed method. Additionally, the 1D/2D/3D-ResNet and ResNet models demonstrated good agreement between the predicted and observed yield values.
Figure 9 compares the spatial distribution of the USDA soybean yield with the yield predicted by the proposed method. The results in Figure 9 demonstrate substantial agreement between the observed and predicted soybean yields, reinforcing the reliability and accuracy of our method's predictions.
Based on the USDA yield map presented in Figure 9, it was evident that counties in states such as North Dakota, South Dakota, Kansas, Missouri, Minnesota, and Oklahoma experienced comparatively lower yields in 2021. In contrast, counties like Iowa, Nebraska, Illinois, Indiana, Ohio, Kentucky, Tennessee, Arkansas, Mississippi, Louisiana, Michigan, and Wisconsin displayed higher yields during the same period.
Figure 10 depicts the average accuracy of our proposed method compared to the other models for August and September. The average R2 values for the 3D-ResNet-BiLSTM, 3D-ResNet, 2D-ResNet, 1D-ResNet, ResNet, 2D-CNN-LSTM, RF, and LR models were 0.791, 0.779, 0.758, 0.745, 0.716, 0.60, 0.708, and 0.499, respectively.
As shown in Figure 10, several architectural modifications yielded notable performance improvements. First, the inclusion of Conv1D layers improved the performance of the ResNet model by 3.49%. Second, incorporating Conv2D layers contributed a 4.28% improvement over ResNet. Third, adopting Conv3D layers proved most effective, yielding a 6.41% improvement over ResNet. Finally, adding the BiLSTM layer enhanced the performance of the 3D-ResNet model by a further 1.12%.
Moreover, our proposed model utilizing the Sentinel-1/2 and Daymet data demonstrated a 19.025% increase in accuracy for soybean yield prediction compared to the 2D-CNN-LSTM model presented by Sun et al. [5]. The error maps indicate that certain counties exhibit their lowest errors in August, while others do so in September, reflecting differences in sowing and harvesting times. Our proposed method achieved a high accuracy, with an R2 of 0.791 across August and September. Table 5 presents the evaluation metrics for each state using the 3D-ResNet-BiLSTM model.
After comparing our proposed method with other models, it became evident that the 1D/2D/3D-ResNet models consistently outperformed the ResNet, 2D-CNN-LSTM, RF, and LR models. Furthermore, we observed that the ResNet model’s yield prediction accuracy improved notably when employing Conv3D layers instead of Conv1D/2D and dense layers. In stark contrast, the Linear Regression model exhibited the poorest performance among all the evaluated models.

4. Discussion

This study introduced the 3D-ResNet-BiLSTM model as a new predictor for forecasting county-level soybean yield using a combination of Sentinel-1 and Sentinel-2 imagery and Daymet climate data. Unlike widely used approaches [5,9,16,17,19,21] that rely on MODIS products, which are limited by their coarse spatial resolution, our study demonstrates the value of integrating medium-resolution Sentinel-1/2 data with climate data to develop more accurate yield prediction models. Additionally, we improved the performance of the 3D-ResNet-BiLSTM model by reducing the input tensor size by a factor of 57.81 compared to MODIS data [5], facilitating early soybean yield predictions and boosting the efficiency of the model training process.
Our study also examined the sensitivity of network architecture complexity in predicting soybean yield, particularly in scenarios with short-period training data. While previous research by Sun et al. [5] predominantly used 2D-CNN-LSTM architectures for soybean yield prediction, such architectures often encounter an ill-posed problem when confronted with insufficient or short-period training data because of their larger number of unknown parameters. Our results demonstrate the capability of the proposed 3D-ResNet-BiLSTM architecture to handle situations with limited or short-period training data effectively.
Furthermore, our research highlights the substantial advantages of combining feature extraction with the ResNet and yield prediction with the BiLSTM, leveraging the satisfactory spatial resolution of Sentinel-1 and Sentinel-2 imagery to achieve accurate county-level soybean yield predictions. Additionally, our implementation of three CNN variants (Conv1D, Conv2D, and Conv3D) revealed that Conv3D yielded a markedly lower MAPE than Conv2D and Conv1D (see Table 4). This superior performance can be attributed to Conv3D's capacity to extract spatial and temporal information from time-series data [23].
While our study demonstrates the effectiveness of the 3D-ResNet-BiLSTM model for soybean yield prediction, further research is needed to fully validate its performance across different geographical regions and under diverse environmental conditions. Additionally, exploring the integration of additional data sources, such as soil data or agricultural management practices, could further enhance the accuracy and generalizability of the model.

5. Conclusions

Soybean, a crucial commodity in U.S. agriculture, demands accurate regional yield forecasting for informed planning decisions. This study harnessed diverse data sources, including Sentinel-1, Sentinel-2, and Daymet data, extracting a comprehensive set of 23 features encompassing spectral bands, vegetation indices, SAR polarizations, and critical weather parameters. A novel 3D-ResNet architecture was designed to process these diverse inputs effectively; the high-level features it extracts are subsequently fed into a BiLSTM layer, enabling precise prediction of soybean yield. To assess the efficacy of our model, we trained it on data from 2019 to 2020 and evaluated its performance using data from 2021. Evaluating the proposed 3D-ResNet-BiLSTM model against the other models revealed its remarkable performance, achieving an R2 of 0.79 and an RMSE of 5.56 Bu Ac−1 and surpassing all other considered models by a significant margin. This improvement can be mainly attributed to the model's capacity to effectively capture spatial and temporal patterns in the data, a crucial aspect for accurate yield prediction in areas with complex terrain and variable weather patterns. These findings underscore the potential of fusing advanced RS data, feature-rich datasets, and state-of-the-art deep learning models to pave the way for data-driven agricultural decision-making. This approach not only enhances yield forecasting accuracy but also holds promise for optimizing resource allocation, improving crop management practices, and ultimately strengthening food security. Moving forward, integrating additional data sources, such as soil data and agricultural management practices, could further enhance the accuracy and generalizability of these models, leading to even more informed and sustainable farming practices.

Author Contributions

Conceptualization, M.F., R.S.-H. and A.M.; Methodology, M.F., R.S.-H. and A.M.; Project administration, R.S.-H.; Resources, M.F., R.S.-H. and A.M.; Validation, M.F., R.S.-H. and A.M.; Supervision, R.S.-H.; Writing—original draft, M.F., R.S.-H. and A.M.; Writing—review and editing, M.F., R.S.-H. and A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are available on reasonable request from the corresponding author.

Acknowledgments

We would like to express our sincere gratitude to Google for providing access to Earth Engine and Colab. These pivotal tools greatly facilitated the execution and analysis of this research. Additionally, we extend our heartfelt thanks to the United States Department of Agriculture (USDA) for generously providing the essential yield values that contributed significantly to the success of this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Mohite, J.; Sawant, S.; Pandit, A.; Agrawal, R.; Pappula, S. Soybean Crop Yield Prediction by Integration of Remote Sensing and Weather Observations. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 197–202.
2. Fathi, M.; Shah-Hosseini, R.; Moghimi, A. Comparison of Some Deep Neural Networks for Corn and Soybean Mapping in Iowa State using Landsat imagery. Earth Obs. Geomat. Eng. 2022, 6, 57–66.
3. Bharadiya, J.P.; Tzenios, N.T.; Reddy, M. Forecasting crop yield using remote sensing data, rural factors, and machine learning approaches. J. Eng. Res. Rep. 2023, 24, 29–44.
4. Muruganantham, P.; Wibowo, S.; Grandhi, S.; Samrat, N.H.; Islam, N. A systematic literature review on crop yield prediction with deep learning and remote sensing. Remote Sens. 2022, 14, 1990.
5. Sun, J.; Di, L.; Sun, Z.; Shen, Y.; Lai, Z. County-level soybean yield prediction using deep CNN-LSTM model. Sensors 2019, 19, 4363.
6. Rashid, M.; Bari, B.S.; Yusup, Y.; Kamaruddin, M.A.; Khan, N. A comprehensive review of crop yield prediction using machine learning approaches with special emphasis on palm oil yield prediction. IEEE Access 2021, 9, 63406–63439.
7. Qiao, M.; He, X.; Cheng, X.; Li, P.; Luo, H.; Zhang, L.; Tian, Z. Crop yield prediction from multi-spectral, multi-temporal remotely sensed imagery using recurrent 3D convolutional neural networks. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102436.
8. Zhou, S.; Xu, L.; Chen, N. Rice Yield Prediction in Hubei Province Based on Deep Learning and the Effect of Spatial Heterogeneity. Remote Sens. 2023, 15, 1361.
9. Sun, J.; Lai, Z.; Di, L.; Sun, Z.; Tao, J.; Shen, Y. Multilevel deep learning network for county-level corn yield estimation in the U.S. Corn Belt. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5048–5060.
10. Huang, H.; Huang, J.; Feng, Q.; Liu, J.; Li, X.; Wang, X.; Niu, Q. Developing a dual-stream deep-learning neural network model for improving county-level winter wheat yield estimates in China. Remote Sens. 2022, 14, 5280.
11. Van Klompenburg, T.; Kassahun, A.; Catal, C. Crop yield prediction using machine learning: A systematic literature review. Comput. Electron. Agric. 2020, 177, 105709.
12. Pang, A.; Chang, M.W.; Chen, Y. Evaluation of random forests (RF) for regional and local-scale wheat yield prediction in southeast Australia. Sensors 2022, 22, 717.
13. Li, Z.; Chen, Z.; Cheng, Q.; Duan, F.; Sui, R.; Huang, X.; Xu, H. UAV-based hyperspectral and ensemble machine learning for predicting yield in winter wheat. Agronomy 2022, 12, 202.
14. Guo, Y.; Fu, Y.; Hao, F.; Zhang, X.; Wu, W.; Jin, X.; Bryant, C.R.; Senthilnath, J. Integrated phenology and climate in rice yields prediction using machine learning methods. Ecol. Indic. 2021, 120, 106935.
15. Khankeshizadeh, E.; Mohammadzadeh, A.; Moghimi, A.; Mohsenifar, A. FCD-R2U-net: Forest change detection in bi-temporal satellite images using the recurrent residual-based U-net. Earth Sci. Inform. 2022, 15, 2335–2347.
16. You, J.; Li, X.; Low, M.; Lobell, D.; Ermon, S. Deep Gaussian process for crop yield prediction based on remote sensing data. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017.
17. Terliksiz, A.S.; Altýlar, D.T. Use of deep neural networks for crop yield prediction: A case study of soybean yield in Lauderdale County, Alabama, USA. In Proceedings of the 2019 8th International Conference on Agro-Geoinformatics (Agro-Geoinformatics), Istanbul, Turkey, 16–19 July 2019; pp. 1–4.
18. Khaki, S.; Wang, L.; Archontoulis, S.V. A CNN-RNN framework for crop yield prediction. Front. Plant Sci. 2020, 10, 1750.
19. Khaki, S.; Pham, H.; Wang, L. Simultaneous corn and soybean yield prediction from remote sensing data using deep transfer learning. Sci. Rep. 2021, 11, 11132.
20. Schwalbert, R.A.; Amado, T.; Corassa, G.; Pott, L.P.; Prasad, P.V.; Ciampitti, I.A. Satellite-based soybean yield forecast: Integrating machine learning and weather data for improving crop yield prediction in southern Brazil. Agric. For. Meteorol. 2020, 284, 107886.
21. Zhu, Y.; Wu, S.; Qin, M.; Fu, Z.; Gao, Y.; Wang, Y.; Du, Z. A deep learning crop model for adaptive yield estimation in large areas. Int. J. Appl. Earth Obs. Geoinf. 2022, 110, 102828.
22. Siami-Namini, S.; Tavakoli, N.; Namin, A.S. The performance of LSTM and BiLSTM in forecasting time series. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 3285–3292.
23. Rao, C.; Liu, Y. Three-dimensional convolutional neural network (3D-CNN) for heterogeneous material homogenization. Comput. Mater. Sci. 2020, 184, 109850.
24. Chen, D.; Hu, F.; Nian, G.; Yang, T. Deep residual learning for nonlinear regression. Entropy 2020, 22, 193.
25. Malenovský, Z.; Rott, H.; Cihlar, J.; Schaepman, M.E.; García-Santos, G.; Fernandes, R.; Berger, M. Sentinels for science: Potential of Sentinel-1, -2, and -3 missions for scientific observations of ocean, cryosphere, and land. Remote Sens. Environ. 2012, 120, 91–101.
26. Thornton, P.E.; Thornton, M.M.; Mayer, B.W.; Wilhelmi, N.; Wei, Y.; Devarakonda, R.; Cook, R.B. Daymet: Daily Surface Weather Data on a 1-km Grid for North America, Version 2; Oak Ridge National Laboratory (ORNL): Oak Ridge, TN, USA, 2014.
27. Boryan, C.; Yang, Z.; Mueller, R.; Craig, M. Monitoring US agriculture: The US Department of Agriculture, National Agricultural Statistics Service, Cropland Data Layer program. Geocarto Int. 2011, 26, 341–358.
28. Amani, M.; Ghorbanian, A.; Ahmadi, S.A.; Kakooei, M.; Moghimi, A.; Mirmazloumi, S.M.; Moghaddam, S.H.A.; Mahdavi, S.; Ghahremanloo, M.; Parsian, S. Google Earth Engine cloud computing platform for remote sensing big data applications: A comprehensive review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5326–5350.
29. Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K.-i. Crop classification from Sentinel-2-derived vegetation indices using ensemble learning. J. Appl. Remote Sens. 2018, 12, 026019.
30. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298.
31. Wang, C.; Wu, Y.; Hu, Q.; Hu, J.; Chen, Y.; Lin, S.; Xie, Q. Comparison of Vegetation Phenology Derived from Solar-Induced Chlorophyll Fluorescence and Enhanced Vegetation Index, and Their Relationship with Climatic Limitations. Remote Sens. 2022, 14, 3018.
32. Richardson, A.J.; Everitt, J.H. Using spectral vegetation indices to estimate rangeland productivity. Geocarto Int. 1992, 7, 63–69.
33. Christian, J.I.; Basara, J.B.; Lowman, L.E.; Xiao, X.; Mesheske, D.; Zhou, Y. Flash drought identification from satellite-based land surface water index. Remote Sens. Appl. Soc. Environ. 2022, 26, 100770.
34. Broge, N.H.; Leblanc, E. Comparing prediction power and stability of broadband and hyperspectral vegetation indices for estimation of green leaf area index and canopy chlorophyll density. Remote Sens. Environ. 2001, 76, 156–172.
35. Eng, L.S.; Ismail, R.; Hashim, W.; Baharum, A. The use of VARI, GLI, and VIgreen formulas in detecting vegetation in aerial images. Int. J. Technol. 2019, 10, 1385–1394.
36. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309.
37. Bohara, B.; Fernandez, R.I.; Gollapudi, V.; Li, X. Short-Term Aggregated Residential Load Forecasting using BiLSTM and CNN-BiLSTM. In Proceedings of the 2022 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), Sakheer, Bahrain, 20–21 November 2022; pp. 37–43.
38. Chicco, D.; Warrens, M.J.; Jurman, G. The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE, and RMSE in regression analysis evaluation. PeerJ Comput. Sci. 2021, 7, e623.
39. Moghimi, A.; Celik, T.; Mohammadzadeh, A. Tensor-based keypoint detection and switching regression model for relative radiometric normalization of bitemporal multispectral images. Int. J. Remote Sens. 2022, 43, 3927–3956.
40. Zhang, H.; Zhang, L.; Jiang, Y. Overfitting and underfitting analysis for deep learning based end-to-end communication systems. In Proceedings of the 2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), Xi'an, China, 23–25 October 2019; pp. 1–6.
Figure 1. Study area: U.S. states outlined in red indicate the specific focus for county-level soybean yield estimation. The soybean crops displayed are from the 2021 USDA NASS Cropland Data Layer.
Figure 2. Workflow of the proposed method.
Figure 3. The time series curve of the extracted features for soybean fields during planting.
Figure 4. The architecture of the proposed 3D-ResNet-BiLSTM model.
Figure 5. Loss curves using proposed and considered models on training and validation datasets.
Figure 6. Error maps generated using both the proposed and considered models in August.
Figure 7. Error maps generated using both the proposed and considered models in September.
Figure 8. Scatter plots of growing in-season predicted vs. USDA yield using the proposed method and compared methods in 2021.
Figure 9. Map of USDA soybean yield and predicted soybean yield in 2021.
Figure 10. The average accuracy of our proposed method and considered models between August and September.
Table 1. Sample plot yield statistics per year in the study area.

Type    Year    Number of Samples    Min (Bu Ac−1)    Max (Bu Ac−1)    Mean (Bu Ac−1)    Std. (Bu Ac−1)
train   2019    437                  21.80            65.50            49.64             8.15
train   2020    682                  24.70            72.30            52.37             8.58
test    2021    601                  13.80            77.30            53.25             12.20
Table 2. The extracted indicators from Sentinel-1 and Sentinel-2.

Name                                                          Formula                                                      Ref.
Normalized Difference Vegetation Index (NDVI)                 (ρNIR − ρRed)/(ρNIR + ρRed)                                  [29]
Wide Dynamic Range Vegetation Index (WDRVI)                   (0.1 × ρNIR − ρRed)/(0.1 × ρNIR + ρRed)                      [30]
Enhanced Vegetation Index (EVI)                               2.5 × (ρNIR − ρRed)/(ρNIR + 6 × ρRed − 7.5 × ρBlue + 1)      [31]
Difference Vegetation Index (DVI)                             ρNIR − ρRed                                                  [32]
Land Surface Water Index (LSWI)                               (ρNIR − ρSWIR)/(ρNIR + ρSWIR)                                [33]
Ratio Vegetation Index (RVI)                                  ρNIR/ρRed                                                    [34]
Visible Atmospherically Resistant Index Green (VARIgreen)     (ρGreen − ρRed)/(ρGreen + ρRed − ρBlue)                      [35]
Soil Adjusted Vegetation Index (SAVI)                         1.5 × (ρNIR − ρRed)/(ρNIR + ρRed + 0.5)                      [36]
Green Normalized Difference Vegetation Index (GNDVI)          (ρNIR − ρGreen)/(ρNIR + ρGreen)                              [30]
Table 3. The number of parameters and run time for all of the considered models in soybean yield prediction.

Model               Parameters (Aug.)    Time (Aug.)     Parameters (Sept.)    Time (Sept.)
3D-ResNet-BiLSTM    12,929               07 min 25 s     12,929                07 min 59 s
3D-ResNet           2433                 06 min 39 s     2441                  06 min 56 s
2D-ResNet           2433                 05 min 05 s     2441                  05 min 20 s
1D-ResNet           2433                 05 min 05 s     2441                  05 min 09 s
ResNet              4505                 03 min 49 s     4809                  03 min 48 s
2D-CNN-LSTM         372,353              15 min 21 s     375,745               18 min 53 s
Table 4. Performance of proposed and considered models for soybean yield prediction during the growing in-season period (i.e., August and September).

Aug.
Model               RMSE (Bu Ac−1)    R2      MAE (Bu Ac−1)    MAPE (%)    RRMSE (%)
3D-ResNet-BiLSTM    5.53              0.79    4.28             8.80        10.38
3D-ResNet           5.71              0.78    4.50             9.41        10.72
2D-ResNet           6.03              0.75    4.85             10.13       11.32
1D-ResNet           6.12              0.74    4.96             10.45       11.49
ResNet              6.34              0.73    5.23             10.99       11.90
2D-CNN-LSTM         7.61              0.61    6.05             12.64       14.29
RF                  6.56              0.71    5.44             11.22       12.31
LR                  7.55              0.61    5.73             10.77       14.10

Sept.
Model               RMSE (Bu Ac−1)    R2      MAE (Bu Ac−1)    MAPE (%)    RRMSE (%)
3D-ResNet-BiLSTM    5.60              0.79    4.42             9.21        10.61
3D-ResNet           5.72              0.78    4.48             9.43        10.74
2D-ResNet           5.95              0.76    4.65             9.72        11.17
1D-ResNet            6.05             0.75    4.83             10.19       11.36
ResNet              6.65              0.70    5.50             11.74       12.48
2D-CNN-LSTM         7.79              0.59    6.40             13.57       14.62
RF                  6.59              0.71    5.44             11.23       12.37
LR                  9.58              0.38    7.32             13.06       17.99
Table 5. The evaluation of soybean yield prediction in each U.S. state based on the proposed 3D-ResNet-BiLSTM model.

U.S. State       RMSE (Bu Ac−1)    MAE (Bu Ac−1)    MAPE (%)    RRMSE (%)
Arkansas         6.74              5.60             11.08       13.00
Illinois         5.17              4.17             6.57        8.17
Indiana          4.53              3.49             5.74        7.55
Iowa             5.97              4.81             7.62        9.59
Kansas           5.54              4.75             13.25       13.75
Kentucky         4.50              3.75             6.62        7.92
Louisiana        6.94              6.05             10.75       12.62
Michigan         3.62              2.86             5.75        7.07
Minnesota        5.57              4.38             11.36       11.30
Mississippi      5.01              3.83             7.54        9.06
Missouri         5.12              4.08             9.07        10.61
Nebraska         5.72              4.62             7.43        9.32
North Dakota     6.54              5.40             25.09       25.01
Ohio             4.08              3.46             5.94        7.13
Oklahoma         18.29             18.29            132.51      132.51
South Dakota     4.21              3.60             9.98        10.64
Tennessee        4.41              3.36             6.48        8.65
Wisconsin        8.74              6.22             11.16       15.53
