Article

A Novel Transformer-CNN Approach for Predicting Soil Properties from LUCAS Vis-NIR Spectral Data

College of Information and Technology, Jilin Agricultural University, Changchun 130118, China
* Author to whom correspondence should be addressed.
Agronomy 2024, 14(9), 1998; https://doi.org/10.3390/agronomy14091998
Submission received: 29 July 2024 / Revised: 24 August 2024 / Accepted: 29 August 2024 / Published: 2 September 2024
(This article belongs to the Special Issue The Use of NIR Spectroscopy in Smart Agriculture)

Abstract

Soil, a non-renewable resource, requires continuous monitoring to prevent degradation and support sustainable agriculture. Visible-near-infrared (Vis-NIR) spectroscopy is a rapid and cost-effective method for predicting soil properties. While traditional machine learning methods are commonly used for modeling Vis-NIR spectral data, large datasets may benefit more from advanced deep learning techniques. In this study, based on the large soil spectral library LUCAS, we aimed to enhance regression model performance in soil property estimation by combining Transformer and convolutional neural network (CNN) techniques to predict 11 soil properties (clay, silt, sand, pH in CaCl2, pH in H2O, CEC, OC, CaCO3, N, P, and K). The Transformer-CNN model accurately predicted most soil properties, outperforming other methods (partial least squares regression (PLSR), random forest regression (RFR), support vector machine regression (SVR), Long Short-Term Memory (LSTM), and ResNet18) with a 10–24 percentage point improvement in the coefficient of determination (R2). The Transformer-CNN model excelled in predicting pH in CaCl2, pH in H2O, OC, CaCO3, and N (R2 = 0.94–0.96, RPD > 3) and performed well for clay, sand, and CEC (R2 = 0.77–0.85, 2 < RPD < 3), while silt, P, and K were predicted with moderate accuracy (R2 = 0.41–0.72, RPD < 2). This study demonstrates the potential of Transformer-CNN in enhancing soil property prediction, although future work should aim to optimize computational efficiency and explore a wider range of applications to ensure its utility in different agricultural settings.

1. Introduction

The European Union (EU) aims to ensure that 75% of soils are healthy by 2030 to secure healthy food, people, nature, and climate. As of 2020, approximately 60–70% of EU soils remain unhealthy [1]. Healthy soils are the foundation of agricultural production and a crucial component of the ecosystem. Effective soil management and conservation practices, such as crop rotation, cover cropping, and reduced fertilizer usage, sustain soil fertility, enhance biodiversity, and facilitate the colonization of beneficial microorganisms [2]. Soil with appropriate elemental content provides essential nutrients for plants, promoting healthy growth. It enhances soil moisture retention and increases water-holding capacity [3]. Soil testing and fertilizer field trials help ascertain crop fertilizer demand patterns and soil nutrient-supplying capacity, thereby enabling the scientific formulation of effective fertilizer application programs [4]. Therefore, predicting soil elemental content is crucial for guiding agricultural production, promoting plant growth, and reducing costs. Spectral remote sensing acquires soil spectral information without direct contact with the ground, making it an indispensable tool for soil investigation and analysis [5].
The key stages of spectral soil property prediction are data preprocessing and model building. Data preprocessing primarily aims to enhance model accuracy and robustness. Common mathematical transformations, Savitzky–Golay (SG) smoothing, and differential preprocessing methods are effective in improving prediction models [6,7]. SG smoothing effectively suppresses spectral noise and reduces random noise impact [8]. Differential processing mitigates instrumental background and drift influence on the signal, amplifying the spectral response [9]. Recently, more researchers have employed machine learning and deep learning techniques to explore the relationship between hyperspectral data and soil elemental content. PLSR [10], RFR [11], and SVR remain robust prediction models in this field. With advancing computational power, deep learning is gradually being applied across various domains. CNNs [12] are widely used in computer vision and geographic information systems (GISs) due to their exceptional performance and versatility [13,14,15]. Integrating mathematical concepts into differential convolutional neural networks facilitates faster error minimization and convergence [16]. Combining CNN component construction with hyperparameter tuning can significantly advance CNN application in soil spectroscopy modeling [17]. The Transformer model, proposed by Google researchers in 2017, is a deep learning method based on an attention mechanism without convolutional or recurrent units. The Transformer model is capable of time series prediction [18], air quality prediction [19], risk prediction [20], and more. Its architecture can be scaled to the training dataset size and model, enabling efficient parallel training and long-range dependency capture [19]. These advantages motivated us to consider using such models for spectral soil content prediction.
The LUCAS 2009 Topsoil Survey dataset [21] is one of the most comprehensive and consistent soil datasets on a continental scale. Numerous studies have utilized Vis-NIR spectra from the LUCAS dataset to predict various soil properties. Tsimpouris et al. [22] proposed using stacked autoencoders to predict eight key soil properties simultaneously, improving prediction accuracy. Zhong et al. [23] evaluated data preprocessing and multitasking Deep Convolutional Neural Network (DCNN) models, finding they outperformed shallow CNN and other traditional machine learning methods. Singh et al. [24] used principal component analysis and locally preserved projections to reduce hyperspectral data dimensionality and proposed a framework combining hybrid features and Recurrent Neural Network (RNN) variants (LSTM and Gated Recurrent Unit (GRU)), demonstrating enhanced predictive capabilities. Gruszczyński et al. [25] compared the effectiveness of different machine-learning models for large spectral libraries. Tavakoli et al. [26] investigated the role of stacked models in improving the prediction accuracy of traditional individual models. Wan et al. [27] proposed a near-infrared spectral masking autoencoder that learns highly robust and generalized spectral features from a large public near-infrared spectral dataset. Wangeci et al. [28] compared the performance of laser-induced breakdown spectroscopy (LIBS) and visible near-infrared spectroscopy (Vis-NIRS) in predicting soil organic carbon (SOC), texture, extractable phosphorus, clay, and organic carbon ratio. Existing studies have demonstrated the ability to predict a broad spectrum of soil properties with high accuracy (R2 > 0.8). Although these studies have been extensively discussed and have yielded reasonably accurate predictions, many have excluded samples from certain portions of the LUCAS dataset and selectively focused on specific soil properties. While this approach may yield favorable results, it does not fully capture the comprehensive nature of the dataset. Building upon prior research, this study aims to comprehensively explore the complete LUCAS dataset and leverage deep learning techniques to enhance prediction accuracy, potentially opening new avenues for soil property prediction.
Many studies have demonstrated that integrating different types of neural networks significantly improves model performance. Wang et al. [29] used a deep learning LSTM model combined with an attention mechanism and peak NDVI images to generate more accurate and timely predictions. Liu et al. [30] utilized a tensor connection module to combine temporal convolutional networks (TCN) and LSTM neural networks for wind power prediction, achieving excellent results. The powerful sequence-processing capabilities of Transformers, combined with the superior feature extraction capabilities of CNNs, have improved performance in both predictive modeling and image processing tasks. This fusion approach leverages the strengths of different models to provide richer feature representations, significantly advancing various application scenarios [31,32,33]. These successes support our use of a similar parallel approach in this study. We introduce an innovative approach combining Transformer and CNN models to enhance prediction accuracy and model stability: Transformer models excel at capturing long-range dependencies, while CNNs are effective at extracting localized features. In contrast to several recent studies [34,35,36,37] that primarily focus on improving accuracy for specific soil properties or under controlled conditions, this study seeks to broaden the model’s applicability across a wider range of soil properties and environmental contexts. By doing so, it not only advances the current state of the art in soil property prediction but also addresses practical challenges that hinder the deployment of deep learning models in soil science.
By combining these techniques, we can leverage their strengths for more comprehensive and accurate soil property predictions. Traditional machine learning methods frequently struggle to capture the full complexity and variability of soil spectral data, particularly when confronted with noise and high-dimensional features. These limitations impede both the accuracy and scalability of soil property predictions. Our goal is to overcome traditional model limitations and provide a more robust and efficient tool for soil property prediction through this fusion approach. We will objectively evaluate the strengths and weaknesses of this approach by comparing its performance against traditional machine learning models and existing literature methods. We will explore its applicability and superiority in predicting different soil properties. The evaluation will be multi-dimensional, including prediction accuracy, computational efficiency, applicability, and limitations.

2. Materials and Methods

2.1. LUCAS Database of Topsoil

The LUCAS 2009 Topsoil Survey dataset is one of the most comprehensive and harmonized soil datasets on a continental scale [38]. The dataset comprises around 20,000 samples from 25 member countries. Sampling sites were selected based on a standardized procedure, collecting samples at designated locations through a composite sampling process. At each sampling point, five soil samples were collected: one at the center, coinciding with the LUCAS point, and the other four in a cross shape, 2 m from the center. Approximately 0.5 kg of topsoil samples (0–20 cm) were collected at each point, representing the area of the LUCAS point. The soil sampling points in this database are primarily biased towards agricultural land (especially cropland) and to a lesser extent grassland and woodland. The LUCAS soil points reflect the land use and topography of the countries to varying degrees, depending on land use and topography heterogeneity. The LUCAS measurements exclude areas above 1000 m above sea level. All samples were registered and visually inspected; mineral soils were air-dried and repackaged. Samples were sent to laboratories for analysis, including measurements of coarse fragments, particle size distribution (% clay, silt, and sand content), pH (in CaCl2 and H2O), organic carbon (g/kg), carbonate content (g/kg), phosphorus content (mg/kg), total nitrogen content (g/kg), extractable potassium content (mg/kg), cation exchange capacity (cmol(+)/kg), and multispectral properties.

2.1.1. Spectral Measurements and Pre-Processing

The visible-near infrared (Vis-NIR) absorbance spectra of the samples were measured using a FOSS XDS Rapid Content Analyzer (NIRSystems, INC., Hilleroed, Denmark) within the wavelength range of 400–2400.5 nm, with a spectral resolution of 0.5 nm. Each sample was scanned twice, and the average values were used for subsequent analysis. Although spectral pre-processing was not the primary focus of this experiment, a standard approach was adopted based on similar studies. Referring to the study by Tavakoli et al. [26], the Savitzky–Golay (SG) smoothing algorithm [39] was used to filter the raw data with a first-order polynomial fit and an 11-point window size, followed by first-order differentiation of the smoothed data. Data in the wavelength range of 400–499.5 nm were excluded due to noise, and the data were downsampled by retaining one value every 20 nm, resulting in 200 data points after pre-processing (Figure 1).
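The pre-processing chain described above can be written compactly with SciPy. The sketch below is a minimal illustration under the stated settings (11-point window, first-order polynomial, first derivative, removal of the 400–499.5 nm region, downsampling to roughly 200 points); the array shapes and the exact downsampling stride are assumptions, not the authors' code.

```python
# Minimal pre-processing sketch (assumptions: spectra has shape
# (n_samples, n_wavelengths); wavelengths is the matching 1-D wavelength vector).
import numpy as np
from scipy.signal import savgol_filter

def preprocess(spectra, wavelengths):
    # Savitzky-Golay smoothing: 11-point window, first-order polynomial fit
    smoothed = savgol_filter(spectra, window_length=11, polyorder=1, axis=1)
    # First-order differentiation of the smoothed spectra along the wavelength axis
    deriv = np.gradient(smoothed, wavelengths, axis=1)
    # Drop the noisy 400-499.5 nm region
    keep = wavelengths >= 500.0
    deriv, wl = deriv[:, keep], wavelengths[keep]
    # Downsample to roughly 200 points (the exact stride is an assumption)
    stride = max(1, deriv.shape[1] // 200)
    return deriv[:, ::stride], wl[::stride]
```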

2.1.2. Splitting of Soil Samples

The LUCAS database for this experiment contains 19,036 soil samples, released after outliers were detected and removed; no additional outlier analysis was performed in this paper. We used all available soil samples in the dataset, including mineral and organic soil samples, without considering additional information such as geographic origin or soil class. Missing values in the samples were filled using the mean, a common preprocessing method [40]. First, following the data segmentation approach of other studies [22], we used the Conditioned Latin Hypercube Sampling algorithm [41] to split the data into a training set (66.7%, 12,690 samples) and an independent test set (33.3%, 6346 samples) (Table 1). The training set was then divided into five folds for five-fold cross-validation using the fuzzy C-means algorithm [42]. Each fold was used in turn as a validation set, with the remaining four folds combined into a training set, repeating this process five times. The training set was used to learn the model parameters, while the validation set was used to prevent overfitting and optimize hyperparameter combinations. In the final testing phase, the test set obtained from the data partition was used for model evaluation. This test set was independent of the cross-validation process and was not used for model training or validation.
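For illustration only, a simplified version of this splitting workflow is sketched below; it substitutes a plain random split for conditioned Latin hypercube sampling and standard K-fold for the fuzzy C-means folding, so it reproduces the proportions but not the exact sampling schemes used in the paper.

```python
# Simplified splitting sketch: mean imputation, a 2/3-1/3 hold-out split, and five
# cross-validation folds (random split and KFold stand in for cLHS and fuzzy C-means).
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.model_selection import KFold, train_test_split

def split_lucas(X, y, seed=42):
    # Fill missing property values with the column mean
    y = SimpleImputer(strategy="mean").fit_transform(y)
    # Hold out roughly one third of the samples as an independent test set
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1 / 3, random_state=seed)
    # Five folds for cross-validation on the training set
    folds = list(KFold(n_splits=5, shuffle=True, random_state=seed).split(X_tr))
    return X_tr, X_te, y_tr, y_te, folds
```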

2.2. Modeling Methodology

We propose a model that combines Transformer and CNN in parallel for the processing and analysis of hyperspectral data. The Transformer captures global dependencies within sequences, while the CNN extracts local features. Initially, both components process the hyperspectral data in parallel, and their outputs are then fused to enhance prediction accuracy and robustness. This architecture effectively overcomes the limitations of traditional models in handling complex, high-dimensional spectral data, improving both performance and generalization through comprehensive optimization.

2.2.1. Transformer

The Transformer and its variants have become the preferred choice for natural language processing tasks. The original architecture marked a significant departure from traditional RNN and CNN designs by introducing the self-attention mechanism, which was first applied successfully to machine translation. Beyond machine translation, the Transformer has shown substantial advantages over traditional RNN and CNN approaches in various natural language processing tasks. Since its inception, numerous models have been developed to address a wide range of tasks across different disciplines, simplifying the interpretation and processing of sequences. The multi-head attention mechanism enables the Transformer model to learn contextual information sequentially without long-range dependency issues [43]. The Transformer model achieves efficient sequence learning through a highly parallelizable self-attention mechanism, easily establishing relationships between different features at various locations [44]. The self-attention mechanism is enhanced with multiple attention heads, each learning distinct attention weights to better capture different types of relationships. Multi-head attention enables the model to process different information subspaces in parallel. Deep learning models are often computationally expensive and require significant training time. However, Transformer models mitigate some of the challenges inherent in traditional deep-learning sequence modeling approaches. Their high parallelizability and simple components, such as the attention module and fully connected layer, make them computationally efficient and appealing [45]. In contrast, traditional methods, like linear regression, are constrained by their reliance on linear relationships between inputs and outputs, which hinders their ability to handle complex nonlinear spectral data. Additionally, these methods are prone to overfitting and lack generalization capabilities when dealing with high-dimensional data [46].
Recent research has demonstrated the feasibility of applying the Transformer architecture to forecasting tasks. Wu et al. [47], for instance, used a decoder design similar to the original Transformer architecture in a case study on influenza-like illness (ILI) forecasting to predict time series data. Qu et al. [48] combined features for wind power forecasting by removing the embedding and SoftMax layers of the Transformer, enabling the model to effectively capture key information from wind data. Nascimento et al. [49] integrated wavelet transform into a Transformer-based architecture, a technique that increases the number of features in the data by extracting more relevant information from the existing data. This Transformer-based approach serves as a general framework for modeling various nonlinear dynamical systems, supporting its application in nonlinear spectral data analysis. In this paper, we develop a Transformer model to predict multiple hyperspectral soil properties, implementing the architecture from “Attention is All You Need” [50] using TensorFlow. The model includes a Transformer module with an encoder and a fully connected layer; the encoder consists of a multi-head attention layer and a feedforward layer (Figure 2). The Transformer model uses the following parameters: an embedding dimension of 64, projecting input data into a high-dimensional space, and 4 attention heads. The multi-head attention mechanism allows the model to focus on multiple information subspaces simultaneously; a drop probability (drop_prob) of 0.1 is used. Each Transformer module contains a Multihead Attention Layer, Feedforward Layer, Layer Normalization, and Dropout Layer. The Feedforward Layer consists of two fully connected layers: the first with 4 times the number of neurons in the embedding dimension and using Leaky ReLU activation, and the second with 2 times the number of neurons in the embedding dimension.
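A minimal TensorFlow/Keras sketch of one such encoder block is given below, using the stated settings (embedding dimension 64, 4 heads, drop probability 0.1, feedforward widths of 4x and 2x the embedding dimension with Leaky ReLU); the residual wiring and the final projection back to the embedding dimension are our assumptions, not the authors' exact implementation.

```python
# Sketch of a single encoder block (assumptions: residual connections around both
# sub-layers and a projection back to embed_dim so the second residual is valid).
import tensorflow as tf
from tensorflow.keras import layers

def encoder_block(x, embed_dim=64, num_heads=4, drop_prob=0.1):
    # Multi-head self-attention sub-layer with dropout, residual connection and layer norm
    attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim // num_heads)(x, x)
    x = layers.LayerNormalization()(x + layers.Dropout(drop_prob)(attn))
    # Feedforward sub-layer: 4x then 2x the embedding dimension, Leaky ReLU activation
    ff = layers.Dense(4 * embed_dim)(x)
    ff = layers.LeakyReLU()(ff)
    ff = layers.Dense(2 * embed_dim)(ff)
    # Assumed projection back to embed_dim before the residual addition
    ff = layers.Dense(embed_dim)(layers.Dropout(drop_prob)(ff))
    return layers.LayerNormalization()(x + ff)
```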
Hyperspectral imaging captures soil spectral information at multiple wavelengths, critical for determining soil’s physical and chemical properties. Traditional regression models often struggle to capture complex relationships between wavelengths in high-dimensional spectral data. In contrast, Transformers can efficiently process and analyze high-dimensional data through their self-attention mechanism. Data in each spectral band are treated as part of a sequence, allowing the model to establish global dependencies between bands. The multi-attention mechanism enables the model to focus on multiple wavelength interactions simultaneously, improving prediction accuracy and robustness. Consequently, the Transformer model extracts soil information from preprocessed spectral features and inputs these into the encoder, with the output passing through the fully connected layer.

2.2.2. CNN

A typical CNN comprises an input layer, convolutional layer, pooling layer, fully connected layer, and output layer [51]. The input layer receives data as vectors and matrices, preserving internal structure integrity. The convolutional layer scans localized regions of the input data using convolutional kernels with shared weights, partially connecting to previous layer neurons. More convolutional kernels enable the extraction of more abstract features. The pooling layer reduces feature size by nonlinear downsampling, shrinking network size and improving computational efficiency. The fully connected layer’s neurons connect to all previous layer elements, converting features into one-dimensional vectors. The output layer, typically a fully connected layer, generates the model’s final output. The CNN also incorporates dropout and activation functions. Dropout randomly discards network units during training to prevent overfitting. The activation function fits the nonlinear mapping of features extracted from the convolutional layer, enhancing CNN’s ability to handle nonlinear data [52].
This paper designs and implements a CNN for processing spectral data and making predictions. The model includes three convolutional layers, each followed by a batch normalization layer and an average pooling layer. The model uses 64, 128, and 256 filters with a convolutional kernel size of 5 and a step size of 2 for each layer. Finally, the prediction results are output through the fully connected layer. CNN offers significant advantages in hyperspectral soil property prediction. Hyperspectral imaging captures soil spectral information at multiple wavelengths, crucial for determining its physical and chemical properties. CNN excels in regression tasks by efficiently capturing local features and gradually aggregating them into global information when processing high-dimensional spectral data. Data in each spectral band are considered part of the image, and the model extracts local features between different wavelengths through convolutional operations.
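A matching Keras sketch of this CNN branch is shown below (three Conv1D layers with 64/128/256 filters, kernel size 5, and stride 2, each followed by batch normalization and average pooling); the padding, pooling size, and the Leaky ReLU activation from Table 2 are assumptions about details the text does not fix.

```python
# Sketch of the CNN branch: Conv1D -> BatchNorm -> activation -> AveragePooling,
# repeated for 64, 128 and 256 filters (pooling size and padding are assumptions).
import tensorflow as tf
from tensorflow.keras import layers

def cnn_branch(x):
    for filters in (64, 128, 256):
        x = layers.Conv1D(filters, kernel_size=5, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU()(x)
        x = layers.AveragePooling1D(pool_size=2, padding="same")(x)
    # Flatten the local feature maps before fusion with the Transformer branch
    return layers.Flatten()(x)
```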

2.2.3. Model Architecture

In soil property prediction, uncertainty may be introduced by noise in the data, the choice of model parameters, and the data preprocessing methods. Specifically, noise in spectral data may lead to input uncertainty, while the choice of different preprocessing methods (e.g., smoothing, mathematical transformations) may also affect the final prediction of the model. Uncertainty quantification (UQ) methods play a pivotal role in reducing the impact of uncertainties during both optimization and decision-making processes. Dropout can be used to quantify the model’s prediction uncertainty by randomly dropping neurons during the testing phase to simulate multiple runs of the model [53]. When using CNN and Transformer models in parallel, the input data are processed by both models. The CNN model extracts local features using convolutional and pooling layers, progressively capturing the spatial features of the input data and generating corresponding feature representations. Meanwhile, the Transformer model captures global information and sequential relationships, modeling long-range dependencies between input data through multi-head attention. The outputs of both models are fused through a joint operation to form a combined feature representation. This fused feature representation is then fed into a fully connected layer for final prediction or classification. During training, the parameters of the CNN and Transformer models can be optimized simultaneously in an end-to-end manner (Table 2), allowing them to work together to improve performance and generalization ability. This parallel structure, which we call Transformer-CNN, leverages the respective strengths of CNN and Transformer to enhance the model’s capability in modeling complex and high-dimensional data. This approach is suitable for tasks that require both local and global information and plays a significant role in the regression analysis of hyperspectral soil data.
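The parallel fusion can be sketched as follows, reusing the two branch sketches above and the settings from Table 2 (Adam with a 0.001 learning rate, MSE loss, dense head of 128/256/1); how the branches are pooled and concatenated is our assumption, since the paper does not spell out those details.

```python
# Sketch of the parallel Transformer-CNN model: both branches read the same
# 200-point spectrum, their outputs are concatenated and regressed through the
# 128/256/1 dense head from Table 2 (branch pooling is an assumption).
import tensorflow as tf
from tensorflow.keras import layers

def build_transformer_cnn(n_points=200, embed_dim=64):
    inp = layers.Input(shape=(n_points, 1))
    # Transformer branch: per-point projection into the embedding space, one encoder block
    t = layers.Dense(embed_dim)(inp)
    t = encoder_block(t)                      # defined in the encoder sketch above
    t = layers.GlobalAveragePooling1D()(t)
    # CNN branch: local feature extraction
    c = cnn_branch(inp)                       # defined in the CNN sketch above
    # Fuse the two representations and regress a single soil property
    x = layers.Concatenate()([t, c])
    x = layers.Dense(128, activation=tf.nn.leaky_relu)(x)
    x = layers.Dense(256, activation=tf.nn.leaky_relu)(x)
    out = layers.Dense(1)(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mse")
    return model

# Training call mirroring Table 2 (2000 epochs, batch size 1024, patience 200):
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=2000,
#           batch_size=1024, callbacks=[tf.keras.callbacks.EarlyStopping(patience=200)])
```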

2.2.4. Model Assessment

In this investigation, the coefficient of determination (R2), root mean square error (RMSE), and relative prediction deviation (RPD) are taken as performance evaluation metrics. R2 is utilized to assess the goodness-of-fit between the model’s predicted values and the actual observations, with values approaching 1 indicating greater accuracy of the model. RMSE represents the average deviation between predicted and actual values, with lower values signifying better performance. RPD, calculated as the ratio of the standard deviation to the RMSE, is employed to evaluate the model’s accuracy, where a value greater than 2 suggests enhanced predictive capability. Their calculation formulas are shown in Equations (1)–(3), respectively:
R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}  (1)

\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}  (2)

\mathrm{RPD} = \frac{SD}{\mathrm{RMSE}} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \bar{y})^2} \bigg/ \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}  (3)
In predictive model evaluation, RPD is an important metric for assessing the accuracy of a model. The higher the RPD value, the better the predictive ability of the model (Table 3). Specifically, an RPD greater than 3 indicates that the model is an excellent model (rank A); an RPD between 2.5 and 3.0 indicates that the model is a good model (rank B); an RPD between 2.0 and 2.5 indicates that the model is an approximate model (rank C); whereas an RPD of less than 2.0 indicates that the model is unsatisfactory (rank D) [27].
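As a small worked illustration of Equations (1)–(3), the three metrics can be computed directly from the observed and predicted vectors; the helper below is a NumPy sketch, not taken from the authors' code.

```python
# NumPy sketch of R2, RMSE and RPD as defined in Equations (1)-(3).
import numpy as np

def regression_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    rpd = y_true.std() / rmse          # standard deviation of the observations over the RMSE
    return r2, rmse, rpd
```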

2.2.5. Experimental Set-Up

Computations were performed using a 13th generation Intel® Core™ i9-13900KF CPU and an NVIDIA® GeForce RTX 4090 GPU (24GB graphics memory). The operating system is Windows 11, the programming language is Python 3.7.16, and the deep learning framework is TensorFlow-gpu 2.6.0.

3. Results

3.1. Evaluation Results of Transformer-CNN

This section presents regression predictions for eleven soil properties (Clay, Silt, Sand, pH in CaCl2, pH in H2O, CEC, OC, CaCO3, N, P, and K). Ablation experiments were conducted with the proposed model to evaluate its performance using metrics such as R2, RMSE, and RPD (Table 4). The Transformer model showed superior predictive accuracy compared to the CNN model, and the Transformer-CNN hybrid model in turn outperformed the Transformer alone. Performance metrics indicate that most soil properties were predicted with satisfactory accuracy. Specifically, the Transformer-CNN model exhibited excellent predictive performance for pH in CaCl2, pH in H2O, OC, CaCO3, and N, with R2 values ranging from 0.94 to 0.96 and RPD values exceeding 3. It achieved good performance for Clay, Sand, and CEC, with R2 values between 0.77 and 0.85 and RPD values ranging from 2 to 3. In contrast, Silt, P, and K demonstrated average predictive performance, with R2 values ranging from 0.41 to 0.72 and RPD values below 2.
The ablation experiments reveal that the hybrid model substantially enhances prediction performance for Clay, Silt, Sand, P, and K relative to the CNN-only model; notably, R2 increases by 21 percentage points for phosphorus and by 17 percentage points for potassium. This advancement highlights the method’s progress in integrating global and local information, optimizing feature extraction, and refining model fusion. According to the tabular data, the proposed Transformer-CNN method shows superior performance in predicting several key attributes, including N with an R2 of 0.94, OC with an R2 of 0.96, and CaCO3 with an R2 of 0.96. These results underscore the distinctive advantages of the Transformer-CNN method in synthesizing diverse information sources, optimizing multi-scale feature extraction, and enhancing prediction accuracy. Figure 3 illustrates scatter plots comparing the measured and predicted values for the eleven soil attributes using the Transformer-CNN model.

3.2. Performance Comparison with Other Models

To further evaluate the prediction ability of the Transformer-CNN model for LUCAS soil properties, we compared it against traditional machine learning models (PLSR, RFR, SVR) and deep learning models (LSTM, ResNet18). Traditional regression methods usually rely on linear assumptions, making it difficult to capture the complex interactions among soil properties [54]. RFR can handle a large number of input variables and reduce the risk of overfitting by integrating multiple decision trees, thereby improving the robustness and accuracy of the predictions. SVR optimizes predictions by defining the optimal hyperplane in a high-dimensional feature space, making it suitable for complex nonlinear problems. Deep learning models can handle nonlinear relationships and high-dimensional data. Compared to traditional regression methods, deep models show significant advantages in practical applications. They can effectively utilize large-scale datasets for training, learning, and adapting to variations in soil properties and complex relationships [55].
Table 5 compares the performance of the different models in predicting the various soil properties. Among the traditional machine learning models, RFR, which is well suited to large-scale data, outperforms PLSR and SVR for most properties. The results also show that the ResNet18 model significantly outperforms the traditional machine learning models for most soil properties, with an average R2 of 0.77. In comparison, the PLSR model has an average R2 of 0.67 (ResNet18 about 14.93% higher), the RFR model an average R2 of 0.74 (ResNet18 about 4.05% higher), the SVR model an average R2 of 0.68 (ResNet18 about 13.24% higher), and the LSTM model an average R2 of 0.64 (ResNet18 about 19.88% higher). Overall, ResNet18 shows excellent performance in predicting clay, silt, sand, pH, CEC, organic carbon, calcium carbonate, and nitrogen content, significantly outperforming the other models and demonstrating the strong potential and advantages of deep learning in soil property prediction. Figure 4 compares the R2 and RMSE of the Transformer-CNN model with the other methods in radar charts (for RMSE, points closer to the chart edge correspond to values closer to 0); the Transformer-CNN model outperforms the other machine learning and deep learning models.

3.3. Effects of Data Pre-Processing

In data preprocessing, SG smoothing and differentiation techniques significantly affect data processing and analysis. SG smoothing is a commonly used technique that reduces noise and fluctuations by fitting a low-order polynomial within a sliding window, resulting in smoother and more stable data. However, SG smoothing may over-smooth the data, blurring or losing detailed information, and may yield unsatisfactory results, especially with complex patterns or rapid changes. In contrast, differentiation techniques, especially first-order differentiation, more accurately capture trend and slope information. First-order differentiation helps identify local extremes and change points, aiding in the accurate analysis of dynamic characteristics and trends. Compared to SG smoothing, first-order differentiation retains more original data features, reduces additional errors, and performs better in many data analysis tasks.
This experiment used SG filtering with a first-order polynomial fit and an 11-point window (SG1-11), followed by first-order differentiation. Comparing predictions before and after preprocessing, the predictive performance across all soil properties improved by an average of 3 percentage points. Although the SG method effectively reduces spectral signal noise, it may not eliminate all forms of noise. First-order differentiation is commonly used to eliminate noise caused by background inconsistencies or drift [56]. The importance of SG smoothing is demonstrated by numerous studies using the LUCAS dataset to predict various soil properties [23,57,58]. When choosing an appropriate preprocessing technique, the data characteristics, the purpose of the analysis, and the required precision and accuracy should all be considered. SG smoothing suits scenarios requiring noise reduction and curve smoothing, while first-order differentiation is better for preserving data details and dynamic changes. Correct selection and application of preprocessing techniques can significantly improve data analysis effectiveness and result reliability.

4. Discussion

To further evaluate the predictive performance of the proposed modeling approach for soil attribute content, we compare it with findings from other related studies. Our study demonstrated clear advantages in predicting several soil attributes. For example, in predicting N, P, and K contents, our model achieved R2 values of 0.94, 0.41, and 0.60, respectively, outperforming other studies. This indicates that our model has higher accuracy and reliability in capturing and predicting soil nutrient contents. For OC, our study also achieved strong results, and our prediction accuracies for most soil properties were higher than those reported elsewhere, which helps analyze soil trends more precisely.
Our experiments were designed to be reproducible; however, RMSE values cannot be compared directly with those reported in other studies because differences in the sampling methods used to split the dataset produce different test sets. This may limit the ability to reproduce the exact numerical results of this study. Table 6 shows that Zhong et al. [23] achieved a superior R2 for pH in H2O compared to our model, but we obtained good results for the other soil properties. The matrix heat map in Figure 5 compares the R2 values of our model with those of other research models and shows higher prediction accuracy and reliability. This indicates that our method has significant advantages in predicting the content of key organic and inorganic substances in soil. Our study demonstrated significant advantages in the prediction and analysis of multiple soil attributes, not only achieving better prediction accuracy but also capturing and interpreting the complexity of soil data more comprehensively.
A study [35] employed a CNN model to predict six soil properties (OC, CEC, Clay, Sand, pH, and N) without using additional spectral preprocessing methods. Although this approach showed improvements over traditional methods, a single model could not comprehensively extract spectral features. Another study [36] used the combined SG smoothing filter (with a second-order polynomial fit and a window size of 11 data points) and a first-order derivative transform to predict seven soil properties (Clay, Sand, Silt, pH, OC, CaCO3, and N), resulting in improved predictive performance compared to previous methods. In this experiment, we employed a one-dimensional convolutional neural network (1D-CNN) for modeling. While two-dimensional convolutional neural networks (2D-CNNs) for spectral image processing have proven effective [34], our study remained consistent by using both organic and mineral soils from the dataset. However, their study did not incorporate spectral preprocessing steps, which could help reduce experimental uncertainty. Our method improved the R2 for six soil properties (OC, N, CEC, pH, Sand, and Clay) in the test set. Although our results might be enhanced by spectral preprocessing, recent research [37] investigating seven soil properties (OC, CaCO3, N, CEC, pH, Clay, and Sand) employed seven preprocessing methods. While this approach may improve prediction accuracy from a preprocessing perspective, model performance could be heavily dependent on the choice of preprocessing methods. Such dependence may lead to inconsistent results and raise concerns about model stability. Variability among different preprocessing methods could cause significant differences in model performance under various conditions, increasing result variability and uncertainty. Compared to previous studies using the LUCAS dataset, we utilized the complete dataset and common preprocessing methods to predict all 11 soil properties. This approach reduces the uncertainty associated with excessive reliance on preprocessing, enhances model stability, and improves the reliability of predictive outcomes, providing a more comprehensive and robust predictive framework for readers.
In this experiment, silt, sand, K, and P were predicted less accurately. However, Omondiagbe et al. [17] used Adapted-PBT to optimize a CNN for predicting soil texture (three soil properties) with significant results for clay (R2 = 0.90, RMSE = 4.2), silt (R2 = 0.78, RMSE = 11.4), and sand (R2 = 0.81, RMSE = 0.3). Additionally, Wan et al. [27] utilized LUCAS to enhance feature extraction for small datasets, achieving good results: available nitrogen (R2 = 0.941, RMSE = 3.873), available phosphorus (R2 = 0.926, RMSE = 3.684), and available potassium (R2 = 0.903, RMSE = 3.422). These studies demonstrated good application results using the LUCAS dataset. Our model requires a large amount of data, and its performance is not satisfactory with small datasets. Our next goal is to utilize the LUCAS dataset to enhance the prediction of soil properties from small amounts of data. Training on an NVIDIA GeForce RTX 4090 requires 8 ms/epoch for the Transformer-CNN model, 5 ms/epoch for the Transformer, 6 ms/epoch for the CNN, 16 ms/epoch for the LSTM, and 33 ms/epoch for ResNet18. Although this may be considered slow compared to traditional machine learning methods, it is tolerable. Current studies around the LUCAS dataset are also somewhat one-sided because they are all based on spectral information collected indoors, which will differ from future large-scale outdoor studies. Future research should evaluate the validity of the proposed methods on soil spectra collected in the field with portable spectrometers or drones, which are more challenging than laboratory data. On the experimental side, with a focus on reducing computational complexity and enhancing model robustness, the performance of other deep learning paradigms such as semi-supervised and self-supervised learning should be investigated further. The effect of two-dimensional transformation of spectra on the results should also be explored, and as soil spectral databases continue to emerge in different parts of the world, we will continue to investigate the effectiveness of our model on other spectral databases.

5. Conclusions

In this paper, we first summarize the previous research on the LUCAS dataset and analyze the challenges of predicting soil properties using the LUCAS spectral database. A deep learning method combining Transformer and CNN is proposed for the simultaneous prediction of soil properties from Vis-NIR spectral data. Experimental validation on the LUCAS dataset shows that by combining the advantages of Transformer and CNN, the method captures long-range dependencies and local features in the spectral data, significantly improving the prediction accuracy and stability of soil properties. Savitzky–Golay smoothing and first-order differential preprocessing methods play an important role in reducing spectral noise and improving model prediction performance. We predicted all 11 soil properties in the LUCAS dataset, and the parallel model outperformed a single model and made better predictions than traditional machine learning. Although this study demonstrated the great potential of the fusion model, there is still room for improvement in computational efficiency, model complexity, and application breadth. Unlike laboratory data, VNIR spectra collected in the field are affected by a variety of environmental factors such as weather, light intensity, and humidity. These factors may introduce higher variability in the data, thus complicating the prediction of soil properties. Based on the favorable results obtained in this study, we will use the more challenging field-collected soil spectra to evaluate our model in future studies.
Future research should continue to optimize the model structure, explore more application scenarios, and promote its use in real agricultural production. Enhancing the prediction capability of soil properties can improve soil management practices, promote sustainable agricultural development, and achieve significant ecological and economic benefits. The Transformer and CNN fusion model proposed in this paper provides an effective and robust new approach for hyperspectral soil property prediction. Through further research and optimization, this method is expected to play a greater role in soil monitoring and agricultural management, helping achieve the goals of healthy soil and sustainable agriculture.

Author Contributions

L.C.: Data curation, Conceptualization, Methodology, Writing—original draft. M.S.: Conceptualization, Methodology, Writing—review and editing. Z.Y.: Data curation, Writing—review and editing. D.J.: Data curation, Writing—review and editing. D.Y.: Data curation, Writing—review and editing. Y.D.: Writing—review and editing, Supervision, Project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This article was supported by the National Natural Science Foundation of China (Grant No. U19A2061), Agricultural image recognition and processing team, Jilin Provincial Science and Technology Department of young and middle-aged scientific and technological innovation and entrepreneurship excellence talent (team) project (innovation category) (Grant No. 20220508133RC), Jilin Province Science and Technology Development Plan Project (Grant No.20210404020NC).

Data Availability Statement

The LUCAS topsoil dataset used in this work was made available by the European Commission through the European Soil Data Centre managed by the Joint Research Centre (http://esdac.jrc.ec.europa.eu/ accessed on 22 August 2023).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. European Commission, Directorate-General for Research and Innovation; Veerman, C.; Correia, T.P.; Bastioli, C.; Biro, B.; Bouma, J.; Cienciala, E.; Emmett, B.; Frison, E.; Grand, A.; et al. Caring for Soil Is Caring for Life—Ensure 75% of Soils Are Healthy by 2030 for Food, People, Nature and Climate—Report of the Mission Board for Soil Health and Food; Publications Office: Luxembourg, 2020.
  2. Deel, H.L.; Moore, J.M.; Manter, D.K. Semwise: A National Soil Health Scoring Framework for Agricultural Systems. Appl. Soil Ecol. 2024, 195, 105273. [Google Scholar] [CrossRef]
  3. Abhiram, G.; Grafton, M.; Jeyakumar, P.; Bishop, P.; Davies, C.E.; McCurdy, M. The Nitrogen Dynamics of Newly Developed Lignite-Based Controlled-Release Fertilisers in the Soil-Plant Cycle. Plants 2022, 11, 3288. [Google Scholar] [CrossRef] [PubMed]
  4. Zhang, J.; Dyck, M.; A Quideau, S.; E Norris, C. Assessment of Soil Health and Identification of Key Soil Health Indicators for Five Long-Term Crop Rotations with Varying Fertility Management. Geoderma 2024, 443, 116836. [Google Scholar] [CrossRef]
  5. Kumar, M.; Bhattacharya, B.K.; Pandya, M.R.; Handique, B. Machine Learning Based Plot Level Rice Lodging Assessment Using Multi-Spectral Uav Remote Sensing. Comput. Electron. Agric. 2024, 219, 108754. [Google Scholar] [CrossRef]
  6. Zhou, W.; Yang, H.; Xie, L.; Li, H.; Huang, L.; Zhao, Y.; Yue, T. Hyperspectral Inversion of Soil Heavy Metals in Three-River Source Region Based on Random Forest Model. CATENA 2021, 202, 105222. [Google Scholar] [CrossRef]
  7. Zhao, M.; Gao, Y.; Lu, Y.; Wang, S. Hyperspectral Modeling of Soil Organic Matter Based on Characteristic Wavelength in East China. Sustainability 2022, 14, 8455. [Google Scholar] [CrossRef]
  8. Shen, L.; Gao, M.; Yan, J.; Li, Z.-L.; Leng, P.; Yang, Q.; Duan, S.-B. Hyperspectral Estimation of Soil Organic Matter Content Using Different Spectral Preprocessing Techniques and Plsr Method. Remote. Sens. 2020, 12, 1206. [Google Scholar] [CrossRef]
  9. He, J.; He, J.; Liu, G.; Li, W.; Li, Z.; Li, Z. Inversion Analysis of Soil Nitrogen Content Using Hyperspectral Images with Different Preprocessing Methods. Ecol. Inform. 2023, 78, 102381. [Google Scholar]
  10. Fidêncio, P.H.; Poppi, R.J.; de Andrade, J.C.; Cantarella, H. Determination of Organic Matter in Soil Using near-Infrared Spectroscopy and Partial Least Squares Regression. Commun. Soil Sci. Plant Anal. 2002, 33, 1607–1615. [Google Scholar] [CrossRef]
  11. de Santana, F.B.; de Souza, A.M.; Poppi, R.J. Visible and near Infrared Spectroscopy Coupled to Random Forest to Quantify Some Soil Quality Parameters. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2018, 191, 454–462. [Google Scholar] [CrossRef]
  12. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (Cnn) in Vegetation Remote Sensing. ISPRS J. Photogramm. Remote Sens. 2021, 173, 24–49. [Google Scholar] [CrossRef]
  13. Yuan, F.; Zhang, Z.; Fang, Z. An Effective Cnn and Transformer Complementary Network for Medical Image Segmentation. Pattern Recognit. 2023, 136, 109228. [Google Scholar] [CrossRef]
  14. Xiong, X.; Zhu, T.; Zhu, Y.; Cao, M.; Xiao, J.; Li, L.; Wang, F.; Fan, C.; Pei, H. Molecular Convolutional Neural Networks with DNA Regulatory Circuits. Nat. Mach. Intell. 2022, 4, 625–635. [Google Scholar] [CrossRef]
  15. Zha, W.; Liu, Y.; Wan, Y.; Luo, R.; Li, D.; Yang, S.; Xu, Y. Forecasting Monthly Gas Field Production Based on the Cnn-Lstm Model. Energy 2022, 260, 124889. [Google Scholar] [CrossRef]
  16. Yasin, M.; Sarigül, M.; Avci, M. Logarithmic Learning Differential Convolutional Neural Network. Neural Netw. 2024, 172, 106114. [Google Scholar] [CrossRef]
  17. Omondiagbe, O.P.; Lilburne, L.; Licorish, S.A.; MacDonell, S.G. Soil Texture Prediction with Automated Deep Convolutional Neural Networks and Population-Based Learning. Geoderma 2023, 436, 116521. [Google Scholar] [CrossRef]
  18. Zhang, Y.; Su, C.; Wu, J.; Liu, H.; Xie, M. Trend-Augmented and Temporal-Featured Transformer Network with Multi-Sensor Signals for Remaining Useful Life Prediction. Reliab. Eng. Syst. Saf. 2024, 241, 109662. [Google Scholar] [CrossRef]
  19. Cui, B.; Liu, M.; Li, S.; Jin, Z.; Zeng, Y.; Lin, X. Deep Learning Methods for Atmospheric Pm2.5 Prediction: A Comparative Study of Transformer and Cnn-Lstm-Attention. Atmos. Pollut. Res. 2023, 14, 101833. [Google Scholar] [CrossRef]
  20. Xiang, F.; Zhang, Y.; Zhang, S.; Wang, Z.; Qiu, L.; Choi, J.-H. Bayesian Gated-Transformer Model for Risk-Aware Prediction of Aero-Engine Remaining Useful Life. Expert Syst. Appl. 2024, 238, 121859. [Google Scholar] [CrossRef]
  21. Toth, G.; Jones, A.; Montanarella, L.; Alewell, C.; Ballabio, C.; Carre, F.; De, B.D.; Guicharnaud, R.A.; Gardi, C.; Hermann, T.; et al. LUCAS Topsoil Survey—Methodology, Data and Results; Publications Office of the European Union: Luxembourg, 2013. [Google Scholar] [CrossRef]
  22. Tsimpouris, E.; Tsakiridis, N.L.; Theocharis, J.B. Using Autoencoders to Compress Soil Vnir–Swir Spectra for More Robust Prediction of Soil Properties. Geoderma 2021, 393, 114967. [Google Scholar] [CrossRef]
  23. Zhong, L.; Guo, X.; Xu, Z.; Ding, M. Soil Properties: Their Prediction and Feature Extraction from the Lucas Spectral Library Using Deep Convolutional Neural Networks. Geoderma 2021, 402, 115366. [Google Scholar] [CrossRef]
  24. Singh, S.; Kasana, S.S. Quantitative Estimation of Soil Properties Using Hybrid Features and Rnn Variants. Chemosphere 2022, 287, 131889. [Google Scholar] [CrossRef]
  25. Gruszczyński, S.; Gruszczyński, W. Supporting Soil and Land Assessment with Machine Learning Models Using the Vis-Nir Spectral Response. Geoderma 2022, 405, 115451. [Google Scholar] [CrossRef]
  26. Tavakoli, H.; Correa, J.; Sabetizade, M.; Vogel, S. Predicting Key Soil Properties from Vis-Nir Spectra by Applying Dual-Wavelength Indices Transformations and Stacking Machine Learning Approaches. Soil Tillage Res. 2023, 229, 105684. [Google Scholar] [CrossRef]
  27. Wan, M.; Yan, T.; Xu, G.; Liu, A.; Zhou, Y.; Wang, H.; Jin, X. Mae-Nir: A Masked Autoencoder That Enhances near-Infrared Spectral Data to Predict Soil Properties. Comput. Electron. Agric. 2023, 215, 108427. [Google Scholar] [CrossRef]
  28. Wangeci, A.; Adén, D.; Nikolajsen, T.; Greve, M.H.; Knadel, M. Comparing Laser-Induced Breakdown Spectroscopy and Visible near-Infrared Spectroscopy for Predicting Soil Properties: A Pan-European Study. Geoderma 2024, 444, 116865. [Google Scholar] [CrossRef]
  29. Wang, Y.; Feng, K.; Sun, L.; Xie, Y.; Song, X.-P. Satellite-Based Soybean Yield Prediction in Argentina: A Comparison between Panel Regression and Deep Learning Methods. Comput. Electron. Agric. 2024, 221, 108978. [Google Scholar] [CrossRef]
  30. Liu, S.; Xu, T.; Du, X.; Zhang, Y.; Wu, J. A Hybrid Deep Learning Model Based on Parallel Architecture Tcn-Lstm with Savitzky-Golay Filter for Wind Power Prediction. Energy Convers. Manag. 2024, 302, 118122. [Google Scholar] [CrossRef]
  31. Li, T.; Dong, X.; Lin, J.; Peng, Y.; Li, T.; Dong, X.; Lin, J.; Peng, Y.; Li, T.; Dong, X.; et al. A Transformer-Cnn Parallel Network for Image Guided Depth Completion. Pattern Recognit. 2024, 150, 110305. [Google Scholar] [CrossRef]
  32. Sun, W.; Chang, L.C.; Chang, F.J. Deep Dive into Predictive Excellence: Transformer’s Impact on Groundwater Level Prediction. J. Hydrol. 2024, 636, 131250. [Google Scholar] [CrossRef]
  33. Al-Ali, E.M.; Hajji, Y.; Said, Y.; Hleili, M.; Alanzi, A.M.; Laatar, A.H.; Atri, M. Solar Energy Production Forecasting Based on a Hybrid Cnn-Lstm-Transformer Model. Mathematics 2023, 11, 676. [Google Scholar] [CrossRef]
  34. Jin, X.; Zhou, J.; Rao, Y.; Zhang, X.; Zhang, W.; Ba, W.; Zhou, X.; Zhang, T. An Innovative Approach for Integrating Two-Dimensional Conversion of Vis-Nir Spectra with the Swin Transformer Model to Leverage Deep Learning for Predicting Soil Properties. Geoderma 2023, 436, 116555. [Google Scholar] [CrossRef]
  35. Padarian, J.; Minasny, B.; McBratney, A.B. Using Deep Learning to Predict Soil Properties from Regional Spectral Data. Geoderma Reg. 2019, 16, e00198. [Google Scholar] [CrossRef]
  36. Hosseinpour-Zarnaq, M.; Omid, M.; Sarmadian, F.; Ghasemi-Mobtaker, H. A Cnn Model for Predicting Soil Properties Using Vis–Nir Spectral Data. Environ. Earth Sci. 2023, 82, 382. [Google Scholar] [CrossRef]
  37. Feng, G.; Li, Z.; Zhang, J.; Wang, M. Multi-Scale Spatial Attention-Based Multi-Channel 2d Convolutional Network for Soil Property Prediction. Sensors 2024, 14, 4728. [Google Scholar] [CrossRef]
  38. Tóth, G.; Jones, A.; Montanarella, L. The Lucas Topsoil Database and Derived Information on the Regional Variability of Cropland Topsoil Properties in the European Union. Environ. Monit. Assess. 2013, 185, 7409–7425. [Google Scholar] [CrossRef]
  39. Savitzky, A.; Golay, M.J.E. Smoothing and Differentiation of Data by Simplified Least Squares Procedures. Anal. Chem. 1964, 36, 1627–1639. [Google Scholar] [CrossRef]
  40. Mehmed, K. Data Mining: Concepts, Models, Methods, and Algorithms; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  41. Minasny, B.; McBratney, A.B. A Conditioned Latin Hypercube Method for Sampling in the Presence of Ancillary Information. Comput. Geosci. 2006, 32, 1378–1388. [Google Scholar] [CrossRef]
  42. Bezdek, J.C.; Ehrlich, R.; Full, W. Fcm: The Fuzzy C-Means Clustering Algorithm. Comput. Geosci. 1984, 10, 191–203. [Google Scholar] [CrossRef]
  43. Wen, Y.; Xu, P.; Li, Z.; Xu, W.; Wang, X. Rpconvformer: A Novel Transformer-Based Deep Neural Networks for Traffic Flow Prediction. Expert Syst. Appl. 2023, 218, 119587. [Google Scholar] [CrossRef]
  44. Gu, X.; See, K.; Li, P.; Shan, K.; Wang, Y.; Zhao, L.; Lim, K.C.; Zhang, N. A Novel State-of-Health Estimation for the Lithium-Ion Battery Using a Convolutional Neural Network and Transformer Model. Energy 2023, 262, 125501. [Google Scholar] [CrossRef]
  45. Chandra, A.; Tünnermann, L.; Löfstedt, T.; Gratz, R. Transformer-Based Deep Learning for Predicting Protein Properties in the Life Sciences. eLife 2023, 12, e82819. [Google Scholar] [CrossRef] [PubMed]
  46. Jia, S.; Min, Z.; Fu, X. Multiscale Spatial–Spectral Transformer Network for Hyperspectral and Multispectral Image Fusion. Inf. Fusion 2023, 96, 117–129. [Google Scholar] [CrossRef]
  47. Wu, N.; Green, B.; Ben, X.; O’Banion, S. Deep Transformer Models for Time Series Forecasting: The Influenza Prevalence Case. arXiv 2020, arXiv:2001.08317. [Google Scholar]
  48. Qu, K.; Si, G.; Shan, Z.; Kong, X.; Yang, X. Short-Term Forecasting for Multiple Wind Farms Based on Transformer Model. Energy Rep. 2022, 8, 483–490. [Google Scholar] [CrossRef]
  49. Nascimento, E.G.S.; de Melo, T.A.; Moreira, D.M. A Transformer-Based Deep Neural Network with Wavelet Transform for Forecasting Wind Speed and Wind Energy. Energy 2023, 278, 127678. [Google Scholar] [CrossRef]
  50. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008. [Google Scholar]
  51. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of Deep Learning: Concepts, Cnn Architectures, Challenges, Applications, Future Directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef]
  52. Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6999–7019. [Google Scholar] [CrossRef]
  53. Abdar, M.; Pourpanah, F.; Hussain, S.; Rezazadegan, D.; Liu, L.; Ghavamzadeh, M.; Fieguth, P.; Cao, X.; Khosravi, A.; Acharya, U.R.; et al. A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges. Inf. Fusion 2021, 76, 243–297. [Google Scholar] [CrossRef]
  54. Jiang, C.; Zhao, J.; Li, G. Integration of Vis–Nir Spectroscopy and Machine Learning Techniques to Predict Eight Soil Parameters in Alpine Regions. Agronomy 2023, 13, 2816. [Google Scholar] [CrossRef]
  55. Li, X.; Li, Z.; Qiu, H.; Chen, G.; Fan, P. Soil Carbon Content Prediction Using Multi-Source Data Feature Fusion of Deep Learning Based on Spectral and Hyperspectral Images. Chemosphere 2023, 336, 139161. [Google Scholar] [CrossRef]
  56. Yang, C.; Feng, M.; Song, L.; Jing, B.; Xie, Y.; Wang, C.; Yang, W.; Xiao, L.; Zhang, M.; Song, X. Study on Hyperspectral Monitoring Model of Soil Total Nitrogen Content Based on Fractional-Order Derivative. Comput. Electron. Agric. 2022, 201, 107307. [Google Scholar] [CrossRef]
  57. Dharumarajan, S.; Gomez, C.; Lalitha, M.; Kalaiselvi, B.; Vasundhara, R.; Hegde, R. Soil Order Knowledge as a Driver in Soil Properties Estimation from Vis-Nir Spectral Data—Case Study from Northern Karnataka (India). Geoderma Reg. 2023, 32, e00596. [Google Scholar] [CrossRef]
  58. Kok, M.; Sarjant, S.; Verweij, S.; Vaessen, S.F.; Ros, G.H. On-Site Soil Analysis: A Novel Approach Combining Nir Spectroscopy, Remote Sensing and Deep Learning. Geoderma 2024, 446, 116903. [Google Scholar] [CrossRef]
  59. Singh, S.; Kasana, S.S. Estimation of Soil Properties from the Eu Spectral Library Using Long Short-Term Memory Networks. Geoderma Reg. 2019, 18, e00233. [Google Scholar] [CrossRef]
Figure 1. The raw spectra are shown on the left, and the spectra after preprocessing are shown on the right, with each image containing 19,036 spectral lines.
Figure 2. Transformer Prediction Module.
Figure 3. Scatter plot of true and predicted values.
Figure 4. The radar plot of R2 is shown on the left, while the radar plot of RMSE is displayed on the right.
Figure 5. Comparison of R2 values with related studies (Singh et al., 2019 [59]; Zhong et al., 2021 [23]; Gruszczyński et al., 2022 [25]; Tavakoli et al., 2023 [26]).
Table 1. Descriptive statistics of 11 soil properties in the LUCAS dataset.

| Soil Properties | Set | Minimum | Maximum | Mean | Standard Deviation | Coefficient of Variation |
|---|---|---|---|---|---|---|
| clay (%) | Complete | 0 | 79.00 | 18.89 | 12.62 | 0.67 |
| | Training | 0 | 79.00 | 18.95 | 12.59 | 0.66 |
| | Test | 0 | 77.00 | 18.78 | 12.68 | 0.68 |
| silt (%) | Complete | 0 | 92.00 | 38.21 | 17.76 | 0.46 |
| | Training | 0 | 92.00 | 38.28 | 17.80 | 0.46 |
| | Test | 1.00 | 88.00 | 38.08 | 17.69 | 0.46 |
| sand (%) | Complete | 1.00 | 99.00 | 42.89 | 25.34 | 0.59 |
| | Training | 1.00 | 99.00 | 42.77 | 25.35 | 0.59 |
| | Test | 1.00 | 99.00 | 43.13 | 25.33 | 0.59 |
| pH in CaCl2 | Complete | 2.57 | 9.25 | 5.59 | 1.43 | 0.25 |
| | Training | 2.57 | 9.25 | 5.59 | 1.42 | 0.25 |
| | Test | 2.62 | 8.64 | 5.60 | 1.43 | 0.26 |
| pH in H2O | Complete | 3.21 | 10.08 | 6.20 | 1.35 | 0.22 |
| | Training | 3.21 | 10.08 | 6.20 | 1.35 | 0.22 |
| | Test | 3.40 | 9.65 | 6.20 | 1.35 | 0.22 |
| CEC (cmol(+)/kg) | Complete | 0 | 234.00 | 15.76 | 14.48 | 0.92 |
| | Training | 0 | 234.00 | 15.68 | 14.04 | 0.90 |
| | Test | 0 | 227.70 | 15.90 | 15.33 | 0.96 |
| OC (g/kg) | Complete | 0 | 586.80 | 50.0 | 91.30 | 1.83 |
| | Training | 0 | 586.40 | 50.1 | 91.07 | 1.82 |
| | Test | 0 | 586.80 | 49.9 | 91.77 | 1.84 |
| CaCO3 (g/kg) | Complete | 0 | 944.00 | 51.60 | 125.31 | 2.43 |
| | Training | 0 | 944.00 | 50.68 | 123.88 | 2.44 |
| | Test | 0 | 865.00 | 53.43 | 128.12 | 2.40 |
| N (mg/kg) | Complete | 0 | 38.60 | 2.92 | 3.75 | 1.28 |
| | Training | 0 | 36.20 | 2.93 | 3.74 | 1.28 |
| | Test | 0 | 38.60 | 2.91 | 3.78 | 1.30 |
| P (mg/kg) | Complete | 0 | 1366.40 | 30.08 | 32.85 | 1.09 |
| | Training | 0 | 1366.40 | 29.91 | 33.53 | 1.12 |
| | Test | 0 | 575.10 | 30.42 | 31.44 | 1.03 |
| K (mg/kg) | Complete | 0 | 7342.00 | 197.04 | 229.29 | 1.16 |
| | Training | 0 | 7342.00 | 197.10 | 237.57 | 1.21 |
| | Test | 0 | 4902.40 | 196.92 | 211.77 | 1.08 |
Table 2. Model framework parameterization.

| Hyper-Parameters | Setting |
|---|---|
| Learning rate | 0.001 |
| Optimiser | Adam |
| Dense Layer parameters | 128/256/1 |
| Batch size | 1024 |
| Loss Function | MSE |
| Epoch | 2000 |
| Patience | 200 |
| Dropout | 0.1 |
| Conv1D filters | 64/128/256 |
| Conv1D strides | 2 |
| Activation | leaky_relu |
Table 3. Classification tiers of the performance-to-deviation ratio (RPD).

| RPD | Meaning | Level |
|---|---|---|
| RPD > 3 | Excellent Model | A |
| 2.5 ≤ RPD ≤ 3.0 | Good Model | B |
| 2.0 ≤ RPD ≤ 2.5 | Approximate Model | C |
| RPD < 2.0 | Unsatisfactory Model | D |
Table 4. The effect of ablation experiments.

| Soil Properties | Transformer R2 | RMSE | RPD | CNN R2 | RMSE | RPD | Transformer-CNN R2 | RMSE | RPD |
|---|---|---|---|---|---|---|---|---|---|
| Clay | 0.85 | 4.91 | 2.58 | 0.78 | 5.89 | 2.15 | 0.85 | 4.86 | 2.61 |
| Silt | 0.71 | 9.57 | 1.85 | 0.60 | 11.22 | 1.58 | 0.72 | 9.40 | 1.88 |
| Sand | 0.77 | 12.25 | 2.07 | 0.66 | 14.69 | 1.72 | 0.77 | 12.08 | 2.10 |
| pH in CaCl2 | 0.91 | 0.42 | 3.40 | 0.92 | 0.40 | 3.57 | 0.94 | 0.36 | 3.99 |
| pH in H2O | 0.90 | 0.42 | 3.22 | 0.91 | 0.39 | 3.42 | 0.93 | 0.36 | 3.77 |
| CEC | 0.82 | 6.43 | 2.39 | 0.77 | 7.33 | 2.09 | 0.83 | 6.36 | 2.41 |
| OC | 0.95 | 20.08 | 4.57 | 0.94 | 22.80 | 4.02 | 0.96 | 19.47 | 4.71 |
| CaCO3 | 0.96 | 25.99 | 4.93 | 0.94 | 31.64 | 4.05 | 0.96 | 24.85 | 5.16 |
| N | 0.93 | 0.97 | 3.89 | 0.92 | 1.09 | 3.46 | 0.94 | 0.93 | 4.07 |
| P | 0.40 | 24.32 | 1.29 | 0.20 | 28.04 | 1.12 | 0.41 | 24.19 | 1.30 |
| K | 0.58 | 137.61 | 1.54 | 0.43 | 160.13 | 1.32 | 0.60 | 133.93 | 1.58 |
Table 5. Compare and contrast the effects of the experiment.

| Soil Properties | PLSR R2 | RMSE | RPD | RFR R2 | RMSE | RPD | SVR R2 | RMSE | RPD | LSTM R2 | RMSE | RPD | ResNet18 R2 | RMSE | RPD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Clay | 0.70 | 6.99 | 1.81 | 0.72 | 6.75 | 1.88 | 0.71 | 6.78 | 1.86 | 0.70 | 7.29 | 1.74 | 0.77 | 6.02 | 2.11 |
| Silt | 0.41 | 13.62 | 1.30 | 0.51 | 12.19 | 1.45 | 0.45 | 13.18 | 1.35 | 0.43 | 12.37 | 1.32 | 0.62 | 10.91 | 1.62 |
| Sand | 0.52 | 17.40 | 1.46 | 0.61 | 15.91 | 1.59 | 0.55 | 16.90 | 1.50 | 0.53 | 17.34 | 1.46 | 0.66 | 14.67 | 1.73 |
| pH in CaCl2 | 0.84 | 0.58 | 2.47 | 0.84 | 0.53 | 2.52 | 0.87 | 0.52 | 2.76 | 0.87 | 0.51 | 2.83 | 0.89 | 0.45 | 2.98 |
| pH in H2O | 0.83 | 0.56 | 2.43 | 0.85 | 0.57 | 2.57 | 0.86 | 0.50 | 2.71 | 0.81 | 0.58 | 2.32 | 0.87 | 0.48 | 2.79 |
| CEC | 0.69 | 8.58 | 1.79 | 0.72 | 8.04 | 1.91 | 0.62 | 9.51 | 1.48 | 0.65 | 9.06 | 1.69 | 0.76 | 7.48 | 2.05 |
| OC | 0.89 | 30.25 | 3.03 | 0.93 | 23.51 | 3.90 | 0.89 | 30.72 | 2.96 | 0.91 | 27.57 | 3.33 | 0.94 | 22.19 | 4.14 |
| CaCO3 | 0.90 | 40.05 | 3.20 | 0.92 | 36.77 | 3.48 | 0.90 | 40.00 | 3.10 | 0.89 | 42.42 | 3.02 | 0.94 | 30.31 | 4.23 |
| N | 0.83 | 1.55 | 2.44 | 0.88 | 1.33 | 2.83 | 0.81 | 1.62 | 2.31 | 0.87 | 1.37 | 2.76 | 0.89 | 1.24 | 3.05 |
| P | 0.22 | 27.75 | 1.13 | 0.23 | 27.56 | 1.14 | 0.25 | 27.08 | 1.24 | 0.09 | 29.98 | 1.05 | 0.27 | 26.86 | 1.17 |
| K | 0.36 | 169.92 | 1.25 | 0.31 | 175.77 | 1.20 | 0.36 | 169.15 | 1.25 | 0.21 | 188.30 | 1.12 | 0.48 | 152.02 | 1.39 |
Table 6. Effectiveness in comparison with other literature.

| Soil Properties | Transformer-CNN R2 | RMSE | RPD | Singh et al., 2019 [59] R2 | RMSE | Zhong et al., 2021 [23] R2 | RMSE | Gruszczyński et al., 2022 [25] R2 | RMSE | Tavakoli et al., 2023 [26] R2 | RMSE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Clay | 0.85 | 4.86 | 2.61 | 0.80 | 5.95 | NA | NA | 0.84 | 5.1 | NA | NA |
| Silt | 0.72 | 9.40 | 1.88 | NA | NA | NA | NA | NA | NA | NA | NA |
| Sand | 0.77 | 12.08 | 2.10 | 0.71 | 14.59 | NA | NA | NA | NA | 0.73 | 13.41 |
| pH in CaCl2 | 0.94 | 0.36 | 3.99 | NA | NA | NA | NA | 0.92 | 0.74 | 0.94 | 0.35 |
| pH in H2O | 0.93 | 0.36 | 3.77 | 0.90 | 0.42 | 0.94 | 0.33 | NA | NA | NA | NA |
| CEC | 0.83 | 6.36 | 2.41 | 0.77 | 6.75 | 0.82 | 6.14 | 0.83 | 4.0 | 0.80 | 6.89 |
| OC | 0.96 | 19.47 | 4.71 | 0.94 | 23.25 | 0.95 | 19.84 | 0.79 | 8.9 | 0.95 | 21.34 |
| CaCO3 | 0.96 | 24.85 | 5.16 | NA | NA | 0.96 | 24.91 | 0.96 | 27.4 | 0.96 | 25.71 |
| N | 0.94 | 0.93 | 4.07 | 0.91 | 1.15 | 0.93 | 0.96 | 0.79 | 0.6 | 0.92 | 1.11 |
| P | 0.41 | 24.19 | 1.30 | NA | NA | 0.37 | 24.65 | NA | NA | NA | NA |
| K | 0.60 | 133.93 | 1.58 | NA | NA | 0.59 | 131.07 | NA | NA | NA | NA |

Share and Cite

Cao, L.; Sun, M.; Yang, Z.; Jiang, D.; Yin, D.; Duan, Y. A Novel Transformer-CNN Approach for Predicting Soil Properties from LUCAS Vis-NIR Spectral Data. Agronomy 2024, 14, 1998. https://doi.org/10.3390/agronomy14091998
