Article

Estimation of Water Stress in Potato Plants Using Hyperspectral Imagery and Machine Learning Algorithms

by Julio Martin Duarte-Carvajalino 1,*, Elías Alexander Silva-Arero 1, Gerardo Antonio Góez-Vinasco 1, Laura Marcela Torres-Delgado 2, Oscar Dubán Ocampo-Paez 2 and Angela María Castaño-Marín 1

1 Corporación Colombiana de Investigación Agropecuaria—Agrosavia, Centro de Investigación Tibaitatá—Km 14 Vía Mosquera, Bogotá D.C. 250047, Cundinamarca, Colombia
2 Universidad Nacional de Colombia, Carrera 45 N° 26-85–Edificio Uriel Gutiérrez, Bogotá D.C. 111321, Cundinamarca, Colombia
* Author to whom correspondence should be addressed.
Horticulturae 2021, 7(7), 176; https://doi.org/10.3390/horticulturae7070176
Submission received: 18 May 2021 / Revised: 25 June 2021 / Accepted: 28 June 2021 / Published: 2 July 2021
(This article belongs to the Special Issue Drought Stress in Horticultural Plants)

Abstract: This work presents quantitative detection of water stress and estimation of the water stress level (none, light, moderate, or severe) in potato crops. We use hyperspectral imagery and state-of-the-art machine learning algorithms: random decision forest, multilayer perceptron, convolutional neural networks, support vector machines, extreme gradient boost, and AdaBoost. Detection and estimation of water stress are carried out at two phenological stages of the plants: tubers differentiation and maximum tuberization. The machine learning algorithms are trained with a small subset of each hyperspectral image corresponding to the plant canopy, and the results are improved by majority voting over all the canopy pixels in each image. The results indicate that both detection of water stress and estimation of the water stress level can be achieved with good accuracy, improved further by majority voting. The importance of each band of the hyperspectral images in the classification is assessed by random forest and extreme gradient boost, the two machine learning algorithms that perform best overall at both phenological stages and on both tasks.

1. Introduction

Potato (Solanum tuberosum L.) is the third most important food crop in the world [1]. The potato provides an economical and rich source of carbohydrates and is included in the diet of both developed and developing countries. Water deficit is the most important abiotic stress affecting the development, productivity, and quality of potato cultivars [2]. Hence, it is important to detect signs of water stress in potato plants as early as possible to avoid production and quality losses. Due to climate change, crops worldwide are suffering from unexpected and longer-lasting severe weather events such as droughts, which are becoming increasingly intense [3]. Specifically in Colombia, a large portion of the areas suitable for potato production is vulnerable to increased aridity, soil erosion, desertification, and variations in the hydrological system as a consequence of climate change [4]. Therefore, there is a need to map water stress in potato crops using non-destructive technologies such as remote sensing.
Recently, a spectroradiometer (350–2500 nm) was used to explore the effect of water stress on the spectral reflectance of bermudagrass, and five vegetation indexes were studied [5]. In the case of potato crops, 12 vegetation indexes, including four Normalized Water Indexes (NWIs), have been studied to detect water stress in potato leaves under different watering conditions, also using a spectroradiometer (350–2500 nm) [4]. The results indicate clear differences in the spectrum of water-stressed leaves in the 700–1300 nm range [4]. Remote sensing technologies using unmanned aerial vehicles (UAVs) acquiring visible and thermal images have been used to map water stress in barley crops [6]. The detection of water stress in plants using aerial imagery has focused on thermal imagery to estimate plant temperature relative to the air temperature by computing water stress indexes: since stomata close under water stress, the temperature of the leaves relative to the air increases [6,7,8]. More recently, remote sensing imaging technologies using visible, near-infrared (NIR), short-wave infrared (SWIR), and thermal bands have been proposed to detect water stress in potato crops [9]. Rather than using broadband multispectral images, hyperspectral imagery and machine learning algorithms have been proposed to determine the quality of food products [10]. Hyperspectral imagery (400–1000 nm) has also been proposed to detect water stress in potato crops using spectral indexes [11]. Hyperspectral imagery (400–2500 nm) was used in combination with partial least squares–discriminant analysis (PLS-DA) and partial least squares–support vector machine (PLS-SVM) classification to detect abiotic and biotic drought stress in tomato canopies [12]. Hyperspectral imagery (450–1000 nm) in combination with machine learning algorithms (random forest and extreme gradient boost) has also been used to detect water stress in vine canopies [13].
Another possibility for detecting water stress in plants is to use radar remote sensing technologies [14,15], which have the advantage of penetrating clouds, a limitation of visible and thermal imagery. Finally, ultrasound wave spectroscopy has also been used to estimate the water content of plant leaves using convolutional neural networks and random forest algorithms [16].
As previously indicated, work on detecting water stress in potato cultivars has been based on vegetation indexes (the NDVI, the Simple Ratio, the Photochemical Reflectance Index, the pigment-specific simple ratio of chlorophyll-a, the reflectance water index, the Normalized Water Indexes, and the dry Zea N index). Here, we use a hyperspectral camera (400–1000 nm) and several well-known machine learning algorithms to detect water stress in potato hyperspectral images and to estimate the degree of water stress (none, light, moderate, or severe) using all image bands. The machine learning algorithms also allow us to determine which regions of the leaf spectral signature are most influential, to better estimate water stress from remote sensing using the visible (400–700 nm) and near-infrared (NIR, 700–1000 nm) bands.

2. Material and Methods

2.1. Plant Material and Experimental Design

The experiment was carried out in greenhouse number 17 of AGROSAVIA (Corporación Colombiana de Investigación Agropecuaria), Tibaitatá research center, Colombia (4°41′25.7064′′ N, 74°12′08.23′′ W), at 2543 m above sea level. Certified seeds of Solanum tuberosum L., variety Diacol Capiro, were planted in the greenhouse. The experiment consisted of a randomized complete block design in a 2 × 4 factorial arrangement. The first factor was the level of plant development (phenological stage), fixed according to [17]: tubers differentiation (TD) and maximum tuberization (MT) (Appendix A). The second factor was the level of water stress severity, determined by the hydric potential of the leaves, measured in megapascals (MPa) using a Scholander pressure chamber. Control plants have a hydric potential in the 0 to −0.49 MPa range, light (L) water stress corresponds to −0.5 to −0.59 MPa, moderate (M) water stress to −0.6 to −0.89 MPa, and severe (S) water stress to a hydric potential equal to or lower than −0.9 MPa. These hydric potential ranges were selected based on [18,19] and AGROSAVIA's previous research experience with potato crops in greenhouses.
Potato plants were sown in a greenhouse in a loamy soil that was kept at field capacity (soil water potential did not decline below −0.033 MPa) by drip irrigation from sowing until the 9th and 13th week after sowing, when each stage of development was reached (TD and MT, respectively). At that time, the water supply was suspended, and the water potential in the leaf was measured daily until reaching each level of stress (L, M, S). Control plants had a water supply throughout the experiment.

2.2. Hyperspectral Imagery

The hyperspectral images were acquired using a Surface Optics Corporation 710-VP camera with 520 × 696 pixels and 128 spectral bands in the 400–1000 nm range, stored in the Environment for Visualizing Images (ENVI) format. The images were taken 3 m above the plant canopy with the camera looking downwards. The image acquisition campaigns were carried out at around the same hour of the day. Figure 1 shows a false-color image of the canopy of a plant, loaded and visualized with MultiSpec [20]. As can be seen from this image, a Spectralon white reflectance panel is included in each image to convert the hyperspectral intensity images to reflectance. The white Spectralon panel is easy to segment from the hyperspectral image by computing the average of the red, green, blue, and NIR bands and dividing that image by its maximum intensity. Figure 2 shows this normalized average, where the Spectralon reflectance panel can be segmented from the image using a threshold above 0.5.
The reflectance of each hyperspectral image can be computed as:

ρ(x, y, λ) = I(x, y, λ) ρ_S(λ) / I_S(λ)

where ρ(x, y, λ) is the reflectance image at pixel coordinates (x, y) and waveband λ, I(x, y, λ) is the raw intensity image at pixel coordinates (x, y) and waveband λ, ρ_S(λ) is the known reflectance of the Spectralon panel at wavelength λ (0.99 in the visible and NIR ranges), and I_S(λ) is the mean intensity of the Spectralon panel at waveband λ. Once the hyperspectral images are converted to reflectance, it is necessary to segment the canopy from its background. The Normalized Difference Vegetation Index (NDVI) has been widely used to detect the vegetation canopy [21]:
NDVI = (ρ_NIR − ρ_red) / (ρ_NIR + ρ_red)
where ρ_NIR and ρ_red are the reflectances at the NIR and red wavelengths, respectively. However, the NDVI is affected by several factors, including shadows [21], which could lead to undefined 0/0 values. To avoid this, we used the Soil-Adjusted Vegetation Index (SAVI), which overcomes these issues of the NDVI [21], and selected those pixels where SAVI > 0.3 (Figure 3):
SAVI = 1.5 (ρ_NIR − ρ_red) / (0.5 + ρ_NIR + ρ_red)
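As a minimal sketch, the two preprocessing steps above (reflectance conversion against the Spectralon panel, then SAVI-based canopy segmentation) can be written in Python with NumPy, assuming the image cube has shape rows × cols × bands; the function and variable names are illustrative, not the authors' code:

```python
import numpy as np

def to_reflectance(raw, panel_mask, panel_reflectance=0.99):
    """Convert a raw intensity cube (rows x cols x bands) to reflectance
    using the per-band mean intensity of the Spectralon panel pixels."""
    panel_mean = raw[panel_mask].mean(axis=0)      # I_S(lambda), shape (bands,)
    return raw * panel_reflectance / panel_mean    # broadcasts over all pixels

def canopy_mask(reflectance, red_band, nir_band, threshold=0.3):
    """Segment canopy pixels with SAVI > threshold (the SAVI formula above)."""
    red = reflectance[..., red_band]
    nir = reflectance[..., nir_band]
    savi = 1.5 * (nir - red) / (0.5 + nir + red)
    return savi > threshold
```

Applying `canopy_mask` to the reflectance cube yields the boolean mask from which the training pixels described below are sampled.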
From the image campaign at the tubers differentiation phenological stage, 64 images (stressed and control plants) were acquired for the machine learning algorithms, with stress durations ranging from 3 to 20 days. From the image campaign at the maximum tuberization phenological stage, 52 images (stressed and control plants) were acquired, with stress durations ranging from zero to nine days. The reading and preprocessing of the hyperspectral images were done using Python 3.8.5, distributed with Anaconda [22]. The Python spectral library [23] was used to read the hyperspectral images.
Control plants provide the images for the control class, and there are several images for each stress condition, taken on different days after the application of each stress level.

2.3. Machine Learning Algorithms

Two supervised classification tasks were carried out for the two phenological stages of the potato crops: detection of water stress, i.e., whether the plant is water-stressed or not (two classes), and estimation of the water stress level, i.e., whether the plant is not water-stressed, lightly stressed, moderately stressed, or severely stressed (four classes). To perform these classification tasks, six well-known machine learning algorithms were used:
  • Random decision forest (RF) [24] using 100 trees with a balanced class weight. RF is an ensemble of decision trees; the predicted class is the one receiving the most votes from the individual trees.
  • Multi-layer perceptron (MLP) [25] with an input layer having as many nodes as bands (128) and an output layer having as many nodes as classes (2 or 4). Each layer is followed by a batch normalization layer [26] and a dropout layer [27] with a probability of 0.2; a rectified linear activation function (ReLU, which outputs its input if positive and zero otherwise) [28] follows the input layer, and a Softmax activation function [28] follows the output layer in the four-class case, or a Sigmoid activation function [28] in the two-class case (see Figure 4). An MLP neural network consists of layers of nodes: an input layer, hidden layers, and an output layer. Except for the input nodes, each node is a neuron that uses a nonlinear activation function. Each node in a layer connects to each node of the following layer through a weight, and the network learns the weights from the training data.
  • Convolutional neural network (CNN) [29] with two convolutional layers, each using a kernel size of 3 and 20 filters. Each convolutional layer is followed by a batch normalization layer, a dropout (0.2) layer, and a ReLU layer. After the two convolutional layers, a flatten layer turns the last convolutional layer into MLP nodes. The flatten layer is followed by an input MLP layer with half as many nodes as the flatten layer, then a middle MLP layer with half the nodes of the previous layer, and an output layer with as many nodes as classes. The first and second MLP layers are each followed by a batch normalization layer, a dropout (0.2) layer, and a ReLU layer. The last MLP layer is followed by a Softmax activation function in the four-class case or a Sigmoid activation function in the two-class case (Figure 5). Convolutional neural networks are a kind of deep learning neural network specialized for images, with convolutional layers applying different kinds of filters to patches of the images, and then to previous convolutional layers, to capture variability at higher scales.
  • Support vector machine (SVM) [30] using linear SVM with default parameters. SVM maps training examples to points in space so as to maximize the width of the gap between the classes.
  • Extreme gradient boost (XGBoost) [31] using tree classifiers (gbtree) as weak learners and 100 estimators. Gradient boosting produces an ensemble of weak prediction models (usually trees) and generalizes them by optimizing a differentiable loss function. XGBoost is an implementation of gradient boosting that uses a more regularized model formalization to control overfitting.
  • AdaBoost (AB) [32] with 100 estimators. An AdaBoost classifier first fits a classifier on the dataset and then fits additional copies of the classifier, giving more weight to incorrectly classified instances so that subsequent classifiers focus on harder cases.
The RF, SVM, and AB classifiers were implemented in Python 3.8 using the sklearn library. The MLP and CNN were implemented in Python 3.5 using the keras library, with tensorflow under the hood, on the High Performance Computing servers of Agrosavia, given the memory required by the CNN. The XGBoost classifier was implemented in Python 3.8 using the xgboost library.
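A minimal sketch of how the scikit-learn and xgboost classifiers above can be instantiated with the settings stated in the text (100 trees/estimators, balanced class weight, linear SVM); any hyperparameter not named in the text is left at its library default, and the Keras MLP/CNN models are omitted for brevity:

```python
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.svm import LinearSVC

# Settings explicitly stated in the text; everything else is the default.
classifiers = {
    "RF": RandomForestClassifier(n_estimators=100, class_weight="balanced"),
    "SVM": LinearSVC(),
    "AB": AdaBoostClassifier(n_estimators=100),
}

try:
    # xgboost is a separate package; gbtree weak learners, 100 estimators.
    from xgboost import XGBClassifier
    classifiers["XGBoost"] = XGBClassifier(n_estimators=100, booster="gbtree")
except ImportError:
    pass
```

Each classifier exposes the same `fit`/`predict` interface, so the training and majority-voting steps below can loop over this dictionary.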
Given the size of the images (520 × 696 × 128), equipment memory constraints, and processing times, only 10,000 pixels were selected at random from the canopy (identified using SAVI > 0.3) of each image to form the training dataset. In the case of the CNN, a window of size 5 × 5 × 128 centered on each of the 10,000 randomly selected canopy pixels was used to form the CNN dataset. To evaluate the classifiers, five-fold cross-validation was employed to measure the degree of classification overfitting, given the tendency of classifiers to overfit the training dataset: 80% of the dataset is used for training and 20% for testing on each of the five cross-validation runs. In the case of the MLP and CNN, 20% of the 80% of data available for training is used for validation, such that the MLP or CNN models are saved only if the computed loss improves on the validation data, as an extra measure against overfitting. Furthermore, the classifiers were trained with the full training dataset and then used to classify the whole canopy of each image (containing many more pixels unseen by the classifiers) using majority voting, i.e., selecting the class most pixels are assigned to.
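The pixel-sampling and majority-voting steps above can be sketched as follows, assuming a reflectance cube and a boolean SAVI canopy mask; the helper names are illustrative, not the authors' code:

```python
import numpy as np

def sample_training_pixels(cube, mask, n=10_000, rng=None):
    """Randomly sample up to n canopy pixels (rows of shape (bands,))."""
    rng = np.random.default_rng(rng)
    ys, xs = np.nonzero(mask)
    idx = rng.choice(len(ys), size=min(n, len(ys)), replace=False)
    return cube[ys[idx], xs[idx]]                  # shape (n, bands)

def majority_vote(clf, cube, mask):
    """Classify every canopy pixel and return the most-voted class."""
    pixels = cube[mask]                            # (n_pixels, bands)
    preds = clf.predict(pixels)
    values, counts = np.unique(preds, return_counts=True)
    return values[np.argmax(counts)]
```

`majority_vote` assigns one label per image, which is how the per-image results in Tables 1–4 are obtained from per-pixel predictions.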

3. Results

Figure 6 shows the classification performance using two classes (water stress or control) for the tubers differentiation phenological stage in terms of overall accuracy, sensitivity, and specificity (see confusion matrices in Appendix B); the standard deviation of the mean is indicated as error bars. As can be seen from these results, RF and XGBoost achieve the best classification performance, with XGBoost the best.
Table 1 compares the classification performance using the best three classifiers found: RF, XGBoost, and CNN alone and using Majority Voting (MV). This table shows that both RF and XGBoost correctly classify all the images using majority voting, followed by CNN.
Figure 7 shows the classification performance for the tubers differentiation phenological stage and four classes: control and three levels of water stress: light, moderate, and severe (see confusion matrices in the Appendix B), where the standard deviation of the mean is indicated for accuracy, sensitivity, and specificity, as error bars. In this case, XGBoost performs best, followed by RF and MLP. Table 2 compares the classification performance of the three best classifiers: RF, XGBoost, and CNN alone and using MV. In this case, XGBoost performs best, followed by RF and CNN.
Figure 8 shows the classification performance at the maximum tuberization phenological stage using two classes: control and water stress (see confusion matrices in the Appendix B), where the standard deviation of the mean is indicated for accuracy, sensitivity, and specificity, as error bars. The best classifiers are XGBoost followed by RF and CNN. Table 3 compares the classification performance of RF, XGBoost, and CNN alone and using MV over all the images. This table shows RF and XGBoost both achieve perfect classification using MV of all the images taken at this phenological stage.
Figure 9 shows the classification performance at the maximum tuberization phenological stage using four classes: control, light, moderate, and severe water stress (see confusion matrices in Appendix B); the standard deviation of the mean is indicated as error bars. Here, XGBoost obtains the best performance, followed by RF and CNN. As in the two-class case, the classification accuracies are good and allow estimation of the water stress from the first day. Table 4 compares the classification performance of RF, XGBoost, and CNN alone and using MV, where it can be noticed that XGBoost in combination with MV achieves perfect classification, followed by RF and CNN.
Figure 10 shows XGBoost classification results on some images of the tubers differentiation phenological stage using four classes. The color code here is green for no water stress, blue for light stress, yellow for moderate stress, and red for severe stress. Figure 10a shows the classification for a control plant (no water stress). Figure 10b shows a plant that suffered light stress. Figure 10c shows a plant that suffered moderate stress. Figure 10d shows a plant that suffered severe stress.
Figure 11 shows some XGBoost classification results for the maximum tuberization phenological stage using the same color code as in Figure 10.
Figure 12 shows the band importance for RF classification in the detection (two classes) and estimation (four classes) of water stress at the tubers differentiation phenological stage. Figure 13 shows the same band importance for RF classification of two and four classes at the maximum tuberization phenological stage. As indicated in Figure 12 and Figure 13, the most important bands for RF classification are the violet, the red edge, and a few wavelengths in the NIR.
Figure 14 shows the band importance for XGBoost classification in the detection (two classes) and estimation (four classes) of water stress at the tubers differentiation phenological stage. Figure 15 shows the same band importance for XGBoost classification of two and four classes at the maximum tuberization phenological stage. From these figures, XGBoost considers more bands important than RF does, i.e., it better exploits the spectral signature of the hyperspectral images. Band importance could help identify which bands are better suited to detect water stress from multispectral imagery, or to define water stress indexes specially designed for potato crops.
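As an illustration of how such per-band importances are obtained, the sketch below fits a random forest on synthetic stand-in data (NOT the study's dataset; the informative band index is chosen arbitrarily) and reads `feature_importances_`; XGBoost's scikit-learn wrapper exposes the same attribute:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 128))            # stand-in for 128-band reflectance pixels
y = (X[:, 60] > 0.5).astype(int)      # toy labels driven by one band (index 60)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importance = rf.feature_importances_  # one normalized score per spectral band
top_bands = np.argsort(importance)[::-1][:5]
```

Plotting `importance` against the band center wavelengths yields figures analogous to Figures 12–15.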

4. Discussion

The results indicate that, even using a small subset of pixels taken at random from the hyperspectral images, it is possible to obtain good classification accuracies for detecting and estimating water stress in potato crops. The results also indicate that water stress can be detected and measured as early as one day after the onset of the stress at the tubers differentiation phenological stage, and on the same day as the onset of the stress at maximum tuberization. Other researchers [33] also found that hyperspectral imaging could be useful to detect the water supply conditions of leafy vegetables grown in a greenhouse, using a modified partial least squares regression algorithm trained to classify different levels of leaf water potential and obtaining a correlation coefficient of 0.826. In this sense, hyperspectral imaging could become a useful tool for the design of precision irrigation systems that optimize the use of water in crops such as potatoes, although more studies under real commercial cultivation conditions are needed.
It was evident that, across all classification tasks and phenological stages, XGBoost provides excellent classification accuracies alone or in combination with majority voting, followed closely by random forest. Random forest and XGBoost also provide a direct measure of band importance for detecting and estimating water stress. In this case, XGBoost seems to better use the whole spectral signature of the canopy, while RF uses a reduced subset of bands. Although the SVM algorithm did not show the best results in this study, the authors of [34] reported promising results with this algorithm (R = 0.7684), in combination with the Kullback–Leibler divergence (KLD) dimensionality reduction method to select the most relevant bands of hyperspectral images, for detecting moisture content in maize leaves at the seedling stage. For future experiments, it may be useful to evaluate combinations of algorithms that have proven efficient in the detection of relative leaf water content from hyperspectral remote sensing, as reported in [35], where artificial neural networks (ANN) were used after selecting the most important bands through partial least squares regression (PLSR), improving the performance of the ANN alone.
The CNN is a deep learning neural network that extracts features from images. However, despite being the deep learning network most used to analyze images [10], its classification performance was lower than that of RF and XGBoost, and only by using majority voting was it possible to improve its performance when classifying all image pixels. This is probably because the CNN exploits the spatial structure of the images (such as edges) rather than their spectral signature; in this case, the canopy consists mostly of leaves with no spatial clues related to water stress.
Our results indicate that machine learning combined with spectral imagery constitutes a useful phenotyping tool to detect and estimate water stress in potato plants, which can also be used in genetic improvement programs by selecting the phenotypes that better resist water stress. The reflectance images obtained may be sensitive to the physiological and biochemical changes in the substances and pigments that are degraded and mobilized due to water stress.

5. Conclusions

This work shows that detection of water stress, as well as estimation of the water stress level, is possible with good accuracy, further improved over the whole canopy by majority voting, at the tubers differentiation and maximum tuberization phenological stages. In particular, the classification results are accurate and available from the first day of stress at both phenological stages. Extreme gradient boost performed best across all phenological stages and classification tasks, followed by random decision forests. XGBoost and RF also provide a measure of the importance of each band for detecting or estimating water stress in potato crops. In the case of RF, these bands are the violet, the red edge, and some specific NIR bands, while XGBoost additionally includes some bands in the visible (green, yellow, red) and NIR, better exploiting the spectral signature.
These results could lead to more specific normalized water indexes for water stress detection and estimation in potato crops using these machine learning algorithms. However, they are not intended for direct use by producers, since this research was conducted under greenhouse conditions. In this sense, these results are an important basis for further research considering actual potato field conditions and cultural practices, which will allow the design of advanced tools for early detection of water stress, increasing the efficiency of irrigation.

Author Contributions

Conceptualization, E.A.S.-A. and G.A.G.-V.; Methodology, E.A.S.-A.; Software, J.M.D.-C.; Validation, E.A.S.-A.; Formal analysis, L.M.T.-D. and O.D.O.-P.; Investigation, J.M.D.-C. and E.A.S.-A.; Resources, A.M.C.-M.; Data curation, L.M.T.-D. and O.D.O.-P.; Writing—original draft, J.M.D.-C.; Writing—review & editing, E.A.S.-A., G.A.G.-V., L.M.T.-D., O.D.O.-P. and A.M.C.-M. Visualization, L.M.T.-D.; Supervision, A.M.C.-M.; Project administration, A.M.C.-M.; Funding acquisition, A.M.C.-M.; Resources, G.A.G.-V. and A.M.C.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science, Technology, and Innovation Fund of the General Royalty System, administered by the National Financing Fund for Science, Technology and Innovation Francisco José de Caldas, the Colombia BIO Program, the government of Cundinamarca—Colombia and the Ministry of Science, Technology, and Innovation (MINCIENCIAS), and Corporación Colombiana de Investigación Agropecuaria (AGROSAVIA).

Institutional Review Board Statement

Not applicable.

Acknowledgments

This work is part of a larger project in Agrosavia called Agroclimatic Information System for potato (Solanum tuberosum L.) crops within productive regions in Cundinamarca (SIAP in Spanish). We thank Jose Alfredo Molina Varón for his contribution with the experimental setup and the adaptation of measurement equipment, and Jhon Mauricio Estupiñán Casallas for his collaboration in the assembly of the irrigation systems.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Tubers differentiation stage: left and right photographs show the development of stolons at the apex, with "hook" and "matchstick" forms; these are morphological changes in the stolons during the tuber differentiation process.
Figure A2. Maximum tuberization stage: left and right photographs show the development stage of flowering. Gómez et al. report that the stage of maximum tuberization and the beginning of tuber filling coincides with flowering.

Appendix B

Table A1. Four Classes, Tubers Differentiation. Confusion matrices per classifier; rows and columns follow the class order none (control), light, moderate, severe water stress.

RF
63,977.80    246.00       340.00       1436.20
6514.60      10,013.60    38.60        1433.20
4497.20      2.40         7114.20      386.20
8758.40      393.20       89.80        22,758.60

SVM
59,562.00    538.80       166.20       5733.00
13,379.40    1381.00      37.80        3201.80
9828.60      58.00        223.20       1890.20
21,264.40    388.00       47.60        10,300.00

CNN
52,467.80    4374.40      1091.40      8066.40
7442.60      7230.20      79.40        3247.80
4198.00      3643.40      2123.40      2035.20
9497.20      2412.60      592.40       19,497.80

MLP
53,304.20    3697.40      668.80       8329.60
10,808.20    4085.00      80.00        3026.80
5405.00      2299.80      1266.80      3028.40
17,965.20    1482.40      587.60       11,964.80

XGBoost
329,137.00   90.00        69.00        704.00
4314.00      85,073.00    14.00        599.00
416.00       5.00         59,409.00    170.00
2869.00      32.00        25.00        157,074.00

Ada Boost
51,866.40    2866.40      3189.60      8077.60
12,534.60    2287.20      498.00       2680.20
7042.40      114.80       3204.60      1638.20
18,669.00    1172.80      1264.00      10,894.20
Table A2. Four Classes, Maximum Tuberization. Confusion matrices per classifier; rows and columns follow the class order none (control), light, moderate, severe water stress.

RF
56,866.20    51.20        995.00       87.60
1690.80      11,658.00    618.00       33.20
3996.60      539.40       19,275.80    188.20
1678.60      2.40         1060.40      5258.60

SVM
52,848.20    1664.60      2907.80      579.40
3598.00      8849.20      1474.20      78.60
10,426.40    2783.60      9784.40      1005.60
2296.80      165.60       2138.20      3399.40

CNN
50,919.00    2418.20      3942.80      720.00
1010.40      8773.20      3995.00      221.40
3454.80      299.40       18,737.60    1508.20
613.00       188.00       2661.00      4538.00

MLP
56,069.80    626.20       690.80       613.20
5170.00      7803.20      942.60       84.20
14,354.60    992.60       7957.60      695.20
3486.80      20.60        895.00       3597.60

XGBoost
289,738.00   150.00       91.00        21.00
520.00       69,382.00    95.00        3.00
291.00       26.00        119,683.00   0.00
131.00       3.00         8.00         39,858.00

Ada Boost
48,421.80    3685.40      4595.60      1297.20
2715.60      8295.00      2920.80      68.60
6624.40      3534.20      11,884.60    1956.80
1909.00      104.80       2486.40      3499.80
Table A3. Two Classes, Tubers Differentiation. Confusion matrices per classifier; rows and columns follow the class order control, water stress.

RF
58,628.0     7372.0
9372.0       52,628.0

CNN
47,915.2     18,084.8
19,694.0     42,306.0

XGBoost
323,446.0    6554.0
5475.0       304,525.0

SVM
43,555.4     22,444.6
23,343.4     38,656.6

MLP
47,570.6     18,429.4
24,596.0     37,404.0

Ada Boost
46,091.2     19,908.8
18,315.6     43,684.4
Table A4. Two Classes, Maximum Tuberization. Confusion matrices per classifier; rows and columns follow the class order control, water stress.

RF
54,926.80    3073.20
5220.20      40,779.80

CNN
52,422.00    5578.00
10,624.40    35,375.60

XGBoost
288,103.00   1897.00
1363.00      228,637.00

SVM
48,065.40    9934.60
8949.60      37,050.40

MLP
54,502.60    3497.40
16,840.80    29,159.20

Ada Boost
49,767.60    8232.40
6870.40      39,129.60

Figure 1. Hyperspectral image taken at 3 m above the canopy. False-color image showing red as band 90, green as band 60, and blue as band 40.
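A false-color rendering like the one in Figure 1 maps three hyperspectral bands onto the RGB channels. A minimal sketch, assuming a NumPy cube laid out as (rows, cols, bands) and an independent min-max stretch per band (the stretch used for the figure is not stated):

```python
import numpy as np

def false_color(cube, r=90, g=60, b=40):
    """Map three bands of a (rows, cols, bands) cube to an RGB image,
    stretching each selected band to [0, 1] independently."""
    rgb = np.stack([cube[:, :, i].astype(float) for i in (r, g, b)], axis=-1)
    lo = rgb.min(axis=(0, 1), keepdims=True)
    hi = rgb.max(axis=(0, 1), keepdims=True)
    return (rgb - lo) / np.maximum(hi - lo, 1e-12)

# Tiny synthetic cube standing in for a real hyperspectral image
cube = np.random.default_rng(0).uniform(0, 4096, size=(4, 4, 100))
img = false_color(cube)  # shape (4, 4, 3), values in [0, 1]
```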
Figure 2. Normalized sum of the red, green, blue, and NIR bands.
Figure 3. Leaf pixels detected with SAVI greater than 0.3.
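The leaf mask of Figure 3 thresholds the Soil-Adjusted Vegetation Index. A sketch under the standard SAVI definition with soil-brightness factor L = 0.5 (the value of L used in the paper is not given here); the reflectance values are invented for illustration:

```python
import numpy as np

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L = 0.5 is the common soil factor."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (1.0 + L) * (nir - red) / (nir + red + L)

def leaf_mask(nir, red, threshold=0.3):
    """Boolean canopy mask: pixels whose SAVI exceeds the threshold."""
    return savi(nir, red) > threshold

# Made-up NIR and red reflectances for a 2 x 2 patch
nir = np.array([[0.50, 0.05], [0.45, 0.60]])
red = np.array([[0.08, 0.04], [0.10, 0.07]])
mask = leaf_mask(nir, red)  # [[True, False], [True, True]]
```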
Figure 4. Multi-layer perceptron layout.
Figure 5. Convolutional neural network layout.
Figure 6. Classification performance, tubers differentiation phenological stage using two classes.
Figure 7. Classification performance, tubers differentiation phenological stage using four classes.
Figure 8. Classification performance, maximum tuberization phenological stage using two classes.
Figure 9. Classification performance, maximum tuberization phenological stage using four classes.
Figure 10. XGBoost classification of (a) image of a control plant, (b) image of a plant with light stress, (c) image of a plant with moderate stress, (d) image of a plant with severe stress.
Figure 11. XGBoost classification of (a) control plant, (b) plant with light stress, (c) plant with moderate stress, (d) plant with severe stress.
Figure 12. Band importance determined by RF on the tubers differentiation phenological stage.
Figure 13. Band importance determined by RF on the maximum tuberization phenological stage.
Figure 14. Band importance determined by XGBoost on the tubers differentiation phenological stage.
Figure 15. Band importance determined by XGBoost on the maximum tuberization phenological stage.
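Band-importance rankings like those in Figures 12-15 can be read from a fitted tree ensemble's impurity-based importances. A toy sketch with synthetic "spectra" standing in for real canopy pixels (the band count, sample size, and informative band are all invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for canopy spectra: 200 pixels x 10 bands, where only
# band 3 separates the two (hypothetical) stress classes.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))
y = (X[:, 3] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = rf.feature_importances_   # normalized: sums to 1 across bands
top_band = int(np.argmax(importances))  # recovers band 3 as most important
```

XGBoost exposes an analogous per-feature importance, which is how the two rankings in the paper can be compared band by band.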
Table 1. Comparison of classification performance of RF, XGBoost, and CNN alone and using MV for the tubers differentiation phenological stage using two classes.
               RF            RF + MV   XGBoost       XGBoost + MV   CNN           CNN + MV
Accuracy       0.8691875     1         0.98120469    1              0.69395156    0.875
Sensitivity    0.86965626    1         0.98114434    1              0.71711594    0.875855327
Specificity    0.86857087    1         0.98123905    1              0.69385401    0.875855327
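Majority voting (MV), which lifts the per-pixel results in Table 1 to whole-plant labels, simply assigns each plant the most frequent class among its canopy pixels. A minimal sketch (the tie-breaking policy is an assumption; the paper does not specify one):

```python
from collections import Counter

def majority_vote(pixel_labels):
    """Label a whole plant by the most common class among its canopy pixels.
    Ties resolve to the class seen first (an arbitrary choice)."""
    return Counter(pixel_labels).most_common(1)[0][0]

# Hypothetical per-pixel predictions for one plant: 0 = no stress, 1 = stress
print(majority_vote([1, 0, 1, 1, 0, 1, 1]))  # 1
```

Because a single plant contributes thousands of canopy pixels, a classifier that is right on most pixels becomes right on the plant, which is why the +MV columns approach 1.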
Table 2. Comparison of classification performance of RF, XGBoost, and CNN alone and using MV for the tubers differentiation phenological stage using four classes.
               RF             RF + MV       XGBoost        XGBoost + MV   CNN            CNN + MV
Accuracy       0.811439063    0.90625       0.985457813    1              0.63530625     0.703125
Sensitivity    0.879199269    0.961538462   0.991209678    1              0.591587693    0.66889881
Specificity    0.707431992    0.829861111   0.978625726    1              0.495725174    0.517834596
Table 3. Comparison of classification performance of RF, XGBoost, and CNN alone and using MV for the maximum tuberization phenological stage using two classes.
               RF            RF + MV   XGBoost       XGBoost + MV   CNN          CNN + MV
Accuracy       0.92025577    1         0.99373077    1              0.84420769   0.980769231
Sensitivity    0.92156596    1         0.99353142    1              0.8555923    0.979166667
Specificity    0.91676559    1         0.99376627    1              0.83643118   0.982758621
Table 4. Comparison of classification performance of RF, XGBoost, and CNN alone and using MV for the maximum tuberization phenological stage using four classes.
               RF             RF + MV       XGBoost        XGBoost + MV   CNN            CNN + MV
Accuracy       0.894794231    0.980769231   0.997425       1              0.797767308    0.961538462
Sensitivity    0.914904761    0.991666667   0.997991496    1              0.775300134    0.964285714
Specificity    0.818412336    0.964285714   0.996019078    1              0.713138567    0.982758621

Share and Cite

Duarte-Carvajalino, J.M.; Silva-Arero, E.A.; Góez-Vinasco, G.A.; Torres-Delgado, L.M.; Ocampo-Paez, O.D.; Castaño-Marín, A.M. Estimation of Water Stress in Potato Plants Using Hyperspectral Imagery and Machine Learning Algorithms. Horticulturae 2021, 7, 176. https://doi.org/10.3390/horticulturae7070176