Article

A Deep Learning Framework for Processing and Classification of Hyperspectral Rice Seed Images Grown under High Day and Night Temperatures

by Víctor Díaz-Martínez 1, Jairo Orozco-Sandoval 1, Vidya Manian 1,*, Balpreet K. Dhatt 2 and Harkamal Walia 2
1 University of Puerto Rico, Mayagüez, PR 00681, USA
2 University of Nebraska-Lincoln, Lincoln, NE 68588, USA
* Author to whom correspondence should be addressed.
Sensors 2023, 23(9), 4370; https://doi.org/10.3390/s23094370
Submission received: 24 March 2023 / Revised: 26 April 2023 / Accepted: 26 April 2023 / Published: 28 April 2023

Abstract

A framework combining two powerful tools, hyperspectral imaging and deep learning, for the processing and classification of hyperspectral images (HSIs) of rice seeds is presented. A seed-based approach is developed that trains a three-dimensional convolutional neural network (3D-CNN) on the full seed spectral hypercube to classify seed images from high day and high night temperature treatments, each including a control group. A pixel-based seed classification approach is implemented using a deep neural network (DNN). The seed- and pixel-based deep learning architectures are validated and tested using hyperspectral images from five different rice seed treatments with six different high-temperature exposure durations during day, night, and both day and night. A stand-alone application with Graphical User Interfaces (GUIs) for calibrating, preprocessing, and classifying hyperspectral rice seed images is presented. The software application can be used to train the two deep learning architectures for the classification of any type of hyperspectral seed image. Average overall classification accuracies of 91.33% and 89.50% are obtained for seed-based classification using the 3D-CNN for five different treatments at each exposure duration and for six different high-temperature exposure durations for each treatment, respectively. The DNN gives average accuracies of 94.83% and 91% for the same two experiments, respectively. The accuracies obtained are higher than those reported in the literature for hyperspectral rice seed image classification. The HSI analysis presented here is on the Kitaake cultivar and can be extended to study the temperature tolerance of other rice cultivars.

1. Introduction

Rice is a food staple for more than 3.5 billion people around the world, particularly in Asia, Latin America, and parts of Africa. According to the International Grains Council (IGC), the total volume of milled rice produced worldwide reached 504 million metric tons in the 2021/2022 crop year. Demand for high-quality rice is increasing worldwide. Rice quality-related genes and their role in regulating the structure and components that determine rice quality are being studied [1]. However, the global temperature is predicted to rise by 0.3 to 4.8 °C. Such supra-optimal temperatures cause morphological, physiological, biochemical, and gene-level alterations in plants. High temperature stress affects rice seed growth in several ways: it reduces grain quality and crop yield, and it affects grain size, color, and other phenotypic characteristics. Variation in grain width and height for mature grain under high night temperature stress is reported in [2]. Phytohormones such as auxin and cytokinin are affected by high temperatures, which adversely affects starch filling in the grain, causes chalkiness, produces low-quality grains, and lowers the market value.
Global mean day and night temperatures are rising throughout the world, affecting cereal and grain cultivars. Hence, it becomes increasingly important to develop an automatic mechanism to monitor the quality of grains and cereals grown under high temperatures for market acceptance as well as to study its implications on consumer health. It is essential to utilize imaging technology and to analyze the resultant images to determine phenotypic variations in grains and crops that feed the communities.
Hyperspectral images (HSIs) record the spectral signature of imaged materials in finely sampled wavelengths called spectral bands. Each material has a unique spectral signature that can be used to distinguish it from other materials. Additionally, changes under environmental stressors in the spectral signatures of naturally occurring materials, food crops, water bodies, and minerals can be identified. Imaging techniques have been used for crop quality assessment and prediction. Deep Learning (DL) has become a fundamental branch of Artificial Intelligence (AI) due to its ability to discover patterns by mimicking the functioning of neurons in the human brain. Deep Neural Networks (DNNs) have exceptional learning capability and, as a result, have been used in different areas of Machine Learning (ML) and AI, in applications such as health, signal processing, data analysis, and context-aware computing, among others [3].
DL methods are also being applied to classify crop seed images. RGB images of maize seeds have been classified in [4] using a deep learning architecture. The size, shape, and color of rice grains are used to identify weedy rice, which is a problem for regular rice varieties. An RGB-based image analysis and machine learning approach is presented in [5] to distinguish weedy rice from regular rice seeds. An overview of spectroscopic imaging configurations, key wavelengths, and spectral and spatial resolutions for multispectral imaging for seed phenotyping and quality monitoring is given in [6]. To study the phenotypic variation in rice seeds due to environmental stressors, RGB color images and hyperspectral images are acquired and analyzed [7]. Four varieties of rice seeds have been classified using hyperspectral images in [8]. A comparison of milled and brown rice samples is carried out using NIR-HSI to detect rice varieties [9]. Another study assessed corn seed viability using hyperspectral imaging. A wavelength range of 900–1700 nm is employed to obtain spectral images of three different varieties of naturally aged watermelon seed samples and to detect seed viability using a partial least squares discriminant analysis (PLS-DA) model. Germination prediction of beet seeds using HSIs and machine learning approaches such as support vector machines, random forests, and gradient boosting classifiers is presented in [10].
Regularly grown rice seed varieties have been classified using Convolutional Neural Networks (CNNs) in [11,12]. All crop seeds contain starch, fat, and enzymes, and hence their spectral signatures overlap. A common drawback in hyperspectral image-based classification of different crop seeds is the limited number of samples available for training a deep learning neural network. To overcome this limitation, a deep transfer learning neural network model is used in [13] to train a deep learning network to classify a particular variety of seed and to reuse the network to classify other types of crop seeds. Most of the above publications concern crop seeds grown under normal conditions. Due to the changing climate on the planet, spectroscopy is beginning to be used to understand the effects of high temperatures on grain quality. In [14], HSIs of rice seeds from two classes, control and high day and night temperature, are classified using a three-dimensional Convolutional Neural Network (3D-CNN) architecture. Currently, there is no study evaluating imaging spectroscopy for distinguishing seeds grown under different hours of exposure to higher day and/or night temperatures. The main contributions of this paper are the following:
(a)
Classification of HSIs of rice seeds grown under different exposure durations to different high day and/or night temperature treatments using DL architectures;
(b)
A DL framework for a comprehensive analysis of hyperspectral images of seeds. We present results from rice seeds grown under High Night Temperature (HNT), High Day and High Night Temperature (HDNT) stressors, and control environments. Our framework includes options for calibration, preprocessing, and segmentation for HSI seed image extraction from a panel image of seeds, spectral analysis, spatial–spectral feature extraction, as well as classification using two different DL neural network architectures: 3D-CNN and Deep Neural Network (DNN);
(c)
A software application of the DL seed image processing framework is the first of its kind for processing crop seed HSIs.
The rest of the paper is organized as follows. Section 2 presents the materials and methods that describe the framework with various Graphical User Interfaces, Section 3 presents the results of rice seed HSI classification using both 3D-CNN and DNN, Section 4 presents a discussion, and Section 5 the conclusions.

2. Materials and Methods

This section describes the high-temperature treatments that the rice plants underwent. The rice (Oryza sativa) variety Kitaake (a temperate japonica cultivar) is used in this study. This section also presents the rice seed HSI calibration and visualization tools and the deep learning architectures for the analysis and classification of the images. Rice plants are moved to the high-temperature area after fertilization. The temperatures for the four different treatments are shown in Figure 1. This figure also shows auxin quantification in pmol/g for the four different treatments and the control group, for which the temperatures are normal. The new HDNT treatment is referred to as HDNT2 and is included in the image analysis. The auxin level increases as the temperature increases. To identify phenotypic changes as temperature increases, we analyze the hyperspectral images of sample seeds from different hours of exposure to high temperatures. The images of the seeds are acquired at 168, 180, 204, 216, 228, and 240 h of exposure to high temperatures after the fertilization stage of the plants. Hence, the images analyzed are from five different treatments, including the control, and from six different hours of exposure to high temperatures.

2.1. Hyperspectral Rice Seed Datasets

The hyperspectral images of rice seeds grown under high day/night temperature environments and the control environment are taken with a high-performance line-scan imaging spectrograph (Micro-Hyperspec Imaging Sensors, Extended VNIR version) [1]. This sensor covers the spectral range from 600 to 1700 nm. A hyperspectral camera that records the spectrum from 597.21 nm to 1703.93 nm with an inter-band sampling of 4.14 nm is used to record the rice seed images. Each image has 268 bands. The images are recorded as hypercubes of dimension 1250 × 450 × 268. Hypercubes are recorded for rice seeds grown in the control and high day and night temperature environments. Table 1 gives the details of the temperature treatments of the rice seeds and the number of seed images acquired in each category.

2.2. Calibration and Segmentation of Rice Seed HSI

We present the processing and DL classification framework of HSIs of rice seeds using Graphical User Interfaces (GUIs). Figure 2 presents the GUI for HSI rice seed calibration and segmentation. The calibration is conducted using the Shafer model [15] given by the equation,
$$I_c = \frac{I - I_d}{I_w - I_d}$$
where Ic is the calibrated constant reflection value at a predetermined wavelength, I is the original spectra in the form of intensity, Iw is the white reference image captured from a white Teflon tile with 100% reflection, and Id is the dark reference image with 0% reflection obtained with the light source turned off and the camera lens covered with a cap. The module 1 in the GUI presents the dimension of the image hypercube that includes the number of rows, number of columns, and number of bands. Module 2 is used for calibration of the image. In this sub-window, the dark and white reference images are uploaded, a button is provided for calibration, and another for saving the calibrated image. In module 3, the user can select the red, green, and blue bands as any three out of the 268 bands for displaying the image as an RGB color composite. Module 4 is for segmenting the seeds from the background, which opens a new interface shown in Figure 3 for extracting each seed and saving the image. The "Find" button is used to select the path in which the individual rice seed images are to be saved. In this case, there are four different datasets of rice seed high temperature treatments, and six subclasses of high temperature exposure duration for each of the four treatment class. There is one control class of HSIs for normal temperature grown rice seeds. The individual seed images are saved in a folder for training the 3D-CNN architecture. The tools button in module 4 in Figure 2 opens another interface shown in Figure 4. In the left panel of this module, each individual band can be visualized as an image. In the select image panel, an image from a temperature treatment and a subclass of the number of hours of exposure and band number can be input for the image to be displayed in the bottom left panel. For example, band number 75 from a HDNT2 rice seed HSI, exposed to a high day and night temperature for 204 h is shown on the left bottom side of the panel. 
By clicking on the spectrum button, individual pixel spectra of several pixels are plotted next to the image. On the rightmost bottom panel, the mean spectrum is plotted. The right top panel displays measurements of spatial–spectral phenotype characteristics of the seed. Furthermore, the spectrum button in Figure 4 enables the user to point the cursor at a pixel location to see the spectra of that pixel on the right side of this interface. The feature extraction button is used to calculate shape features of the seed image displayed in a panel in the GUI. In order to extract phenotype characteristics, the following features are extracted from the rice seed images [16].

2.3. Hyperspectral Seed Graphical User Interfaces

Four Graphical User Interfaces (GUIs) have been developed for the extraction of rice seed images and for feature extraction. Figure 3 shows the GUI for individual seed image extraction from a panel image containing several seeds. The user can input the path where the images have to be saved, as well as the category or class and subclass. The button for feature extraction is shown in the GUI in Figure 4. The features are described below:
If the boundary of the rice seed is represented by a polygonal curve, the diameter of a boundary is defined by
$$\mathrm{diameter}(B) = \max_{i,j} \, D(p_i, p_j)$$
where D is a distance measure and $p_i$ and $p_j$ are points on the boundary. The line segment connecting the two extreme points that comprise the diameter, together with its length and orientation, defines the major axis. The line perpendicular to the major axis is called the minor axis. The eccentricity, derived from the major and minor axes, is given by
$$\mathrm{eccentricity} = \sqrt{1 - (b/a)^2}, \quad a \ge b$$
where a is half the length of the major axis and b is half the length of the minor axis. The surface area of the rice seed is the area of an ellipse given by
$$\mathrm{area} = \pi a b$$
The perimeter of an ellipse is approximated by
$$p \approx \pi (a + b)$$
A more frequently used feature descriptor for shape is compactness given by
$$\mathrm{compactness} = p^2 / A$$
where A is the area. The roundness or circularity is a dimensionless measure given by
$$\mathrm{roundness} = \frac{4 \pi A}{p^2}$$
The mean intensity of the pixels in the seeds is given by
$$\bar{X} = \frac{1}{N} \sum_{i=1}^{N} x_i$$
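Under the assumption that the seed boundary is well approximated by an ellipse with semi-axes a and b, the descriptors above can be computed as follows (an illustrative sketch; the function and argument names are our own, not the released GUI code):

```python
import numpy as np

def shape_features(a, b):
    """Ellipse-based shape descriptors for a seed with semi-major axis a
    and semi-minor axis b (a >= b), following the formulas above."""
    eccentricity = np.sqrt(1.0 - (b / a) ** 2)
    area = np.pi * a * b
    perimeter = np.pi * (a + b)      # simple ellipse perimeter approximation
    compactness = perimeter ** 2 / area
    roundness = 4.0 * np.pi * area / perimeter ** 2
    return dict(eccentricity=eccentricity, area=area, perimeter=perimeter,
                compactness=compactness, roundness=roundness)
```

As a sanity check, a circle (a = b) has eccentricity 0 and roundness 1, the maximum of the roundness measure.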
Some of these features are shown in the top panel of the GUI in Figure 4. Selecting the classification button opens a fifth GUI, shown in Figure 5. This GUI implements two DL methods for seed image classification. The first is the 3D-CNN, which is an image-based classification method, and the second is a DNN, which is a pixel-based classification method.
Figure 5 shows the GUI for loading the training HSI rice seed images from the train folder, and for loading the testing HSI rice seed images from the test folder. The link to download the codes for the GUI is provided in the Supplementary Materials.

2.4. 3D-Convolutional Neural Network

The architecture of a 3D-CNN consists of 3D convolutional layers, 3D pooling layers, and fully connected layers. Each convolutional layer in a 3D-CNN applies multiple 3D filters to the input data. The result of each convolution is passed through a non-linear activation ReLu function to introduce non-linearities into the model. The resulting data is then subjected to 3D pooling, using a 3D pooling operation to reduce the dimensionality of the feature volume. Finally, the output of the last pooling layer is fed into one or more fully connected layers to produce the final output of the model.
3D convolutions are performed similarly to 2D convolutions in a CNN. However, in a 3D-CNN, the convolutional kernel has three dimensions instead of two. We have a 3D input I of size $C \times D \times H \times W$, where C is the number of channels, D is the depth of the image, H is the height of the image, and W is the width of the image. To perform a 3D convolution on I, a 3D kernel K of size $C' \times D' \times H' \times W'$ is used, where C′ is the number of output channels, D′ is the depth of the kernel, H′ is the height of the kernel, and W′ is the width of the kernel. The 3D convolution operation is defined as the following:
$$O_{i,j,k,c'} = \sum_{d=0}^{D'-1} \sum_{h=0}^{H'-1} \sum_{w=0}^{W'-1} \sum_{c=0}^{C-1} I_{i+d,\, j+h,\, k+w,\, c} \; K_{d, h, w, c, c'}$$
where O is the output of the convolution at position (i, j, k), I is the input volume, K is the kernel volume, D’ is the depth of the kernel, H’ is the height of the kernel, W’ is the width of the kernel, C is the number of input channels, C’ is the number of output channels, and d, h, w, c, and c’ are indices that iterate over the kernel and input dimensions. The 3D pooling operation is performed similarly to the 2D pooling operation in a CNN. The goal of 3D pooling is to reduce the dimensionality of the feature volume by subsampling the features in all three dimensions.
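As a didactic sketch, the convolution sum above can be implemented directly in NumPy (loops are kept for readability and variable names are our own; a real DL framework would use an optimized kernel):

```python
import numpy as np

def conv3d(inp, kernel):
    """Naive valid-mode 3D convolution (cross-correlation, as in CNNs).

    inp:    array of shape (D, H, W, C)        -- input volume, C channels
    kernel: array of shape (Dk, Hk, Wk, C, Co) -- Co output channels
    Returns an array of shape (D-Dk+1, H-Hk+1, W-Wk+1, Co).
    """
    D, H, W, C = inp.shape
    Dk, Hk, Wk, Ck, Co = kernel.shape
    assert C == Ck, "input and kernel channel counts must match"
    out = np.zeros((D - Dk + 1, H - Hk + 1, W - Wk + 1, Co))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                patch = inp[i:i + Dk, j:j + Hk, k:k + Wk, :]
                # sum over the kernel's spatial extent and the input channels
                out[i, j, k, :] = np.tensordot(
                    patch, kernel, axes=([0, 1, 2, 3], [0, 1, 2, 3]))
    return out
```

For an all-ones input and all-ones 2 × 2 × 2 kernel with two input channels, every output entry equals 16, the number of terms in the sum.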

2.5. 3D-CNN Training and Validations

Eighty percent of the images are used for training the 3D-CNN, ten percent for validation, and ten percent for testing. The whole hypercube image of each seed is used to train the network. Since the imaging system yields relatively few images, data augmentation is used to generate more images for training the 3D-CNN architecture. With data augmentation, the total number of images used for each class is 40. The workflow for the 3D-CNN-based classification is shown in Figure 6. The architecture consists of 3D convolution layers with Rectified Linear Unit (ReLU) and Max Pool operations. The output from the second 3D convolution layer is flattened and input to a fully connected layer with a Softmax operation. The output of this layer is the classification of the seed image into one of the trained categories.
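The paper does not specify which augmentation operations are used; as one plausible sketch, spectrum-preserving spatial flips and rotations can be applied to each seed hypercube (a hypothetical illustration, not the authors' pipeline):

```python
import numpy as np

def augment_hypercube(cube, rng):
    """Return one randomly augmented copy of a seed hypercube.

    cube: array of shape (rows, cols, bands). Spatial flips and 90-degree
    rotations leave the spectral (band) axis untouched, so each pixel's
    spectral signature is preserved.
    """
    ops = [
        lambda c: c,                              # identity
        lambda c: c[::-1, :, :],                  # vertical flip
        lambda c: c[:, ::-1, :],                  # horizontal flip
        lambda c: np.rot90(c, k=1, axes=(0, 1)),  # 90-degree spatial rotation
    ]
    return ops[rng.integers(len(ops))](cube)
```

Because every operation only permutes pixels spatially, augmented copies have the same pixel values (and hence the same band-wise statistics) as the original.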

2.6. Deep Neural Network

The inputs to the DNN are the pixel vectors X of size 268, corresponding to the number of bands. The pixels are arranged in a matrix of dimension (number of pixels × number of bands). Each row of pixels is input to the DNN with its corresponding initial weights. The following notation is used for the DNN:
Data: $X_{ij}^{(l)}$, Weights: $W_{ij}^{(l)}$,
where i is the pixel (row) index, j indexes the components of the pixel vector, and l indexes the layer.
The process that occurs in each hidden layer is defined by the following operation. Taking for example the first pixel (neuron) in the first hidden layer:
$$Z_1^{(1)} = X_{11}^{(1)} W_{11}^{(1)} + X_{12}^{(1)} W_{12}^{(1)} + \cdots + X_{1n}^{(1)} W_{1n}^{(1)}$$
In vector form, it is the following:
$$Z_1^{(1)} = X_1^{(1)} \cdot W_1^{(1)}$$
where
$$X_1^{(1)} = \left( X_{11}^{(1)}, X_{12}^{(1)}, \ldots, X_{1n}^{(1)} \right) \quad \text{and} \quad W_1^{(1)} = \left( W_{11}^{(1)}, W_{12}^{(1)}, \ldots, W_{1n}^{(1)} \right)$$
Adding a bias vector, the following general equation for the DNN is obtained:
$$Z = X W + b$$
where b is the bias tensor.
The next step is to pass Z through a non-linear activation function. For the DNN proposed in this work, the Rectified Linear Unit activation function (ReLu) is used:
$$y = \mathrm{ReLU}(Z)$$
Once the DNN is defined, we proceed to the training process using a loss function, categorical cross-entropy, with the Adam optimizer.
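The forward pass defined by the equations above (Z = XW + b followed by ReLU, with a Softmax output) can be sketched in NumPy as follows (layer sizes and names are illustrative assumptions, not the trained 12-hidden-layer model):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(X, weights, biases):
    """Forward pass of a fully connected DNN over pixel spectra.

    X: (num_pixels, 268) matrix of pixel spectral vectors.
    weights, biases: per-layer parameter lists; the last pair is the
    output layer, whose logits are passed through Softmax.
    """
    a = X
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)                 # Z = XW + b, then ReLU
    logits = a @ weights[-1] + biases[-1]
    # numerically stable Softmax over the class axis
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

Each output row is a probability distribution over the treatment classes, so its entries are non-negative and sum to 1.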

2.7. DNN Training and Validation

The architecture for the pixel-based classification scheme using the DNN is given in Figure 7. The input data provided to the DNN at the input layer is the pixel spectral vector; in this case, each pixel vector has a dimension of 268 (the number of bands). The DNN consists of one input layer, 12 hidden layers, and an output layer using a Softmax operation, as shown in Figure 7. For the classification of rice images using the DNN, 70% of the pixels of each rice seed from the treatments are used to train the model. The remaining 30% of the pixels are used for validation. Both DL methods are used in two types of classification experiments, classifying either whole rice seeds or pixels of rice seeds. The results are evaluated using the Average Accuracy (AA), Overall Accuracy (OA), and Kappa score for the 3D-CNN, and the precision, recall, and f1-score metrics for the DNN. These performance metrics are shown in Table 2. In Table 2, TP stands for true positive, FP for false positive, TN for true negative, and FN for false negative. P and N are the total number of positive and negative samples for each class, respectively. The Kappa score tests the inter-rater reliability of the results, i.e., how much of the accuracy is obtained by chance. Po is the proportion of observed agreement, and Pe is the proportion of agreement expected by chance. The precision score measures the proportion of true positive samples among all samples predicted as positive by the classifier. Precision is a useful metric for assessing classification models and is used together with recall and the f1-score. Recall is the proportion of true positive samples among all actual positive samples in the dataset. A higher recall indicates that the model is better at identifying positive samples. A higher f1-score indicates better overall performance, considering both precision and recall.
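The metrics described above can be computed from prediction labels as follows (a minimal sketch of the standard definitions, not the evaluation code released with the paper):

```python
import numpy as np

def kappa_score(y_true, y_pred, n_classes):
    """Cohen's kappa: (Po - Pe) / (1 - Pe), with Po the observed
    agreement and Pe the agreement expected by chance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    po = np.mean(y_true == y_pred)
    pe = sum((np.sum(y_true == c) / n) * (np.sum(y_pred == c) / n)
             for c in range(n_classes))
    return (po - pe) / (1.0 - pe)

def precision_recall_f1(y_true, y_pred, cls):
    """Per-class precision, recall, and f1 from TP/FP/FN counts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, with true labels [0, 0, 1, 1] and predictions [0, 0, 1, 0], class 1 has precision 1.0, recall 0.5, and f1 = 2/3, while the kappa score is 0.5.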
The Python codes for the DL framework for hyperspectral seed image calibration, preprocessing, segmentation, and DL neural network classification are available via a GitHub link provided in the Supplementary Materials.

3. Results

This section presents the results using the 3D-CNN and DNN architectures for classification. About 30 images from each category of rice seed HSI are used in the training, testing, and validation phases of the 3D-CNN architecture. All the rice seed images are extracted from a panel of seeds using the GUIs presented in Figure 2 and Figure 3. The GUI in Figure 4 is used to examine the spectra of the HSI seed images. The GUI in Figure 5 is used to upload the training and testing images for each experiment. Two types of experiments are conducted. The first is to classify the rice seeds from the five different treatments for each hour of exposure. The second is to classify the rice seeds for different hours of exposure under each treatment class. The classification results of the first experiment are given in Table 3, and those of the second experiment are given in Table 4.
Table 5 shows the classification results for the different temperature treatments of high day/night temperature for each exposure time using the DNN architecture. The classification maps for the different temperature treatments (classes) for each exposure time are shown in Figure 8, and the ground truths are shown in Figure 9, respectively. Table 6 gives the classification accuracies using the DNN for six different exposure times for each high day/night temperature treatment class. Figure 10 gives the classification map obtained from the DNN architecture for different exposure times, and Figure 11 gives the ground truth for this experiment, respectively.

4. Discussion

The accuracies are ordered in rows for the 3D-CNN in Table 3 and Table 4, and in columns for the DNN in Table 5 and Table 6. For the DNN architecture, the precision, recall, and f1-scores are also calculated, and hence the results are ordered in columns. The average overall classification accuracy from Table 3 for the classification of seeds from different temperature treatments using the 3D-CNN architecture is 91.33%, which shows that the 3D-CNN performs well in discriminating between the seeds from different treatments for each exposure time. The average overall classification accuracy from Table 4 for different exposure times of rice seeds over all the treatments is 89.50%.
The seeds from HDNT1 and HDNT2 treatments have lower accuracies for different exposure times. The average accuracy from Table 5 for different temperature treatments of rice seeds using the DNN architecture is 94.3%, which shows that the DNN performs well in discriminating between the seeds from different treatments for each exposure time. The precision scores are interpreted along with recall, which from Table 5 and Table 6 range between 0.86 and 1.0, showing that the classification model is good at identifying positive samples, and at the same time performs well in not identifying samples wrongly from another class; in other words, it has low false-positive errors. The f1-score is a combination of precision and recall, and ranges between 0.86 and 1.0, which shows a better overall performance of the DNN model. The lowest macro average and weighted average of precision, recall, and f1-score obtained are for the exposure times of 204 and 240 h. For 204 h of exposure time, the HDNT1 obtained the lowest accuracy values and for 240 h of exposure time, the treatments HDNT1 and HDNT2 obtained the lowest accuracy. Observing Figure 10, it can be seen that there are misclassifications in the rice seeds of the HDNT2 class into HDNT1 class and vice versa, which means that for these two exposure times these rice classes have similar pixel spectral characteristics.
A t-distributed Stochastic Neighbor Embedding (t-SNE) of the rice seed hyperspectral vectors has been performed and is displayed in Figure 12. The t-SNE plots are made by taking the first 50 components of a Principal Component Analysis (PCA) decomposition and performing a t-SNE visualization using the t-SNE1 (x-axis) and t-SNE2 (y-axis) components. These plots show that HSIs of rice seeds grown under high temperatures carry useful information in the different bands for distinguishing between seeds grown under different hours of exposure to high temperatures. Most of the seed hyperspectral vectors from the five high-temperature treatments are uncorrelated with each other, as seen from the clustering of the colors. Hence, imaging spectroscopy is a promising tool for studying the impact of warming temperatures on the planet where seed crops are grown, and for studying the phenotypic spatial–spectral changes in seeds due to increases in day and night temperatures.
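The PCA-then-t-SNE projection described above can be sketched with scikit-learn (parameter values such as the perplexity are assumptions; the paper does not report them):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def tsne_embed(pixels, n_pcs=50, random_state=0):
    """Project pixel spectra to 2-D: PCA to the first n_pcs components,
    then t-SNE on the reduced vectors.

    pixels: (num_pixels, num_bands) matrix of hyperspectral vectors.
    Returns a (num_pixels, 2) array of t-SNE1/t-SNE2 coordinates.
    """
    n_pcs = min(n_pcs, pixels.shape[0], pixels.shape[1])
    reduced = PCA(n_components=n_pcs).fit_transform(pixels)
    return TSNE(n_components=2, init="pca",
                perplexity=min(30, pixels.shape[0] - 1),
                random_state=random_state).fit_transform(reduced)
```

Coloring the resulting 2-D points by treatment label then reproduces plots of the kind shown in Figure 12.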
In Figure 12, the light green points are the HDNT1 pixels and the dark green points are the HDNT2 rice seed pixels, which show some correlation. These are the two classes of high-temperature-grown seeds that have similar values in the spectral bands, obtaining a lower accuracy compared to the other high-temperature-grown rice seeds. The classification accuracies using the DNN are 1% to 4% higher than those of the 3D-CNN because the DNN performs a pixel-based classification and there are more pixel samples to train the DNN. The 3D-CNN uses whole seed images for training the network. Whole seed HSI acquisition is costly and time consuming. However, the 3D-CNN also performs well in distinguishing between the rice seeds from different temperature treatments and high-temperature exposure durations.

Comparison with State-of-the-Art Rice Seed HSI Classification

There are many publications in the literature on classifying regular rice seed varieties from RGB images using machine learning and deep learning methods [17,18,19,20]. Table 7 summarizes the rice seed HSI classification literature. The work by Gao et al. [14] is the only one that classifies HSI rice seeds grown under high temperatures using a 3D-CNN. Moreover, they classify only one temperature class against control, totaling only two categories. In contrast, in this paper we have used the 3D-CNN to classify rice seed HSIs from four different temperature classes against the control and also to distinguish seeds exposed to different durations of high temperatures.
There are also no State-Of-the-Art (SOA) studies on DNNs for rice seed HSI pixel-based classification. The methods listed in Table 7 mostly use CNNs but not 3D-CNNs. The 3D-CNN architecture, as well as the DNN architecture presented in this framework, is able to perform a seed-based as well as a pixel-based classification of rice seeds grown under high day and/or night temperatures, obtaining higher accuracies than the SOA with a higher number of temperature treatment classes.
The DL framework for the processing and classification of rice seeds grown under high temperature can be extended to other varieties of rice seeds and grains to better understand the effects of warming temperatures on seed crops. The spectral phenotypic changes can be correlated with gene studies and used to evaluate the nutrition content of seeds grown under higher temperatures. The DL software application runs without the installation of Python software and can be used for the calibration, processing, segmentation, feature extraction, and classification of other types of hyperspectral seed images, such as wheat, maize, watermelon, beet, and soybean.

5. Conclusions

A DL framework for the calibration, preprocessing, segmentation, and classification of rice seed hyperspectral images has been presented. The 3D-CNN architecture performs well in classifying the rice seeds from different temperature treatments, as well as in classifying the seeds from subclasses of six temperature exposure durations. The DNN architecture uses the spectral information in each pixel of the rice seed images and performs a pixel-based classification, obtaining a higher accuracy for all four temperature treatments as well as for subclasses of varying temperature exposure durations. The rice seed hyperspectral images from the highest day and night temperatures of 36/32 °C give the lowest accuracy, showing that higher temperatures alter the seed spectral–spatial characteristics drastically, making these seeds difficult to discriminate from seeds in the other treatment classes. The HSI and DL methods presented here can be applied in precision agriculture to study the impact of environmental variables on grain varieties.

Supplementary Materials

The codes for the DL framework for hyperspectral seed image calibration, preprocessing, segmentation, and classification are available at: https://gitfront.io/r/vido6/vC64GLsxCDZx/classificationRice/, accessed on 23 March 2023.

Author Contributions

Conceptualization, V.D.-M., J.O.-S. and V.M.; methodology, V.D.-M., J.O.-S. and V.M.; software, V.D.-M. and J.O.-S.; validation, V.D.-M. and J.O.-S.; formal analysis, V.D.-M. and J.O.-S.; investigation, V.M., V.D.-M. and J.O.-S.; resources, V.M. and H.W.; data curation, B.K.D., V.D.-M. and J.O.-S.; writing—original draft preparation, V.M., V.D.-M. and J.O.-S.; writing—review and editing, V.M.; visualization, V.D.-M. and J.O.-S.; supervision, V.M. and H.W.; project administration, V.M. and H.W.; funding acquisition, V.M. and H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Foundation (NSF), grant number 1736192.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, P.; Chen, Y.H.; Lu, J.; Zhang, C.Q.; Liu, Q.Q.; Li, Q.F. Genes and Their Molecular Functions Determining Seed Structure, Components, and Quality of Rice. Rice 2022, 15, 18. [Google Scholar] [CrossRef] [PubMed]
  2. Dhatt, B.K.; Abshire, N.; Paul, P.; Hasanthika, K.; Sandhu, J.; Zhang, Q.; Obata, T.; Walia, H. Metabolic Dynamics of Developing Rice Seeds Under High Night-Time Temperature Stress. Front. Plant Sci. 2019, 10, 1443. [Google Scholar] [CrossRef] [PubMed]
  3. Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef] [PubMed]
  4. Xu, P.; Tan, Q.; Zhang, Y.; Zha, X.; Yang, S.; Yang, R. Research on Maize Seed Classification and Recognition Based on Machine Vision and Deep Learning. Agriculture 2022, 12, 232. [Google Scholar] [CrossRef]
  5. Ruslan, R.; Khairunniza-Bejo, S.; Jahari, M.; Ibrahim, M.F. Weedy Rice Classification Using Image Processing and a Machine Learning Approach. Agriculture 2022, 12, 645. [Google Scholar] [CrossRef]
  6. Elmasry, G.; Mandour, N.; Al-Rejaie, S.; Belin, E.; Rousseau, D. Recent applications of multispectral imaging in seed phenotyping and quality monitoring—An overview. Sensors 2019, 19, 1090. [Google Scholar] [CrossRef] [PubMed]
  7. Fabiyi, S.D.; Vu, H.; Tachtatzis, C.; Murray, P.; Harle, D.; Dao, T.K.; Andonovic, I.; Ren, J.; Marshall, S. Varietal Classification of Rice Seeds Using RGB and Hyperspectral Images. IEEE Access 2020, 8, 22493–22505. [Google Scholar] [CrossRef]
  8. Qiu, Z.; Chen, J.; Zhao, Y.; Zhu, S.; He, Y.; Zhang, C. Variety identification of single rice seed using hyperspectral imaging combined with convolutional neural network. Appl. Sci. 2018, 8, 212. [Google Scholar] [CrossRef]
  9. Onmankhong, J.; Ma, T.; Inagaki, T.; Sirisomboon, P.; Tsuchikawa, S. Cognitive spectroscopy for the classification of rice varieties: A comparison of machine learning and deep learning approaches in analysing long-wave near-infrared hyperspectral images of brown and milled samples. Infrared Phys. Technol. 2022, 123, 104100. [Google Scholar] [CrossRef]
  10. Zhou, S.; Sun, L.; Xing, W.; Feng, G.; Ji, Y.; Yang, J.; Liu, S. Hyperspectral imaging of beet seed germination prediction. Infrared Phys. Technol. 2020, 108, 103363. [Google Scholar] [CrossRef]
  11. Chatnuntawech, I.; Tantisantisom, K.; Khanchaitit, P.; Boonkoom, T.; Bilgic, B.; Chuangsuwanich, E. Rice Classification Using Spatio-Spectral Deep Convolutional Neural Network. arXiv 2018, arXiv:1805.11491. [Google Scholar]
  12. Dar, R.A.; Bhat, D.; Assad, A.; Islam, Z.U.; Gulzar, W.; Yaseen, A. Classification of Rice Grain Varieties Using Deep Convolutional Neural Network Architectures. SSRN Electron. J. 2022. [Google Scholar] [CrossRef]
  13. Wu, N.; Liu, F.; Meng, F.; Li, M.; Zhang, C.; He, Y. Rapid and Accurate Varieties Classification of Different Crop Seeds Under Sample-Limited Condition Based on Hyperspectral Imaging and Deep Transfer Learning. Front. Bioeng. Biotechnol. 2021, 9, 696292. [Google Scholar] [CrossRef]
  14. Gao, T.; Chandran, A.K.N.; Paul, P.; Walia, H.; Yu, H. HyperSeed: An End-to-End Method to Process Hyperspectral Images of Seeds. Sensors 2021, 21, 8184. [Google Scholar] [CrossRef]
  15. Polder, G.; Van Der Heijden, G.W.; Keizer, L.P.; Young, I.T. Calibration and Characterisation of Imaging Spectrographs. J. Near Infrared Spectrosc. 2003, 11, 193–210. [Google Scholar] [CrossRef]
  16. Woods, R.E.; Gonzalez, R.C. Digital Image Processing; Pearson: London, UK, 2019. [Google Scholar]
  17. Satoto, B.D.; Anamisa, D.R.; Yusuf, M.; Sophan, M.K.; Khairunnisa, S.O.; Irmawati, B. Rice seed classification using machine learning and deep learning. In Proceedings of the 2022 7th International Conference on Informatics and Computing (ICIC 2022), Denpasar, Indonesia, 8–9 December 2022. [Google Scholar] [CrossRef]
  18. Aukkapinyo, K.; Sawangwong, S.; Pooyoi, P.; Kusakunniran, W. Localization and Classification of Rice-grain Images Using Region Proposals-based Convolutional Neural Network. Int. J. Autom. Comput. 2020, 17, 233–246. [Google Scholar] [CrossRef]
  19. Panmuang, M.; Rodmorn, C.; Pinitkan, S. Image Processing for Classification of Rice Varieties with Deep Convolutional Neural Networks. In Proceedings of the 16th International Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP 2021), Ayutthaya, Thailand, 21–23 December 2021; pp. 1–6. [Google Scholar] [CrossRef]
  20. Tuğrul, B. Classification of Five Different Rice Seeds Grown. Commun. Fac. Sci. Univ. Ank. Ser. A2-A3 Phys. Sci. Eng. 2022, 64, 40–50. [Google Scholar] [CrossRef]
Figure 1. Auxin quantification at different hours of exposure of rice seeds to high temperatures.
Figure 2. GUI for hyperspectral seed image calibration and segmentation. There are four modules with functions: load image, save calibrated image, display RGB image, and segment image. The seeds are visible in the image panels of modules 3 and 4.
Figure 3. GUI for hyperspectral seed image extraction.
Figure 4. GUI for hyperspectral seed image spectrum and feature extraction. The colors indicate pixel vectors of sample pixels in the rice seed HSI, and the single blue spectrum is the mean pixel spectral vector of all the sample pixels.
Figure 5. Graphical user interface for seed image classification using 3D-CNN and DNN.
Figure 6. 3D-CNN rice seed classification architecture.
Figure 7. DNN rice seed classification architecture.
Figure 8. Classification map obtained from DNN classification of HSI of rice seeds for different high day/night temperature treatments.
Figure 9. Ground truth for different high day/night temperature treatments.
Figure 10. Classification map obtained from DNN classification of HSI of rice seeds for different exposure times for each treatment.
Figure 11. Ground truth for different exposure times for each high day/night temperature treatment class.
Figure 12. t-SNE visualization using two bands for the rice seed HSI from all treatments for (a) 204 h and (b) 240 h of high temperature exposure durations.
Table 1. Temperature treatments classes of rice seeds.
| Treatment Class | Day/Night Temperature (°C) | Number of Images |
|---|---|---|
| Control | 28/23 | 40 |
| High day/night temperature 1 (HDNT1) | 36/32 | 40 |
| High night temperature (HNT) | 28/28 | 40 |
| High day temperature (HDT) | 36/23 | 40 |
| High day/night temperature 2 (HDNT2) | 36/28 | 40 |
Table 2. Performance metrics used for 3D-CNN and DNN.
| Performance Metric | Formula |
|---|---|
| Precision | TP / (TP + FP) |
| Recall | TP / (TP + FN) |
| F1-Score | 2 × Precision × Recall / (Precision + Recall) |
| Average Accuracy (AA) | (TP + TN) / (TP + TN + FP + FN) |
| Overall Accuracy (OA) | (TP + TN) / (P + N) |
| Kappa (K) | (Po − Pe) / (1 − Pe) |
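The metrics of Table 2 follow directly from a confusion matrix. A small NumPy sketch (the function name is illustrative, not from the released code):

```python
import numpy as np

def classification_metrics(cm):
    """Compute the Table 2 metrics from a square confusion matrix
    (rows = true classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=np.float64)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)          # TP / (TP + FP), per class
    recall = tp / cm.sum(axis=1)             # TP / (TP + FN), per class
    f1 = 2 * precision * recall / (precision + recall)
    oa = tp.sum() / cm.sum()                 # overall accuracy
    # Cohen's kappa: observed agreement corrected for chance agreement Pe
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / cm.sum() ** 2
    kappa = (oa - pe) / (1 - pe)
    return precision, recall, f1, oa, kappa
```

For a perfectly diagonal confusion matrix all metrics equal 1; kappa drops toward 0 as predictions approach chance-level agreement.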
Table 3. Classification accuracy for different treatments of high day and night temperatures.
| Exposure (h) | HDT | HDNT1 | HDNT2 | HNT | Control | AA | OA | K |
|---|---|---|---|---|---|---|---|---|
| 168 | 0.93 | 0.91 | 0.93 | 0.93 | 0.92 | 0.92 | 0.91 | 0.91 |
| 180 | 0.91 | 0.90 | 0.89 | 0.92 | 0.89 | 0.91 | 0.90 | 0.89 |
| 204 | 0.94 | 0.93 | 0.93 | 0.94 | 0.94 | 0.94 | 0.94 | 0.94 |
| 216 | 0.92 | 0.91 | 0.91 | 0.90 | 0.93 | 0.92 | 0.92 | 0.91 |
| 228 | 0.91 | 0.91 | 0.90 | 0.91 | 0.91 | 0.91 | 0.90 | 0.91 |
| 240 | 0.92 | 0.92 | 0.92 | 0.93 | 0.92 | 0.92 | 0.91 | 0.92 |
Table 4. Classification accuracy for different exposure times to high temperatures (hours of exposure).
| Class | 168 h | 180 h | 204 h | 216 h | 228 h | 240 h | AA | OA | K |
|---|---|---|---|---|---|---|---|---|---|
| HDNT1 | 0.89 | 0.88 | 0.88 | 0.89 | 0.89 | 0.89 | 0.89 | 0.89 | 0.88 |
| HDNT2 | 0.91 | 0.90 | 0.92 | 0.90 | 0.89 | 0.90 | 0.90 | 0.89 | 0.88 |
| HDT | 0.91 | 0.92 | 0.92 | 0.90 | 0.91 | 0.90 | 0.91 | 0.91 | 0.90 |
| HNT | 0.91 | 0.92 | 0.92 | 0.90 | 0.91 | 0.91 | 0.91 | 0.90 | 0.89 |
Table 5. Classification results for different high day/night temperature treatments using the DNN architecture.
Precision
| Class | 168 h | 180 h | 204 h | 216 h | 228 h | 240 h |
|---|---|---|---|---|---|---|
| HDT | 0.97 | 1.00 | 0.92 | 1.00 | 0.99 | 0.98 |
| HDNT1 | 0.92 | 0.96 | 0.85 | 0.96 | 0.97 | 0.86 |
| HDNT2 | 0.95 | 0.90 | 0.93 | 0.91 | 0.96 | 0.86 |
| HNT | 0.93 | 0.96 | 0.95 | 0.96 | 0.92 | 0.96 |
| Control | 0.96 | 1.00 | 0.94 | 0.98 | 0.98 | 0.96 |
| Macro average | 0.95 | 0.96 | 0.92 | 0.96 | 0.96 | 0.92 |
| Weighted average | 0.95 | 0.97 | 0.92 | 0.96 | 0.97 | 0.92 |

Recall
| Class | 168 h | 180 h | 204 h | 216 h | 228 h | 240 h |
|---|---|---|---|---|---|---|
| HDT | 0.97 | 1.00 | 0.98 | 0.96 | 1.00 | 1.00 |
| HDNT1 | 0.97 | 0.91 | 0.86 | 0.96 | 0.95 | 0.86 |
| HDNT2 | 0.88 | 0.95 | 0.89 | 0.95 | 0.96 | 0.84 |
| HNT | 0.95 | 0.99 | 0.88 | 0.95 | 0.96 | 0.90 |
| Control | 0.96 | 0.99 | 0.96 | 0.99 | 0.97 | 0.99 |
| Macro average | 0.94 | 0.97 | 0.92 | 0.96 | 0.97 | 0.92 |
| Weighted average | 0.95 | 0.97 | 0.92 | 0.96 | 0.97 | 0.92 |

F1-score
| Class | 168 h | 180 h | 204 h | 216 h | 228 h | 240 h |
|---|---|---|---|---|---|---|
| HDT | 0.97 | 1.00 | 0.95 | 0.98 | 1.00 | 0.99 |
| HDNT1 | 0.94 | 0.94 | 0.85 | 0.96 | 0.96 | 0.86 |
| HDNT2 | 0.91 | 0.92 | 0.91 | 0.93 | 0.96 | 0.85 |
| HNT | 0.94 | 0.97 | 0.92 | 0.96 | 0.94 | 0.93 |
| Control | 0.96 | 0.99 | 0.95 | 0.98 | 0.97 | 0.97 |
| Accuracy | 0.95 | 0.97 | 0.92 | 0.96 | 0.97 | 0.92 |
| Macro average | 0.94 | 0.96 | 0.91 | 0.96 | 0.97 | 0.92 |
| Weighted average | 0.95 | 0.97 | 0.92 | 0.96 | 0.97 | 0.92 |
Table 6. Classification results for different exposure times for each temperature treatment using the DNN architecture.
Precision
| Exposure (h) | HDT | HDNT1 | HDNT2 | HNT |
|---|---|---|---|---|
| 168 | 0.95 | 0.89 | 0.91 | 0.91 |
| 180 | 0.99 | 0.91 | 0.93 | 0.98 |
| 204 | 0.96 | 0.86 | 0.81 | 0.94 |
| 216 | 0.86 | 0.83 | 0.86 | 0.89 |
| 228 | 0.86 | 0.90 | 0.90 | 0.92 |
| 240 | 0.98 | 0.94 | 0.78 | 0.91 |
| Macro average | 0.94 | 0.89 | 0.87 | 0.93 |
| Weighted average | 0.95 | 0.89 | 0.87 | 0.93 |

Recall
| Exposure (h) | HDT | HDNT1 | HDNT2 | HNT |
|---|---|---|---|---|
| 168 | 0.96 | 0.83 | 0.83 | 0.92 |
| 180 | 0.97 | 0.98 | 0.92 | 0.96 |
| 204 | 0.96 | 0.87 | 0.88 | 0.92 |
| 216 | 0.99 | 0.92 | 0.76 | 0.89 |
| 228 | 0.90 | 0.82 | 0.90 | 0.96 |
| 240 | 0.91 | 0.89 | 0.93 | 0.91 |
| Macro average | 0.95 | 0.89 | 0.87 | 0.93 |
| Weighted average | 0.95 | 0.89 | 0.87 | 0.93 |

F1-score
| Exposure (h) | HDT | HDNT1 | HDNT2 | HNT |
|---|---|---|---|---|
| 168 | 0.95 | 0.86 | 0.87 | 0.91 |
| 180 | 0.98 | 0.94 | 0.92 | 0.97 |
| 204 | 0.96 | 0.87 | 0.84 | 0.93 |
| 216 | 0.92 | 0.87 | 0.81 | 0.89 |
| 228 | 0.88 | 0.86 | 0.90 | 0.94 |
| Accuracy | 0.95 | 0.89 | 0.87 | 0.93 |
| Macro average | 0.94 | 0.89 | 0.87 | 0.93 |
| Weighted average | 0.95 | 0.89 | 0.87 | 0.93 |
Table 7. State-of-the-Art in rice seed HSI Classification.
| Year | Authors | Algorithm | Classes | Overall Accuracy |
|---|---|---|---|---|
| 2021 | T. Gao et al. [14] | Rice seed HSI classification using 3D-CNN (high temperature) | 2 | 97.5% |
| 2021 | T. Gao et al. [14] | Rice pixel-based HSI classification (high temperature) | 2 | 94.21% |
| 2018 | Z. Qiu et al. [8] | Regular rice seed HSI classification using CNN | 4 | 87% |
| 2018 | I. Chatnuntawech et al. [11] | Regular rice seed HSI classification using ResNet-B | 6 | 91.09% |
| 2023 | Proposed method | High-temperature-grown rice seed HSI classification using 3D-CNN | 6 | 91.33% |
| 2023 | Proposed method | High-temperature-grown rice seed HSI classification using DNN | 6 | 94.83% |