Article

Phenotypic-Based Maturity Detection and Oil Content Prediction in Xiangling Walnuts

Puyi Guo, Fengjun Chen, Xueyan Zhu, Yue Yu and Jianhui Lin
School of Technology, Beijing Forestry University, Beijing 100083, China
* Author to whom correspondence should be addressed.
Agriculture 2024, 14(8), 1422; https://doi.org/10.3390/agriculture14081422
Submission received: 19 July 2024 / Revised: 12 August 2024 / Accepted: 20 August 2024 / Published: 22 August 2024
(This article belongs to the Section Digital Agriculture)

Abstract

The maturity grading of walnuts during harvesting currently relies on experience. In this paper, walnut images were collected in a natural environment to construct datasets, and deep learning algorithms were combined with internal physical and chemical indexes of walnuts to study walnut maturity detection and oil content prediction methods. First, two walnut image acquisition schemes were designed, and a total of 9504 images were collected from 23 August to 21 September 2021; the dataset was expanded to 18,504 images through data preprocessing and image enhancement. A walnut ripeness detection method based on the self-supervised Gaussian attention network GATCluster is proposed: ripeness criteria are developed through unsupervised clustering, and the validity of the criteria is verified by analysis of variance (ANOVA). The maturity detection accuracy on a test set of 1500 images is 88.33%. Second, a walnut oil content prediction method based on an improved ResNet34 is proposed, in which the Squeeze-and-Excitation Networks (SENet) channel attention mechanism and a convolutional self-attention module are introduced to improve feature extraction. Prediction results on 50 images show a root mean square error of 2.96, a mean absolute percentage error of 0.103, and a coefficient of determination of 0.8822. The experiments show that the method performs well in predicting the oil content of walnuts at different maturity levels.

1. Introduction

Walnuts, belonging to the genus Juglans in the family Juglandaceae, are one of the world’s “four major nuts”. Currently, walnut oil in China is primarily extracted through the physical pressing of walnuts after the green husks are removed [1]. The green husks can be processed into cosmetics, and the entire walnut has high economic value, indicating significant development potential for the walnut industry. However, the primary challenge constraining the development of China’s walnut oil industry is the heavy reliance on traditional experience for the harvesting process, which follows the solar terms. Therefore, there is an urgent need for walnut maturity classification to guide frontline farmers in harvesting, thereby improving the oil yield, quality, and production of walnut oil.
With the continuous improvement of computing power and the development of deep learning, automated maturity classification has become feasible. Convolutional neural networks (CNNs), as representative deep learning networks, have developed rapidly in recent years and have made significant contributions in agriculture and forestry. For example, Sadeghi et al. proposed a cross-instance-guided contrastive clustering method that considers cross-sample relationships to increase the number of positive pairs [2]. Barburiceanu et al. developed a CNN-based algorithm to classify leaf diseases by extracting leaf vein features [3]. Additionally, Jiao et al. designed an anchor-free convolutional neural network for multi-class agricultural pest detection [4]. Minas et al. of Colorado State University, USA, used a multivariate prediction model based on near-infrared spectroscopy to nondestructively assess the internal quality and ripeness of peaches [5]. Sripaurya et al. of Prince of Songkla University, Thailand, used a portable near-infrared device and classified banana ripeness by comparing the measured data against predetermined thresholds corresponding to each ripeness stage [6]. Khisanudin et al. (2020) from Ahmad Dahlan University used an HSV color space model combined with a naive Bayes classification algorithm to identify dragon fruit ripeness with an average accuracy of 86.6% [7]. Syaifuddin et al. (2020) from the State University of Yogyakarta, Indonesia, extracted palm phenotypic features and built a fuzzy model to detect palm ripeness with an accuracy of 71.4% [8]. Guo et al. compared multispectral and hyperspectral imagery and multiple machine learning methods, finding that full hyperspectral spectra combined with random forest (RF) were the most accurate for predicting maize grain yield under high vegetation cover [9]; deep learning methods also show great promise in aerial remote sensing applications [10].
Furthermore, with the development of regression algorithms, they have been applied not only in data mining and data analysis but also in fields such as agriculture and forestry, biology, and mechanical engineering. For instance, Silva et al. proposed a method using artificial neural networks (ANNs) and regression models to model tree height and eucalyptus volume yield in the agricultural and pastoral systems (AGP) of the Zona da Mata Mineira region in Brazil, aiming to predict eucalyptus volume yield [11]. The experimental results indicated that this method could be used for height and volume estimation of eucalyptus in the AGP of the study area. Fiorentino et al. from the Polytechnic University of Marche, Italy, proposed a framework comprising a region proposal CNN for head localization and centering and a regression CNN for accurate delineation of the head circumference (HC) [12]. Tang et al. from Vanderbilt University, USA, proposed the BUSN algorithm for body part regression, which requires no labeling and uses its own prediction results as the supervision scheme to train the model [13]. Doan et al. from Sungkyunkwan University, South Korea, proposed a self-guided ordinal regression neural network (SONNET) that automatically and robustly performs nucleus segmentation and classification simultaneously [14]. He et al. at the University of Illinois, USA, proposed a cascaded fully convolutional regression network (C-FCRN) based on a density regression model (DRM), with an auxiliary convolutional neural network (AuxCNN) assisting the training of the intermediate layers of the C-FCRN; it automatically counts cells in microscope images with better results than traditional algorithms [15].
The recognition of walnut maturity characteristics shares significant similarities with the above examples [16,17]. Therefore, the objective of this paper is to utilize and improve CNN and regression algorithm-related technologies to address the current lack of specific walnut maturity standards, which leads to indiscriminate harvesting during the harvest period and consequently low oil yield. Taking Xiangling walnuts from Huanglong County, Yan’an City, Shaanxi Province as the research subject, this study aims to investigate walnut maturity detection and oil yield prediction. First, a deep clustering algorithm will be used to cluster walnut images to establish walnut maturity standards [18]; then, variance analysis will be conducted on the physical and chemical indicators of walnuts to verify the maturity standards. Subsequently, the performance of the walnut maturity detection model will be evaluated using the maturity standards. Finally, a regression algorithm based on deep learning will be proposed to predict the walnut oil yield.

2. Materials and Methods

2.1. Materials

2.1.1. Walnut Original Image Acquisition

Based on the cultivation method and the specific growth conditions of Xiangling walnuts in Huanglong County, Yan'an City, Shaanxi Province, two image acquisition schemes were formulated: one to collect images of walnuts in their natural hanging state, and the other to collect images of walnuts whose internal quality parameters needed to be measured.
Walnut Image Acquisition Program I: (1) Images were acquired from 23 August to 21 September 2021 in Yan'an City, Shaanxi Province, at latitude 35°43′43′′ N and longitude 109°53′50′′ E. (2) After the fruit entered the ripening stage, a cell phone was used to take pictures with the camera held parallel to the surface of the characteristic part that best shows the maturity of the fruit: for a fruit with a cracked epidermis, parallel to the cracked part; for an exposed fruit, parallel to the exposed part; and for an intact fruit, parallel to the side of the fruit at any angle. (3) Shooting was carried out in this way every other day, each time from 8:00 a.m. to 11:00 a.m. and 2:00 p.m. to 5:00 p.m., with 500 images taken per day.
Walnut Image Acquisition Program II: (1) On 21 May 2021, Xiangling walnuts entering the fruiting period were manually marked, totaling 252 fruits that set at the same time. (2) After the fruit entered the ripening period, from 23 August 2021 onwards, photographs were taken every three days; each time, 36 fruits were selected and numbered 1–36, and each of the 36 fruits was captured both in the hanging state and on a white board. (3) Walnut images in the hanging state were shot with the same method as in Program I. (4) For the white-board images, the walnuts were taken in the order of their serial numbers and placed on the white board in their natural state, with the cut side facing up. The cell phone camera and the white board were kept horizontal, the distance between the camera and the walnut was 10 cm, and the walnuts were placed in the same position every time.
The collected walnut images were named by their photographed serial number + date + fruit state, and the images were stored in JPG format with a resolution of 3072 pixels × 4096 pixels. A total of 9504 images of the original walnut dataset were finally collected and stored in the archive according to their shooting date.
An example of walnut dataset images is shown in Figure 1.

2.1.2. Construction of Walnut Maturity Detection Data Set

In order to improve the image diversity of the walnut dataset and the generalization ability of the model, image processing was carried out in this paper. First, the image brightness was uniformly adjusted to prevent brightness differences from affecting the training effect. Second, the adaptive histogram equalization (AHE) algorithm was used to enhance image details and improve feature extraction. In addition, images were rotated at different angles to increase data diversity and improve the generalization ability of the model. Salt-and-pepper noise was also added, randomly turning pixels black or white, to suppress high-frequency image features, avoid overfitting, and improve the accuracy and robustness of the model. Finally, segmented leaf images were used to simulate occlusion and increase the number of occluded images, improving the model's ability to extract features from partially occluded fruits. The principle of the occlusion simulation is that the simulated masks only partially cover the fruit maturity features, while severely occluded fruits are fully masked out so that only the most intact walnuts are retained. Through the above processing, the walnut image dataset was optimized to provide a better basis for model training.
The original dataset of 9504 images was expanded using image enhancement techniques: 3000 images were added by rotation, 3000 by adding salt-and-pepper noise, and 3000 by simulating occlusion, and the brightness and contrast of all images were unified. The resulting walnut ripeness detection dataset consists of 18,504 images. The walnut images after image enhancement [19,20] are shown in Figure 2.
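The enhancement operations described above can be sketched as follows (a minimal illustration using OpenCV and NumPy; the function names and parameter values are assumptions for demonstration, not the exact settings used in this study).

```python
import cv2
import numpy as np

def rotate(img, angle_deg):
    """Rotate an image about its center, keeping the original size."""
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(img, m, (w, h), borderMode=cv2.BORDER_REFLECT)

def salt_and_pepper(img, amount=0.01):
    """Randomly flip a fraction of pixels to black (pepper) or white (salt)."""
    noisy = img.copy()
    mask = np.random.rand(*img.shape[:2])
    noisy[mask < amount / 2] = 0        # pepper
    noisy[mask > 1 - amount / 2] = 255  # salt
    return noisy

def equalize(img):
    """Adaptive histogram equalization (CLAHE) applied to the luminance channel."""
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```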
The dataset obtained according to Walnut Image Acquisition Scheme I was divided into a training set of 16,752 images and a test set A of 1500 images. Because the maturity detection method proposed in this paper requires internal quality parameters of walnuts to verify the reasonableness of the developed maturity standard, the 252 walnut images collected in the hanging state according to Acquisition Program II constitute test set B, which serves as the validation data for the walnut maturity standard. The final test set therefore totals 1752 walnut images.

2.1.3. Construction of the Walnut Oil Content Prediction Dataset

On 21 May 2021, when the fruits entered the fruit-setting stage, 252 Xiangling walnut fruits that set at the same time were manually labeled. After the fruit entered the ripening stage, 36 fruits were selected each time and labeled with serial numbers 1–36, and images of the 36 fruits were taken in sequence both in the hanging state and on the white board. In this way, 504 original images were acquired for the walnut oil content prediction dataset. The 252 collected fruits were grouped by date, and the walnuts collected on the same date were divided into three groups by serial number for parameter measurement. The measurements correspond to the 504 walnut images by date and serial number [21,22].
The 504 original images correspond to the oil content measurements of the 252 walnut fruits. Among them, 100 walnut images with a uniform distribution of the three maturity categories were selected as the test set, and the remaining 404 images were used as the training set. The dataset was then expanded by scaling the image size, random rotation, and noise addition. The walnut oil content prediction dataset after image enhancement is shown in Figure 3.
The 404 original training images were expanded by rotation to a further 1596 images and by adding salt-and-pepper noise to a further 404 images, so the final training set contains 2404 images, corresponding to the oil content of 202 walnuts. The test set contains 100 images, corresponding to the oil content of 50 walnuts. The completed walnut oil content prediction dataset thus contains 2504 images in total.

2.2. Methods

This section consists of two parts: the design of the walnut fruit maturity detection method and the design of the walnut fruit oil content prediction method. The following subsections introduce the construction principles of the two methods.

2.2.1. Algorithm for Clustering Walnut Maturity

The images in the dataset were first characterized. Because the color of the walnut green husk does not change significantly during the ripening period and the fine cracks are not obvious, high demands are placed on the model's feature extraction ability. In this paper, the deep clustering model GATCluster [23] is selected as the network to solve the walnut ripeness clustering problem. The GATCluster model clustering process is shown in Figure 4.
Walnuts have similar phenotypic features such as rind color, shape, and size, and the differences in phenotypic features between adjacent maturity levels are small, so the requirements on the model's feature extraction ability are stringent. The task of the attention module is to find the most discriminative local regions, and this module is loaded into the model. The GATCluster attention feature module is shown in Figure 5.
Taking the features extracted from ResNet34 as input, a fully connected layer is used to estimate the parameter Φ. The model focuses on discriminating localized regions by multiplying each channel of the convolved features with the attention map. The GATCluster [24] network locates discriminative localized regions by means of a two-dimensional Gaussian kernel. The Gaussian kernel function K(u; Φ) is shown in Equation (1).
$$K(u;\Phi) = e^{-\frac{1}{\alpha}(u-\mu)^{T}\Sigma^{-1}(u-\mu)}, \quad x = 1, \ldots, H, \ \ y = 1, \ldots, W, \quad (1)$$
where $u = [x, y]^T$ denotes the coordinate vector and $\Phi = [\mu, \Sigma]$ denotes the parameters of the Gaussian kernel; $\mu = [\mu_x, \mu_y]^T$ is the mean vector defining the most discriminative position; $\Sigma \in \mathbb{R}^{2\times 2}$ is the covariance matrix defining the shape and size of the localized region; $\alpha$ is a predefined hyperparameter; and $H$ and $W$ are the height and width of the attention map. The weighted features are mapped to the attention features using a global pooling layer and a fully connected layer.
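To illustrate Equation (1), the following minimal PyTorch sketch builds such a two-dimensional Gaussian attention map on a normalized coordinate grid and weights a feature map with it; the coordinate normalization and parameter shapes are assumptions for demonstration, not the exact GATCluster implementation.

```python
import torch

def gaussian_attention_map(mu, sigma, height, width, alpha=1.0):
    """Build a 2D Gaussian attention map K(u; Phi) on an H x W grid.

    mu:    (2,) mean vector [mu_x, mu_y] in normalized [0, 1] coordinates
    sigma: (2, 2) covariance matrix defining the region's shape and size
    """
    ys = torch.linspace(0.0, 1.0, height)
    xs = torch.linspace(0.0, 1.0, width)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    u = torch.stack([grid_x, grid_y], dim=-1)         # (H, W, 2) coordinate vectors
    diff = (u - mu).reshape(-1, 2)                    # (H*W, 2)
    inv_sigma = torch.linalg.inv(sigma)
    maha = (diff @ inv_sigma * diff).sum(dim=-1)      # (u - mu)^T Sigma^-1 (u - mu)
    return torch.exp(-maha / alpha).reshape(height, width)

# Example with hypothetical parameters: a region centered on the image middle.
# mu = torch.tensor([0.5, 0.5]); sigma = torch.eye(2) * 0.05
# att = gaussian_attention_map(mu, sigma, 28, 28)
# attended = feats * att   # feats: (C, 28, 28) feature map weighted channel-wise
```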
GATCluster uses a soft attentional loss function with the formula shown in Equation (2).
$$L_A(l_i^{a}, \hat{l}_i^{a}) = -\frac{1}{k}\sum_{h=1}^{k}\left[\hat{l}_{ih}^{a}\log\!\left(l_{ih}^{a}\right) + \left(1-\hat{l}_{ih}^{a}\right)\log\!\left(1-l_{ih}^{a}\right)\right],$$
$$\hat{l}_{ih}^{a} = \frac{l_{ih}^{2}/z_h}{\sum_{h'} l_{ih'}^{2}/z_{h'}}, \qquad z_h = \sum_{j=1}^{M} l_{jh}, \qquad h = 1, 2, \ldots, k, \quad (2)$$
where $l_i^{a}$ is the output of the attention module, $\hat{l}_i^{a}$ is the regressed feature label, and $z_h$ is the cluster assignment frequency, which prevents empty clusters from occurring.
In Equation (2), the feature label $\hat{l}_i^{a}$ encourages high-scoring regions of the current image and suppresses low-scoring regions, thus making $l_i^{a}$ a more confident version of itself. With this operation, the image region that the attention module focuses on is more salient to the image semantics and more discriminative compared with other regions of the image.
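For illustration only, the sharpened target and binary cross-entropy of Equation (2) could be computed as in the following sketch, assuming the attention outputs are soft assignments in [0, 1]; this is not the official GATCluster code.

```python
import torch

def soft_attention_loss(l_a, eps=1e-8):
    """Soft attention loss sketch following Equation (2).

    l_a: (M, k) attention-module outputs for a batch of M samples,
         each row a soft assignment over k clusters in [0, 1].
    """
    z = l_a.sum(dim=0) + eps                                  # cluster assignment frequency z_h
    sharpened = (l_a ** 2) / z                                # emphasize confident assignments
    target = sharpened / sharpened.sum(dim=1, keepdim=True)   # sharpened target \hat{l}^a
    target = target.detach()                                  # target is not back-propagated
    bce = -(target * torch.log(l_a + eps)
            + (1 - target) * torch.log(1 - l_a + eps))
    return bce.mean()
```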
(1)
Algorithm for clustering walnut maturity
Since traditional algorithms are less effective at clustering images in natural environments, the GATCluster algorithm draws on a theorem introduced by DAC [25,26]. The theorem suggests that clustering can be redefined as a binary classification problem. The similarities and differences between two fruits are measured, and then it is determined whether they belong to the same maturity level. For each fruit xi, the labeling feature l i = f ( x i , w ) , where f ( · , w ) is a mapping function with parameter w. The parameter w obtains the maturity result by minimizing the objective function. Based on the above theory, GATCluster formulates the clustering problem as an optimization problem with probabilistic and non-empty clustering constraints, as shown in Equation (3).
$$\min_{w} E(w) = \sum_{i,j=1}^{N} L\!\left(r_{ij}, \frac{l_i^{T} l_j}{\|l_i\|_2 \|l_j\|_2}\right), \quad \text{s.t.} \ \forall i \ \|l_i\|_1 = 1, \ 0 \le l_{ih} \le 1, \ h = 1, \ldots, k; \ \forall h \ p_h = \frac{1}{N}\sum_{i=1}^{N} l_{ih} > 0 \ (\text{non-empty cluster}), \quad (3)$$
where, in the non-empty cluster constraint, $p_h$ denotes the frequency with which the $N$ samples are assigned to the $h$-th cluster.
The GATCluster algorithm ensures that the learned features are one-hot encoded by the labeled feature theorem, where each bit represents a cluster, and all predefined K clusters are non-empty [27]. The labeling feature theorem is shown in Equation (4).
$$\forall i, j, \ l_i \in E^{k}: \quad l_i \cdot l_j = 0 \Leftrightarrow r_{ij} = 0, \quad l_i = l_j \Leftrightarrow r_{ij} = 1, \quad \left|\{l_i\}_{i=1}^{N}\right| = k, \quad (4)$$
By combining the label feature theorem and the Gaussian attention mechanism introduced above, Equation (3) can be expressed as Equation (5), which further optimizes the representation of the clustering problem:
$$\min_{w} E(w) = \sum_{i,j=1}^{N} L_R(r_{ij}, l_i, l_j) + \sum_{i=1}^{N}\left(\alpha_1 L_T(l_i) + \alpha_2 L_E(l_i) + \alpha_3 L_A(l_i, l_i^{a})\right), \quad (5)$$
where $L_R$ and $L_T$ correspond to the first and second terms in Equation (5), respectively, $L_E$ enforces the non-empty cluster constraint, and $L_A$ is the loss function of the Gaussian attention mechanism introduced above. $\alpha_1$, $\alpha_2$, and $\alpha_3$ are hyperparameters that balance the different loss weights. The clustering results are obtained by minimizing $E(w)$.

2.2.2. Walnut Maturity Test Platform Setup

The GATCluster-based walnut ripeness clustering algorithm uses the 16,752 walnut images of different ripeness levels from the training set of the walnut ripeness detection dataset as the model training data. The experimental equipment is a desktop computer running Windows 10; the hardware configuration is shown in Table 1.
The GATCluster walnut ripeness detection algorithm was trained on a PyTorch 1.7 platform with Python 3.8.3. During training of the walnut ripeness clustering algorithm, the learning rate was set to 0.001, the momentum to 0.9, the batch size to 32, and the weight decay to 0.0005.
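The reported momentum and weight decay are consistent with an SGD-style optimizer; the sketch below shows one plausible configuration with these hyperparameters (the ResNet34 backbone shown here is only a placeholder, not the complete GATCluster network).

```python
import torch
from torchvision.models import resnet34

# Illustrative training configuration matching the reported hyperparameters:
# learning rate 0.001, momentum 0.9, weight decay 0.0005, batch size 32.
model = resnet34(num_classes=3)  # placeholder backbone; three maturity clusters
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=0.0005)
batch_size = 32
```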

2.2.3. Evaluation Mechanism for Walnut Maturity Criteria

This section evaluates the accuracy of the maturity clustering criteria produced by the GATCluster deep clustering algorithm. The clustering results were first used as ripeness criteria to partition the images of test set B. The physicochemical indexes reflecting ripeness were then measured for the walnut fruits in each group and analyzed by ANOVA to determine whether there were significant differences between groups, thereby verifying the reasonableness of the algorithm's criteria. Finally, the performance of the detection method is quantitatively evaluated using four indicators: average precision, recall, F1 score, and accuracy [28].
(1)
Determination of physical and chemical indicators of walnut maturity
The 252 fruits collected in Section 2.1.2 were grouped by date. The walnut fruits collected on the same date were divided into three groups for parameter measurements in serial number order and were used as control groups for each other. The walnut physical traits and internal quality parameters to be measured were as follows: fruit transverse diameter (mm), fruit longitudinal diameter (mm), fruit weight (with green skin, in g), nut weight (g), protein content, water content, oil content, soluble sugar content, and crude fat content.
(2)
Results of validated clustering method for physical and chemical indexes of walnuts
Walnut fruits of different maturity categories differed in phenotypic characteristics, but it remains to be tested whether there are differences in physicochemical indicators. The images of test set B were clustered using the GATCluster clustering algorithm to classify walnut ripeness into three categories. ANOVA was used to determine whether there were significant differences in the physicochemical indexes of the three groups of fruits as expected.
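For illustration, a one-way ANOVA on a single index can be run as in the following SciPy sketch; the group values are placeholders, not measured data, and the actual analysis in this study was performed in SPSS (Section 3.1.2).

```python
import numpy as np
from scipy import stats

# Illustrative only: replace these arrays with the measured oil content (%) of the
# fruits assigned to each maturity cluster.
oil_i   = np.array([58.1, 58.9, 57.8])
oil_ii  = np.array([65.2, 66.0, 64.9])
oil_iii = np.array([64.1, 65.3, 64.6])

f_stat, p_value = stats.f_oneway(oil_i, oil_ii, oil_iii)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 indicates a significant difference
```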
(3)
Quantitative analysis for maturity detection
In this study, four evaluation indexes, including average precision (Prec, P), recall (Rec, R), F1 score (F1-score, F1) and accuracy (Acc), were used to quantitatively evaluate the walnut ripeness detection methods. The specific calculation formulas are as follows:
$$Prec = \frac{TP}{TP + FP}, \quad (6)$$
$$Rec = \frac{TP}{TP + FN}, \quad (7)$$
$$F1 = \frac{2 \times P \times R}{P + R}, \quad (8)$$
$$Acc = \frac{TP}{Total}, \quad (9)$$
where $TP$ denotes the case where the model predicts a positive sample and the actual sample is also positive; $FP$ denotes the case where the model predicts a positive sample but the actual sample is negative; $FN$ denotes the case where the model predicts a negative sample but the actual sample is positive; and $Total$ denotes the total number of samples.
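A minimal NumPy sketch of Equations (6)–(9), computing the per-class precision, recall, and F1 score together with the overall accuracy, is given below for illustration.

```python
import numpy as np

def per_class_metrics(y_true, y_pred, cls):
    """Precision, recall, and F1 for one maturity class, per Equations (6)-(8)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

def accuracy(y_true, y_pred):
    """Overall accuracy, per Equation (9): correct predictions over total samples."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(y_true == y_pred)
```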

2.2.4. Algorithm Design for Walnut Oil Content Prediction

Since the differences in walnut oil content among different maturity levels are mainly manifested in the pericarp state, the phenotypic characteristics of walnut fruits at adjacent maturity levels are extremely similar. Therefore, it is required that the model has a strong feature extraction ability for color and crack [29].
The residual structure of the ResNet34 network [30,31] is characterized by maximal retention of image feature information. Therefore, ResNet34 is chosen as the base network and improved to address the above problems. The individual components of the algorithm are described next.

2.2.5. Channel Attention Mechanism

Since the model needs to solve the problem of similar phenotypic characteristics of walnuts with similar maturity, the original feature extraction network of ResNet34 could not meet the requirements of practical applications, so the feature extraction network was improved. In this paper, a channel attention module (SENet) [32] is added after each set of residual blocks of the original ResNet network. The feature correction uses global information to enhance the useful feature information and dilute the useless features. The workflow diagram of the SENet network is shown in Figure 6.
The first step of SENet is to compress the features of each channel as a descriptor for that channel, using average pooling to average the features in the channel with the formula shown in Equation (10).
$$z_c = F_{sq}(u_c) = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} u_c(i, j), \quad (10)$$
where $z_c$ denotes the $c$-th value of the compressed $1 \times 1 \times C$ descriptor, $C$ is the number of channels, $H$ is the height of the image, $W$ is the width of the image, and $u_c(i, j)$ is the value in the $c$-th channel, $i$-th row, and $j$-th column of the output of the previous layer's transformation.
The second step of the excitation is to integrate the information of the 1 × 1 × C vectors by using the structure of a fully connected layer, Relu activation function, and sigmoid [33]. The formula is shown in Equation (11).
$$F_{ex}(z, W) = \sigma(g(z, W)) = \sigma\!\left(W_2\,\delta(W_1 z)\right), \quad (11)$$
where $\delta$ refers to the ReLU activation function, $W_1$ refers to the first fully connected layer, $W_2$ refers to the second fully connected layer, $z$ is the result after global average pooling, and $\sigma$ refers to the sigmoid function.
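A minimal PyTorch sketch of such an SE block is shown below; the reduction ratio of 16 is a common default and an assumption here, not a value reported in this paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: squeeze (Eq. 10) followed by excitation (Eq. 11)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)           # global average pooling
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # W1
            nn.ReLU(inplace=True),                       # delta
            nn.Linear(channels // reduction, channels),  # W2
            nn.Sigmoid(),                                # sigma
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        z = self.squeeze(x).view(b, c)         # (B, C) channel descriptors
        s = self.excite(z).view(b, c, 1, 1)    # per-channel weights in (0, 1)
        return x * s                           # recalibrate channel responses
```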

2.2.6. Convolutional Self-Attention Module

Since the cracks in the skin of walnuts just transferred from Maturity i to Maturity ii are not obvious, this places a high demand on the feature extraction capability of the network. Therefore, the convolutional self-attention (Acmix) module [34] is added to the original network to improve the performance.
The convolution operation can be split into two stages: projection and shift aggregation. In the first stage, the input features are linearly projected with the kernel weights at each position. In the second stage, the projected feature maps are shifted according to their kernel positions and aggregated. The two-stage formulae are shown in Equations (12) and (13), and the flowchart of the convolution operation is shown in Figure 7.
$$\tilde{g}_{ij}^{(p,q)} = K_{p,q}\, f_{ij}, \quad (12)$$
$$g_{ij}^{(p,q)} = \operatorname{Shift}\!\left(\tilde{g}_{ij}^{(p,q)},\ p - \left\lfloor \tfrac{k}{2} \right\rfloor,\ q - \left\lfloor \tfrac{k}{2} \right\rfloor\right), \qquad g_{ij} = \sum_{p,q} g_{ij}^{(p,q)}, \quad (13)$$
The self-attention mechanism allows the model to focus on the most important regions within a larger context. The multi-head self-attention mechanism can likewise be decomposed into two stages. In the first stage, the query, key, and value vectors are computed by a 1 × 1 convolutional transform. In the second stage, the attention weights are computed and used to aggregate the value vectors. The two-stage formulae are shown in Equations (14) and (15), and the flowchart of the self-attention mechanism is shown in Figure 8.
$$q_{ij}^{(l)} = W_q^{(l)} f_{ij}, \quad k_{ij}^{(l)} = W_k^{(l)} f_{ij}, \quad v_{ij}^{(l)} = W_v^{(l)} f_{ij}, \quad (14)$$
$$g_{ij} = \big\Vert_{l=1}^{N} \left( \sum_{a, b \in \mathcal{N}_k(i, j)} A\!\left(q_{ij}^{(l)}, k_{ab}^{(l)}\right) v_{ab}^{(l)} \right), \quad (15)$$
where $\Vert$ denotes the concatenation of the outputs of the $N$ attention heads; $W_q^{(l)}$, $W_k^{(l)}$, and $W_v^{(l)}$ are the projection matrices of the query, key, and value vectors, respectively; $\mathcal{N}_k(i, j)$ denotes the local pixel region of spatial extent $k$ centered on $(i, j)$; and $A(q_{ij}^{(l)}, k_{ab}^{(l)})$ is the attention weight computed from the query and the key features within $\mathcal{N}_k(i, j)$.
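The following simplified PyTorch sketch illustrates the shared-projection, two-branch structure described above; it replaces the shift-based aggregation and multi-head local attention of the official ACmix with a plain 3 × 3 convolution and single-head global attention, so it is an illustration of the idea rather than the module used in this study.

```python
import torch
import torch.nn as nn

class ConvAttnMix(nn.Module):
    """Simplified ACmix-style block: a shared 1x1 projection feeds both a convolution
    branch and a self-attention branch, and the two outputs are combined with
    learnable scalar weights."""

    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, 3 * channels, kernel_size=1)  # shared q/k/v projection
        self.conv_path = nn.Conv2d(3 * channels, channels, kernel_size=3, padding=1)
        self.alpha = nn.Parameter(torch.ones(1))  # weight of the attention branch
        self.beta = nn.Parameter(torch.ones(1))   # weight of the convolution branch

    def forward(self, x):
        b, c, h, w = x.shape
        qkv = self.proj(x)                              # (B, 3C, H, W) shared intermediate features
        conv_out = self.conv_path(qkv)                  # convolution branch
        q, k, v = qkv.chunk(3, dim=1)                   # each (B, C, H, W)
        q = q.flatten(2).transpose(1, 2)                # (B, N, C) with N = H*W
        k = k.flatten(2)                                # (B, C, N)
        v = v.flatten(2).transpose(1, 2)                # (B, N, C)
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)  # (B, N, N) attention weights
        attn_out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return self.alpha * attn_out + self.beta * conv_out
```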

2.2.7. Walnut Oil Content Prediction Test Platform Setup

Different walnut oil content prediction algorithms based on improved ResNet34 used images from the walnut oil content prediction dataset as experimental data, and the experimental machine was a desktop computer with the same configuration as in Section 2.2.2.
The improved Residual Networks 34 (ResNet34) algorithm was implemented on a PyTorch 1.7 platform with Python 3.8.3. During training of the improved ResNet34, the learning rate was set to 0.001, the momentum to 0.9, the batch size to 8, the weight decay to 0.0005, and the dropout ratio to 0.5.

2.2.8. Evaluation Indexes of Walnut Oil Content Prediction Algorithm

The root mean square error (RMSE), mean absolute percentage error (MAPE), and coefficient of determination (R2) were used as the evaluation indexes for testing the algorithms that predict the oil content of walnuts of different maturity [35,36]. RMSE is the square root of the mean of the squared differences between the predicted and true values; the smaller the RMSE, the higher the prediction accuracy of the model. MAPE is the average of the absolute percentage errors of the individual samples; the smaller the MAPE, the better the prediction. R2 reflects how well the regression curve fits the data; the closer R2 is to one, the better the fit. The root mean square error, mean absolute percentage error, and coefficient of determination are given in Equations (16)–(18).
$$RMSE = \sqrt{\frac{\sum_{i=1}^{N}\left(y_i - \tilde{y}_i\right)^2}{N}}, \quad (16)$$
$$MAPE = \frac{100\%}{N}\sum_{i=1}^{N}\left|\frac{\tilde{y}_i - y_i}{y_i}\right|, \quad (17)$$
$$R^2 = 1 - \frac{\sum_{i=1}^{N}\left(\tilde{y}_i - y_i\right)^2}{\sum_{i=1}^{N}\left(\bar{y} - y_i\right)^2}, \quad (18)$$
where $N$ represents the total sample size, $y_i$ the true value, $\tilde{y}_i$ the predicted value, and $\bar{y}$ the mean of the true values. RMSE is the root mean square error, MAPE is the mean absolute percentage error, and R2 is the coefficient of determination.
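A minimal NumPy sketch of Equations (16)–(18) is given below; MAPE is returned as a fraction rather than a percentage, matching the way the results are reported in this paper (e.g., 0.103).

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MAPE (as a fraction), and R^2 as in Equations (16)-(18)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    mape = np.mean(np.abs((y_pred - y_true) / y_true))
    ss_res = np.sum((y_pred - y_true) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mape, r2
```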

3. Results

3.1. Walnut Maturity Test Results

3.1.1. Walnut Maturity Clustering Results

The unsupervised clustering algorithm GATCluster was trained using the 16,752 images of different maturity levels in the training set. After training, GATCluster classified the walnut images into three categories: maturity category one, maturity category two, and maturity category three. The phenotypic features of walnuts in the three maturity categories were analyzed to develop the walnut maturity criteria. The clustering result of the GATCluster algorithm is shown in Figure 9, and the walnut maturity criteria developed by the GATCluster algorithm are shown in Table 2.

3.1.2. Results of the Validated Clustering Method for Physical and Chemical Indexes of Walnut

Walnut fruits of different maturity categories differed in phenotypic characteristics, but it remains to be tested whether there are differences in physicochemical indicators. The GATCluster clustering algorithm was used to cluster the images corresponding to the 252 walnuts with measured physicochemical indexes in test set B, and the walnuts were classified into three maturity categories. The clustering results were quantitatively analyzed by internal quality parameters of walnuts measured by physicochemical experiments. An analysis of variance (ANOVA) was performed using SPSS software to analyze the physical traits and internal nutrient content of walnut fruits in the three categories of maturity.
Physicochemical experiments included measurement of fruit shape parameters and extraction of walnut samples for nutrient compositional analysis.
The quality parameters of each maturity category obtained from the physicochemical experiments are shown in Table 3.

3.1.3. Walnut Ripeness Recognition Detection

The walnut maturity levels established in Section 3.1.2 were used as the criterion, and the unsupervised clustering algorithm trained in Section 3.1.1 was tested on test set A. Ripeness detection was performed on 1500 walnut images from test set A, comprising 500 images each of maturity category I, maturity category II, and maturity category III. The maturity recognition results for some walnut fruit sample images are shown in Figure 10.

3.1.4. Quantitative Analysis for Maturity Detection

In this section, a total of 1500 images from test set A are used for cross-validation and quantitative evaluation of the GATCluster model established above. The results for the specific evaluation metrics are shown in Table 4.

3.2. Results of the Walnut Oil Content Prediction Algorithm

In order to verify the effectiveness of the improved ResNet34 model proposed in this paper in predicting the oil content of walnut fruit images with different maturity levels, 50 images from the test data were used for oil content prediction, and the improved ResNet34 model was analyzed and evaluated from a quantitative point of view. Some of the image prediction results are shown in Figure 11, and the model prediction scores are shown in Table 5.

4. Discussion

4.1. Evaluation of Walnut Ripeness Detection Algorithm Results

The walnut ripeness detection results were quantitatively evaluated using a confusion matrix. The smaller the sum of the off-diagonal elements in the confusion matrix, the higher the detection accuracy. As shown in the confusion matrix, among the 1500 walnut images of different maturity levels, 1325 were correctly detected and 175 were incorrectly detected.
In addition, false detection mainly occurs between neighboring maturity levels, such as between Maturity i and Maturity ii and between Maturity ii and Maturity iii. For example, among the walnut images in Maturity i, 40 were misidentified as Maturity ii; among the walnut images in Maturity ii, 24 were misidentified as Maturity i and 46 were misidentified as Maturity iii; and among the walnut images in Maturity iii, 15 were misidentified as Maturity i and 50 were misidentified as Maturity ii. The reason for these misidentifications is that the differences in phenotypic characteristics of walnuts during the maturity transition are not obvious. The results of walnut maturity detection are presented in the form of a confusion matrix, as shown in Figure 12.
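For reference, such a confusion matrix can be computed with a few lines of code, as in the following sketch, assuming the maturity levels are integer-coded from 0 to 2.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes=3):
    """Confusion matrix: rows are true maturity levels, columns are predicted levels."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Diagonal entries are correct detections; the smaller the sum of the
# off-diagonal entries, the higher the detection accuracy.
```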
The method can detect the maturity of walnut images of different maturity levels with an overall accuracy of 88.33%. The recognition precision of maturity category one is 92.18%, the recall is 92%, and the F1 score is 92.09%. The detection effect of Maturity iii is slightly lower than that of Maturity i, with recognition precision, recall, and F1 score of 90.44%, 87%, and 88.69%, respectively. Maturity ii had the worst detection effect, significantly lower than the other two maturity stages, with recognition precision, recall, and F1 score of 82.69%, 86%, and 84.31%, respectively. The reason the recognition precision of Maturity ii was lower than that of Maturity i and Maturity iii is that walnut images of Maturity ii are easily confused with those of Maturity i and Maturity iii, whereas the phenotypic characteristics of walnuts in Maturity i and Maturity iii differ more and are harder to confuse. According to the walnut ripeness detection results in Table 4 and Figure 10, the method in this paper can effectively detect the ripeness of walnut fruits.

4.2. Evaluation of the Results of Walnut Oil Content Detection Algorithm

As can be seen in Table 5, the mean absolute percentage error (MAPE) of the improved ResNet34 model for predicting the oil content of 50 walnut images with different maturity levels is 0.103, the root mean square error (RMSE) is 2.96, and the coefficient of determination (R2) is 0.8822. This indicates that the regression prediction of the oil content of walnuts with different maturity levels using the improved ResNet34 model has a low margin of error. Therefore, the model can accurately estimate the oil content of walnuts in the hanging state, which meets the needs of production.

4.2.1. Ablation Experiment

In order to verify the performance of the improved ResNet34 model, four sets of ablation experiments were set up: the original ResNet34 model, ResNet34 + SENet, ResNet34 + ACmix, and ResNet34 + SENet + ACmix. Oil content regression prediction was performed on the 50 test images of walnuts in the hanging state at different maturity levels, and the four models were compared in terms of root mean square error, mean absolute percentage error, and coefficient of determination. The results of the four ablation experiments are shown in Table 6.
The experiments show that introducing the SENet channel attention module and the ACmix module into the ResNet34 model separately each increases the R2 score and decreases the RMSE and MAPE scores. This indicates that both modules effectively enhance the model's ability to extract phenotypic features, thus improving the regression accuracy.
When the SENet channel attention module and the ACmix module are introduced into ResNet34 at the same time, the R2 score rises by 0.2622, and the RMSE and MAPE scores decrease by 5.616 and 0.083, respectively, a better performance than introducing either module alone. The results show that the improved ResNet34 model proposed in this paper can effectively predict the oil content of walnuts with different maturity levels.

4.2.2. Comparative Experiment

In order to verify the performance of the improved ResNet34 in predicting the oil content of walnuts at different maturity levels, the improved ResNet34 network is compared with the VGG series of networks (VGG16, VGG19) [37,38], ResNet18, and ResNet50 in this section. The four comparison models were used for oil content regression prediction on the 50 selected images of walnuts in the hanging state at different maturity levels, and the five models were compared on three evaluation indexes: root mean square error, mean absolute percentage error, and coefficient of determination. The results of the five groups of comparison experiments are shown in Table 7.
As can be seen from Table 7, the improved ResNet34 network proposed in this paper achieves the best RMSE, MAPE, and R2 scores. The lowest RMSE indicates that the gap between the predicted and true values is smallest for the improved ResNet34 model. The R2 value of 0.8822, closest to one among the compared models, indicates that the model explains 88.22% of the variance in oil content across walnut images of different maturity levels and that its fit is the most accurate. The lowest MAPE likewise shows that the improved ResNet34 model has a better regression effect than the other models.

5. Conclusions

In response to the lack of a walnut maturity standard in China’s walnut industry, this paper proposes a walnut maturity detection method based on GATCluster and a walnut oil content prediction method based on improved ResNet34, using datasets from real production bases. The main results are as follows:
We constructed datasets for walnut maturity detection and walnut oil content prediction using image enhancement techniques such as rotation, contrast adjustment, and noise addition. For maturity detection, a GATCluster-based method was used because features such as the green husk color and fine cracks change only subtly; the method achieved an accuracy of 88.33% on 1500 test images. For oil content prediction, an improved ResNet34 model was designed for regression analysis. It predicted the oil content of walnut images with different maturity levels, achieving an RMSE of 2.96, an MAPE of 0.103, and an R2 score of 0.8822. Ablation and comparison experiments validated the method's effectiveness.
In conclusion, a non-contact walnut fruit maturity and oil content prediction method is proposed in this paper. It has been validated to achieve considerable accuracy, possesses good application value, and helps alleviate the current reliance on experience in walnut harvesting.
However, despite the progress made in the detection method in this paper, further improvements are still needed. Future research will focus on the following aspects:
Dataset Enrichment: The current datasets consist of Xiangling walnuts collected in 2021 from Huanglong County, Yan'an City, Shaanxi Province. To enhance robustness, walnut images from consecutive years and varied climatic conditions are necessary. We will also attempt to expand the training set with other walnut varieties (e.g., Hanfeng walnut, Daixiang walnut) to enhance the generalizability of the method across varieties, and to obtain photos of walnuts obscured by leaves in natural situations to improve the model's recognition ability.
Comprehensive Feature Collection: Current studies use single-angle images, which do not capture the complete phenotypic characteristics of the walnut. Utilizing panoramic cameras to collect full-feature information about the walnut sphere will improve the accuracy of maturity and oil content predictions.
Enrichment of shooting angles: when acquiring photos for this dataset, the walnut fruit crack was intentionally placed in the center of the frame. The original intent was to show more detail about the crack, but it may have led to the introduction of potential bias. Subsequent shoots may consider acquiring more photos of walnuts at more natural angles to increase the richness of the dataset.

Author Contributions

Conceptualization, F.C.; methodology, Y.Y.; software, Y.Y. and P.G.; validation, Y.Y. and P.G.; formal analysis, Y.Y.; investigation, Y.Y.; resources, F.C.; data curation, Y.Y.; writing—original draft preparation, P.G.; writing—review and editing, P.G. and X.Z.; visualization, P.G.; supervision, F.C.; project administration, F.C.; funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets used and analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Geng, S.; Ning, D.; Ma, T.; Chen, H.; Zhang, Y.; Sun, X. Comprehensive Analysis of the Components of Walnut Kernel (Juglans regia L.) in China. J. Food Qual. 2021, 2021, 9302181. [Google Scholar] [CrossRef]
  2. Sadeghi, M.; Hojjati, H.; Armanfard, N. C3: Cross-Instance Guided Contrastive Clustering. arXiv 2023, arXiv:2211.07136. [Google Scholar] [CrossRef]
  3. Barburiceanu, S.; Meza, S.; Orza, B.; Malutan, R.; Terebes, R. Convolutional Neural Networks for Texture Feature Extraction. Applications to Leaf Disease Classification in Precision Agriculture. IEEE Access 2021, 9, 160085–160103. [Google Scholar] [CrossRef]
  4. Jiao, L.; Dong, S.; Zhang, S.; Xie, C.; Wang, H. AF-RCNN: An Anchor-Free Convolutional Neural Network for Multi-Categories Agricultural Pest Detection. Comput. Electron. Agric. 2020, 174, 105522. [Google Scholar] [CrossRef]
  5. Minas, I.S.; Blanco-Cipollone, F.; Sterle, D. Accurate Non-Destructive Prediction of Peach Fruit Internal Quality and Physiological Maturity with a Single Scan Using near Infrared Spectroscopy. Food Chem. 2021, 335, 127626. [Google Scholar] [CrossRef]
  6. Sripaurya, T.; Sengchuai, K.; Booranawong, A.; Chetpattananondh, K. Gros Michel Banana Soluble Solids Content Evaluation and Maturity Classification Using a Developed Portable 6 Channel NIR Device Measurement. Measurement 2021, 173, 108615. [Google Scholar] [CrossRef]
  7. Khisanudin, I.S. Dragon Fruit Maturity Detection Based-HSV Space Color Using Naive Bayes Classifier Method. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2020; Volume 771, p. 012022. [Google Scholar]
  8. Syaifuddin, A.; Mualifah, L.N.A.; Hidayat, L.; Abadi, A.M. Detection of Palm Fruit Maturity Level in the Grading Process through Image Recognition and Fuzzy Inference System to Improve Quality and Productivity of Crude Palm Oil (CPO). In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2020; Volume 1581, p. 012003. [Google Scholar]
  9. Comparison of different machine learning algorithms for predicting maize grain yield using UAV-based hyperspectral images. Int. J. Appl. Earth Obs. Geoinf. 2023, 124, 103528. [CrossRef]
  10. Kumar, V.; Garg, M.L. Deep learning techniques and their applications: A short review. Biosci. Biotechnol. Res. Commun. 2018, 11, 699–709. [Google Scholar] [CrossRef]
  11. Silva, S.; De Oliveira Neto, S.N.; Leite, H.G.; De Alcântara, A.E.M.; De Oliveira Neto, R.R.; De Souza, G.S.A. Productivity Estimate Using Regression and Artificial Neural Networks in Small Familiar Areas with Agrosilvopastoral Systems. Agrofor. Syst. 2020, 94, 2081–2097. [Google Scholar] [CrossRef]
  12. Fiorentino, M.C.; Moccia, S.; Capparuccini, M.; Giamberini, S.; Frontoni, E. A Regression Framework to Head-Circumference Delineation from US Fetal Images. Comput. Methods Programs Biomed. 2021, 198, 105771. [Google Scholar] [CrossRef]
  13. Tang, Y.; Gao, R.; Han, S.; Chen, Y.; Gao, D.; Nath, V.; Bermudez, C.; Savona, M.R.; Bao, S.; Lyu, I. Body Part Regression with Self-Supervision. IEEE Trans. Med. Imaging 2021, 40, 1499–1507. [Google Scholar] [CrossRef]
  14. Doan, T.N.; Song, B.; Vuong, T.T.; Kim, K.; Kwak, J.T. SONNET: A Self-Guided Ordinal Regression Neural Network for Segmentation and Classification of Nuclei in Large-Scale Multi-Tissue Histology Images. IEEE J. Biomed. Health Inform. 2022, 26, 3218–3228. [Google Scholar] [CrossRef]
  15. He, S.; Minn, K.T.; Solnica-Krezel, L.; Anastasio, M.A.; Li, H. Deeply-Supervised Density Regression for Automatic Cell Counting in Microscopy Images. Med. Image Anal. 2021, 68, 101892. [Google Scholar] [CrossRef] [PubMed]
  16. Zhu, X.; Chen, F.; Zhang, X.; Zheng, Y.; Peng, X.; Chen, C. Detection the maturity of multi-cultivar olive fruit in orchard environments based on Olive-EfficientDet. Sci. Hortic. 2024, 324, 112607. [Google Scholar] [CrossRef]
  17. Zhu, X.; Chen, F.; Zheng, Y.; Peng, X.; Chen, C. An efficient method for detecting Camellia oleifera fruit under complex orchard environment. Sci. Hortic. 2024, 330, 113091. [Google Scholar] [CrossRef]
  18. Zhao, Z.; Zhao, J.; Song, K.; Hussain, A.; Du, Q.; Dong, Y.; Liu, J.; Yang, X. Joint DBN and Fuzzy C-Means Unsupervised Deep Clustering for Lung Cancer Patient Stratification. Eng. Appl. Artif. Intell. 2020, 91, 103571. [Google Scholar] [CrossRef]
  19. Lv, X.; Zhang, S.; Liu, Q.; Xie, H.; Zhong, B.; Zhou, H. BacklitNet: A Dataset and Network for Backlit Image Enhancement. Comput. Vis. Image Underst. 2022, 218, 103403. [Google Scholar] [CrossRef]
  20. Zhang, Z.; Zhang, S.; Wu, R.; Zuo, W.; Timofte, R.; Xing, X.; Park, H.; Song, S.; Kim, C.; Kong, X. NTIRE 2024 Challenge on Bracketing Image Restoration and Enhancement: Datasets, Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 6153–6166. [Google Scholar]
  21. Fu, K.; Lei, T.; Halubok, M.; Bailey, B.N. Walnut Detection Through Deep Learning Enhanced by Multispectral Synthetic Images. arXiv 2023, arXiv:2401.03331. [Google Scholar]
  22. Rong, D.; Xie, L.; Ying, Y. Computer Vision Detection of Foreign Objects in Walnuts Using Deep Learning. Comput. Electron. Agric. 2019, 162, 1001–1010. [Google Scholar] [CrossRef]
  23. Niu, C.; Zhang, J.; Wang, G.; Liang, J. GATCluster: Self-Supervised Gaussian-Attention Network for Image Clustering. In Computer Vision–ECCV 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2020; Volume 12370, pp. 735–751. ISBN 978-3-030-58594-5. [Google Scholar]
  24. Wang, Y.; Chang, D.; Fu, Z.; Zhao, Y. Learning a Bi-Directional Discriminative Representation for Deep Clustering. Pattern Recognit. 2023, 137, 109237. [Google Scholar] [CrossRef]
  25. Chen, D.; Lv, J.; Zhang, Y. Unsupervised Multi-Manifold Clustering by Learning Deep Representation. In Proceedings of the Workshops at the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
  26. Xu, X.; Ding, S.; Wang, Y.; Wang, L.; Jia, W. A Fast Density Peaks Clustering Algorithm with Sparse Search. Inf. Sci. 2021, 554, 61–83. [Google Scholar] [CrossRef]
  27. Suhartono, D.; Gema, A.P.; Winton, S.; David, T.; Fanany, M.I.; Arymurthy, A.M. Hierarchical Attention Network with XGBoost for Recognizing Insufficiently Supported Argument. In Multi-Disciplinary Trends in Artificial Intelligence; Phon-Amnuaisuk, S., Ang, S.-P., Lee, S.-Y., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2017; Volume 10607, pp. 174–188. ISBN 978-3-319-69455-9. [Google Scholar]
  28. Liu, Y.; Sun, P.; Wergeles, N.; Shang, Y. A Survey and Performance Evaluation of Deep Learning Methods for Small Object Detection. Expert Syst. Appl. 2021, 172, 114602. [Google Scholar] [CrossRef]
  29. Zhang, K.; Zhu, D.; Li, J.; Gao, X.; Gao, F.; Lu, J. Learning Stacking Regression for No-Reference Super-Resolution Image Quality Assessment. Signal Process. 2021, 178, 107771. [Google Scholar] [CrossRef]
  30. Koonce, B. ResNet 34. In Convolutional Neural Networks with Swift for Tensorflow; Apress: Berkeley, CA, USA, 2021; pp. 51–61. ISBN 978-1-4842-6167-5. [Google Scholar]
  31. Almoosawi, N.M.; Khudeyer, R.S. ResNet-34/DR: A Residual Convolutional Neural Network for the Diagnosis of Diabetic Retinopathy. Informatica 2021, 45, 7. [Google Scholar] [CrossRef]
  32. Zhang, H.; Zu, K.; Lu, J.; Zou, Y.; Meng, D. EPSANet: An Efficient Pyramid Squeeze Attention Block on Convolutional Neural Network. In Proceedings of the Asian Conference on Computer Vision, Macao, China, 4–8 December 2022; pp. 1161–1177. [Google Scholar]
  33. Zhang, S. Deep Neural Network Compression with Filter Pruning; Lancaster University: Bailrigg, Lancaster, UK, 2023. [Google Scholar]
  34. Pan, X.; Ge, C.; Lu, R.; Song, S.; Chen, G.; Huang, Z.; Huang, G. On the Integration of Self-Attention and Convolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 815–825. [Google Scholar]
  35. Chen, X.W.; Nof, S.Y. Error Detection and Prediction Algorithms: Application in Robotics. J. Intell. Robot. Syst. 2007, 48, 225–252. [Google Scholar] [CrossRef]
  36. Chicco, D.; Warrens, M.J.; Jurman, G. The Coefficient of Determination R-Squared Is More Informative than SMAPE, MAE, MAPE, MSE and RMSE in Regression Analysis Evaluation. PeerJ Comput. Sci. 2021, 7, e623. [Google Scholar] [CrossRef]
  37. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556. [Google Scholar]
  38. Yang, H.; Ni, J.; Gao, J.; Han, Z.; Luan, T. A Novel Method for Peanut Variety Identification and Classification by Improved VGG16. Sci. Rep. 2021, 11, 15756. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Examples of walnut images.
Figure 2. Example of walnut maturity detection dataset.
Figure 3. Walnut oil content prediction dataset.
Figure 4. GATCluster model clustering process. GP is the global pooling layer, FC is the fully connected layer, and Conv is the convolutional layer.
Figure 5. Attention feature module. Mul is the channel-independent multiplication. K(u, Φ) is the Gaussian kernel function.
Figure 6. SENet module workflow diagram.
Figure 7. Convolution operation flow chart.
Figure 8. Flow chart of the self-attention mechanism.
Figure 9. Maturity clustering results of the GATCluster algorithm.
Figure 10. Results of walnut maturity detection.
Figure 11. Detection results of the oil content prediction model for walnut fruits. Real values are in red and predicted values are in blue.
Figure 12. Confusion matrix diagram of walnut maturity detection.
Table 1. Computer hardware configuration parameters.

Device Name          Parameters
CPU                  Intel(R) Core(TM) i9-10900KF CPU @ 3.7 GHz
RAM                  16 GB
GPU                  NVIDIA GeForce RTX 3080, 16 GB
Solid-state drive    1 TB
Mechanical drive     1 TB
Table 2. Maturity classification and phenotypic characteristics.

Maturity Level    Phenotypic Features
Maturity i        Dark green or green rind color
Maturity ii       Peel color yellow-green, cracked at or around the top of the fruit
Maturity iii      Green rind cracked, nuts exposed
Table 3. Nutrient content of each maturity level.

Properties                 Maturity i             Maturity ii            Maturity iii
Oil content                (58.574 ± 0.89) c      (65.578 ± 1.687) a     (64.727 ± 1.735) b
Water content              (29.246 ± 0.370) a     (19.928 ± 4.615) b     (18.272 ± 2.781) c
Protein content            (21.8154 ± 2.467) a    (18.0 ± 1.23) b        (16.426 ± 0.404) c
Soluble sugar content      (5.385 ± 0.979) a      (3.545 ± 0.846) b      (3.037 ± 1.077) c
Weight                     (44.350 ± 1.167) c     (50.390 ± 7.135) b     (57.386 ± 0.834) a
Transverse diameter        (41.689 ± 5.097) a     (43.149 ± 4.069) a     (44.422 ± 4.811) a
Longitudinal diameter      (47.541 ± 4.763) a     (48.704 ± 1.159) a     (49.192 ± 1.657) a
The data in Table 3 are given as mean ± standard deviation. Within a row, values followed by different lowercase letters differ significantly (p < 0.05), while values followed by the same letter do not differ significantly.
Table 4. Quantitative evaluation results of walnut maturity detection.

Maturity Level    Prec      Rec    F1-Score    Acc
Maturity i        92.18%    92%    92.09%      88.33%
Maturity ii       82.69%    86%    84.31%
Maturity iii      90.44%    87%    88.69%

Acc is the overall accuracy across the three maturity levels.
Table 5. Prediction score of oil content of the improved ResNet34.

Evaluation Indicators    Result
MAPE                     0.103
RMSE                     2.96
R2                       0.8822
Table 6. Prediction score of oil content of each model.

Models                       RMSE     MAPE     R2
ResNet34                     8.576    0.186    0.62
ResNet34 + SENet             4.989    0.144    0.754
ResNet34 + ACmix             5.965    0.162    0.696
ResNet34 + SENet + ACmix     2.960    0.103    0.8822
Table 7. Comparison test evaluation index scores.

Models                       RMSE      MAPE     R2
ResNet18                     9.234     0.237    0.6473
ResNet50                     12.824    0.378    0.5357
VGG16                        9.587     0.203    0.6589
VGG19                        7.236     0.218    0.689
ResNet34 + SENet + ACmix     2.96      0.103    0.8822
