Article

An Automated Hyperparameter Tuning Recurrent Neural Network Model for Fruit Classification

by
Kathiresan Shankar
1,
Sachin Kumar
1,*,
Ashit Kumar Dutta
2,
Ahmed Alkhayyat
3,
Anwar Ja’afar Mohamad Jawad
4,
Ali Hashim Abbas
5 and
Yousif K. Yousif
6
1
Big Data and Machine Learning Lab, South Ural State University, 454080 Chelyabinsk, Russia
2
Department of Computer Science and Information System, College of Applied Sciences, AlMaarefa University, Riyadh 11597, Saudi Arabia
3
College of Technical Engineering, The Islamic University, Najaf 61001, Iraq
4
Department of Computer Techniques Engineering, Al-Rafidain University College, Baghdad 10064, Iraq
5
College of Information Technology, Imam Ja’afar Al-Sadiq University, Al-Muthanna 66002, Iraq
6
Department of Computer Technical Engineering, Al-Hadba University College, Mosul 41001, Iraq
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(13), 2358; https://doi.org/10.3390/math10132358
Submission received: 23 May 2022 / Revised: 25 June 2022 / Accepted: 28 June 2022 / Published: 5 July 2022
(This article belongs to the Special Issue Decision Making and Its Applications)

Abstract

Automated fruit classification is a challenging problem in the fruit-growing and retail industrial chain, as it assists fruit growers and supermarket owners in recognizing the variety of fruits and the status of containers or stock to increase business profit and production efficacy. As a result, intelligent systems using machine learning and computer vision approaches have been explored for ripeness grading, fruit defect categorization, and identification over the last few years. Recently, deep learning (DL) methods for classifying fruits have shown promising performance, effectively extracting features and carrying out end-to-end image classification. This paper introduces an Automated Fruit Classification using Hyperparameter Optimized Deep Transfer Learning (AFC-HPODTL) model. The presented AFC-HPODTL model employs contrast enhancement as a pre-processing step, which helps to enhance the quality of images. For feature extraction, the deep transfer learning-based DenseNet169 model is used, in which the Adam optimizer fine-tunes the initial values of the DenseNet169 model. Moreover, a recurrent neural network (RNN) model is utilized for the identification and classification of fruits. At last, the Aquila optimization algorithm (AOA) is exploited for optimal hyperparameter tuning of the RNN model in such a way that the classification performance is improved. The design of the Adam optimizer and AOA-based hyperparameter optimizers for the DenseNet and RNN models shows the novelty of the work. The performance validation of the presented AFC-HPODTL model is carried out utilizing benchmark datasets, and the outcomes show promising performance over recent state-of-the-art approaches.

1. Introduction

Automatic fruit classification is an intriguing challenge in the fruit-growing and retailing industrial chain, since it helps fruit producers and supermarkets identify various fruits and their condition in containers or stock with a view to improving manufacturing effectiveness and business revenue [1]. Thus, intelligent systems making use of machine learning (ML) approaches and computer vision (CV) have been applied to fruit defect recognition, ripeness grading, and classification in the last decade [2]. In automated fruit classification, two main methods have been investigated: conventional CV-related methodologies and deep learning (DL)-related methodologies. The conventional CV-oriented methodologies first derive low-level features and then perform image classification through conventional ML approaches, while the DL-related techniques derive the features efficiently and execute end-to-end image classification [3]. In conventional image processing and CV approaches, image features such as shape, texture, and color are utilized as input units for fruit classification.
Previously, fruit processing and sorting depended on manual techniques, leading to a substantial waste of labor [4]. Nonetheless, such techniques require costly devices (various kinds of sensors) and professional operators, and their overall precision is typically less than 85% [5]. With the rapid advancement of 4G communication and the widespread adoption of mobile Internet devices, individuals have created large numbers of videos, sounds, images, and other data, and image identification technology has gradually matured [6].
Image-based fruit recognition has gained the interest of researchers because of its inexpensive hardware and remarkable performance [7]. At the same time, it is necessary to design automated tools capable of handling unplanned scenarios such as accidental mixing of fresh products, fruit placement in unusual packaging, different lighting conditions, or spider webs on the lens. Such situations may also cause uncertainty in the model results. The intelligent recognition of fruit can be utilized not only in the fruit-picking stage but also in the subsequent processing and sorting phases [8]. Fruit identification technology based on DL can substantially enhance the performance of fruit identification and has a positive impact on fostering the advancement of smart agriculture. In comparison with techniques combining handcrafted features and conventional ML, DL can derive features automatically and achieves superior outcomes, so it has gradually emerged as the standard methodology for intelligent recognition [9]. In particular, the convolutional neural network (CNN) is one of the vital DL models utilized for image processing. It is a type of artificial neural network (ANN) that applies the convolution operation in at least one of its layers. Recently, CNNs have received significant attention in the image classification process. Specifically, in the agricultural sector, CNN-based approaches have been utilized for fruit classification and fruit detection [10].
This paper introduces an Automated Fruit Classification using Hyperparameter Optimized Deep Transfer Learning (AFC-HPODTL) model. The presented AFC-HPODTL model employs contrast enhancement as a pre-processing step, which helps to improve the quality of the image. Next, the Adam optimizer with the deep transfer learning-based DenseNet169 model is applied for feature extraction. Moreover, the Aquila optimization algorithm (AOA) with a recurrent neural network (RNN) model is utilized for the identification and classification of fruits. The performance validation of the presented AFC-HPODTL model is carried out using benchmark datasets, and the results are examined under different aspects. In summary, the contribution of the paper is as follows:
  • An intelligent AFC-HPODTL model comprising pre-processing, Adam with DenseNet169-based feature extraction, RNN classification, and AOA-based hyperparameter tuning is presented. To the best of our knowledge, the AFC-HPODTL model has never been presented in the literature.
  • Hyperparameter tuning of the DenseNet169 and RNN models takes place using the Adam optimizer and AOA techniques, respectively, which considerably enhances the fruit classification performance and shows the novelty of the work.
  • The performance of the proposed AFC-HPODTL model is validated on two open databases, and the results demonstrate better performance over other DL models.
The rest of the paper is organized as follows. Section 2 offers a detailed literature review of existing fruit classification models. Next, Section 3 introduces the proposed AFC-HPODTL model and Section 4 provides the experimental result analysis. Finally, Section 5 concludes the study.

2. Related Works

In [11], the authors suggest an effective structure for fruit classification with the help of DL. Most importantly, the structure depends on two distinct DL architectures: a proposed light model of six CNN layers, and a fine-tuned Visual Geometry Group-16 (VGG-16) pretrained DL method. Rojas-Aranda et al. [12] provide an image classification technique, based on a lightweight CNN, for the purpose of speeding up the checkout procedure in stores. A novel image dataset is presented with three types of fruits, with and without plastic bags. The input units are the RGB histogram, the RGB centroid acquired from K-means clustering, and a single RGB color. In [13], the researchers suggested a new fruit classification method that uses Long Short-Term Memory (LSTM), RNN structures, and CNN features. Type-II fuzzy enhancement was further utilized as a preprocessing device for improving the images. Furthermore, TLBO-MCET was used to tune the hyperparameters of the suggested method.
In [14], the researchers advanced a hybrid DL-based fruit image classification structure called attention-based densely connected convolutional network with convolutional auto-encoder (CAE-ADN), which employs a CAE for pretraining on the images and leverages an attention-based DenseNet for extracting the image features. In the opening portion of the structure, an unsupervised technique with a group of images is applied to pretrain the greedy layer-wise CAE. In the next portion of the structure, the supervised ADN with the ground truth is applied. The structure's last portion performs an estimation of the classes of fruits. Kumari and Gomathy [15] recommend a classical method that utilizes texture features and color for fruit classification. The conventional fruit classification technique relies on manual operation on the basis of visual ability. The classification is performed with the help of a Support Vector Machine (SVM) classifier using co-occurrence and statistical features extracted from the wavelet transform.
In [16], a 13-layer CNN was devised. Three categories of data augmentation methods are employed: noise injection, image rotation, and Gamma correction. The researchers compared average pooling and max pooling. Stochastic gradient descent with momentum is utilized for training the CNN with a minibatch size of 128. In [17], a fruit image classification technique based on the lightweight neural network MobileNetV2 and a transfer learning (TL) method is employed for recognizing fruit images. They leveraged a MobileNetV2 network pretrained on the ImageNet dataset as a base system after replacing the topmost layer of the base system with a Softmax classifier and a conventional convolution layer. They applied dropout to the newly added Conv2D layer simultaneously to diminish overfitting. The pretrained MobileNetV2 is utilized for extracting features, and the Softmax classifier is utilized for classifying them.
The researchers in [18] provide an extensive review of the hyperparameter tuning of CNN models by the use of nature-inspired algorithms. It provides an overview of various CNN approaches utilized for image classification, segmentation, and styling. Next, in [19], the mathematical relationships among four hyperparameters, namely learning rate, batch size, dropout rate, and convolution kernel size, were investigated in detail. A generalized multi-parameter mathematical correlation approach was derived, showing that the hyperparameters play a vital part in the efficiency of NN models. Guo et al. [20] introduced a distributed particle swarm optimization (DPSO) algorithm for hyperparameter tuning of CNN models. Compared with manual designs based on historical experience and personal preference, the DPSO algorithm effectually chooses the hyperparameters of the CNN model. In addition, the DPSO algorithm has shown significant improvement over the conventional PSO algorithm.
Several fruit classification models exist in the literature. Despite the development of ML and DL models in previous works, it is still necessary to boost fruit classification performance. Due to the continual deepening of models, the number of parameters of DL models increases rapidly, resulting in model overfitting. Moreover, different hyperparameters have a significant impact on the efficiency of the CNN model. In particular, hyperparameters such as the epoch count, batch size, and learning rate are important to achieve effective results. As the trial-and-error method of hyperparameter tuning is a tiresome and error-prone process, metaheuristic algorithms can be applied. Therefore, in this work, we employ the Adam optimizer and the AOA for the parameter selection of the DenseNet169 and RNN models, respectively.

3. The Proposed Model

In this study, a new AFC-HPODTL model was developed for the automatic identification and classification of fruits. The presented AFC-HPODTL model comprises a series of processes namely pre-processing, DenseNet169 feature extraction, Adam optimizer, RNN classification, and AOA hyperparameter optimization. Figure 1 illustrates the overall process of the AFC-HPODTL algorithm.

3.1. Contrast Enhancement

Initially, the presented AFC-HPODTL model employs contrast enhancement as a pre-processing step, which helps to improve the quality of the image. Contrast-limited adaptive histogram equalization (CLAHE) differs from AHE in that it takes care of the over-amplification of contrast. CLAHE operates on small regions of the image, called tiles, rather than the entire image. The adjacent tiles are then combined using bilinear interpolation to remove the artificial boundaries. This technique is executed to improve the contrast of images.
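As an illustration, a minimal NumPy sketch of the tile-wise, clip-limited equalization step is given below. The tile count and clip limit are illustrative assumptions rather than the paper's settings, and the bilinear blending of adjacent tiles is omitted for brevity:

```python
import numpy as np

def clahe_like(img, tiles=4, clip_limit=40):
    """Simplified CLAHE: clip each tile's histogram, redistribute the
    excess uniformly, then equalize the tile with the resulting CDF.
    Assumes a grayscale uint8 image whose sides are divisible by `tiles`;
    the full algorithm also blends adjacent tiles bilinearly."""
    img = np.asarray(img, dtype=np.uint8)
    h, w = img.shape
    th, tw = h // tiles, w // tiles
    out = np.empty_like(img)
    for i in range(tiles):
        for j in range(tiles):
            tile = img[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            hist = np.bincount(tile.ravel(), minlength=256)
            # Clip the histogram and spread the excess over all bins
            excess = np.maximum(hist - clip_limit, 0).sum()
            hist = np.minimum(hist, clip_limit) + excess // 256
            cdf = np.cumsum(hist).astype(np.float64)
            cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
            out[i * th:(i + 1) * th, j * tw:(j + 1) * tw] = cdf[tile].astype(np.uint8)
    return out
```

In practice, OpenCV's `cv2.createCLAHE(clipLimit=..., tileGridSize=...)` provides the full algorithm, including the bilinear tile blending.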

3.2. Feature Extraction

To extract feature vectors from the pre-processed fruit images, the DenseNet169 model is employed. CNN structures have two parts, namely the convolution and classification bases. The convolution base contains three important kinds of layers, namely the convolution, activation, and pooling layers [21]. These layers are utilized to discover the fundamental features of input images, called feature maps (FMs). An FM is obtained by applying convolutional procedures to the input image or prior feature maps using linear filtering and the addition of a bias term. Afterward, the FM is passed through a nonlinear activation function such as Sigmoid or ReLU. Conversely, the classification base comprises dense layers integrated with activation layers that convert the FMs into a 1D vector, expediting the classifier task using several neurons. Generally, one or more dropout layers are used in the classification base to minimize the overfitting encountered by CNN structures and enhance their generalization ability. Adding a dropout layer to the classification base introduces a new hyperparameter, the dropout rate, which is usually fixed in the range of 0.1–0.9.
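To make the convolution-base operations concrete, the plain-NumPy sketch below computes one feature map via linear filtering plus a bias term, applies ReLU, and down-samples with max pooling. The shapes and kernel are illustrative only, not the DenseNet169 implementation:

```python
import numpy as np

def relu(x):
    """Nonlinear activation applied to a feature map."""
    return np.maximum(x, 0.0)

def conv2d(img, kernel, bias=0.0):
    """Valid 2D convolution of a single-channel image with one filter,
    producing one feature map (linear filtering plus a bias term)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel) + bias
    return out

def max_pool(fm, size=2):
    """Non-overlapping max pooling; assumes dims divisible by `size`."""
    h, w = fm.shape
    return fm.reshape(h // size, size, w // size, size).max(axis=(1, 3))
```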
DenseNet is a recent addition to the NNs utilized for the detection of visual objects. DenseNet169 is a member of the DenseNet family [22], which is designed for image classification, and DenseNet169 is superior to the rest of the family. Typically, the network is pretrained on the ImageNet image database, after which the saved model can be loaded and fine-tuned rather than trained from scratch. In DenseNet, the outputs of the earlier layers are concatenated with those of the later layers. DenseNet has been shown to mitigate the accuracy degradation in very deep NNs produced by vanishing gradients, where a long path exists between the input and output layers and the information vanishes before reaching its target. A convolution layer is more effective and accurate when the paths are shorter and the layers are linked closely to the input and the output. Thus, DenseNet connects all the layers in a feed-forward fashion. A classical convolutional network with L layers has L connections, that is, one link between each layer and its following layer.
DenseNet instead has L(L + 1)/2 direct connections: each layer uses the feature maps of all preceding layers as input, and its own feature maps are used as input to all following layers. Several benefits are obtained from DenseNet: it reduces the vanishing gradient problem, strengthens feature propagation, encourages feature reuse, and decreases the number of parameters. The presented structure is evaluated on the highly competitive ImageNet image recognition benchmark and also utilizes the save and load functions. Concatenation or addition of layer outputs is feasible only when the FM dimensions match; DenseNet is therefore divided into DenseBlocks with different numbers of filters, but within a block, the dimensions are the same. Batch normalization (BN) and down-sampling are performed in the transition layers between the DenseBlocks, where the number of filters changes, which is assumed to be a vital stage of the CNN. The growth rate, represented by k, plays an important role in generalizing the l-th layer. The number of feature maps input to each layer is measured by:
k_l = k_0 + k × (l − 1)
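As a small worked example of this growth-rate relation k_l = k_0 + k × (l − 1), the helper below lists the input width of each layer in a dense block; the specific values k_0 = 64 and k = 32 are illustrative assumptions, not taken from the paper:

```python
def dense_block_channels(k0, k, num_layers):
    """Feature maps entering layer l of a dense block: k_l = k0 + k*(l-1),
    since every earlier layer's k output maps are concatenated onto the
    block input."""
    return [k0 + k * (l - 1) for l in range(1, num_layers + 1)]

def num_direct_connections(L):
    """Direct connections in an L-layer dense block: L*(L+1)/2."""
    return L * (L + 1) // 2
```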
Here, the Adam optimizer fine-tunes the initial values of the DenseNet169 model. We employ ADAM, an optimization approach, as a substitute for the traditional stochastic gradient descent algorithm for updating the network weights on the training dataset [23]. ADAM is derived from AdaGrad and is a more adaptive technique; it can be viewed as a combination of AdaGrad and momentum.
For weights w^t and loss function L^t, where the index t specifies the present training iteration, the parameter update in ADAM is as follows:
m_w^(t+1) ← β₁ m_w^t + (1 − β₁) ∇_w L^t
v_w^(t+1) ← β₂ v_w^t + (1 − β₂) (∇_w L^t)²
m̂_w = m_w^(t+1) / (1 − β₁^(t+1))
v̂_w = v_w^(t+1) / (1 − β₂^(t+1))
w^(t+1) ← w^t − η m̂_w / (√v̂_w + ε)
In these expressions, β₁ and β₂ denote the forgetting factors for the gradient and the second moment of the gradient, respectively, and ε is a small scalar utilized to prevent division by zero.
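The ADAM update rules above can be sketched directly in NumPy. The quadratic objective and step size used to exercise the update are illustrative choices, not the paper's training setup:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM update: exponential moving averages of the gradient and
    its square, bias correction, then a scaled parameter step."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** (t + 1))   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** (t + 1))   # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3)
w, m, v = np.array([0.0]), np.zeros(1), np.zeros(1)
for t in range(2000):
    w, m, v = adam_step(w, 2 * (w - 3.0), m, v, t, lr=0.05)
```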

3.3. Fruit Classification

In the final stage, the RNN model is utilized for the identification and classification of fruits. The presented technique makes use of the LSTM model, a special kind of RNN. In an RNN, the neurons are interconnected with one another through a directed cycle [24]. The RNN model processes data sequentially, since it utilizes internal memory to process a series of inputs or words. The RNN performs the same task for all elements, where the output depends on each preceding node input and the remembered data. Figure 2 depicts the structure of the RNN. Equation (7) characterizes the typical RNN structure, where h_t indicates the new state at time t, f_w denotes a function with parameter w, h_{t−1} represents the older (preceding) state, and x_t signifies the input vector at time t.
h_t = f_w(h_{t−1}, x_t)
Equation (7) is expanded into Equation (8), which makes the weights explicit:
h_t = tanh(W_hh h_{t−1} + W_xh x_t)
Here, tanh denotes the activation function, W_hh represents the weight of the hidden state, and x_t signifies the input vector. Exploding or vanishing gradient problems arise while the gradient is back-propagated through the network during learning. A special kind of RNN model called LSTM is utilized for handling the vanishing gradient problem. The LSTM preserves long-term dependencies in an efficient manner by utilizing three diverse gates. The LSTM gates are explained in the following expressions.
Input Gate: in_t = σ(W_in · [hs_{t−1}, x_t] + b_in)
Memory Cell: C_t = tanh(W_c · [hs_{t−1}, x_t] + b_c)
Forget Gate: f_t = σ(W_f · [hs_{t−1}, x_t] + b_f)
Output Gate: o_t = σ(W_o · [hs_{t−1}, x_t] + b_o)
In these formulas, b characterizes the bias vector, W the corresponding weight matrix, and x_t the input vector at time t, whereas in, f, C, and o represent the input gate, forget gate, memory cell, and output gate, respectively.
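A single forward step of these gate equations can be sketched in NumPy as follows. The layer sizes and the packing of the four gate blocks into one weight matrix are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    """One LSTM step: the concatenated [h_prev, x_t] is projected by W to
    the pre-activations of the input, forget, and output gates and the
    candidate memory cell, which then update the cell and hidden states."""
    H = h_prev.shape[0]
    z = np.concatenate([h_prev, x_t]) @ W + b  # shape (4*H,)
    i = sigmoid(z[:H])            # input gate
    f = sigmoid(z[H:2 * H])       # forget gate
    o = sigmoid(z[2 * H:3 * H])   # output gate
    g = np.tanh(z[3 * H:])        # candidate memory cell
    c_t = f * c_prev + i * g      # new cell state
    h_t = o * np.tanh(c_t)        # new hidden state
    return h_t, c_t
```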

3.4. Hyperparameter Tuning

In this study, the AOA is exploited to tune the hyperparameters of the RNN model, such as the learning rate, number of hidden layers, weight initialization, and decay rate. The AOA is a recent swarm intelligence approach [25]. The Aquila has four hunting strategies; for dissimilar types of prey, the Aquila flexibly changes its hunting strategy and then uses its fast speed combined with its claws and sturdy feet to attack the prey. The mathematical expressions are summarized in the following steps.
Step 1: Extended exploration ( X 1 ): higher soar using vertical stoop
Here, the Aquila flies high above the ground and widely explores the search space; a vertical dive is then taken once the Aquila identifies the prey region. This behavior can be mathematically expressed as follows:
X₁(t + 1) = X_best(t) × (1 − t/T) + (X_M(t) − X_best(t)) × r₁
X_M(t) = (1/N) × Σ_{i=1}^{N} X_i(t)
In these equations, X_best(t) signifies the best location obtained so far, and X_M(t) represents the average location of all Aquilas in the present iteration. t and T indicate the current iteration and the maximum number of iterations, respectively, N denotes the population size, and r₁ refers to a random number in the range [0, 1].
Step 2: Narrowed exploration ( X 2 ): contour flight with shorter glide attack
This is the most popular hunting method of the Aquila. It descends within the designated area, flies around the prey, and applies a short glide to attack. The updated location is given in the following:
X₂(t + 1) = X_best(t) × LF(D) + X_R(t) + (y − x) × r₂
In Equation (15), X_R(t) refers to a random location of an Aquila, D indicates the dimension size, and r₂ represents a random number in the range [0, 1]. LF(D) signifies the Levy flight function, which is given in the following:
LF(D) = s × (u × σ) / |v|^(1/β)
σ = (Γ(1 + β) × sin(πβ/2)) / (Γ((1 + β)/2) × β × 2^((β−1)/2))
In these expressions, s and β are constant values equal to 0.01 and 1.5, respectively, and u and v stand for random numbers in the range [0, 1]. y and x describe the spiral shape in the search space and are computed as follows:
x = r × sin θ
y = r × cos θ
r = r₃ + 0.00565 × D₁
θ = −ω × D₁ + 3π/2
In Equation (18), r₃ is the number of search cycles in the interval [1, 20], D₁ consists of integer numbers from 1 to the dimension size D, and ω is equal to 0.005.
Step 3: Expanded exploitation (X₃): low flight with a slow descent attack
Here, once the prey region is broadly identified, the Aquila descends vertically to execute a preliminary attack. The AOA uses the designated region to get close to the prey and attack it. This behavior can be mathematically modeled by the following equation:
X₃(t + 1) = (X_best(t) − X_M(t)) × α − r₄ + ((UB − LB) × r₅ + LB) × δ
In Equation (19), X_best(t) represents the best location obtained so far, and X_M(t) indicates the average of the present positions. α and δ signify exploitation fine-tuning parameters set to 0.1, UB and LB denote the upper and lower limits, and r₄ and r₅ refer to random values in the interval [0, 1].
Step 4: Narrowed exploitation X 4 : grabbing and walking prey
Here, the Aquila chases the prey along its escape trajectory and then attacks the prey on the ground. The mathematical expression of this behavior is given below:
X₄(t + 1) = QF × X_best(t) − (G₁ × X(t) × r₆) − G₂ × LF(D) + r₇ × G₁
QF(t) = t^((2 × rand() − 1)/(1 − T)²)
G₁ = 2 × r₈ − 1
G₂ = 2 × (1 − t/T)
In Equation (20), X(t) indicates the present location, and QF(t) characterizes the quality function value used to balance the search strategies. G₁ represents the movement parameter of the Aquila during prey tracking, a random number in the interval [−1, 1]. G₂ signifies the flight slope while chasing prey, which linearly decreases from 2 to 0. r₆, r₇, and r₈ are random numbers in [0, 1].
The AOA computes a fitness function (FF) to achieve higher classification efficiency. It returns a positive value, where a smaller value demonstrates a better candidate solution. In this case, the minimized classifier error rate is taken as the FF, as given in Equation (21).
fitness(x_i) = Classifier Error Rate(x_i) = (number of misclassified fruit images / total number of fruit images) × 100
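To illustrate the overall search loop, a loose Python sketch of the optimizer is given below. It keeps only the expanded exploration (X₁) move and a simplified expanded exploitation (X₃) move, drops the Levy-flight steps, and uses a greedy acceptance rule; the population size, iteration budget, and phase switch point are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def aoa_minimize(fitness, dim, lb, ub, pop=20, iters=100, seed=0):
    """Simplified Aquila Optimizer sketch: the population uses the
    expanded-exploration move early on and a simplified
    expanded-exploitation move later; candidates are accepted greedily."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(pop, dim))
    fit = np.array([fitness(x) for x in X])
    best = X[fit.argmin()].copy()
    for t in range(iters):
        Xm = X.mean(axis=0)                      # average position X_M(t)
        for i in range(pop):
            if t < iters * 2 / 3:                # exploration: high soar, vertical stoop
                cand = best * (1 - t / iters) + (Xm - best) * rng.random()
            else:                                # exploitation: slow descent attack
                cand = (best - Xm) * 0.1 * rng.random() + \
                       ((ub - lb) * rng.random() + lb) * 0.1
            cand = np.clip(cand, lb, ub)
            fc = fitness(cand)
            if fc < fit[i]:                      # keep only improvements
                X[i], fit[i] = cand, fc
        best = X[fit.argmin()].copy()
    return best, fit.min()
```

In the paper's setting, `fitness` would be the classifier error rate of Equation (21) for an RNN trained with the candidate hyperparameters, with each dimension encoding one tuned hyperparameter.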

4. Performance Validation

The experimental validation of the AFC-HPODTL model was tested using two datasets, namely dataset 1 [26] and dataset 2 [27]. The proposed AFC-HPODTL model is simulated using Python 3.6.5 on a PC with an i5-8600K CPU, GeForce 1050Ti 4GB GPU, 16GB RAM, 250GB SSD, and 1TB HDD. The parameter settings of the DenseNet model are as follows: dropout: 0.5, batch size: 5, epoch count: 50, and activation: ReLU.

4.1. Result Analysis on Dataset 1

Dataset 1 (D1) is an openly accessible fruit and vegetable dataset comprising 15 classes, as shown in Table 1. Each class contains at least 75 images, resulting in 2633 images in total. The images were gathered at a resolution of 1024 × 768 pixels on distinct dates and times. The dataset is freely accessible in [26]. A few sample images from dataset 1 are showcased in Figure 3.
Figure 4 demonstrates a set of confusion matrices created by the AFC-HPODTL model on the test dataset 1. The figure indicates that the AFC-HPODTL model has effectually categorized the images into 15 fruit classes under all datasets.
Table 2 reports the overall fruit classification results of the AFC-HPODTL model obtained on dataset 1. The results indicate that the AFC-HPODTL model obtained effective classification results on all datasets. For instance, on the entire dataset, the AFC-HPODTL model classified the 15 classes with average accuracy, precision, recall, F-score, MCC, and kappa score of 99.85%, 98.90%, 98.84%, 98.85%, 98.78%, and 98.76%, respectively. Afterward, with 70% of the training (TR) data, the AFC-HPODTL approach classified the 15 classes with average accuracy, precision, recall, F-score, MCC, and kappa score of 99.85%, 98.95%, 98.88%, 98.80%, 98.83%, and 98.77%, respectively. Similarly, with 30% of the testing (TS) data, the AFC-HPODTL algorithm classified the 15 classes with average accuracy, precision, recall, F-score, MCC, and kappa score of 99.84%, 98.72%, 98.77%, 98.70%, 98.64%, and 98.73%, respectively.
The training accuracy (TA) and validation accuracy (VA) attained by the AFC-HPODTL approach on dataset 1 are demonstrated in Figure 5. The experimental outcome shows that the AFC-HPODTL methodology gained maximal values of TA and VA. Specifically, the VA seemed to be higher than the TA.
The training loss (TL) and validation loss (VL) achieved by the AFC-HPODTL system on dataset 1 are established in Figure 6. The experimental outcome inferred that the AFC-HPODTL approach achieved the lowest values of TL and VL. Specifically, the VL seemed to be lower than TL.
Table 3 and Figure 7 provide a comprehensive comparison study of the AFC-HPODTL model with existing models [28] on dataset 1. The results show that the NASNetMobile and MobileNetV1 models showed worse fruit classification results. Next, the Inception v3 model gained a slightly increased classification outcome. Then, the DenseNet121, VGG-16, and MobileNetV2 models reported moderately closer classification results. However, the AFC-HPODTL model gained maximum performance with accuracy, precision, recall, F1-score, and kappa score of 99.84%, 98.72%, 98.77%, 98.70%, and 98.73%, respectively.

4.2. Result Analysis on Dataset 2

Dataset 2 (D2) is an Indian fruit dataset that involves 12 classes, as illustrated in Table 4. It is a balanced dataset, where each class has 1000 images, resulting in 12,000 images in total. All the images were obtained under various lighting, angle, and background conditions. The dataset is openly accessible in [27]. A few sample images from dataset 2 are demonstrated in Figure 8.
Figure 9 depicts a set of confusion matrices created by the AFC-HPODTL approach on the test dataset 2. The figure shows that the AFC-HPODTL algorithm effectively categorized the images into 12 fruit classes in all datasets.
Table 5 demonstrates the overall fruit classification outcomes of the AFC-HPODTL approach obtained on dataset 2. The outcomes show that the AFC-HPODTL model obtained effectual classification outcomes on all datasets. For instance, on the entire dataset, the AFC-HPODTL approach classified the 12 classes with average accuracy, precision, recall, F-score, MCC, and kappa score of 99.63%, 97.79%, 97.78%, 97.78%, 97.58%, and 97.57%, respectively. Next, with 70% of the TR data, the AFC-HPODTL algorithm classified the 12 classes with average accuracy, precision, recall, F-score, MCC, and kappa score of 99.61%, 97.70%, 97.68%, 97.68%, 97.47%, and 97.47%, respectively. Similarly, with 30% of the TS data, the AFC-HPODTL methodology classified the 12 classes with average accuracy, precision, recall, F-score, MCC, and kappa score of 99.67%, 97.99%, 98.02%, 98%, 97.82%, and 97.82%, respectively.
The TA and VA attained by the AFC-HPODTL approach on dataset 2 are demonstrated in Figure 10. The experimental outcome shows that the AFC-HPODTL methodology gained maximal values of TA and VA. Specifically, the VA appeared superior to the TA.
The TL and VL achieved by the AFC-HPODTL system on dataset 2 are established in Figure 11. The experimental outcome exposed that the AFC-HPODTL approach achieved the lowest values of TL and VL. Specifically, the VL seemed to be lesser than TL.
Table 6 and Figure 12 illustrate a comprehensive comparison analysis of the AFC-HPODTL algorithm with existing approaches [28] on dataset 2. The outcomes demonstrate that the NASNetMobile and MobileNetV1 techniques showed worse fruit classification results. The Inception v3 model gained somewhat superior classification outcomes. Likewise, the DenseNet121, VGG-16, and MobileNetV2 approaches reported moderately closer classification results. Eventually, the AFC-HPODTL system showed higher performance with accuracy, precision, recall, F1-score, and kappa score of 99.67%, 97.99%, 98.02%, 98%, and 97.82%, respectively.
From the detailed results and discussion, it is apparent that the AFC-HPODTL model accomplished maximum fruit classification results over the other models.

5. Conclusions

In this study, a new AFC-HPODTL model was developed for the automatic identification and classification of fruits. The presented AFC-HPODTL model comprises a series of processes, namely pre-processing, DenseNet169 feature extraction, Adam optimization, RNN classification, and AOA hyperparameter optimization. For feature extraction, the Adam optimizer with the deep transfer learning-based DenseNet169 model is used, and the AOA-RNN model is utilized for the classification of fruits. The performance validation of the presented AFC-HPODTL model was carried out using benchmark datasets, and the results reported promising performance over recent state-of-the-art approaches, with maximum accuracy of 99.84% and 99.67% on datasets 1 and 2, respectively. The results demonstrated that the presented model can effectively classify fruits in real time. As part of the future scope, hybrid DL models can be integrated into the AFC-HPODTL model for enhanced classification performance. In addition, the presented model can be extended to fruit quality assessment. Moreover, the computational complexity of the proposed model can be examined in our future work.

Author Contributions

Conceptualization, S.K.; data curation, K.S.; formal analysis, K.S.; funding acquisition, A.A.; investigation, S.K.; methodology, K.S. and S.K.; project administration, A.K.D. and K.S.; resources, A.J.M.J. and Y.K.Y.; validation, A.H.A.; visualization, Y.K.Y.; writing—original draft, S.K. and K.S.; writing—review and editing, S.K. All authors have read and agreed to the published version of the manuscript.

Funding

The work is supported by the Ministry of Science and Higher Education of the Russian Federation (Government Order FENU-2020-0022).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Altaheri, H.; Alsulaiman, M.; Muhammad, G. Date fruit classification for robotic harvesting in a natural environment using deep learning. IEEE Access 2019, 7, 117115–117133.
  2. Chen, X.; Zhou, G.; Chen, A.; Pu, L.; Chen, W. The fruit classification algorithm based on the multi-optimization convolutional neural network. Multimed. Tools Appl. 2021, 80, 11313–11330.
  3. Khan, R.; Debnath, R. Multi class fruit classification using efficient object detection and recognition techniques. Int. J. Image Graph. Signal Process. 2019, 11, 1.
  4. Abdusalomov, A.; Mukhiddinov, M.; Djuraev, O.; Khamdamov, U.; Whangbo, T.K. Automatic Salient Object Extraction Based on Locally Adaptive Thresholding to Generate Tactile Graphics. Appl. Sci. 2020, 10, 3350.
  5. Yoon, H.; Kim, B.H.; Mukhriddin, M.; Cho, J. Salient region extraction based on global contrast enhancement and saliency cut for image information recognition of the visually impaired. KSII Trans. Internet Inf. Syst. (TIIS) 2018, 12, 2287–2312.
  6. Nasir, I.M.; Bibi, A.; Shah, J.H.; Khan, M.A.; Sharif, M.; Iqbal, K.; Nam, Y.; Kadry, S. Deep learning-based classification of fruit diseases: An application for precision agriculture. CMC-Comput. Mater. Contin. 2021, 66, 1949–1962.
  7. Naranjo-Torres, J.; Mora, M.; Hernández-García, R.; Barrientos, R.J.; Fredes, C.; Valenzuela, A. A review of convolutional neural network applied to fruit image processing. Appl. Sci. 2020, 10, 3443.
  8. Macanhã, P.A.; Eler, D.M.; Garcia, R.E.; Junior, W.E.M. Handwritten feature descriptor methods applied to fruit classification. In Information Technology-New Generations; Springer: Berlin/Heidelberg, Germany, 2018; pp. 699–705.
  9. Siddiqi, R. Fruit-classification model resilience under adversarial attack. SN Appl. Sci. 2022, 4, 1–22.
  10. Ukwuoma, C.C.; Zhiguang, Q.; Bin Heyat, M.B.; Ali, L.; Almaspoor, Z.; Monday, H.N. Recent Advancements in Fruit Detection and Classification Using Deep Learning Techniques. Math. Probl. Eng. 2022, 2022, 9210947.
  11. Hossain, M.S.; Al-Hammadi, M.; Muhammad, G. Automatic fruit classification using deep learning for industrial applications. IEEE Trans. Ind. Inform. 2018, 15, 1027–1034.
  12. Rojas-Aranda, J.L.; Nunez-Varela, J.I.; Cuevas-Tello, J.C.; Rangel-Ramirez, G. Fruit classification for retail stores using deep learning. In Mexican Conference on Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2020; pp. 3–13.
  13. Gill, H.S.; Khehra, B.S. Hybrid classifier model for fruit classification. Multimed. Tools Appl. 2021, 80, 27495–27530.
  14. Xue, G.; Liu, S.; Ma, Y. A hybrid deep learning-based fruit classification using attention model and convolution autoencoder. Complex Intell. Syst. 2020, 1–11.
  15. Kumari, R.S.S.; Gomathy, V. Fruit classification using statistical features in SVM classifier. In Proceedings of the 2018 4th International Conference on Electrical Energy Systems (ICEES), Chennai, India, 7–9 February 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 526–529.
  16. Zhang, Y.D.; Dong, Z.; Chen, X.; Jia, W.; Du, S.; Muhammad, K.; Wang, S.H. Image based fruit category classification by 13-layer deep convolutional neural network and data augmentation. Multimed. Tools Appl. 2019, 78, 3613–3632.
  17. Xiang, Q.; Wang, X.; Li, R.; Zhang, G.; Lai, J.; Hu, Q. Fruit image classification based on MobileNetV2 with transfer learning technique. In Proceedings of the 3rd International Conference on Computer Science and Application Engineering, Sanya, China, 22–24 October 2019; pp. 1–7.
  18. Mohakud, R.; Dash, R. Survey on hyperparameter optimization using nature-inspired algorithm of deep convolution neural network. In Intelligent and Cloud Computing; Springer: Berlin/Heidelberg, Germany, 2021; pp. 737–744.
  19. Shen, M.; Yang, J.; Li, S.; Zhang, A.; Bai, Q. Nonlinear Hyperparameter Optimization of a Neural Network in Image Processing for Micromachines. Micromachines 2021, 12, 1504.
  20. Guo, Y.; Li, J.Y.; Zhan, Z.H. Efficient hyperparameter optimization for convolution neural networks in deep learning: A distributed particle swarm optimization approach. Cybern. Syst. 2020, 52, 36–57.
  21. Ezzat, D.; Ella, H.A. GSA-DenseNet121-COVID-19: A hybrid deep learning architecture for the diagnosis of COVID-19 disease based on gravitational search optimization algorithm. arXiv 2020, arXiv:2004.05084.
  22. Lodhi, B.; Kang, J. Multipath-DenseNet: A supervised ensemble architecture of densely connected convolutional networks. Inf. Sci. 2019, 482, 63–72.
  23. Soydaner, D. A comparison of optimization algorithms for deep learning. Int. J. Pattern Recognit. Artif. Intell. 2020, 34, 2052013.
  24. Rehman, A.U.; Malik, A.K.; Raza, B.; Ali, W. A hybrid CNN-LSTM model for improving accuracy of movie reviews sentiment analysis. Multimed. Tools Appl. 2019, 78, 26597–26613.
  25. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250.
  26. Rocha, A.; Hauagge, D.C.; Wainer, J.; Goldenstein, S. Automatic fruit and vegetable classification from images. Comput. Electron. Agric. 2010, 70, 96–104.
  27. Meshram, V.; Thanomliang, K.; Ruangkan, S.; Chumchu, P.; Patil, K. FruitsGB: Top Indian fruits with quality. IEEE Dataport 2020.
  28. Shahi, T.B.; Sitaula, C.; Neupane, A.; Guo, W. Fruit classification using attention-based MobileNetV2 for industrial applications. PLoS ONE 2022, 17, e0264586.
Figure 1. Overall process of AFC-HPODTL approach.
Figure 2. RNN architecture.
Figure 3. Sample images from dataset 1.
Figure 4. Confusion matrices of AFC-HPODTL approach on dataset 1: (a) entire dataset, (b) 70% of TR data, and (c) 30% of TS data.
Figure 5. TA and VA analysis of AFC-HPODTL approach under dataset 1.
Figure 6. TL and VL analysis of AFC-HPODTL approach under dataset 1.
Figure 7. Comparative analysis of AFC-HPODTL approach under dataset 1.
Figure 8. Sample images from dataset 2.
Figure 9. Confusion matrices of AFC-HPODTL approach on dataset 2: (a) entire dataset, (b) 70% of TR data, and (c) 30% of TS data.
Figure 10. TA and VA analysis of AFC-HPODTL approach under dataset 2.
Figure 11. TL and VL analysis of AFC-HPODTL approach under dataset 2.
Figure 12. Comparative analysis of AFC-HPODTL approach on dataset 2.
Table 1. Dataset 1 details.

Labels   Name                 No. of Instances
C1       Agata potato         75
C2       Asterix potato       75
C3       Cashew               75
C4       Diamond peach        75
C5       Fuji apple           75
C6       Granny Smith apple   75
C7       Honeydew melon       75
C8       Kiwi                 75
C9       Nectarine            75
C10      Onion                75
C11      Orange               75
C12      Plum                 75
C13      Spanish pear         75
C14      Tahiti lime          75
C15      Watermelon           75
Total No. of Instances        1125
Table 2. Result analysis of AFC-HPODTL approach under various measures on dataset 1.

Labels    Accuracy   Precision   Recall   F-Score   MCC      Kappa Score

Entire Dataset
C1        99.64      100.00      94.67    97.26     97.11    -
C2        99.73      100.00      96.00    97.96     97.84    -
C3        99.91      100.00      98.67    99.33     99.28    -
C4        99.82      97.40       100.00   98.68     98.60    -
C5        99.82      98.67       98.67    98.67     98.57    -
C6        99.47      92.59       100.00   96.15     95.95    -
C7        100.00     100.00      100.00   100.00    100.00   -
C8        99.73      100.00      96.00    97.96     97.84    -
C9        99.91      98.68       100.00   99.34     99.29    -
C10       99.91      100.00      98.67    99.33     99.28    -
C11       100.00     100.00      100.00   100.00    100.00   -
C12       99.73      96.15       100.00   98.04     97.92    -
C13       100.00     100.00      100.00   100.00    100.00   -
C14       100.00     100.00      100.00   100.00    100.00   -
C15       100.00     100.00      100.00   100.00    100.00   -
Average   99.85      98.90       98.84    98.85     98.78    98.76

Training Phase (70%)
C1        99.62      100.00      94.74    97.30     97.13    -
C2        99.87      100.00      98.11    99.05     98.98    -
C3        99.87      100.00      98.15    99.07     99.00    -
C4        99.75      96.36       100.00   98.15     98.03    -
C5        99.75      98.15       98.15    98.15     98.01    -
C6        99.49      93.44       100.00   96.61     96.40    -
C7        100.00     100.00      100.00   100.00    100.00   -
C8        99.75      100.00      95.92    97.92     97.81    -
C9        100.00     100.00      100.00   100.00    100.00   -
C10       99.87      100.00      98.18    99.08     99.02    -
C11       100.00     100.00      100.00   100.00    100.00   -
C12       99.75      96.30       100.00   98.11     98.00    -
C13       100.00     100.00      100.00   100.00    100.00   -
C14       100.00     100.00      100.00   100.00    100.00   -
C15       100.00     100.00      100.00   100.00    100.00   -
Average   99.85      98.95       98.88    98.90     98.83    98.77

Testing Phase (30%)
C1        99.70      100.00      94.44    97.14     97.03    -
C2        99.41      100.00      90.91    95.24     95.05    -
C3        100.00     100.00      100.00   100.00    100.00   -
C4        100.00     100.00      100.00   100.00    100.00   -
C5        100.00     100.00      100.00   100.00    100.00   -
C6        99.41      90.00       100.00   94.74     94.57    -
C7        100.00     100.00      100.00   100.00    100.00   -
C8        99.70      100.00      96.15    98.04     97.90    -
C9        99.70      95.00       100.00   97.44     97.32    -
C10       100.00     100.00      100.00   100.00    100.00   -
C11       100.00     100.00      100.00   100.00    100.00   -
C12       99.70      95.83       100.00   97.87     97.74    -
C13       100.00     100.00      100.00   100.00    100.00   -
C14       100.00     100.00      100.00   100.00    100.00   -
C15       100.00     100.00      100.00   100.00    100.00   -
Average   99.84      98.72       98.77    98.70     98.64    98.73
Table 3. Comparative analysis of AFC-HPODTL approach with existing algorithms on dataset 1 [28].

Methods        Accuracy   Precision   Recall   F1-Score   Kappa Score
DenseNet121    95.10      93.74       95.03    94.22      94.47
NASNetMobile   85.98      88.58       87.24    85.96      85.95
VGG-16         95.44      94.69       94.40    94.42      94.91
MobileNetV1    86.83      88.11       86.45    85.87      86.40
InceptionV3    90.35      90.71       89.45    89.32      88.55
MobileNetV2    96.07      95.44       95.75    95.79      95.22
AFC-HPODTL     99.84      98.72       98.77    98.70      98.73
Table 4. Dataset 2 details.

Labels   Name               No. of Instances
C1       Bad apple          1000
C2       Good apple         1000
C3       Bad banana         1000
C4       Good banana        1000
C5       Bad guava          1000
C6       Good guava         1000
C7       Bad lime           1000
C8       Good lime          1000
C9       Bad orange         1000
C10      Good orange        1000
C11      Bad pomegranate    1000
C12      Good pomegranate   1000
Total No. of Instances      12,000
Table 5. Result analysis of AFC-HPODTL approach under various measures on dataset 2.

Labels    Accuracy   Precision   Recall   F-Score   MCC     Kappa Score

Entire Dataset
C1        99.65      98.48       97.30    97.89     97.70   -
C2        99.68      98.58       97.50    98.04     97.86   -
C3        99.67      98.19       97.80    98.00     97.81   -
C4        99.51      95.81       98.40    97.09     96.83   -
C5        99.61      97.60       97.70    97.65     97.44   -
C6        99.42      95.68       97.50    96.58     96.27   -
C7        99.61      97.79       97.50    97.65     97.43   -
C8        99.58      98.57       96.40    97.47     97.25   -
C9        99.72      97.63       99.00    98.31     98.16   -
C10       99.74      98.89       98.00    98.44     98.30   -
C11       99.68      97.81       98.30    98.05     97.88   -
C12       99.69      98.39       97.90    98.15     97.98   -
Average   99.63      97.79       97.78    97.78     97.58   97.57

Training Phase (70%)
C1        99.62      98.40       96.98    97.68     97.48   -
C2        99.68      98.66       97.36    98.01     97.84   -
C3        99.58      97.84       97.13    97.48     97.26   -
C4        99.57      96.76       98.35    97.55     97.32   -
C5        99.68      97.68       98.40    98.04     97.86   -
C6        99.35      95.26       97.02    96.13     95.78   -
C7        99.55      97.73       96.90    97.31     97.07   -
C8        99.62      98.66       96.65    97.64     97.44   -
C9        99.69      97.09       99.29    98.18     98.01   -
C10       99.73      98.86       97.89    98.37     98.22   -
C11       99.64      97.42       98.27    97.84     97.65   -
C12       99.65      98.00       97.86    97.93     97.74   -
Average   99.61      97.70       97.68    97.68     97.47   97.47

Testing Phase (30%)
C1        99.72      98.68       98.03    98.35     98.20   -
C2        99.67      98.41       97.79    98.10     97.92   -
C3        99.86      99.01       99.34    99.17     99.10   -
C4        99.36      93.36       98.52    95.87     95.57   -
C5        99.44      97.42       96.18    96.79     96.49   -
C6        99.61      96.69       98.65    97.66     97.45   -
C7        99.75      97.95       98.97    98.46     98.32   -
C8        99.50      98.37       95.86    97.10     96.83   -
C9        99.78      98.98       98.31    98.64     98.52   -
C10       99.78      98.96       98.28    98.62     98.50   -
C11       99.75      98.69       98.37    98.53     98.40   -
C12       99.78      99.32       98.00    98.66     98.54   -
Average   99.67      97.99       98.02    98.00     97.82   97.82
Table 6. Comparative analysis of AFC-HPODTL approach with existing algorithms on dataset 2 [28].

Methods        Accuracy   Precision   Recall   F1-Score   Kappa Score
DenseNet121    95.56      95.89       96.41    95.64      96.04
NASNetMobile   93.89      93.40       94.07    93.03      93.04
VGG-16         95.74      97.02       95.74    96.79      96.51
MobileNetV1    86.29      87.80       87.13    87.06      85.46
InceptionV3    94.91      95.17       96.19    95.44      95.90
MobileNetV2    96.20      96.46       96.89    96.98      96.03
AFC-HPODTL     99.67      97.99       98.02    98.00      97.82
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Citation: Shankar, K.; Kumar, S.; Dutta, A.K.; Alkhayyat, A.; Jawad, A.J.M.; Abbas, A.H.; Yousif, Y.K. An Automated Hyperparameter Tuning Recurrent Neural Network Model for Fruit Classification. Mathematics 2022, 10, 2358. https://doi.org/10.3390/math10132358