Article

Date Fruit Classification Based on Surface Quality Using Convolutional Neural Network Models

Computer Science Department, College of Computer Sciences and Information Technology (CCSIT), King Faisal University, P.O. Box 400, Al-Ahsa 31982, Saudi Arabia
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(13), 7821; https://doi.org/10.3390/app13137821
Submission received: 1 May 2023 / Revised: 10 June 2023 / Accepted: 29 June 2023 / Published: 3 July 2023
(This article belongs to the Section Food Science and Technology)

Abstract

Classifying the quality of dates after harvest plays a significant role in reducing waste in date fruit production. About one million tons of date fruit are produced annually in Saudi Arabia. Part of this production goes to local factories to be processed and packaged for use. Classifying and sorting edible dates from inedible dates is one of the first and most important stages of the production process in the date fruit industry. Because this process is still performed manually in date production factories in Saudi Arabia, it may increase the waste of date fruit and reduce production efficiency. Therefore, in this paper, we propose a system to automate the classification of date fruit in production. The proposed system focuses on classifying the quality of date fruit at the postharvest stage. By automating classification at this stage, we can increase production efficiency, raise classification accuracy, control product quality, and enable data analysis within the industry. As a result, this increases market competitiveness, reduces production costs, and increases productivity. The system was developed based on convolutional neural network models. To train the models, we constructed a new image dataset containing two main classes: one with images of date fruit with excellent surface quality and another with images of date fruit with poor surface quality. The results show that the trained model can classify date fruit based on surface quality with an accuracy of 97%.

1. Introduction

Food security is a critical objective that must be achieved. It has become a major problem that affects the present and will remain one for the next fifty years and beyond [1]. It undermines our ability to meet the demand for food; the agricultural sector has difficulty keeping up with global food demand [1]. Moreover, in the Kingdom of Saudi Arabia, local food production does not meet local demand; it covers only 20% of demand, and the remaining 80% is filled by foreign imports [2].
There are many reasons for this problem; one of the main ones is food waste. The Barilla Center for Food and Nutrition ranked Saudi Arabia among the top 25 countries in terms of food waste [2]. One of the fruit crops that generates a large amount of waste in the Kingdom of Saudi Arabia is dates. The Kingdom produces nearly one million tons of date fruit annually [3]. One cause of waste in production is related to the manufacturing processes and techniques used. Only 7.4% of production is processed, with tools that are not intended exclusively for dates but also for other types of fruit, which may reduce the quality of production [4]. This contributes to an increase in the loss of date crops.
Date fruit misclassification is one of the problems that causes waste in date fruit production, and researchers are trying to find solutions for it. Classification takes place at several stages and for several purposes, and to reduce waste, researchers have proposed systems that carry out this process automatically with fewer misclassification errors. An example of a stage that needs an automated classifier is the preharvest stage, where the farmer needs to classify the maturity level of dates to check whether they are ready to be harvested. Building a classifier for this stage is important, because dates whose maturity level is classified incorrectly will be harvested before they are ready and thrown away, which increases waste in production. Mohammed et al. [5] addressed this problem by building a system to classify the maturity level of dates before they are harvested. Other stages likewise need new systems to reduce the waste caused by misclassification.
In this paper, we focus on developing an automated postharvest classification system. We developed it to enhance the efficiency of classification and to increase its accuracy, thereby reducing production waste. Additionally, it facilitates data analysis within the industry and assists with production quality control. After being harvested, date crops still need to be filtered, because some dates are defective due to several causes, such as insects, diseases, or birds eating them. After harvesting the crops, farmers inspect the dates visually, identify the good crops, and eliminate the unwanted ones. Sometimes, farmers classify good dates as defective by mistake and throw them out, which increases waste. This process is carried out manually, so it involves a human error factor; moreover, it takes a long time and requires a lot of labor. Therefore, we built a system to classify dates and find the defective dates among the good ones. To train this model, we constructed a new image dataset with two main classes, one for good dates and another for defective dates. The model was built based on deep learning and computer vision techniques. In this paper, we introduce a system that can classify date fruit automatically after the crop is harvested, and our major contributions are
  • Construction of a new image dataset for date fruit with good surface quality and defective surface quality.
  • Localization of the date fruit in the images.
  • Classification of the detected date fruit based on surface quality as good or defective.
  • Exploration of the performance of the CNN models EfficientNetB0, EfficientNetB1, YOLOv5n, and YOLOv5s with respect to visually classifying the date fruit surface quality.
The rest of the paper is organized as follows: Section 2 presents related work, Section 3 presents the materials and methods, Section 4 presents the performance analysis, and Section 5 concludes the paper.

2. Related Work

Researchers in the field of date fruit production have sought to develop new systems to automate relevant processes, trying to replace manual methods with technology-based ones. In this paper, we focus on one process, the classification process, which is used at several stages and for many purposes. In the following, we summarize how other authors in the field have approached date fruit classification, laying out the purposes of their systems and indicating the stage of date fruit production for which each classification system was developed. We also summarize their development methods and techniques.
Adnan et al. [6] proposed a system to address one of the challenges facing the date industry: classification and sorting without any physical measurement. Their system classifies and sorts four types of date fruit based on color, size, and texture. They trained six algorithms and compared their accuracy, finding that the Support Vector Machine achieved the best results even with a small number of training images. However, the system could be improved by obtaining more data, because the dataset contained only a few hundred images.
Ghulam [7] proposed a classification system built to automatically classify four types of date fruit based on shape, size, and texture. The system extracts texture patterns using a local binary pattern and the Weber local descriptor, and the Fisher discrimination ratio is utilized to reduce the dimensionality of the feature set. After the patterns are extracted, the date fruit types are classified using the Support Vector Machine algorithm. The experimental results show that the system classifies two types of dates, Ajwah and Sukkary, correctly but confuses two others, Sagai and Sellaj. In their experiment, the dataset contained a total of 320 images.
Abdulhamid et al. [8] proposed a classification system for seven types of date fruit. The system extracts fifteen features, such as the means and standard deviations of color, size, shape descriptors, and texture descriptors, by applying several image-processing algorithms, such as color thresholding, color filtering, and region identification. The extracted features are used as inputs to classification algorithms. To find the best classifier, several algorithms were tried: the nearest neighbor, an artificial neural network, and linear discriminant analysis. The results show that the artificial neural network achieved the highest accuracy. The system was trained on a dataset that contained only 140 images.
Md. Abu Khayer et al. [9] developed a date fruit classification system for six types of date fruit to help people in Bangladesh and to inform consumers about the type of date fruit. Their system classifies date fruit based on visual features identified using CNN models. Several models, MobileNetV1 [10], MobileNetV2 [11], and Inception-ResNet [12], were compared to find which had the highest accuracy. The models were trained on a dataset of 2246 images, and the comparison showed that MobileNetV1 achieved the highest accuracy.
Tasneem et al. [13] developed an automatic system for the date fruit agricultural industry. The system was proposed to deliver better quality, satisfy date fruit consumers, and reduce economic loss. It also has features that help count dates and provide an appropriate food quantity. The proposed system can classify date fruit by maturity status; it can also label and count the number of dates inside an image and detect date fruit defects. The development relies on image-processing algorithms that extract features from the date size, color, and skin texture for the classification task. For the defect detection task, they used a FLIR E5 thermal camera: the system processes thermal images to compare the temperature of a defective area with that of a nondefective area, identifying the defective area as a cold area (blue spot). Testing showed that the system needs improvement, because it sometimes detects the background as a cold (defective) area.
Mohammed et al. [14] developed a date fruit classification system to support the decision to harvest date fruit, which depends on the maturity level and type. The authors developed a smart harvesting decision system to classify date fruit based on type, maturity level, and weight. They divided the system into three subsystems: the date-type-estimation subsystem, the date-maturity-estimation subsystem, and the date-weight-estimation subsystem. The weight-estimation subsystem was developed based on a Support Vector Machine, while the type-estimation and maturity-estimation subsystems were developed based on four CNN models, ResNet [15], VGG-19 [16], Inception-V3 [17], and NASNet [18]. The models were trained on an image dataset created by the Center of Smart Robotics Research.
Hamdi et al. [19] developed a real-time date fruit classification system for harvesting robots. It was designed to classify dates based on three factors: type, maturity level, and harvesting decision. They also created a new image dataset containing more than 8000 images of five different date fruit types at a variety of prematurity and maturity stages. The system was trained on this dataset using transfer learning, fine-tuning pretrained models, and its development utilized two different CNN architectures, AlexNet [20] and VGG-16 [16].
Mohammed et al. [5] developed a date fruit classification system to make intelligent harvesting decisions. It was designed to classify seven maturity levels of date fruit. The system was built using computer vision and deep learning techniques utilizing three CNN models, VGG-19, Inception-v3, and NASNet, trained on a dataset named the "Date Fruit Dataset for Automated Harvesting and Visual Yield Estimation", built by the Center of Smart Robotics Research.

3. Materials and Methods

In this paper, we applied deep learning and computer vision techniques to develop a date fruit classification system that can classify date fruit based on their quality. The following subsections explain our development steps in detail.

3.1. Dataset Construction

Supervised learning is a machine learning approach that learns from labeled data to predict the outcome of new, unseen data. We employed this method so that the algorithm could learn from the data we provided. Since the system performs a visual classification of date fruit, it needs to learn from visual data. Consequently, we built a new image dataset of date fruit containing two main classes, where each class had 55 images and each image contained 10 date fruits; each class therefore represented 550 samples of date fruit. The first class represented date fruit with excellent quality, and the second class represented date fruit with poor quality. In addition, we had a third subset of only five images containing a mix of date fruit with excellent and poor quality.
The construction of this dataset began by collecting dates and separating those with excellent surface quality from those with poor surface quality. After that, we standardized the way the images were captured, fixing the background and lighting so that all images were taken with the same settings. Each step is explained in detail below.

3.1.1. Date Fruit Collection

The date fruit was collected from farms in Al-Ahsa city, located in the eastern province of Saudi Arabia. Many types of date fruit are grown there; some studies report 40 distinct types of dates in the city [21]. For our work, we chose and collected the Khalas type, because it is considered among the most cultivated and valuable types [21]. We collected the crop after it was harvested, when dates with different surface qualities were mixed together, and separated them into two classes: date fruit with excellent surface quality and date fruit with poor surface quality. Figure 1 shows a sample of date fruit with excellent surface quality, and Figure 2 shows a sample of date fruit with poor surface quality.

3.1.2. Photographic Environment Setup

An image dataset must be of high quality to be useful for image classification and object detection. We therefore prepared a photographic environment to capture images in a consistent way. It was built as follows: we made a box 35 cm wide, 45 cm long, and 40 cm high, surrounded by a white background on all sides. We then fixed a Samsung Galaxy S10 Plus phone camera at the same height as the box, positioned 23 cm toward the center depthwise and 18 cm across the width, and placed a white light next to the camera to illuminate the box.

3.1.3. Image Capturing

Setting up the photographic environment helped us construct an organized dataset. We divided the dataset into three subsets. The first subset consisted of 55 images containing only date fruit with excellent surface quality, as shown in Figure 3. For the second subset, we captured 55 images containing only date fruit with poor surface quality, as shown in Figure 4. These were the two main subsets, and each of their images contained ten date fruits, separated from one another. In addition, we created a third subset of only five images with a mixture of date fruit with excellent and poor surface quality, as shown in Figure 5.

3.2. Annotation Algorithm

One of the differences between image classification and object detection algorithms is that object detection algorithms can locate and label the objects inside images and can classify more than one object in a single image. Object detection algorithms need to be trained on an image dataset together with annotation files, which contain information about each object inside the images, such as the object's width and height. We wrote code to generate these files for our image dataset automatically, producing annotation files in YOLO format. The code was developed specifically for our image dataset, because it relies on dataset properties such as the white background of the images and the distance between objects.
import cv2

# Size thresholds used to filter contours down to date-sized objects;
# the exact values depend on the dataset and are illustrative here
dateWidth, dateHeight = 40, 40

def annotationAlgorithm(path, class_id=0):
    image = cv2.imread(path)
    imageHeight, imageWidth = image.shape[:2]

    # Convert the image to grayscale, then threshold it into a binary
    # image so the dark date fruits stand out against the white background
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    value, thresh = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)

    # Find all joining continuous points with the same color or intensity
    # by applying the findContours function from the OpenCV-Python package
    contours, hierarchy = cv2.findContours(
        thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

    annotations = []
    for contour in contours:
        x, y, width, height = cv2.boundingRect(contour)

        # Filter the returned objects to keep only objects that are a
        # similar size to a date fruit
        if width > dateWidth and height > dateHeight:
            x2 = x + width
            y2 = y + height

            # Convert the obtained values to YOLO format: the box center
            # and size, each normalized by the image dimensions
            value1 = round(((x + x2) / 2) / imageWidth, 6)
            value2 = round(((y + y2) / 2) / imageHeight, 6)
            value3 = round(width / imageWidth, 6)
            value4 = round(height / imageHeight, 6)
            annotations.append((class_id, value1, value2, value3, value4))
    return annotations
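In YOLO format, each image has a matching text file with one line per object: the class index followed by the four normalized values. Assuming the function above returns the list of annotation tuples, a sketch of writing the label files looks like the following (the helper name and the convention of pairing image001.jpg with image001.txt are standard YOLO practice, not code from the paper):

def writeAnnotationFile(path):
    # One label file per image, e.g. dates/img001.jpg -> dates/img001.txt
    with open(path.rsplit(".", 1)[0] + ".txt", "w") as f:
        for class_id, xc, yc, w, h in annotationAlgorithm(path):
            f.write(f"{class_id} {xc} {yc} {w} {h}\n")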

3.3. Design and System Development

Our classification system consists of three modules. The first module generates region proposals that may contain date fruits. The second module is a convolutional neural network that is used to extract the feature vector of the date fruits from each proposed region. The third module is a fully connected layer that is used to classify the quality of the date fruit. Figure 6 shows the integrated components. The following subsection explains each component in detail.

3.3.1. Region Proposal

A variety of papers have proposed object detection algorithms designed around a region proposal module; examples include R-CNN, Fast R-CNN, and Faster R-CNN. These modules aim to find all possible objects in an image for a variety of classes without knowing the objects' sizes, shapes, or potential locations, which forces the module to search more locations for objects of different shapes and sizes. The more regions searched and proposed, the more processing is needed. However, our system is designed for one particular job, classifying date fruit. To reduce the design complexity and the number of proposed regions, instead of using these modules, we developed a region proposal module based on our dataset's properties to locate the most probable positions of the date fruits in the image. One such property is that all images were captured on a white background, which allowed us to use image-processing functions to find the locations of the date fruits.
The processing steps of this module were as follows. The first step converted the color image into a binary image, reducing the color contrast caused by the diversity of colors and light reflections. In the second step, we applied the findContours function from the OpenCV-Python package, which returns the x-coordinate, y-coordinate, height, and width of every object found. However, due to image noise, the function returned some unwanted areas; therefore, in the third step, we added a conditional statement to filter the results and return only objects of approximately the same size as a date fruit. Figure 7 shows the processing result after each step.

3.3.2. Feature Extraction

In this module, we wanted to extract the visual features that characterize the quality of date fruits. Feature extraction is the process of extracting features that characterize a particular object. This module can be developed in two different ways: with handcrafted-feature-based methods or with deep learning [22]. Handcrafted-feature-based methods apply computer vision algorithms such as SIFT [23], HOG [24], DPM [25], LBP [26], and Haar [27]. These algorithms extract features, such as edges and color histograms, to characterize a particular object. The second way to extract features is the deep learning method, which does not require hand-specified characteristics such as edges or color histograms, as feature-based methods do; the algorithm extracts the characteristics on its own by passing the image through the network's successive layers.
For our feature extraction module, we used the deep learning method, specifically a convolutional neural network. A convolutional neural network is an artificial neural network inspired by the sophisticated functionality of human brains [28] and developed to mimic the way the human brain processes information [29]. Convolutional neural networks can be implemented using different architectures; we used the EfficientNet architecture proposed by Mingxing Tan et al. [30]. Their idea was to develop scalable models whose architecture can be scaled up in width, depth, and resolution, and they constructed a family of models by scaling up the EfficientNet-B0 backbone. Table 1 shows the details of the EfficientNet-B0 baseline network.
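For context, the compound scaling rule proposed in [30] ties the three dimensions to a single compound coefficient $\phi$ chosen according to the available compute budget:

$d = \alpha^{\phi}, \quad w = \beta^{\phi}, \quad r = \gamma^{\phi}, \quad \text{subject to } \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2, \ \alpha, \beta, \gamma \ge 1,$

where $d$, $w$, and $r$ scale the depth, width, and input resolution of the baseline network; EfficientNet-B1 through B7 are obtained from the B0 backbone by increasing $\phi$.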

3.4. Classification

A fully connected layer was used to classify the feature vectors produced by the feature extraction module. This layer accepts only one-dimensional vectors as input, whereas the output of the feature extraction module is a group of two-dimensional feature maps. Therefore, we added a flatten layer between them to convert the two-dimensional maps into a one-dimensional vector, as shown in Figure 8. The fully connected layer then feeds an output layer containing one neuron per dataset class.
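As a minimal sketch of how the backbone-plus-head design in Figure 8 could be assembled in Keras (the framework, optimizer, and loss below are our assumptions, not details stated in the paper):

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB0

# EfficientNetB0 backbone pretrained on ImageNet, with its original
# classification head removed (include_top=False)
base = EfficientNetB0(include_top=False, weights="imagenet",
                      input_shape=(224, 224, 3))

# Flatten the 2D feature maps into a 1D vector, then classify with a
# fully connected layer: one output neuron per dataset class
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),  # excellent vs. poor quality
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])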

3.5. Evaluation Metrics

Evaluation metrics are used to measure the performance of models and show how accurately a model works on a given dataset. In this paper, we measured the performance of the system via the accuracy, precision, and recall metrics. We then compared the results obtained from several models to find out which works best with our dataset. In the following, we explain each metric and its formula.
True positives and true negatives: the images from the positive and negative classes, respectively, that the models predict correctly.
False positives and false negatives: the images that the models predict incorrectly; a false positive is a negative-class image predicted as positive, and a false negative is a positive-class image predicted as negative.
The accuracy metric calculates the percentage of images that are correctly classified; it measures how often our models predict correctly out of all predictions, by dividing the number of true predictions by the total number of predictions. The formula is shown in Equation (1):
$\text{Accuracy} = \frac{\text{True predictions}}{\text{Total predictions}}$ (1)
Precision measures the percentage of predicted positives that are actually correct, out of all positive predictions, whether correct or incorrect. The formula is shown in Equation (2):
$\text{Precision} = \frac{\text{True positives}}{\text{True positives} + \text{False positives}}$ (2)
Recall measures the percentage of actual positives that were correctly predicted, i.e., the true positives over the sum of the true positives and the false negatives. The formula is shown in Equation (3):
$\text{Recall} = \frac{\text{True positives}}{\text{True positives} + \text{False negatives}}$ (3)
The mean average precision (mAP) is the most common metric used in deep learning to evaluate object detection models. It is calculated by taking the mean of the average precision over all classes, with detections matched to ground truth via the intersection over union. To compute the mAP, we first need to compute other metrics: the precision, the recall, the precision-recall curve, and the average precision. The formula is shown in Equation (4):
$\text{mAP} = \frac{1}{n} \sum_{k=1}^{k=n} \text{AP}_k$ (4)
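As a quick sanity check, the accuracy, precision, and recall reported later in Table 3 follow directly from the counts in Table 2; a small helper makes the arithmetic explicit:

def classificationMetrics(tp, fp, tn, fn):
    # Accuracy, precision, and recall from raw prediction counts
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Counts reported for EfficientNetB0 in Table 2: TP=96, FP=4, TN=96, FN=4
print(classificationMetrics(96, 4, 96, 4))  # -> (0.96, 0.96, 0.96)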

4. Performance Analysis

The following subsections begin by presenting the experiment setup and then show the performance and results of our experiment; after that, we show the prediction images for each trained model.

4.1. Experimental Setup

In our experiment, we set the number of epochs to 60 and the batch size to 32. The number of epochs indicates how many passes over the entire training image dataset the model completes, and the batch size indicates the number of images processed in a single training iteration (one gradient update).
For the developed system, we trained our dataset on two EfficientNet models, EfficientNetB0 and EfficientNetB1, to see the system's performance at different scales. Each model takes a different input image size; therefore, for every model, we resized the images to make them compatible with the model's input size: 224 × 224 pixels for EfficientNetB0 and 240 × 240 for EfficientNetB1. Moreover, the EfficientNet models were loaded with weights pretrained on ImageNet.
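As an illustration of this setup (the directory layout and data pipeline below are our assumptions), training the Keras model sketched in Section 3.4 could look like the following, with the image size switched to 240 for EfficientNetB1:

import tensorflow as tf

# Cropped images organized as dates/train/<class>/ and dates/val/<class>/
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dates/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dates/val", image_size=(224, 224), batch_size=32)

history = model.fit(train_ds, validation_data=val_ds, epochs=60)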
The images that were used during the experiment were taken from the first and the second subsets of our image dataset; the first subset represented the date fruit from the excellent surface quality class, and the second subset represented date fruit from the poor surface quality class. Each subset consisted of 55 images, and each of the images contained 10 date fruits. We split each class into fifty images for training and five images for validation to make a total of one hundred images for training and ten for validation.
Instead of feeding the entire image into our models, we cropped every date fruit inside each image, resized the crops to the model input size, and then used each cropped image as a single training sample. Cropping the date fruit from the images of the first and second classes resulted in 500 cropped images for training and 50 for validation per class, so the training folder contained 1000 images in total and the validation folder contained 100.
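A minimal sketch of this cropping step, reusing the bounding boxes produced by the region proposal module (the function name and box format are illustrative, not from the paper):

import cv2

def cropRegions(image_path, boxes, size=224):
    # Crop each proposed bounding box and resize it to the model input
    # size; boxes are (x, y, width, height) tuples from cv2.boundingRect
    image = cv2.imread(image_path)
    crops = []
    for x, y, w, h in boxes:
        crop = image[y:y + h, x:x + w]
        crops.append(cv2.resize(crop, (size, size)))
    return crops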
For YOLOv5, we trained the dataset on two models, YOLOv5n and YOLOv5s, with the same input image size of 640. These models were loaded with weights pretrained on COCO. In addition to the training and validation folders, the YOLOv5 models require the annotation files, which were constructed using our annotation algorithm from the first and second classes: 1000 date fruits were annotated for training and 100 for validation.
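The paper does not give the exact training invocation; with the official YOLOv5 repository, runs matching these settings would typically look like the commands below, where dates.yaml is a hypothetical dataset description file listing the train/val image paths and the two surface quality classes (nc: 2, names: ["excellent", "poor"]):

python train.py --img 640 --batch 32 --epochs 60 --data dates.yaml --weights yolov5n.pt
python train.py --img 640 --batch 32 --epochs 60 --data dates.yaml --weights yolov5s.pt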

4.2. Experimental Results

In our experiment, we trained two EfficientNet models. After training these models, we evaluated their performance with several evaluation metrics. We began the evaluation by calculating the number of correct classifications and the number of misclassifications for each class. These numbers were represented as true positives, representing the correct classification of the first class; false positives, representing the misclassification of the first class; true negatives, representing the correct classification of objects from the second class; and false negatives, representing misclassified objects from the second class. We present these values in Table 2. Moreover, Figure 9 and Figure 10 show a confusion matrix for these values resulting from the EfficientNetB0 and EfficientNetB1 models. The value in the first row and first column represents the TPs, the value in the first row and second column represents the FPs, the value in the second row and first column represents the FNs, and the value in the second row and second column represents the TNs.
Moreover, we evaluated our system with the accuracy metric, which represents how often our system classifies the different classes of date fruit correctly based on their surface quality. It is calculated by summing the correct classifications of the first class and the correct classifications of the second class and dividing by the total number of system classifications. Figure 11 plots the results over the training epochs. We also measured the precision and recall of the developed system; the results are shown in Table 3, and Figure 12 plots the precision and recall over several epochs.
We measured the precision, recall, and mean average precision metrics for the YOLOv5 object detection algorithm. Table 4 shows the results of training our dataset on a different scale of YOLOv5 models.
Figure 13 shows the prediction outcome of the system with EfficientNetB0. The classification confidence is almost the same for all date fruit in the images, with most at about 73%, even though the system detects all date fruit in the image and classifies most of them correctly. Figure 14 shows the prediction outcome of the system with EfficientNetB1. It achieved excellent results, classifying most of the date fruit correctly and detecting all date fruit in the images; however, it showed the same issue of assigning similar confidence percentages to most of the detected classes.
Figure 15 shows the results of the trained YOLOv5n model; it detected many incorrect bounding boxes and misclassified all date fruit with poor surface quality. The YOLOv5s model detected all date fruit correctly; however, it classified all date fruit as having excellent surface quality. Even the date fruits with poor surface quality were considered to be date fruit with excellent surface quality, as shown in Figure 16.

5. Conclusions

In this study, we examined the feasibility of employing a combination of visual features and deep learning techniques to classify date fruit based on surface quality. The system was developed to classify the quality of date fruit after the crop is harvested. A new image dataset with two main classes was constructed for this work and used to train YOLOv5n, YOLOv5s, EfficientNetB0, and EfficientNetB1. The performance evaluation shows that convolutional neural network models can successfully classify the quality of date fruit. The best results were achieved with the EfficientNetB1 model, which reached an accuracy of 97% on our image dataset. Although good results were achieved, our work still has limitations that can be addressed in the future: the dataset is limited because all images were captured at fixed angles, and its size is small. In future work, we may increase the number of images in each class and add more classes to include more types of date fruit.

Author Contributions

Writing—original draft, M.A.; Writing—review & editing, M.A.-S. and H.F.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Grant No. 3569].

Data Availability Statement

We used the dataset that we created for this study, as described in Section 3.1.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rosegrant, M.W.; Cline, S.A. Global food security: Challenges and policies. Science 2003, 302, 1917–1919. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Faridi, M.R.; Sulphey, M. Food security as a prelude to sustainability: A case study in the agricultural sector, its impacts on the Al Kharj community in The Kingdom of Saudi Arabia. Entrep. Sustain. Issues 2019, 6, 1536. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Al-Abbad, A.; Al-Jamal, M.; Al-Elaiw, Z.; Al-Shreed, F.; Belaifa, H. A study on the economic feasibility of date palm cultivation in the Al-Hassa Oasis of Saudi Arabia. J. Dev. Agric. Econ. 2011, 3, 463–468. [Google Scholar]
  4. El-Habbab, M.S.; Al-Mulhim, F.; Al-Eid, S.; Abo El-Saad, M.; Aljassas, F.; Sallam, A.; Ghazzawy, H. Assessment of post-harvest loss and waste for date palms in the Kingdom of Saudi Arabia. Int. J. Environ. Agric. Res. 2017, 3, 1–11. [Google Scholar] [CrossRef]
  5. Faisal, M.; Alsulaiman, M.; Arafah, M.; Mekhtiche, M.A. IHDS: Intelligent harvesting decision system for date fruit based on maturity stage using deep learning and computer vision. IEEE Access 2020, 8, 167985–167997. [Google Scholar] [CrossRef]
  6. Abi Sen, A.A.; Bahbouh, N.M.; Alkhodre, A.B.; Aldhawi, A.M.; Aldham, F.A.; Aljabri, M.I. A classification algorithm for date fruits. In Proceedings of the 2020 7th International Conference on Computing for Sustainable Global Development (INDIACom), IEEE, New Delhi, India, 12–14 March 2020; pp. 235–239. [Google Scholar]
  7. Muhammad, G. Automatic date fruit classification by using local texture descriptors and shape-size features. In Proceedings of the 2014 European Modelling Symposium, IEEE, Pisa, Italy, 21–23 October 2014; pp. 174–179. [Google Scholar]
  8. Haidar, A.; Dong, H.; Mavridis, N. Image-based date fruit classification. In Proceedings of the 2012 IV International Congress on Ultra Modern Telecommunications and Control Systems, IEEE, St. Petersburg, Russia, 3–5 October 2012; pp. 357–363. [Google Scholar]
  9. Khayer, M.A.; Hasan, M.S.; Sattar, A. Arabian date classification using CNN algorithm with various pre-trained models. In Proceedings of the 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), IEEE, Tirunelveli, India, 4–6 February 2021; pp. 1431–1436. [Google Scholar]
  10. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  11. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520. [Google Scholar]
  12. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. [Google Scholar]
  13. Najeeb, T.; Safar, M. Dates maturity status and classification using image processing. In Proceedings of the 2018 International Conference on Computing Sciences and Engineering (ICCSE), IEEE, Colombo, Sri Lanka, 8–11 August 2018; pp. 1–6. [Google Scholar]
  14. Faisal, M.; Albogamy, F.; Elgibreen, H.; Algabri, M.; Alqershi, F.A. Deep learning and computer vision for estimating date fruits type, maturity level, and weight. IEEE Access 2020, 8, 206770–206782. [Google Scholar] [CrossRef]
  15. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  16. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  17. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  18. Zoph, B.; Vasudevan, V.; Shlens, J.; Le, Q.V. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8697–8710. [Google Scholar]
  19. Altaheri, H.; Alsulaiman, M.; Muhammad, G. Date fruit classification for robotic harvesting in a natural environment using deep learning. IEEE Access 2019, 7, 117115–117133. [Google Scholar] [CrossRef]
  20. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef] [Green Version]
  21. Ismail, A.I.; Hassaballa, A.A.; Almadini, A.M.; Daffalla, S. Analyzing the Spatial Correspondence between Different Date Fruit Cultivars and Farms’ Cultivated Areas, Case Study: Al-Ahsa Oasis, Kingdom of Saudi Arabia. Appl. Sci. 2022, 12, 5728. [Google Scholar] [CrossRef]
  22. Li, L.; Feng, X. Face anti-spoofing via deep local binary pattern. In Deep Learning in Object Detection and Recognition; Springer: Singapore, 2019; pp. 91–111. [Google Scholar]
  23. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  24. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), IEEE, San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893. [Google Scholar]
  25. Felzenszwalb, P.F.; Girshick, R.B.; McAllester, D.; Ramanan, D. Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 32, 1627–1645. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  27. Viola, P.; Jones, M.J. Robust real-time face detection. Int. J. Comput. Vis. 2004, 57, 137–154. [Google Scholar] [CrossRef]
  28. Wang, S.C.; Wang, S.C. Cellular Automata. In Interdisciplinary Computing in Java Programming; Springer: Berlin/Heidelberg, Germany, 2003; pp. 147–165. [Google Scholar]
  29. Agatonovic-Kustrin, S.; Beresford, R. Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research. J. Pharm. Biomed. Anal. 2000, 22, 717–727. [Google Scholar] [CrossRef] [PubMed]
  30. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 10–15 June 2019; pp. 6105–6114. [Google Scholar]
Figure 1. Date fruit with excellent surface quality.
Figure 2. Date fruit with poor surface quality.
Figure 3. First subset.
Figure 4. Second subset.
Figure 5. Third subset.
Figure 6. Proposed integrated system.
Figure 7. Region proposal.
Figure 8. Classification module.
Figure 9. Confusion matrix for EfficientNetB0.
Figure 10. Confusion matrix for EfficientNetB1.
Figure 11. Accuracy results during various epochs for the EfficientNet models.
Figure 12. Precision and recall results for the EfficientNet models.
Figure 13. Prediction outcome of EfficientNetB0.
Figure 14. Prediction outcome of EfficientNetB1.
Figure 15. Prediction outcome of YOLOv5n.
Figure 16. Prediction outcome of YOLOv5s.
Table 1. EfficientNet-B0 baseline network.

Stage | Operation                 | Resolution | Channels | Layers
1     | Conv 3 × 3                | 224 × 224  | 32       | 1
2     | MBConv1, k3 × 3           | 112 × 112  | 16       | 1
3     | MBConv6, k3 × 3           | 112 × 112  | 24       | 2
4     | MBConv6, k5 × 5           | 56 × 56    | 40       | 2
5     | MBConv6, k3 × 3           | 28 × 28    | 80       | 3
6     | MBConv6, k5 × 5           | 14 × 14    | 112      | 3
7     | MBConv6, k5 × 5           | 14 × 14    | 192      | 4
8     | MBConv6, k3 × 3           | 7 × 7      | 320      | 1
9     | Conv 1 × 1 & Pooling & FC | 7 × 7      | 1280     | 1
Table 2. Results of TP, FP, TN, and FN for the EfficientNet models.

Evaluation Metric | EfficientNetB0 | EfficientNetB1
True Positives    | 96             | 97
False Positives   | 4              | 3
True Negatives    | 96             | 97
False Negatives   | 4              | 3
Table 3. Results of accuracy, precision, and recall (%) for the EfficientNet models.

Evaluation Metric | EfficientNetB0 | EfficientNetB1
Accuracy          | 96             | 97
Precision         | 96             | 97
Recall            | 96             | 97
Table 4. Results of precision, recall, and mean average precision for the YOLO models.

Evaluation Metric      | YOLOv5n | YOLOv5s
Precision              | 0.95055 | 0.90430
Recall                 | 0.91117 | 0.98783
Mean Average Precision | 0.95775 | 0.99211
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
