Article

Simplified Deep Learning for Accessible Fruit Quality Assessment in Small Agricultural Operations

by Víctor Zárate 1 and Danilo Cáceres Hernández 1,2,*
1 Facultad de Ingeniería Eléctrica, Universidad Tecnológica de Panamá, Panama 0819-07289, Panama
2 Sistema Nacional de Investigación (SNI), Secretaría Nacional de Ciencia, Tecnología e Innovación (SENACYT), Panama 0816-02852, Panama
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(18), 8243; https://doi.org/10.3390/app14188243
Submission received: 10 July 2024 / Revised: 31 August 2024 / Accepted: 4 September 2024 / Published: 13 September 2024

Abstract

Fruit quality assessment is vital for ensuring consumer satisfaction and marketability in agriculture. This study explores deep learning techniques for assessing fruit quality, focusing on practical deployment in resource-constrained environments. Two approaches were compared: training a convolutional neural network (CNN) from scratch and fine-tuning a pre-trained MobileNetV2 model through transfer learning. The performance of these models was evaluated using a subset of the Fruits-360 dataset chosen to simulate real-world conditions for small-scale producers. MobileNetV2 was selected for its compact size and efficiency, suitable for devices with limited computational resources. Both approaches achieved high accuracy, with the transfer learning model demonstrating faster convergence and slightly better performance. Feature map visualizations provided insight into the model’s decision-making by highlighting damaged areas of fruits, which enhances transparency and trust for end users. This study underscores the potential of deep learning models to modernize fruit quality assessment, offering practical, efficient, and interpretable tools for small-scale farmers.

1. Introduction

Fruit quality assessment is a cornerstone of agricultural practices, playing a crucial role in ensuring the marketability, safety, and consumer satisfaction of produce. Traditional methods, such as manual inspection and basic chemical tests, have long been employed to evaluate attributes like ripeness, texture, and flavor. However, these approaches are often labor-intensive and subjective, and they lack the precision required for modern agricultural demands, particularly as global food supply chains become increasingly complex and consumer expectations rise [1,2].
In recent years, non-destructive testing methods, including spectroscopy, imaging technologies, and machine learning, have emerged as more efficient alternatives for fruit quality assessment. These technologies offer the potential for rapid, accurate evaluations without causing damage to the produce, making them particularly valuable for large-scale operations [3,4,5]. Nevertheless, the high cost of advanced equipment and the technical expertise required to operate these systems pose significant barriers, especially for small-scale farmers and producers in resource-limited settings [6].
To address these challenges, this study proposes a machine learning-based approach to fruit quality assessment, with a specific focus on developing methods that are both precise and accessible. By leveraging deep learning techniques, particularly convolutional neural networks (CNNs), this research explores two approaches: training a model from scratch and fine-tuning a pre-trained model using transfer learning. The goal is to determine which method offers the best balance of accuracy and practicality, especially when applied to a subset of the Fruits-360 dataset, which includes a diverse range of fruit types and conditions [7,8].
The use of deep learning in this context is driven by its ability to automatically extract and learn relevant features from images, potentially improving the robustness and generalization of assessments. Through the comparative analysis of training from scratch versus transfer learning, this study provides valuable insights into the advantages and limitations of each approach, ultimately contributing to the development of more scalable and cost-effective solutions for fruit quality assessment [9].
Machine learning algorithms rely on feature extraction for classification. Some of the methods commonly used for feature extraction are presented in Figure 1.
  • Laser-based techniques: These involve the use of laser scanners to measure fruit size, shape, and surface characteristics. Laser scanning precision allows for the detailed 3D modeling of fruits [10,11]. Lasers can also be used in orchard harvesting for the detection of the location of fruits [12,13].
  • Spectral imaging (visible and non-visible): Spectral imaging techniques, including near-infrared (NIR), hyperspectral, and multispectral imaging, are used to assess both the external and internal quality of fruit. These methods can detect chemical compositions, moisture content, and internal defects not visible to the naked eye [14,15].
  • Gas analysis: Gas sensors and electronic noses are used for the detection of volatile organic compounds emitted by fruits. This method is particularly effective for assessing fruit ripeness and quality, as different ripening stages are associated with distinct gas profiles [16,17].
  • Ultrasound technology: Ultrasound is a non-destructive technique used to probe the internal structure of fruits, assessing qualities such as texture, density, and the presence of internal defects or rot, without affecting the fruit’s quality [18].
  • Computer vision and digital photography: In these methods, high-resolution images are captured using digital cameras and processed via computer vision algorithms for surface defect detection, size and shape analysis, and color grading assessment [19,20].
  • X-ray imaging: X-ray technology provides insights into the internal structure of fruits, detecting internal defects or foreign objects. It is especially useful for bulk scanning in processing plants [21,22,23].
  • Electrical impedance: This method involves measuring the resistance of fruit tissue to an electrical current and investigating its correlation with fruit firmness and ripeness [24].
  • Thermal imaging: Thermal cameras are used to detect temperature variations on the fruit’s surface, which can indicate ripening stages or internal defects [25,26,27].
Despite the advancements in non-destructive testing, the integration of deep learning into these methods represents a significant step toward modernizing agricultural practices. This research is particularly concerned with increasing the accessibility of these advanced techniques to a broader range of users, including small-scale farmers who may not have access to sophisticated tools. By proposing a method that is both efficient and interpretable, this study contributes to enhancing food security and sustainability, which is increasingly important in the face of global challenges such as climate change and population growth [28,29].
This study does not aim to revolutionize the field but to provide a modest yet meaningful advancement in the ongoing efforts to improve fruit quality assessment. By focusing on practical applications and the potential for widespread adoption, this research is expected to make a positive impact on the agricultural sector, particularly in resource-limited regions [30].

2. Materials and Methods

In this study, a deep learning model was developed to distinguish between five classes of fruits using transfer learning [31]. The study was performed using the Fruits-360 dataset and a pre-trained MobileNetV2 model for transfer learning. The analysis and training of the model were conducted in a cloud-based environment using Google Colaboratory.
The main goal was to develop an efficient and simple model for fruit damage classification with results comparable to or better than those achieved by humans. To this end, the inference algorithm should be computationally cheap and have a small file size, which facilitates its implementation in practice for small-scale fruit production. Therefore, the proposed approach involved combining a traditional computer vision system with a neural network. Computer vision allows for the efficient extraction of features containing intuitive and highly dense fruit classification data, such as circularity and color [32], while a neural network extracts lower-level features such as texture to determine fruit quality. The proposed workflow is visualized in Figure 2.
The scope of this work was limited to studying and comparing the options for the neural network architecture. Therefore, the traditional computer vision features were not considered.
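Although such hand-crafted features fall outside the scope of this work, the following minimal sketch illustrates the kind of features mentioned above (circularity and mean color). It is an illustration only, not part of the reported pipeline: it assumes OpenCV, a BGR image as loaded by cv2.imread, and a fruit photographed against a light, uniform background; the function name is hypothetical.

import cv2
import numpy as np

# Illustrative only: segment the fruit from a light background and compute
# circularity (1.0 for a perfect circle) and the mean color inside the mask.
def circularity_and_mean_color(img_bgr):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    fruit = max(contours, key=cv2.contourArea)      # largest blob = the fruit
    area = cv2.contourArea(fruit)
    perimeter = cv2.arcLength(fruit, True)
    circularity = 4.0 * np.pi * area / perimeter ** 2
    mean_color = cv2.mean(img_bgr, mask=mask)[:3]   # average B, G, R values
    return circularity, np.array(mean_color)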
We used a subset of the Fruits-360 dataset, which is a publicly available dataset containing images of fruits and vegetables.
This dataset was selected due to the limited availability of labeled datasets that include images of both high- and low-quality fruits, reflecting the real-world challenges faced by many small producers and specialized farms, which typically work with a limited variety of fruits. By focusing on a smaller subset, we sought to simulate the conditions under which these producers operate, providing a model that is more relevant and applicable to their specific needs. The addition of a second fruit type, as well as of similar fruits such as the golden apple and red apple, was carefully considered to introduce diversity into the dataset, thereby reducing the risk of overfitting and ensuring better generalizability of the model across different but related categories.
The subset of the dataset used in this study included five classes of fruits, namely apple_crimson_snow_1, apple_golden_2, apple_rotten_1, pear_1, and pear_2. The dataset was divided into training and validation sets, with 1415 images used for training and 706 images for validation. All images in the dataset are RGB (Red, Green, Blue) color images, which is the standard input format for convolutional neural networks, and they have a resolution of roughly 450 × 450 pixels each. This relatively high resolution is a considerable advantage when distinguishing between fruits with similar shape and texture features. Some examples of these images are illustrated in Figure 3.
We used Google Colaboratory, which is a cloud-based development environment that provides an interactive coding interface and free access to GPUs, essential for deep learning. This platform is particularly suitable for computation-heavy tasks as it provides high computational power, thus significantly accelerating the training process compared to traditional CPUs. However, as we used the free version, the processing speed was still limited. TensorFlow, an open-source deep learning library developed by Google, was also used, and the code was written in Python 3.10.
To enhance the accuracy of the model, a transfer learning approach was used. This is a machine learning technique where a pre-trained model is fine-tuned for a different but related task. In this study, MobileNetV2, a pre-trained model developed by Google, which is known for its efficiency and performance on mobile devices, was used as the base model for transfer learning. The model was pre-trained on the ImageNet dataset, encompassing over a million images belonging to 1000 different classes. Transfer learning is highly beneficial for relatively small datasets similar to that used in our study, as it facilitates leveraging the knowledge acquired by the model from a much larger dataset, thus enhancing performance.
In the MobileNetV2 model, the top of the network, which typically contains fully connected layers, was not included. Instead, a custom head was added to tailor the model to the specific task of classifying five different types of fruits. This custom head contained a global average pooling layer followed by a dense layer with 1024 units and a ReLU activation function. The final layer comprised another dense layer with 5 units corresponding to the five classes and a Softmax activation function. Global average pooling reduces the spatial dimensions of the feature maps and helps in reducing the number of parameters, which is essential for preventing overfitting.
MobileNetV2 was chosen as the base model for transfer learning primarily due to its compact size and efficiency. For small-scale producers who may not have access to high-end computational resources, the model size is a critical factor. MobileNetV2 is known for its balance between accuracy and computational cost, making it an ideal candidate for applications where resource constraints are a significant consideration. Using a model that is both lightweight and effective ensures that the proposed method can be utilized in practical settings with hardware limitations.
For the control group, another model with approximately the same number of parameters was trained. However, it was trained from scratch without using transfer learning. In Appendix A Figure A1, the custom model and the head of the transfer model are compared (the entirety of the MobileNetV2 model is not shown due to its size).
Data augmentation techniques, including horizontal flipping, shearing, and zooming, were applied to the training dataset to prevent overfitting and enhance the model’s generalizability. Data augmentation synthetically increases the size and diversity of the training dataset by creating modified versions of its images, exposing the model to a wider range of variations; this helps prevent overfitting and supplies the volume of data needed to achieve high performance. Figure 4 illustrates these data augmentation operations.
The image data generators also applied a pre-processing function specific to MobileNetV2, which scales pixel values appropriately, via the ImageDataGenerator class in TensorFlow. This ensured that the data fed into the network had the same characteristics as the images used to pre-train the base model.
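For reference, the MobileNetV2 pre-processing function maps 8-bit pixel values from [0, 255] to [−1, 1] (x/127.5 − 1); a quick check, assuming TensorFlow is installed:

import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

# MobileNetV2 expects inputs in [-1, 1]: x / 127.5 - 1
pixels = np.array([[0.0, 127.5, 255.0]], dtype=np.float32)
print(preprocess_input(pixels))  # [[-1.  0.  1.]]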
The model was compiled using the Adam optimizer with a learning rate of 0.0001 and a categorical cross-entropy loss function, which was minimized during training by updating the model parameters. Accuracy and loss were used as evaluation metrics. The model was trained for 50 epochs, with each epoch being an iteration over the entire training dataset. During each epoch, the model learned to make better predictions by adjusting its weights based on the training data and the loss function.
Freezing the weights of the pre-trained model was essential to retain the knowledge gained from the ImageNet dataset, and fine-tuning the custom head ensured that the model was tailored to the specifics of the new dataset. A batch size of 32 was used for both training and validation datasets.
The validation dataset was utilized to monitor the model’s performance on unseen data, which is crucial for proper generalization and validation. In the following section, the results obtained using this approach will be presented. Additionally, it is worth mentioning that the validation data underwent the same pre-processing as the training data but were not subjected to data augmentation. This is because the validation set should remain a true representation of the unseen data that the model might encounter in the real world.

3. Results

In this section, the outcomes of the experiments with the transfer learning model utilizing the MobileNetV2 architecture and the custom CNN model built from scratch are presented. The results shed light on the differences in the performance of these two models when trained on the Fruits-360 dataset for fruit classification.

3.1. Model Training

The training process for both models was conducted with a batch size of 32, and results are reported over both 5 and 50 epochs. The transfer learning model leveraged the pre-trained MobileNetV2 model with weights from ImageNet. The custom model was built with three convolutional layers and initialized with random weights.

3.2. Transfer Learning Model

The transfer learning model exhibited rapid convergence to a high level of training accuracy. By the end of the training process, the transfer learning model achieved a training accuracy of 100% and a validation accuracy of 100%. The final loss values on the training and validation datasets were 0.000022 and 0.000055, respectively.
Interestingly, the validation accuracy surpassed the training accuracy at certain points during training. This can be attributed to the data augmentation techniques applied to the training dataset, which exposed the model to more challenging and varied examples during training than the validation set.

3.3. Custom CNN Model

In contrast, the custom CNN model demonstrated a slower convergence. The final training accuracy was 100%, while the validation accuracy reached 100%. The loss values for the training and validation datasets were 0.00053 and 0.00060, respectively, after training.
The feature maps of all the layers are illustrated in Figure 5. Feature maps are a critical component in deep learning, especially in convolutional neural networks: they are the outputs of convolutional layers, capturing the results of filtering the input images. Each feature map highlights specific features detected in the image, such as edges, textures, or patterns, at varying levels of complexity. By visualizing these maps, researchers and practitioners can gain insight into how a convolutional neural network processes and interprets image data, which is invaluable for understanding, evaluating, and improving model performance. Feature map analysis also enhances the interpretability of deep learning models, making their decisions easier to explain and their predictions more trustworthy.
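Since the visualization code is not included in Appendix A, the following is a minimal sketch of how such feature maps can be extracted and plotted; it assumes the trained custom_model from Appendix A and a single pre-processed image img of shape (1, 224, 224, 3).

import matplotlib.pyplot as plt
import tensorflow as tf

# Build a model that returns the activations of every convolutional layer.
conv_layers = [l for l in custom_model.layers
               if isinstance(l, tf.keras.layers.Conv2D)]
activation_model = tf.keras.Model(inputs=custom_model.input,
                                  outputs=[l.output for l in conv_layers])
feature_maps = activation_model.predict(img)  # one array per conv layer

for layer, fmap in zip(conv_layers, feature_maps):
    n = min(8, fmap.shape[-1])  # show the first few channels of each layer
    fig, axes = plt.subplots(1, n, figsize=(2 * n, 2))
    for i, ax in enumerate(axes):
        ax.imshow(fmap[0, :, :, i], cmap='viridis')
        ax.axis('off')
    fig.suptitle(layer.name)
plt.show()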
Figure 6 shows an enhanced view of these feature maps, indicating that the network learned to identify certain zones with a specific texture as key parts of the classification process, even in regions with a uniform color distribution.

3.4. Comparison between Models

The difference in performance between the transfer learning model and the custom model was substantial. The transfer learning model exhibited swifter convergence and achieved higher accuracy on the validation set, primarily due to the pre-trained weights, which encode knowledge from a vast dataset (ImageNet) and were adapted to the specific task of fruit classification via the fine-tuned custom head. In contrast, the custom model began with random weights and thus had to learn features from scratch, which is often less efficient. The training and validation loss and accuracy for both the custom and transfer learning models are illustrated in Figure 7.
As can be seen, due to the similarity of images, the small dataset used, and the sheer size and capability of these networks, the accuracy and loss rapidly reached a steady state, even for the less powerful custom network. Figure 8 demonstrates the same diagrams for five epochs.
As mentioned previously, in some instances, the validation accuracy was higher than the training accuracy for the transfer learning model. This phenomenon is generally counterintuitive as models tend to perform better on the training dataset. However, in this case, data augmentation was applied to the training set; thus, the model was exposed to a more diverse set of images during training than during validation. This rigorous training process made the model more robust, and therefore it sometimes found the non-augmented validation set easier to handle, resulting in higher validation accuracy.

4. Discussion

Recent studies have demonstrated the effectiveness of deep learning models in fruit quality detection and grading, highlighting their potential to improve accuracy and efficiency in agricultural practices [33,34].
Previous studies have demonstrated the efficacy of deep learning models in fruit quality assessment, often emphasizing accuracy in controlled environments. Direct comparisons with numerical results are difficult due to differences in datasets or experimental setups. Our approach, while modest in its scope, demonstrates comparable accuracy using a more constrained dataset, reflecting real-world scenarios faced by small-scale producers. This suggests that even with limited data, our method can achieve practical results, which is crucial for resource-limited settings.
The transfer learning model based on MobileNetV2, as well as the custom model, reached high accuracy within just one epoch. This rapid convergence presents possibilities but also raises some concerns. On the one hand, it indicates that the models can learn patterns efficiently. However, it also points to the potential limitations of the dataset in terms of size and diversity. The rapidly achieved near-perfect accuracy of both models is indicative of the dataset potentially having highly similar images, which may not comprehensively represent the diverse conditions under which the model would be expected to operate in real-world scenarios. In essence, the rapid convergence to high accuracy may reflect potential overfitting to a dataset that might not be sufficiently challenging or diverse.
Notably, despite these concerns, the custom model, with its simpler architecture, had comparable performance to the transfer learning model with this dataset. This highlights the utility of exploring even basic models for the initial stages of development in fruit quality assessment systems, especially when resources are limited, as is the case for small-scale farmers and practitioners.
The proposed approach is scalable, as it can be trained on larger, more diverse datasets with more resources, potentially further improving accuracy and generalization. Deployment on edge devices for real-time assessment and incorporating temporal information for tracking ripening are directions for future research. Additionally, in the future, similar methods can be used as base models for transfer learning in applications specific to the fruit domain.
The feature maps used for the identification of damaged areas in fruits revealed that the model’s neurons tend to activate around damaged spots on the fruit, indicating that the model focused on relevant features such as texture changes or discoloration.
This enhances the understanding of the model’s decisions for end users, especially small-scale farmers. For instance, if the model classifies fruit as damaged, the visualization can show the specific areas that influenced this decision, making the tool more transparent and thus enhancing its reliability.
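One simple way to present such evidence to an end user is to upsample a strongly responding feature map and blend it over the photograph. The sketch below is illustrative rather than part of the reported pipeline: fmap is a convolutional-layer activation as extracted in the sketch of Section 3.3, original_bgr is the 224 × 224 input photograph as an 8-bit BGR image, and channel is a manually chosen filter index.

import cv2
import numpy as np

heat = fmap[0, :, :, channel]                        # one activation map
heat = (heat - heat.min()) / (np.ptp(heat) + 1e-8)   # normalize to [0, 1]
heat = cv2.resize(heat, (224, 224))                  # upsample to image size
heat = cv2.applyColorMap(np.uint8(255 * heat), cv2.COLORMAP_JET)
overlay = cv2.addWeighted(original_bgr, 0.6, heat, 0.4, 0)
cv2.imwrite('damage_overlay.png', overlay)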
This approach helps ensure that the AI model is not just a “black box” but a tool that users can feel confident using, knowing that its decisions are based on visible and understandable features. By making these tools more interpretable, the goal is to support wider adoption among users with varying levels of technical expertise.
Fruit quality assessment is vital for farmers, consumers, and retailers. An automated, accurate assessment tool can contribute to ensuring high standards of produce, reducing waste, and optimizing supply chains. Furthermore, this holds global implications for food safety and sustainable agriculture, particularly benefiting small-scale farmers in resource-constrained settings.
The models’ performance depends heavily on the quality and diversity of the dataset. If the dataset does not represent a wide range of real-world scenarios, the models may generalize poorly in practical applications, which is partly why data augmentation was applied. In future work, this can be addressed by using more diverse types of fruits and applying more aggressive data augmentation, both of which are practical strategies for small-scale farmers working with specific types of fruits.
The rapid convergence to high accuracy raises concerns about overfitting, especially in the absence of a more diverse and larger dataset. Overfitting can lead to the model performing exceptionally well on the training data but poorly on unseen real-world data.
Overfitting is usually a concern when a network does well on training data but performs poorly on test data. However, our network performed well on both. Data augmentation was introduced to avoid overfitting. If the test performance drops and overfitting becomes a greater concern, regularization techniques, such as dropout or L2 weight decay, could be implemented, as sketched below.
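The following sketch shows how dropout and L2 weight decay could be added to the custom head placed on top of MobileNetV2. The shapes assume a 224 × 224 input (for which the base model outputs 7 × 7 × 1280 feature maps), the specific values are hypothetical, and none of this was used in the reported experiments.

from tensorflow.keras import layers, models, regularizers

# Custom head with two common regularization techniques added.
regularized_head = models.Sequential([
    layers.GlobalAveragePooling2D(input_shape=(7, 7, 1280)),
    layers.Dense(1024, activation='relu',
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2 weight decay
    layers.Dropout(0.5),  # randomly drop half the units during training
    layers.Dense(5, activation='softmax'),
])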
There is also the possibility that our models have more parameters than necessary, in which case model complexity could be reduced. Reducing model size and the computation required for inference while maintaining good performance is crucial in resource-constrained environments.
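One concrete route, not evaluated in this study, is post-training dynamic-range quantization with TensorFlow Lite, which typically shrinks a float32 Keras model (such as the transfer_model from Appendix A) to roughly a quarter of its size while preserving most of its accuracy:

import tensorflow as tf

# Convert the trained Keras model to a quantized TensorFlow Lite model.
converter = tf.lite.TFLiteConverter.from_keras_model(transfer_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
tflite_model = converter.convert()

with open('fruit_classifier.tflite', 'wb') as f:
    f.write(tflite_model)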
Even though a simpler architecture was used for the custom model, deep learning models in general can sometimes act as black boxes, making it difficult to understand the internal logic and reasoning behind specific predictions. This can be a concern in contexts where a clear explanation is crucial for stakeholder trust and regulatory compliance.
Addressing these limitations through continued research and refinement of the models, as well as expanding datasets to include a wider range of conditions, is crucial for the successful deployment of deep learning models for fruit quality assessment.

5. Conclusions

This study explores the use of deep learning models for fruit quality assessment, focusing on the comparison between training a model from scratch and using transfer learning with MobileNetV2. The results indicate that while both approaches achieved high accuracy, the transfer learning model demonstrated faster convergence and slightly better performance, particularly in the context of limited data availability.
The choice of the Fruits-360 dataset and its specific subset was driven by the need to simulate real-world conditions faced by small-scale producers who typically work with limited fruit varieties. By including a second fruit type, such as the golden apple, the objective was to introduce diversity and reduce the risk of overfitting, thereby improving the model’s generalization capability.
MobileNetV2 was selected due to its compact size and efficiency, making it an ideal candidate for deployment in environments with limited computational resources. This aligns with the goal of developing a practical and accessible tool for fruit quality assessment that can be used by small-scale farmers and producers.
The feature maps provided insights into the model’s decision-making process, during which the neurons tended to activate around damaged spots on fruits. This interpretability is crucial for gaining the trust of end users who may have limited technical expertise, thus promoting the broader adoption of AI-based tools in agriculture by enhancing the transparency of the model’s decisions.
However, the rapid convergence to high accuracy also highlighted potential limitations of the dataset, suggesting that more diverse and challenging data may be required to ensure robust performance in real-world scenarios. Future work should focus on expanding the dataset and exploring additional techniques to further improve the model’s generalization and explainability.
While this study represents a modest contribution to the field, it underscores the potential of deep learning models in modernizing fruit quality assessment. By focusing on accessibility, efficiency, and interpretability, this research provides a foundation for developing practical tools that can benefit small-scale farmers and contribute to sustainable agricultural practices.

Author Contributions

Conceptualization, V.Z. and D.C.H.; methodology, V.Z.; software, V.Z.; validation, V.Z.; formal analysis, V.Z.; investigation, V.Z. and D.C.H.; resources, V.Z.; data curation, V.Z.; writing—original draft preparation, V.Z.; writing—review and editing, V.Z. and D.C.H.; visualization, V.Z.; supervision, D.C.H.; project administration, D.C.H.; funding acquisition, D.C.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Sistema Nacional de Investigación (SNI) of the Secretaría Nacional de Ciencia, Tecnología e Innovación de Panamá (SENACYT) [SNI 24-2024]. The authors would like to thank the Universidad Tecnológica de Panamá for their administrative support in the advancement of this project.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Code used to train the model:
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.utils import plot_model
from google.colab import drive
import os

# Mount Google Drive
drive.mount('/content/gdrive')
os.chdir('/content/gdrive/My Drive/Testing de Frutas/TestingDeFrutas')

# Define image data generators; the MobileNetV2-specific pre-processing
# function scales pixel values to [-1, 1], as described in Section 2
train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True
)

validation_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

# Load training and validation data
train_data = train_datagen.flow_from_directory(
    directory='/content/gdrive/My Drive/Testing de Frutas/TestingDeFrutas/TrainingSet',
    target_size=(224, 224),
    batch_size=32,
    class_mode='categorical'
)

validation_data = validation_datagen.flow_from_directory(
    directory='/content/gdrive/My Drive/Testing de Frutas/TestingDeFrutas/ValidationSet',
    target_size=(224, 224),
    batch_size=32,
    class_mode='categorical'
)

# Build and train the custom model
custom_model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(224, 224, 3)),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D(2, 2),
    layers.Flatten(),
    layers.Dense(1024, activation='relu'),
    layers.Dense(5, activation='softmax')
])

custom_model.compile(optimizer=optimizers.Adam(learning_rate=0.0001),
                     loss='categorical_crossentropy',
                     metrics=['accuracy'])

history_custom = custom_model.fit(train_data, epochs=50, validation_data=validation_data)

# Load the pre-trained MobileNetV2 model without its classification head
base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze the layers in the base model to retain the ImageNet weights
for layer in base_model.layers:
    layer.trainable = False

# Create the custom head for the network
x = base_model.output
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(1024, activation='relu')(x)
predictions = layers.Dense(5, activation='softmax')(x)

# Build and train the transfer learning model
transfer_model = models.Model(inputs=base_model.input, outputs=predictions)

transfer_model.compile(optimizer=optimizers.Adam(learning_rate=0.0001),
                       loss='categorical_crossentropy',
                       metrics=['accuracy'])

history_transfer = transfer_model.fit(train_data, epochs=50, validation_data=validation_data)

# Plot model architectures
plot_model(custom_model, to_file='custom_model.png', show_shapes=True)
plot_model(transfer_model, to_file='transfer_model.png', show_shapes=True)
 
Code used to visualize the accuracy and loss function evolutions:
import matplotlib.pyplot as plt

# Plot training & validation accuracy values
plt.figure(figsize=(12, 5))

epochs_range = range(1, len(history_transfer.history['accuracy']) + 1)

plt.subplot(1, 2, 1)
plt.plot(epochs_range, history_transfer.history['accuracy'])
plt.plot(epochs_range, history_transfer.history['val_accuracy'])
plt.plot(epochs_range, history_custom.history['accuracy'])
plt.plot(epochs_range, history_custom.history['val_accuracy'])

plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Transfer Train', 'Transfer Validation', 'Custom Train', 'Custom Validation'], loc='upper left')

# Plot training & validation loss values
plt.subplot(1, 2, 2)
plt.plot(epochs_range, history_transfer.history['loss'])
plt.plot(epochs_range, history_transfer.history['val_loss'])
plt.plot(epochs_range, history_custom.history['loss'])
plt.plot(epochs_range, history_custom.history['val_loss'])

plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Transfer Train', 'Transfer Validation', 'Custom Train', 'Custom Validation'], loc='upper left')

plt.show()
Figure A1. Head of the transfer model (left) and custom network (right) visualization.

References

  1. Baldwin, E.; Bai, J.; Plotto, A.; Ritenour, M. Citrus fruit quality assessment; producer and consumer perspectives. Stewart Postharvest Rev. 2014, 10, 1–7.
  2. Bösch, Y.; Britt, E.; Perren, S.; Naef, A.; Frey, J.E.; Bühlmann, A. Dynamics of the Apple Fruit Microbiome after Harvest and Implications for Fruit Quality. Microorganisms 2021, 9, 272.
  3. Albahar, M. A Survey on Deep Learning and Its Impact on Agriculture: Challenges and Opportunities. Agriculture 2023, 13, 540.
  4. Liu, S.; Qiao, Y.; Li, J.; Zhang, H.; Zhang, M.; Wang, M. An Improved Lightweight Network for Real-Time Detection of Apple Leaf Diseases in Natural Scenes. Agronomy 2022, 12, 2363.
  5. Aherwadi, N.; Mittal, U. Fruit quality identification using image processing, machine learning, and deep learning: A review. Adv. Appl. Math. Sci. 2022, 21, 2645–2660.
  6. Dhiman, B.; Kumar, Y.; Kumar, M. Fruit quality evaluation using machine learning techniques: Review, motivation and future perspectives. Multimed. Tools Appl. 2022, 81, 16255–16277.
  7. Mamatkulovich, B.B.; Qizi, T.S.X.; Qizi, T.O.M.; O‘G‘Li, X.D.S. Simplified machine learning for image-based fruit quality assessment. Eurasian J. Res. Dev. Innov. 2023, 19, 8–12.
  8. Mahanti, N.K.; Pandiselvam, R.; Kothakota, A.; Ishwarya, S.P.; Chakraborty, S.K.; Kumar, M.; Cozzolino, D. Emerging non-destructive imaging techniques for fruit damage detection: Image processing and analysis. Trends Food Sci. Technol. 2022, 120, 418–438.
  9. Adedeji, A.A.; Ekramirad, N.; Rady, A.; Hamidisepehr, A.; Donohue, K.D.; Villanueva, R.T.; Parrish, C.A.; Li, M. Non-Destructive Technologies for Detecting Insect Infestation in Fruits and Vegetables under Postharvest Conditions: A Critical Review. Foods 2020, 9, 927.
  10. Patel, A.; Kadam, P.; Naik, S. Color, Size and Shape Feature Extraction Techniques for Fruits: A Technical Review. Int. J. Comput. Appl. 2015, 130, 6–10.
  11. Shiddiq, M.; Fitmawati; Anjasmara, R.; Sari, N.; Hefniati. Ripeness detection simulation of oil palm fruit bunches using laser-based imaging system. AIP Conf. Proc. 2017, 1801, 050003.
  12. Gené-Mola, J.; Gregorio, E.; Guevara, J.; Auat, F.; Sanz-Cortiella, R.; Escolà, A.; Llorens, J.; Morros, J.-R.; Ruiz-Hidalgo, J.; Vilaplana, V.; et al. Fruit detection in an apple orchard using a mobile terrestrial laser scanner. Biosyst. Eng. 2019, 187, 171–184.
  13. Chu, P.; Li, Z.; Zhang, K.; Lammers, K.; Lu, R. High-precision fruit localization using active laser-camera scanning: Robust laser line extraction for 2D-3D transformation. Smart Agric. Technol. 2024, 7, 100391.
  14. Feature extraction of hyperspectral images for detecting immature green citrus fruit. Front. Agric. Sci. Eng. 2018, 5, 475–484.
  15. Chandrasekaran, I.; Panigrahi, S.S.; Ravikanth, L.; Singh, C.B. Potential of Near-Infrared (NIR) Spectroscopy and Hyperspectral Imaging for Quality and Safety Assessment of Fruits: An Overview. Food Anal. Methods 2019, 12, 2438–2458.
  16. Baietto, M.; Wilson, A.D. Electronic-Nose Applications for Fruit Identification, Ripeness and Quality Grading. Sensors 2015, 15, 899–931.
  17. Sujatha, K.; Ponmagal, R.S.; Srividhya, V.; Godhavari, T. Feature extraction for ethylene gas measurement for ripening fruits. In Proceedings of the 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT), Chennai, India, 3–5 March 2016; pp. 3804–3808.
  18. Yildiz, F.; Özdemir, A.T.; Uluışık, S. Evaluation Performance of Ultrasonic Testing on Fruit Quality Determination. J. Food Qual. 2019, 2019, 6810865.
  19. Zhang, Y.; Wang, S.; Ji, G.; Phillips, P. Fruit classification using computer vision and feedforward neural network. J. Food Eng. 2014, 143, 167–177.
  20. Bhargava, A.; Bansal, A. Fruits and vegetables quality evaluation using computer vision: A review. J. King Saud Univ.—Comput. Inf. Sci. 2021, 33, 243–257.
  21. Van De Looverbosch, T.; Rahman Bhuiyan, M.H.; Verboven, P.; Dierick, M.; Van Loo, D.; De Beenbouwer, J.; Sijbers, J.; Nicolaï, B. Nondestructive internal quality inspection of pear fruit by X-ray CT using machine learning. Food Control 2020, 113, 107170.
  22. Matsui, T.; Kamata, T.; Koseki, S.; Koyama, K. Development of automatic detection model for stem-end rots of ‘Hass’ avocado fruit using X-ray imaging and image processing. Postharvest Biol. Technol. 2022, 192, 111996.
  23. Filter Design for Optimal Feature Extraction from X-ray Images. Available online: https://elibrary.asabe.org/abstract.asp?aid=13353 (accessed on 20 June 2024).
  24. Fruit Quality Evaluation Using Electrical Impedance Spectroscopy. Available online: https://bia.unibz.it/esploro/outputs/doctoral/Fruit-Quality-Evaluation-Using-Electrical-Impedance/991006127184201241 (accessed on 20 June 2024).
  25. Gan, H.; Lee, W.S.; Alchanatis, V.; Abd-Elrahman, A. Active thermal imaging for immature citrus fruit detection. Biosyst. Eng. 2020, 198, 291–303.
  26. Satone, M.; Diwakar, S.; Joshi, V. Automatic Bruise Detection in Fruits Using Thermal Images. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2017, 7, 727–732.
  27. Bhole, V.; Kumar, A.; Bhatnagar, D. A Texture-Based Analysis and Classification of Fruits Using Digital and Thermal Images. In Proceedings of the ICT Analysis and Applications; Fong, S., Dey, N., Joshi, A., Eds.; Springer: Singapore, 2020; pp. 333–343.
  28. Caceres-Hernandez, D.; Gutierrez, R.; Kung, K.; Rodriguez, J.; Lao, O.; Contreras, K.; Jo, K.-H.; Sanchez-Galan, J.E. Recent advances in automatic feature detection and classification of fruits including with a special emphasis on Watermelon (Citrillus lanatus): A review. Neurocomputing 2023, 526, 62–79.
  29. Ren, A.; Zahid, A.; Zoha, A.; Shah, S.A.; Imran, M.A.; Alomainy, A.; Abbasi, Q.H. Machine Learning Driven Approach Towards the Quality Assessment of Fresh Fruits Using Non-Invasive Sensing. IEEE Sens. J. 2020, 20, 2075–2083.
  30. Abideen, A.Z.; Sundram, V.P.K.; Pyeman, J.; Othman, A.K.; Sorooshian, S. Food Supply Chain Transformation through Technology and Future Research Directions—A Systematic Review. Logistics 2021, 5, 83.
  31. Teixeira, I.; Morais, R.; Sousa, J.J.; Cunha, A. Deep Learning Models for the Classification of Crops in Aerial Imagery: A Review. Agriculture 2023, 13, 965.
  32. Zárate, V.; González, E.; Cáceres-Hernández, D. Fruit Detection and Classification Using Computer Vision and Machine Learning Techniques. In Proceedings of the 2023 IEEE 32nd International Symposium on Industrial Electronics (ISIE), Helsinki, Finland, 19–21 June 2023; pp. 1–6.
  33. Tian, Y.; Wu, W.; Lu, S.; Deng, H. Application of deep learning in fruit quality detection and grading. Food Sci. 2021, 42, 260.
  34. Bobde, S.; Jaiswal, S.; Kulkarni, P.; Patil, O.; Khode, P.; Jha, R. Fruit Quality Recognition using Deep Learning Algorithm. In Proceedings of the 2021 International Conference on Smart Generation Computing, Communication and Networking (SMART GENCON), Pune, India, 29–30 October 2021; pp. 1–5.
Figure 1. Feature extraction methods.
Figure 2. Proposed workflow for fruit classification and quality assessment using deep learning and traditional computer vision.
Figure 3. Examples of labeled images from each category.
Figure 4. Multiple instances of the same image using different data augmentation operations.
Figure 5. Feature maps of all the convolutional layers of the custom CNN model.
Figure 6. An enhanced view of a feature map; activations can be seen at damaged sections of the fruit.
Figure 7. Model history diagrams using accuracy and loss with 50 epochs.
Figure 8. Model history diagram using accuracy and loss with 5 epochs.

