Article

Towards a Real-Time Oil Palm Fruit Maturity System Using Supervised Classifiers Based on Feature Analysis

by Meftah Salem M. Alfatni 1, Siti Khairunniza-Bejo 2,3,4,*, Mohammad Hamiruce B. Marhaban 5, Osama M. Ben Saaed 1, Aouache Mustapha 6 and Abdul Rashid Mohamed Shariff 2,3,4
1 Libyan Authority for Scientific Research, Tripoli P.O. Box 80045, Libya
2 Department of Biological and Agricultural Engineering, Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Malaysia
3 Laboratory of Plantation System Technology and Mechanization (PSTM), Institute of Plantation Studies, Universiti Putra Malaysia, Serdang 43400, Malaysia
4 Smart Farming Technology Research Centre (SFTRC), Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Malaysia
5 Centre for Control System and Signal Processing, Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Malaysia
6 Division Télécom, Centre de Développement des Technologies Avancées, CDTA, Baba-Hassen 16303, Algeria
* Author to whom correspondence should be addressed.
Agriculture 2022, 12(9), 1461; https://doi.org/10.3390/agriculture12091461
Submission received: 22 July 2022 / Revised: 1 September 2022 / Accepted: 2 September 2022 / Published: 14 September 2022
(This article belongs to the Special Issue Digital Innovations in Agriculture)

Abstract
Image processing techniques based on remote sensing sensors have been widely applied in non-destructive quality inspection systems for agricultural crops. Image processing and analysis were performed with computer vision and external grading systems following general and standard steps, such as image acquisition, pre-processing, segmentation, feature extraction and classification. This paper describes the design and implementation of a real-time fresh fruit bunch (FFB) maturity classification system for oil palm based on a remote sensing sensor (CCD camera) and image processing techniques, using five feature extraction techniques (color statistics, color histograms, Gabor wavelets, GLCM and BGLAM) to extract fruit image characteristics for oil palm FFB type classification and maturity grading. To optimize the proposed solution in terms of performance and processing time, supervised classifiers, such as the support vector machine (SVM), K-nearest neighbor (KNN) and artificial neural network (ANN), were applied and evaluated via ROC and AUC measurements. The experimental results showed that the real-time, non-destructive oil palm FFB maturity classification system provided significant results. Although the SVM classifier is generally robust, the ANN performed better due to the natural noise of the data. The highest accuracy was obtained with the ANN and BGLAM algorithms applied to the fruit texture. In particular, the robust image processing algorithm based on the BGLAM feature extraction technique and the ANN classifier provided a high AUC test accuracy of over 93% and an image processing time of 0.44 s for the detection of oil palm FFB type.

1. Introduction

Remote sensing is the science of obtaining information about objects or areas from a distance [1]. It is a procedure that can be used to measure the external physical properties of an area by receiving the energy reflected and emitted from the target area [2].
Further, introductions to and developments of remote sensing were published by the authors of Refs. [3,4], covering various sensors, image processing tools and techniques for remote sensing applications. In fact, the most common sensors used in remote sensing are cameras and solid-state scanners, such as CCD (charge-coupled device) imagers, which are available as 2D arrays, and satellite image sensors [5,6].
Bakker et al. [7] indicated that a digital CCD camera is an electro-optical remote sensor made of semiconductor material, currently the most common type of detector in the visible to very-near-IR range; it is used to provide area information in aerial Earth observation applications for low-cost imagery. Likewise, bi-directional reflectance measurements and a method for determining the atmospheric MTF suppression for the CCD camera on board the Huanjing 1A (HJ-1A) satellite and a digital CCD camera were designed and studied by the authors of Refs. [8,9].
In addition, a standard digital camera was used to automatically monitor snow cover with high temporal and spatial accuracy, using a unified index based on red, green and blue (RGB) values developed to control errors due to lighting effects [10]. On the other hand, the increased power of microcomputers has been an advantage for remote sensing image processing technology. In their study [11], researchers used the CSIRO microBRIAN system as an example of how to make image processing methods accessible, which contributed to the acceptance of remote sensing technology.
Machine learning is an evolving area of algorithms, hardware and storage systems working in smarter ways for several applications, such as (a) proactive detection of abnormal behavior to enable reasonable solutions in advance; (b) creating event models based on system training in order to forecast the values of a future inquiry; (c) testing the future inquiry based on the understanding of the created event model and (d) computing the individual loss reserve [12]. Thus, different researchers have used machine learning for automated wheat disease classification, estimation of long-term agricultural output and prediction of soil organic carbon and available phosphorus [13,14,15]. There are also many benefits to using machine learning methods for computing the individual loss reserve, making such methods more feasible, with more accurate pricing, claims triage, loss prevention, a deep dive into changes in loss reserves and frequent monitoring to calculate claims reserves on individual claims data (ICR) [16,17,18,19].
In addition, Refs. [20,21] reported that machine learning techniques make it possible to enable and control the classification of remotely sensed images. Moreover, multi-band deep learning, deep convolutional neural networks and modular features were implemented with limited training samples by the authors of Refs. [22,23,24,25,26,27,28] to classify hyperspectral, hyperparameter, spectroradiometer and spectrometer images as remotely sensed data.
As a result, a variety of fields have successfully used numerous remotely sensed images with high spectral–spatiotemporal resolution in order to identify important agronomic processes in agricultural applications [29,30]. The traditional methods of assessing the quality of agricultural products are tedious, expensive and inconsistent over time [31]. In this context, high-tech approaches are needed that use machine vision to classify the quality of agricultural food products and to assess it in a timely and accurate manner [32,33,34,35,36,37,38,39,40,41,42].
This technique is suitable for surface or sub-surface imaging because of the limited penetration depth of the interrogation source. In addition, researchers from all over the world have contributed to and developed automated internal classification systems as a solution for screening agricultural crops based on internal characteristics, such as sugar, moisture and acid content [43,44,45,46].
Equally, the authors of Refs. [47,48,49,50,51] applied remote sensing and image processing technology based on texture and color measurement methods to classify images in different applications. In this paper, remote sensing (CCD camera) and image processing technologies (Gabor wavelets, GLCM and BGLAM) are used as texture feature extraction techniques together with supervised machine learning classifiers (SVM, KNN and ANN) to assess oil palm FFB quality non-destructively in a real-time system.
This article reviews the fruit ripeness classification aspect in Section 2, expanding the discussion on relevant works regarding methodologies and strategies for an automated FFB grading and sorting system, including data collection, system material, image processing, the classification system and other available tools for system evaluation and assessment. Section 3 presents the results and discussion, including the results obtained by combining the different feature algorithms with the classification modules. Section 4 concludes and discusses the challenges and future directions.

2. Fruit Ripeness Classification

Currently, different computer vision systems have been invented and applied to assess the quality of agricultural crops based on different color spaces through machine learning techniques. The use of such systems for a variety of fruit ripening processes based on different color spaces and classification techniques resulted in varying research accuracy, as shown in Table 1.
As shown in Table 1, the most widely used classifier technologies are SVM, ANN, K-Means and KNN at 34%, 31%, 11% and 9%, respectively, whereas the most used color spaces are RGB, LAB, HSV, HSI and YCbCr at 57%, 31%, 14%, 9% and 6%, respectively, with high reported accuracy. Thus, the RGB color space and the SVM classifier are the most popular technologies and achieved the highest accuracy.
To increase the production of high-quality crude palm oil, one of the challenges is to harvest the fresh fruit bunches (FFB) of oil palm at the optimal stage of ripeness. Currently, the methods used to determine the optimal ripened stage are based on observation of color and loose fruits. This traditional method relies heavily on experience and intuition about fruit color and size to determine ripeness accurately; it cannot be easily replicated and is subject to significant human error. To address this issue and find a systematic solution for determining oil palm fruit ripeness that is cost-efficient, fast, non-invasive, reliable and precise, researchers have contributed to developing a technology-based solution using computer vision that enables auto-grading and sorting of the optimal ripened stage by integrating software (image processing, robust datasets, AI decision-making) and hardware systems (lighting system, grading and sorting system). The advancement in methods and techniques for FFB classification and grading has resulted in the development of automated computer analysis, which will aid farmers significantly in obtaining good quality in crude palm oil production, particularly in rural areas with limited access to automation facilities.

2.1. Data Collection

According to an agreement between the scientific teams of Universiti Putra Malaysia (UPM) and the Malaysian Palm Oil Board (MPOB), knowledge and experience were shared to study the features of oil palm FFB at different stages of maturity and to collect valuable information. Thus, the study began with a field visit, as shown in Figure 1.
The purpose of the visit was to select the study area and FFB types of oil palm according to the research needs. Accordingly, the preparation of the survey and verification of the methods and techniques for the palm oil fruit maturity grading system involved collecting 270 fruit images for each of the three types of palm oil fruit, which are (i) Nigrescens, (ii) Oleifera and (iii) Virescens, as shown in Figure 2. Each harvested fruit received a specific sheet containing its name, number, type and ripeness class. The data collection process for the oil palm system is as follows:
  • An expert in the classification of palm oil fruit maturity was appointed. The expert classified the fruits based on three grades, namely under-ripe, ripe and over-ripe;
  • A specified number of fruits per day were collected. The collection ranged from 15 to 20 fruits based on the ability of the lab capacity and the quantity available in the field;
  • Each fruit and its physical image were assigned a name and identification number for use on the computer and during laboratory analysis.

2.2. System Material

In general, the materials and process of the oil palm FFB maturity classification system are shown in Figure 3. Accordingly, the fruit ripeness grading system used a computer vision application for agricultural quality inspection to determine the ripeness category of the fruit. The system includes: (a) a housing having an enclosure for the scanning process; (b) diffused LED illumination tubes with an optical lens illumination filter provided at the enclosure of the housing; (c) a suitable charge-coupled device (CCD) digital camera, preferably a DFK 41BF02.H FireWire CCD color camera, provided at the top portion of the enclosure of the housing to capture the fruit sample's image; (d) a feeding device for conveying fruit samples to the housing; (e) a processing unit to process and analyze the fruit sample image; (f) a data acquisition interface provided between the camera and the processing unit, wherein the processing unit is further provided with a desktop computational unit that serves to transfer data to a computer. In fact, the fruit images were obtained in real time with a controlled indoor lighting system.

2.3. Image Processing Approach

In general, image processing and analysis using computer vision and external grading systems were performed with general and standard steps, as shown in Figure 4 [31,88,89]. Image acquisition and pre-processing constitute low-level processing; segmentation, representation and description are mid-level operations; while higher-level operations include object recognition and image classification.
As a result, the group of oil palm fruits went through fruit image processing stages based on various steps, as shown in Figure 5. The steps included fruit image acquisition, pre-processing and processing, segmentation and feature extraction, as well as applying retrieval methods and techniques as a decision-making system based on similarity calculation, as proposed for future work. All images were used to build the training model, and each fresh fruit bunch was evaluated against it; the decision-making process was based on the training model.
Several experiments were performed with the different models (color, texture and thorns) of the oil palm FFB classification system. Three different regions of interest (ROI1, ROI2, ROI3) were verified for oil palm FFB maturity, as shown in Figure 6, using various feature extraction techniques: color feature extraction, such as the mean, standard deviation and color histogram techniques, as well as texture extraction techniques, namely the Gabor wavelet (GW), gray level co-occurrence matrix (GLCM) and basic gray level aura matrix (BGLAM).
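To make the feature extraction step concrete, the short sketch below (not the authors' code) computes two of the feature families named above, the statistical color features and GLCM texture properties, using NumPy and scikit-image; the grey-level quantization, GLCM offsets and property set are illustrative assumptions, and the Gabor wavelet and BGLAM features would be computed analogously.

```python
# Hedged sketch: statistical color features and GLCM texture features for an ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def color_statistics(rgb_roi):
    """Per-channel mean and standard deviation of an RGB ROI (H x W x 3, uint8)."""
    pixels = rgb_roi.reshape(-1, 3).astype(float)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])  # 6 values

def glcm_features(gray_roi, levels=32):
    """Contrast, correlation, energy and homogeneity of the GLCM at four angles."""
    q = (gray_roi.astype(float) / 256.0 * levels).astype(np.uint8)  # quantize grey levels
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])  # 16 values
```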

2.4. Classification System

Decision-making based on image classification through supervised machine learning classifiers is the last step in the process. Supervised learning is a method of learning a set of rules from cases called a training set to create a classifier that generalizes well to new test cases [90,91,92]. The classification system identifies objects by assigning them to a limited set of categories [93,94,95]. As noted in Table 1 at the beginning of this article, the most popular supervised classifiers in fruit categorization are SVM, ANN and KNN. These classifiers were used in this article for the experimental parameters.

2.4.1. Artificial Neural Network (ANN)

An artificial neural network (ANN) provides an efficient alternative for mapping complex nonlinear relationships between input and output datasets without the need for detailed knowledge of the underlying physical relationships [96]. ANNs contain connected nerve cells that mimic the work of the brain. An ANN differs significantly from conventional algorithmic software in its ability to generalize knowledge to newly encountered data, whereas expert systems must collect explicit knowledge about the specific area. Multi-layer feed-forward neural networks are grouped into input, output and hidden layers and are used with the oil palm FFB classification system.
Each layer comprises several neurons, which are known as processing elements (PE), as illustrated in Figure 7 [67,97,98,99]. No pre-defined rules were needed to be set for ANN because it is able to learn and generalize from “experience” or a set of presented examples, which is called a training set. The number of optimum hidden neurons was determined experimentally from the training processes of the MLP classifiers. An in-depth description of the MLP concept was addressed by the authors of Ref. [100].
Figure 7 illustrates the construction of a three-layer MLP. The general task of the PEs in the input layer of an MLP is to buffer the input signals to the PEs in the hidden layer. Each PE in the hidden layer then sums the products of the input signals and their connection weights.
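As a small illustration of the PE arithmetic just described (a sketch only, with random placeholder weights rather than trained values, and the [40 × 30 × 1] layer sizes borrowed from the architectures reported later in Table 3), a forward pass through a three-layer MLP can be written as:

```python
# Illustrative forward pass of a three-layer MLP: input PEs buffer the feature
# vector, hidden PEs form weighted sums passed through a sigmoid activation.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 40, 30, 1
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x):
    """x: normalized feature vector of length n_in, values in [0, 1]."""
    h = sigmoid(W1 @ x + b1)        # hidden-layer PEs: weighted sums + activation
    return sigmoid(W2 @ h + b2)     # single output PE (ripeness score)
```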

Artificial Neural Network (ANN)

Varying the weights given to neural connections is the process of training a neural network to achieve a satisfactory result. The supervised learning procedure for multi-layer feed-forward networks provides a recipe for changing the weights of elements in adjacent layers. This algorithm minimizes the sum of squared errors, known as least squares.
The mean square errors (MSE) and the efficiency (EFF) of the training and testing for each classifier are calculated.
During the training phase, data were used to fit the system using the ANN model. Each category in the dataset was presented as an input sample to the ANN–MLP for training. In order to reduce the mean square error (MSE) between targets and outputs, a trial-and-error procedure [100] was followed. Under-ripe, ripe and over-ripe were assigned the desired outputs of 0.5, 0 and 1, respectively, while the input features were normalized to the range [0, 1]. Training effectiveness was used as an important indicator of the accuracy of the rating evaluation. However, each feature extraction method required a different ANN architecture, which resulted in different efficiencies. The commonly used backpropagation networks were selected for the oil palm FFB classification system due to their success in a variety of image processing applications in agriculture [101,102,103,104].
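The sketch below illustrates this training scheme under stated assumptions: scikit-learn's MLPRegressor stands in for the backpropagation MLP, the single output is trained toward the targets 0.5 (under-ripe), 0 (ripe) and 1 (over-ripe), and inputs are scaled to [0, 1]. The hidden-layer size and optimizer settings are illustrative, not the authors' exact configuration.

```python
# Hedged sketch of the MSE-based training scheme with a single continuous output.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

TARGETS = {"under-ripe": 0.5, "ripe": 0.0, "over-ripe": 1.0}

def train_mlp(features, labels):
    """features: (n_samples, n_features) array; labels: list of ripeness strings."""
    scaler = MinMaxScaler()                      # normalize inputs to [0, 1]
    X = scaler.fit_transform(features)
    y = np.array([TARGETS[c] for c in labels])
    mlp = MLPRegressor(hidden_layer_sizes=(30,), activation="logistic",
                       solver="adam", max_iter=2000, tol=1e-6)
    mlp.fit(X, y)                                # backprop minimizes squared error
    return scaler, mlp

def predict_class(scaler, mlp, features):
    """Map each continuous output back to the nearest ripeness target."""
    scores = mlp.predict(scaler.transform(features))
    names = list(TARGETS)
    return [min(names, key=lambda c: abs(TARGETS[c] - s)) for s in scores]
```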

2.4.2. K-Nearest Neighbor (KNN)

KNN is another supervised classifier used in this work, based on the concept that observations in a dataset are, in general, close to other observations with similar properties. Additionally, the distance metric and k-value play a major role in the KNN classification algorithm [105], although Ref. [106] notes that KNN is a lazy classifier that does not build a model in advance; classification is determined by the locations of the stored training samples. KNN queries the training space for the new sample based on an appropriate similarity distance measure.

KNN Performance

The KNN rule is one of the most widely used algorithms for pattern classification. In this work, different k-values and distance measurement methods were tested to balance the trade-offs of the FFB maturity classification, excluding values and methods with low confidence accuracy levels, as shown in Figure 8.
Moreover, an experimental investigation was carried out based on the k-values 1, 3, 5, 7 and 9, as well as the distance metrics Euclidean, city-block, cosine and correlation, as defined in the equations of Refs. [105,107]. The aim was to determine the appropriate KNN classification parameters for a high-precision oil palm FFB maturity classification system. The study showed that the appropriate distance measure that reduced the distance between two similarly classified examples is the city-block metric, and the value k = 1 gave the best performance of the KNN procedure. The results of this evaluation are presented in Section 3.2.
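A minimal sketch of this parameter search is shown below, assuming scikit-learn; the cross-validated accuracy scoring and the variable names X_train and y_train are illustrative stand-ins for the AUC-based comparison reported in the paper.

```python
# Hedged sketch: grid search over k and distance metric for the KNN classifier.
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

param_grid = {
    "n_neighbors": [1, 3, 5, 7, 9],
    "metric": ["euclidean", "cityblock", "cosine", "correlation"],
}
# algorithm="brute" allows scipy-style metrics such as "correlation" to be used
knn_search = GridSearchCV(KNeighborsClassifier(algorithm="brute"),
                          param_grid, cv=5, scoring="accuracy")
# knn_search.fit(X_train, y_train)   # X_train: feature matrix, y_train: ripeness labels
# print(knn_search.best_params_)     # e.g. {'metric': 'cityblock', 'n_neighbors': 1}
```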

2.4.3. Support Vector Machine (SVM)

SVM is a supervised machine learning classifier developed by the authors of Ref. [108], based on constructing a hyper-plane as a decision boundary separating Class 1 from Class 2, as shown in Figure 9 [109]. A special characteristic of SVMs is that they simultaneously minimize the empirical classification error and maximize the geometric margin by optimizing the level of linear separation and converting a nonlinear data model into a linearly separable format in a high-dimensional feature space [110].
In the FFB maturity classification, there are three different target classes (under-ripe, ripe and over-ripe), and a one-against-all (OAA) approach is used, which separates each class in turn and merges all the others [111]. Due to its performance efficiency and lower processing time compared with a multi-class SVM classifier, the OAA method was used to perform the oil palm FFB ripeness classification.
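A hedged sketch of the OAA arrangement, assuming scikit-learn, is given below: one binary SVM is fitted per ripeness class against the merged remainder, and the per-class scores can then feed the ROC/AUC evaluation. The kernel parameters shown are placeholders, not the tuned values.

```python
# Hedged sketch: one-against-all SVM for the three ripeness classes.
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

oaa_svm = OneVsRestClassifier(SVC(kernel="rbf", gamma=0.01, C=500,
                                  probability=True))
# oaa_svm.fit(X_train, y_train)           # y_train: under-ripe / ripe / over-ripe
# scores = oaa_svm.predict_proba(X_test)  # per-class scores usable for ROC/AUC
```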

SVM Performance

To improve the classification result for specific models, the particular oil palm FFB classes had to be learned on a linear or non-linear basis in four steps. First, the input data comprise two sets of vectors in an n-dimensional space, and the SVM builds a separating hyperplane in that space that maximizes the "margin" between the two datasets. Second, when calculating the margin, two parallel hyperplanes are constructed, one on each side of the separating hyperplane, and are "pushed up against" the two datasets. Third, intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the data points adjacent to both classes. Finally, the classifier's generalization error depends on the margin, i.e., the distance between these parallel hyperplanes, with larger margins giving better generalization.
Parameter tuning is the most important factor in the SVM model-building process. In the SVM, tests were carried out with different kernel types, such as linear, polynomial and radial basis function (RBF) kernels, to achieve the classification task. Furthermore, to control the trade-off between maximizing the margin and minimizing the training error, the sigma of the RBF kernel was tuned from 1 to 100 and the polynomial degree was tuned from 1 to 4. The magnitude of the regularization parameter C was tuned from 1 to 1000 for both the polynomial and RBF kernels.
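The sketch below shows one way this search could be set up with scikit-learn; note that SVC parameterizes the RBF kernel by gamma rather than sigma (gamma = 1/(2·sigma²)), so the sigma range is mapped accordingly, and the sampled grid values and accuracy scoring are assumptions rather than the authors' exact protocol.

```python
# Hedged sketch: kernel and parameter search over the ranges described above.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

sigmas = np.array([1, 5, 10, 50, 100], dtype=float)
param_grid = [
    {"kernel": ["linear"], "C": [1, 10, 100, 500, 1000]},
    {"kernel": ["poly"], "degree": [1, 2, 3, 4], "C": [1, 10, 100, 500, 1000]},
    {"kernel": ["rbf"], "gamma": list(1.0 / (2 * sigmas**2)),
     "C": [1, 10, 100, 500, 1000]},
]
svm_search = GridSearchCV(SVC(), param_grid, cv=5, scoring="accuracy", n_jobs=-1)
# svm_search.fit(X_train, y_train)   # hypothetical training data
```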

2.5. Training and Testing

As noted by Kotsiantis, three techniques are commonly used to estimate the accuracy of a classifier [105]. The first is cross-validation, in which the training set is divided into mutually exclusive subsets of equal size; for each subset, the classifier is trained on the union of all the other subsets and tested on the held-out subset. The second is leave-one-out validation. The third is the most common one and the one used in this work with the oil palm FFB grading system: two-thirds of the data are used for training and the remainder for performance appraisal.
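A one-line sketch of this hold-out protocol, assuming scikit-learn and hypothetical X and y arrays, is:

```python
# Hedged sketch: two-thirds training / one-third testing, stratified by class.
from sklearn.model_selection import train_test_split

def holdout_split(X, y, seed=42):
    """Return X_train, X_test, y_train, y_test with a 2/3 : 1/3 split."""
    return train_test_split(X, y, test_size=1/3, stratify=y, random_state=seed)
```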
Numerous statistical measurements of efficiency and the mean square error (MSE) were applied as indexes to validate the performance of the classifiers. In particular, an automatic parameter tuning procedure, as in Ref. [112], was implemented for the system's dynamic adaptive thresholding algorithm for oil palm FFB ripeness grading. The objective of supervised learning is to create a concise model of the distribution of class labels in terms of predictor features.

Training and Testing Stage

The training stage includes data collection, data analysis and training model creation: 270 fruit samples covering three ripeness categories for the three oil palm FFB types were collected and analyzed, and a training model for fruit image type and ripeness classification was created. Meanwhile, the testing stage included testing the grading system initially in the lab. Testing the grading system in the field ensured that the system provided a high percentage of internal validity for the findings obtained using the system design. Furthermore, 90 samples per class were used to test the oil palm FFB ripeness grading system. Figure 10 illustrates the main approaches considered in the classification module for oil palm FFB types and ripeness.
In general, the classification of FFB type and ripeness of oil palm was successfully carried out based on the performance of three levels of image processing and subsequent analysis, as shown in Figure 11.

2.6. Classifier Performance Evaluation

The performance of a classifier is measured independently according to its sensitivity and specificity. ROC analysis of a classifier addresses the limitations of empirical accuracy in binary classification: results significantly greater than 50% could be due to a biased classifier tested on an unbalanced dataset, and overall accuracy does not differentiate between types of error [113]. The experiments aimed to identify the best architecture for the selected color, texture and thorn models using the ROC as a statistical measurement analysis. This analysis provides a quantitative assessment using the AUC.

Receiver Operating Characteristic Curve

Figure 12 shows the ROC curve, which has become the standard tool for evaluating predictive accuracy to evaluate and compare models and prediction algorithms. ROC analysis offers a methodical analysis of the sensitivity and specificity of judgment [114,115].
Sensitivity is the capability of the classifier to recognize the positive patterns amongst the truly positive patterns. Specificity is the ability of the classifier to recognize the negative patterns amongst the truly negative patterns. Figure 12 shows that point (0, 1) is the ideal classifier, which categorizes all the positive and negative cases correctly; in this instance, the false positive rate is 0 and the true positive rate is 1. In addition, point (0, 0) indicates that the classifier predicted all the cases to be negative, while point (1, 1) corresponds to a classifier that predicted all the cases to be positive. Point (1, 0) means the classifier fails to classify any case correctly. The given n test samples are arranged in the confusion matrix resulting from classification, as illustrated in Table 2 [113,115,116,117]. Accuracy, sensitivity or true positive rate (TPR) and 1 − specificity or false positive rate (FPR) are given by Equations (1)–(3), respectively.
$$\mathrm{Accuracy} = \frac{TP + TN}{n}$$ (1)
$$TPR = \mathrm{Sensitivity} = \frac{TP}{TP + FN}$$ (2)
$$FPR = 1 - \mathrm{Specificity} = \frac{FP}{FP + TN}$$ (3)
where true positives (TP) is the number of maturities correctly classified as maturities; true negatives (TN) is the number of non-maturities correctly classified as non-maturities; false positives (FP) is the number of non-maturities classified as maturities and false negatives (FN) is the number of maturities classified as non-maturities. Finally, the performance evaluation of the oil palm FFB maturity classification system typically includes the measurement of sensitivity and specificity based on the ROC curve and the measurement of the area under the ROC curve (AUC).
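For one ripeness class treated as the positive class, the quantities in Equations (1)–(3) and the AUC can be computed as in the sketch below (assuming scikit-learn; y_score is the classifier's continuous score for the positive class, and the function name is hypothetical).

```python
# Hedged sketch: confusion-matrix counts, accuracy, TPR, FPR and AUC for one class.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def binary_evaluation(y_true, y_pred, y_score, positive_class="ripe"):
    t = (np.asarray(y_true) == positive_class).astype(int)
    p = (np.asarray(y_pred) == positive_class).astype(int)
    tn, fp, fn, tp = confusion_matrix(t, p).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # Equation (1)
    tpr = tp / (tp + fn)                         # sensitivity, Equation (2)
    fpr = fp / (fp + tn)                         # 1 - specificity, Equation (3)
    auc = roc_auc_score(t, y_score)              # area under the ROC curve
    return accuracy, tpr, fpr, auc
```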

3. Results and Discussion

The FFB characteristics of the oil palm (color, texture and thorns) were extracted using the algorithms of the color model, the texture model and the thorn model. Three different supervised machine learning techniques, ANN, KNN and SVM, were incorporated into the extracted features based on the three different models to make decisions regarding FFB type and maturity. Experiments were carried out on the classifiers to select the appropriate model for the FFB oil palm grading system and to ensure high-quality grading results. The best possible classification accuracy can be achieved by selecting the highest AUC measured from the ROC curve.

3.1. Classification Based on ANN–MLP

This section discusses MLP models as classifying FFB maturity of oil palm based on statistical color function, color histogram, Gabor wavelets, GLCM and BGLAM functionality. The different ANN models selected on the basis of the experimental results were performed with different feature extraction techniques implemented in the oil palm grading system, as shown in Figure 13. A comparison between the MSE and the effectiveness of the training results and test steps was performed to validate the parameters of the ANN supervised learning classifier, as shown in Table 3.
Table 3 indicates that the MSE of the MLP training stage did not exceed 0.003. The higher proficiency scores observed led to the selection of the MLP architectures [40 × 30 × 1], [25 × 10 × 1], [40 × 30 × 1], [45 × 22 × 1] and [40 × 20 × 1] for the statistical color feature, color histogram, GLCM, BGLAM and GW, respectively, for the FFB grading system. After several training sessions, the MLP model was able to learn and match the target in the training phase with high efficiency on the complete oil palm FFB training datasets. During the test phase, Figure 13a–e shows the ROC plots produced by the oil palm FFB maturity classification system, with the higher AUC scores observed for the MLP models.

3.2. Classification Based on KNN

The basic principle of the oil palm grading system based on nearest neighbor (NN) approximation is that two FFB images with similar color, texture and thorn features should reveal similar classes and grades. Thus, using the FFB images of similar ripeness is sensible when identifying the new FFB image. All images in the database can be grouped based on their ripeness features. The nearest neighbor technique is defined as dividing a sample set into categories, with each category holding similar samples that share the same features. The testing sample is determined by the known classifications of the training samples.
Based on the samples’ characteristics, five main steps were described to classify FFB images of oil palm into their categories (under-ripe, ripe and over-ripe). Indeed, choosing the best k-values and appropriate distance measurements ensures the accuracy of the results of the KNN classifier, which were usually chosen experimentally by static validation with a set of k-values and distance measurements. Thus, the best k-value that can be used with feature extraction techniques (statistical color feature, color histogram, Gabor wavelet, GLCM and BGLAM) was verified.
Figure 14 shows the ROC area for the best results performed by KNN with different values of k = 1, 3, 5, 7 and 9 and with different distance metrics, Euclidean, city-block, cosine and correlation, for the FFB oil palm maturity grading system with feature extraction techniques. Therefore, the experimental results show that k-value = 1 with the city-block distance technique provides the greatest AUC scores equal to 93.00%, 92.00%, 91.00%, 92.00% and 80% using feature extraction techniques, including statistical, color histogram, GLCM, BGLAM and Gabor wavelet, respectively, based on the KNN algorithm, as shown in Figure 14.

3.3. Classification Based on SVM

The SVM algorithm was implemented in the oil palm FFB maturity classification, with the input data comprising three sets of vectors in the n-dimensional SVM space. The SVM creates a separating hyper-plane in this space, which maximizes the "margin" between the three datasets and reduces the expected generalization error. In the case of oil palm FFB ripeness grading, three target categories exist, namely under-ripe, ripe and over-ripe. In this case, OAA is used, in which each class is split out and all the other classes are merged in the oil palm FFB grading system, to solve the multiclass issue with less computation time. An important aspect of the SVM model-building process is parameter tuning.
Three different types of kernel functions, linear, polynomial and radial basis function (RBF), were used to perform the classification task. To control the trade-off between maximizing the margin and minimizing the training error, the sigma of the RBF kernel was set from 1 to 100, while the polynomial degree was set from 1 to 4. The magnitude of the regularization parameter C was set from 1 to 1000 for the polynomial and RBF kernels, as explained in Table 4.
As shown in Figure 15, the kernel function provided a significantly higher accuracy rate for the oil palm FFB maturity classification. The results are based on different values of sigma and C, as examined in other research [111], and a comparison of linear and nonlinear polynomial kernel functions. Therefore, as demonstrated in Table 4, the experimental results show that the RBF kernel with sigma = 10 and C = 500 provides the best result of 92% using BGLAM with ROI3 based on the SVM algorithm, as shown in Figure 15.

3.4. Experimental Results

Four experiments were carried out. In experiment 1, the texture characteristics of the oil palm were extracted and the classification was performed for the FFB type classification. In experiments 2, 3 and 4, oil palm color, texture and thorn features were extracted. The classification was then conducted for the Nigrescens, Oleifera and Virescens FFB ripeness grading.
The complete picture of the trade-off between sensitivity and 1 − specificity is displayed by plotting the ROC curve across a series of threshold points. The AUC is considered an effective measurement of the inherent validity of a grading system test. This curve is suitable for (a) assessing the discriminatory ability of a test to correctly identify the under-ripe, ripe and over-ripe classes; (b) finding the optimal threshold point to minimize class misclassification and (c) comparing the efficacy of ROI1, ROI2 and ROI3 for assessing the same sample or class, as illustrated in Figure 16.

3.4.1. FFB Type Grading System Results

The oil palm grading system was able to accurately classify the three different oil palm FFB types based on the external texture features and properties by using feature extraction techniques GLCM and BGLAM and supervised machine learning classifiers ANN, KNN and SVM, as critically explained in Table 5.
Table 5 indicates that the fastest and most accurate method for the oil palm type grading system is the BGLAM feature extraction technique combined with the ANN supervised machine learning technique applied to a cropped 100 × 100-pixel FFB image with ROI3. This combination achieved an optimal accuracy of 93.00% and an image processing speed of 0.44 s in the test performance.

3.4.2. FFB Ripeness Grading System Results

The maturity classification task was trained and tested for the three closest classes, over-ripe, ripe and under-ripe, based on the three FFB maturity models of oil palms: color, texture and thorns.

Color Model

The ripeness grading system testing performance based on the color model for different FFB image ROIs was evaluated. The results are clearly illustrated in Table 6.
Table 6 indicates the fastest and most accurate methods and techniques for the ripeness grading system, which are based on the color histogram feature combined with the ANN technique applied to the 100 × 100-pixel FFB image with ROI3. The results achieved 93.00% accuracy and a 1.6 s image processing speed in terms of testing performance for Nigrescens and Oleifera, and 100% and 93% testing accuracy with a 1.4 s image processing speed based on the ANN technique applied with ROI2. For Virescens, the statistical color feature obtained 93% testing performance based on ANN for the different oil palm types; however, these results were limited by the slow processing time compared with the color histogram performance and the oil palm system objectives.

Texture Model

The ripeness grading system testing performance based on the texture model for different FFB image ROIs was evaluated. The results are clearly illustrated in Table 7.
As indicated in Table 7, the fast and accurate method and techniques used for the oil palm FFB ripeness grading system based on the texture model were primarily the BGLAM combined with the ANN technique. This technique was applied to the ROI3 with 92.00% testing performance accuracy with a 0.43 s image processing speed for Nigrescens. Moreover, the BGLAM combined with the ANN technique applied to the ROI2 achieved 93.00% testing performance accuracy with a significant image processing speed of 0.40 s for Oleifera and Virescens. Due to the sensitivity of SVM to noise and the weakness of the Gabor wavelet and GLCM techniques with texture features and processing time, the limitations of these methods and techniques are clearly stated in the testing result tables.

Thorn Model

The ripeness grading system performance of the oil palm FFB types for testing based on the thorn model for the different ROIs was evaluated. The results are clearly illustrated in Table 8.
Due to data noise, the thorn model showed poor results in terms of performance and processing time; nevertheless, BGLAM combined with the ANN and SVM techniques applied to ROI3 achieved a test performance of 91.00% with an image processing speed of 1.20 s for Oleifera.

4. Conclusions

An oil palm FFB ripeness classification system was designed based on a remote sensing sensor (CCD camera) and image processing technologies as a computer vision application for the inspection of agricultural crop quality.
The system aims to determine the maturity class of different types of oil palm FFB based on external characteristics, such as color, texture and thorns. Image processing methods and techniques were implemented, including image acquisition, segmentation into ROI1, ROI2 and ROI3 and the extraction of image features using statistical color features, color histograms, GLCM, BGLAM and Gabor wavelets.
In addition, decision-making for image classification through training and testing of the different algorithms, SVM, KNN and ANN, was implemented in the maturity classification system. Training and testing for oil palm FFB type (Nigrescens, Oleifera and Virescens) and maturity (under-ripe, ripe and over-ripe) were carried out based on the extracted color, texture and thorn-pattern features.
The ROC and AUC were used to accurately estimate and evaluate the performance of the different classifiers based on system performance, processing time and system cost. The results showed that the texture models performed best with the ANN classifier: the BGLAM features combined with ANN and ROI3 provided 93.00% accuracy with a short image processing time of 0.44 s for FFB type recognition. Meanwhile, BGLAM with ANN and ROI3 obtained 92.00% accuracy and a processing time of 0.43 s for Nigrescens, and BGLAM with ANN and ROI2 obtained 93.00% accuracy and a processing time of 0.40 s for Oleifera and Virescens maturity classification.
In the final analysis, different classifiers were compared. Maximum accuracy was obtained using the ANN classifier, which showed the highest prediction accuracy of all the classifiers, followed by KNN and SVM, respectively. The scope of the existing work is limited to investigating the relationship between oil palm fruit ripeness level, the image processing approach and AI.
As mentioned above, the authors implemented several experiments based on different methods and techniques for the automation of a real-time oil palm FFB ripeness grading system, which produced satisfactory results. In the future, the existing work can be extended to include recommended practical actions and scientific studies of the system's hardware and software to develop the current system further and improve the results.
In terms of hardware development, proper hardware design makes it easier for the programmer to tune the algorithm for high-accuracy performance. (1) Since the illumination system is one of the most important hardware parts of the oil palm grading system, a linear polarizer (LP) filter should be fitted to the camera and to each light source in order to control the lighting beam incident on the FFB and reflected to the camera, and (2) it is important to use other types of sensors, such as a thermal camera, to collect valuable information about oil palm FFB ripeness and build grading system models based on the obtained information.
Regarding software development, the real-time oil palm FFB ripeness grading system was implemented as a solution for effective oil palm FFB ripeness grading. However, in order to improve the system's functionality and performance, different methods and techniques should be considered, such as (1) using internal feature laboratory analysis of the oil palm FFB, such as oil content and free fatty acid, and correlating it with external features of the FFB, such as color and texture, to validate and support the ripeness result; (2) applying retrieval methods and techniques as a decision-making system based on the similarity calculation proposed and illustrated in Figure 5; and (3) conducting further research to generalize the system to other agricultural applications by considering the size, weight and shape of the FFB during system design, making the system a multipurpose application that can be used for different agricultural crops. Although the existing study utilizes image processing, similar results are expected to be obtained using a portable device. The proposed method has the potential to be a rapid on-site assessment tool for ripeness classification in the oil palm industry.

Author Contributions

Conceptualization, M.S.M.A., A.R.M.S., S.K.-B., M.H.B.M., O.M.B.S. and A.M.; methodology, M.S.M.A., A.R.M.S., S.K.-B., M.H.B.M., O.M.B.S. and A.M.; software, M.S.M.A. and A.R.M.S.; validation, M.S.M.A. and A.R.M.S.; formal analysis, M.S.M.A., A.R.M.S., M.H.B.M. and O.M.B.S.; investigation, M.S.M.A., A.R.M.S., S.K.-B., M.H.B.M., O.M.B.S. and A.M.; data curation, M.S.M.A., A.R.M.S. and O.M.B.S.; writing—original draft preparation, M.S.M.A. and A.R.M.S.; writing—review and editing, M.S.M.A., A.R.M.S., S.K.-B., M.H.B.M., O.M.B.S. and A.M.; supervision, A.R.M.S. and M.H.B.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science, Technology and Innovation Malaysia (MOSTI) through the grant titled "Development of an Oil Palm Fresh Fruit Bunches (FFB) Image Analyser" (Grant Number 5450426), which is hereby acknowledged. Publication of this paper was supported by the Universiti Putra Malaysia Journal Publication Fund (JPF), administered by the Research Planning & Knowledge Management Division, Research Management Centre (RMC), Universiti Putra Malaysia.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All data will be made available on request to the corresponding author’s email with appropriate justification.

Acknowledgments

The authors would like to thank the Libyan Embassy in Malaysia, University Putra Malaysia (UPM), Faculty of Engineering, Geospatial Information Science Research Centre (GISRC) and Department of Biological and Agricultural Engineering for providing support, infrastructure and laboratory facilities. The authors thank Research Station—Kluang, Malaysian Palm Oil Board (MPOB), Sime Darby Plantation Sdn. Bhd, Agriculture Park UPM (Taman Pertanian Universiti) and Spatial Research Group, UPM, for assistance.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. NOAA. What Is Remote Sensing? National Ocean Service Website. 25 June 2018. Available online: https://oceanservice.noaa.gov/facts/remotesensing.html (accessed on 26 February 2021).
  2. USGS. What Is Remote Sensing and What Is It Used for? Mapping, Remote Sensing, and Geospatial Data. 18 August 2016. Available online: https://www.usgs.gov/faqs/what-remote-sensing-and-what-it-used?qt-_news_science_products=7&qt-news_science_products=7#qt-news_science_products (accessed on 30 September 2019).
  3. Cracknell, A.; Hayes, L. Introduction to remote sensing. In Geocarto International. 40; Taylor & Francis: London, UK, 2008; Volume 7. [Google Scholar]
  4. Cracknell, A.P. The development of remote sensing in the last 40 years. Int. J. Remote Sens. 2018, 39, 8387–8427. [Google Scholar] [CrossRef]
  5. Murai, S. Remote Sensing Notes; Sensing, J.A.o.R., Ed.; National Space Development Agency of Japan (NASDA): Tokyo, Japan; Remote Sensing Technology Center of Japan (RESTEC): Tokyo, Japan, 1999. [Google Scholar]
  6. Richards, J.A. Remote Sensing Digital Image Analysis—An Introduction, 5th ed.; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  7. Tempfli, K.; Huurneman, G.C.; Bakker, W.H.; Janssen, L.L.F.; Feringa, W.F.; Gieske, A.S.M.; Grabmaier, K.A.; Hecker, C.A.; Horn, J.A.; Kerle, N.; et al. Principles of Remote Sensing-An Introductory Textbook, 4th ed.; Janssen, L.L.F., Huurneman, G.C., Eds.; The International Institute for Aerospace Survey and Earth Sciences (ITC): Enschede, The Netherlands, 1999; Volume 2, p. 591. [Google Scholar]
  8. Li, X.; Gu, X.; Yu, T.; Cheng, T.; Li, J.; Gao, H.; Wang, Z. Atmospheric scattering and turbulence modulation transfer function for CCD cameras on CBERS-02b and HJ-1A/1B. Int. J. Remote Sens. 2012, 33, 1413–1427. [Google Scholar] [CrossRef]
  9. Demircan, A.; Geiger, B.; Radke, M.; Von Schönermark, M. Bi-directional reflectance measurements with the CCD line camera WAAC. Remote Sens. Rev. 2009, 19, 95–110. [Google Scholar] [CrossRef]
  10. Hinkler, J.; Pedersen, S.B.; Rasch, M.; Hansen, B.U. Automatic snow cover monitoring at high temporal and spatial resolution, using images taken by a standard digital camera. Int. J. Remote Sens. 2010, 23, 4669–4682. [Google Scholar] [CrossRef]
  11. Harrison, B.A.; Jupp, D.L.B.; Hutton, P.G.; Mayo, K.K. Accessing remote sensing technology The microBRIAN example. Int. J. Remote Sens. 2007, 10, 301–309. [Google Scholar] [CrossRef]
  12. Kherwa, P.; Ahmed, S.; Berry, P.; Khurana, S.; Singh, S.; Sen, J.; Mehtab, S.; Cadotte, D.W.W.; Anderson, D.W.; Ost, K.J.; et al. Machine Learning Algorithms, Models and Applications. In Artificial Intelligence; Sen, J., Ed.; IntechOpen: London, UK, 2022; p. 155. [Google Scholar]
  13. Khan, H.; Haq, I.U.; Munsif, M.; Khan, S.U.; Lee, M.Y. Automated Wheat Diseases Classification Framework Using Advanced Machine Learning Technique. Agriculture 2022, 12, 1226. [Google Scholar] [CrossRef]
  14. Kuan, C.-H.; Leu, Y.; Lin, W.-S.; Lee, C.-P. The Estimation of the Long-Term Agricultural Output with a Robust Machine Learning Prediction Model. Agriculture 2022, 12, 1075. [Google Scholar] [CrossRef]
  15. Kaya, F.; Keshavarzi, A.; Francaviglia, R.; Kaplan, G.; Başayiğit, L.; Dedeoğlu, M. Assessing Machine Learning-Based Prediction under Different Agricultural Practices for Digital Mapping of Soil Organic Carbon and Available Phosphorus. Agriculture 2022, 12, 1062. [Google Scholar] [CrossRef]
  16. Wüthrich, M.V. Machine learning in individual claims reserving. Scand. Actuar. J. 2018, 2, 465–480. [Google Scholar] [CrossRef]
  17. Qiu, D. Individual Claims Reserving: Using Machine Learning Methods. In Mathematics and Statistics; Concordia University: Montreal, QC, Canada, 2019; p. 90. [Google Scholar]
  18. Härkönen, V. On Claims Reserving with Machine Learning Techniques. In Mathematical Statistics; Stockholms Universitet: Stockholm, Sweden, 2021; p. 69. [Google Scholar]
  19. Liu, X.; He, L.; He, Z.; Wei, Y. Estimation of Broadleaf Tree Canopy Height of Wolong Nature Reserve Based on InSAR and Machine Learning Methods. Forests 2022, 13, 1282. [Google Scholar] [CrossRef]
  20. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef]
  21. Dawid, L.; Tomza, M.; Dawid, A. Estimation of Usable Area of Flat-Roof Residential Buildings Using Topographic Data with Machine Learning Methods. Remote Sens. 2019, 11, 2382. [Google Scholar] [CrossRef]
  22. Zhao, W.; Guo, Z.; Yue, J.; Zhang, X.; Luo, L. On combining multiscale deep learning features for the classification of hyperspectral remote sensing imagery. Int. J. Remote Sens. 2015, 36, 3368–3379. [Google Scholar] [CrossRef]
  23. Yue, J.; Zhao, W.; Mao, S.; Liu, H. Spectral–spatial classification of hyperspectral images using deep convolutional neural networks. Remote Sens. Lett. 2015, 6, 468–477. [Google Scholar] [CrossRef]
  24. Zhao, W.; Li, S.; Li, A.; Zhang, B.; Li, Y. Hyperspectral images classification with convolutional neural network and textural feature using limited training samples. Remote Sens. Lett. 2019, 10, 449–458. [Google Scholar] [CrossRef]
  25. Goh, J.Q.; Shariff, A.R.M.; Nawi, N.M. Application of Optical Spectrometer to Determine Maturity Level of Oil Palm Fresh Fruit Bunches Based on Analysis of the Front Equatorial, Front Basil, Back Equatorial, Back Basil and Apical Parts of the Oil Palm Bunches. Agriculture 2021, 11, 1179. [Google Scholar] [CrossRef]
  26. Pérez-Pérez, B.D.; Vázquez, J.P.G.; Salomón-Torres, R. Evaluation of Convolutional Neural Networks’ Hyperparameters with Transfer Learning to Determine Sorting of Ripe Medjool Dates. Agriculture 2021, 11, 115. [Google Scholar] [CrossRef]
  27. Mesa, A.R.; Chiang, J.Y. Multi-Input Deep Learning Model with RGB and Hyperspectral Imaging for Banana Grading. Agriculture 2021, 11, 687. [Google Scholar] [CrossRef]
  28. Zhao, F.; Yang, G.; Yang, H.; Long, H.; Xu, W.; Zhu, Y.; Meng, Y.; Han, S.; Liu, M. A Method for Prediction of Winter Wheat Maturity Date Based on MODIS Time Series and Accumulated Temperature. Agriculture 2022, 12, 945. [Google Scholar] [CrossRef]
  29. Hufkens, K.; Melaas, E.K.; Mann, M.L.; Foster, T.; Ceballos, F.; Robles, M.; Kramer, B. Monitoring crop phenology using a smartphone based near-surface remote sensing approach. Agric. For. Meteorol. 2019, 265, 327–337. [Google Scholar] [CrossRef]
  30. Zhong, Y.; Ma, A.; Ong, Y.S.; Zhu, Z.; Zhang, L. Computational intelligence in optical remote sensing image processing. Appl. Soft Comput. 2018, 64, 75–93. [Google Scholar] [CrossRef]
  31. Alfatni, M.S.; Shariff, A.R.M.; Abdullah, M.Z.; Saeed, O.; Ceesay, O.M. Recent Methods and Techniques of External Grading Systems for Agricultural Crops Quality Inspection—Review. Int. J. Food Eng. 2011, 7, 1–40. [Google Scholar] [CrossRef]
  32. Malamasa, E.N.; Petrakis, E.G.M.; Zervakis, M.; Petit, L.; Legat, J.D. A survey on industrial vision systems, applications and tools. Image Vis. Comput. 2003, 21, 171–188. [Google Scholar] [CrossRef]
  33. Pamornnak, B.; Limsiroratana, S.; Khaorapapong, T.; Chongcheawchamnan, M.; Ruckelshausen, A. An automatic and rapid system for grading palm bunch using a Kinect camera. Comput. Electron. Agric. 2017, 143, 227–237. [Google Scholar] [CrossRef]
  34. Prakasa, E.; Rosiyadi, D.; Ni’mah, D.F.I. Automatic Region-of-Interest Selection for Corn Seed Grading. In Proceedings of the International Conference on Computer, Control, Informatics and its Applications (IC3INA), Jakarta, Indonesia, 23–26 October 2017; pp. 23–28. [Google Scholar]
  35. López, Y.Y.; Martínez-García, A.; Gómez, S.J. Apple quality study using fringe projection and colorimetry techniques. Opt.—Int. J. Light Electron Opt. 2017, 147, 401–413. [Google Scholar] [CrossRef]
  36. Tretola, M.; Ottoboni, M.; Di Rosa, A.R.; Giromini, C.; Fusi, E.; Rebucci, R.; Leone, F.; Dell’Orto, V.; Chiofalo, V.; Pinotti, L. Former Food Products Safety Evaluation: Computer Vision as an Innovative Approach for the Packaging Remnants Detection. J. Food Qual. 2017, 2017, 1–6. [Google Scholar] [CrossRef]
  37. Sabri, N.; Ibrahim, Z.; Syahlan, S.; Jamil, N.; Mangshor, N.N.A. Palm Oil Fresh Fruit Bunch Ripeness Grading Identification Using Color Features. J. Fundam. Appl. Sci. 2017, 9, 563–579. [Google Scholar] [CrossRef]
  38. Khoje, S.A.; Bodhe, S.K. A Comprehensive Survey of Fruit Grading Systems for Tropical Fruits of Maharashtra. J. Crit. Rev. Food Sci. Nutr. 2015, 55, 1658–1671. [Google Scholar] [CrossRef] [PubMed]
  39. Beek, J.V.; Tits, L.; Somers, B.; Deckers, T.; Verjans, W.; Bylemans, D.; Janssens, P.; Coppin, P. Temporal Dependency of Yield and Quality Estimation through Spectral Vegetation Indices in Pear Orchards. Remote Sens. 2015, 7, 9886–9903. [Google Scholar] [CrossRef]
  40. Wang, P.; Niu, T.; He, D. Tomato Young Fruits Detection Method under Near Color Background Based on Improved Faster R-CNN with Attention Mechanism. Agriculture 2021, 11, 1059. [Google Scholar] [CrossRef]
  41. Plasquy, E.; Garcia, J.M.; Florido, M.C.; Sola-Guirado, R.R. Estimation of the Cooling Rate of Six Olive Cultivars Using Thermal Imaging. Agriculture 2021, 11, 164. [Google Scholar] [CrossRef]
  42. Bird, J.J.; Barnes, C.M.; Manso, L.J.; Ekárt, A.; Faria, D.R. Fruit quality and defect image classification with conditional GAN data augmentation. Sci. Hortic. 2022, 293, 110684. [Google Scholar] [CrossRef]
  43. Leemans, V.; Destain, M.-F. A real-time grading method of apples based on features extracted from defects. J. Food Eng. 2004, 61, 83–89. [Google Scholar] [CrossRef]
  44. Njoroge, J.B.; Ninomiya, K.; Kondo, N.; Toita, H. Automated Fruit Grading System using Image Processing. In Proceedings of the 41st SICE Annual Conference. SICE 2002, Osaks, Japan, 5–7 August 2002; pp. 1346–1351. [Google Scholar]
  45. Thang, Y.M.; A Ariffin, A.; Appleton, D.R.; Asis, A.J.; Mokhtar, M.N.; Yunus, R. Determination of sugars composition in abscission zone of oil palm fruit. Ser. Mater. Sci. Eng. 2017, 206, 12034. [Google Scholar] [CrossRef]
  46. Xuan, G.; Gao, C.; Shao, Y. Spectral and image analysis of hyperspectral data for internal and external quality assessment of peach fruit. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2022, 272, 121016. [Google Scholar] [CrossRef] [PubMed]
  47. Chuah, H.T.; Kam, S.W.; Chye, Y.H. Microwave dielectric properties of rubber and oil palm leaf samples: Measurement and modelling, International Journal of Remote Sensing. Int. J. Remote Sens. 1997, 18, 2623–2639. [Google Scholar] [CrossRef]
  48. Tan, K.P.; Kanniah, K.D.; Cracknell, A.P. On the upstream inputs into the MODIS primary productivity products using biometric data from oil palm plantations. Int. J. Remote Sens. 2014, 35, 2215–2246. [Google Scholar] [CrossRef]
  49. Hamsa, C.S.; Kanniah, K.D.; Muharam, F.M.; Idris, N.H.; Abdullah, Z.; Mohamed, L. Textural measures for estimating oil palm age. Int. J. Remote Sens. 2019, 40, 7516–7537. [Google Scholar] [CrossRef]
  50. Yu, H.; Yang, W.; Xia, G.-S.; Liu, G. A Color-Texture-Structure Descriptor for High-Resolution Satellite Image Classification. Remote Sens. 2016, 8, 259. [Google Scholar]
  51. De-la-Torre, M.; Zatarain, O.; Avila-George, H.; Muñoz, M.; Oblitas, J.; Lozada, R.; Mejía, J.; Castro, W. Multivariate Analysis and Machine Learning for Ripeness Classification of Cape Gooseberry Fruits. Processes 2019, 7, 928. [Google Scholar]
  52. Xiaobo, Z.; Jiewen, Z.; Yanxiao, L. Apple color grading based on organization feature parameters. Pattern Recognit. Lett. 2007, 28, 2046–2053. [Google Scholar] [CrossRef]
  53. Cárdenas-Pérez, S.; Chanona-Pérez, J.; Méndez-Méndez, J.V.; Calderón-Domínguez, G.; López-Santiago, R.; Perea-Flores, M.J.; Arzate-Vázquez, I. Evaluation of the ripening stages of apple (Golden Delicious) by means of computer vision system. Biosyst. Eng. 2017, 159, 46–58. [Google Scholar] [CrossRef]
  54. Bhargava, A.; Bansal, A. Machine learning based quality evaluation of mono-colored apples. Multimed. Tools Appl. 2020, 79, 22989–23006. [Google Scholar] [CrossRef]
  55. Wu, L.; Zhang, H.; Chen, R.; Yi, J. Fruit Classification using Convolutional Neural Network via Adjust Parameter and Data Enhancement. In Proceedings of the 12th International Conference on Advanced Computational Intelligence (ICACI), Dali, China, 14–16 March 2020. [Google Scholar]
  56. Behera, S.K.; Rath, A.K.; Sethy, P.K. Maturity status classification of papaya fruits based on machine learning and transfer learning approach. Inf. Process. Agric. 2020, 8, 244–250. [Google Scholar] [CrossRef]
  57. Guerrero, E.R.; Benavides, G.M. Automated system for classifying Hass avocados based on image processing techniques. In Proceedings of the 2014 IEEE Colombian Conference on Communications and Computing (COLCOM), Bogota, Colombia, 4–6 June 2014. [Google Scholar]
  58. Khisanudin, I.S. Dragon Fruit Maturity Detection Based-HSV Space Color Using Naive Bayes Classifier Method. IOP Conf. Ser. Mater. Sci. Eng. 2020, 771, 1–6. [Google Scholar] [CrossRef]
  59. Mendoza, F.; Aguilera, J.M. Application of Image Analysis for Classification of Ripening Bananas. J. Food Sci. 2004, 69, 471–477. [Google Scholar] [CrossRef]
  60. Paulraj, M.; Hema, C.R.; Sofiah, S.; Radzi, M. Color Recognition Algorithm using a Neural Network Model in Determining the Ripeness of a Banana. In Proceedings of the International Conference on Man-Machine Systems (ICoMMS), Batu Ferringhi, Penang, Malaysia, 11–13 October 2009; pp. 2B71–2B74. [Google Scholar]
  61. Li, H.; Lee, W.S.; Wang, K. Identifying blueberry fruit of different growth stages using natural outdoor color images. Comput. Electron. Agric. 2014, 106, 91–101. [Google Scholar] [CrossRef]
  62. Pourdarbani, R.; Ghassemzadeh, H.R.; Seyedarabi, H.; Nahand, F.Z.; Vahed, M.M. Study on an automatic sorting system for Date fruits. J. Saudi Soc. Agric. Sci. 2015, 14, 83–90. [Google Scholar] [CrossRef]
  63. Damiri, D.J.; Slamet, C. Application of Image Processing and Artificial Neural Networks to Identify Ripeness and Maturity of the Lime (citrus medica). Int. J. Basic Appl. Sci. 2012, 1, 171–179. [Google Scholar] [CrossRef]
  64. Nandi, C.S.; Tudu, B.; Koley, C. A Machine Vision-Based Maturity Prediction System for Sorting of Harvested Mangoes. IEEE Trans. Instrum. Meas. 2014, 63, 1722–1730. [Google Scholar] [CrossRef]
  65. Vélez-Rivera, N.; Blasco, J.; Chanona-Pérez, J.; Calderón-Domínguez, G.; Perea, M.D.J.; Arzate-Vázquez, I.; Cubero, S.; Farrera-Rebollo, R. Computer Vision System Applied to Classification of “Manila” Mangoes During Ripening Process. Food Bioprocess Technol. 2014, 7, 1183–1194. [Google Scholar] [CrossRef]
  66. Zheng, H.; Lu, H. A least-squares support vector machine (LS-SVM) based on fractal analysis and CIELab parameters for the detection of browning degree on mango (Mangifera indica L.). Comput. Electron. Agric. 2012, 83, 47–51. [Google Scholar] [CrossRef]
  67. Fadilah, N.; Mohamad-Saleh, J.; Halim, Z.A.; Ibrahim, H.; Ali, S.S.S. Intelligent Color Vision System for Ripeness Classification of Oil Palm Fresh Fruit Bunch. Sensors 2012, 12, 14179–14195. [Google Scholar] [CrossRef] [PubMed]
  68. Elhariri, E.; El-Bendary, N.; Hussein, A.M.; Hassanien, A.E.; Badr, A. Bell pepper ripeness classification based on support vector machine. In Proceedings of the 2nd International Conference on Engineering and Technology, Cairo, Egypt, 19–21 August 2014. [Google Scholar]
  69. Rahman, M.O.; Hussain, A.; Basri, H. Automated sorting of recycled paper using smart image processing. At-Automatisierungstechnik 2020, 68, 277–293. [Google Scholar] [CrossRef]
  70. Mohammadi, V.; Kheiralipour, K.; Ghasemi-Varnamkhasti, M. Detecting maturity of persimmon fruit based on image processing technique. Sci. Hortic. 2015, 184, 123–128. [Google Scholar] [CrossRef]
  71. El-Bendary, N.; El Hariri, E.; Hassanien, A.E.; Badr, A. Using machine learning techniques for evaluating tomato ripeness. Expert Syst. Appl. 2015, 42, 1892–1905. [Google Scholar] [CrossRef]
  72. Goel, N.; Sehgal, P. Fuzzy classification of pre-harvest tomatoes for ripeness estimation–An approach based on automatic rule learning using decision tree. Appl. Soft Comput. 2015, 36, 45–56. [Google Scholar] [CrossRef]
  73. Polder, G.; van der Heijden, G. Measuring ripening of tomatoes using imaging spectrometry. In Hyperspectral Imaging for Food Quality Analysis and Control; Elsevier: Amsterdam, The Netherlands, 2010; pp. 369–402. [Google Scholar]
  74. Rafiq, A.; Makroo, H.A.; Hazarika, M.K. Neural Network-Based Image Analysis for Evaluation of Quality Attributes of Agricultural Produce. J. Food Process. Preserv. 2016, 40, 1010–1019. [Google Scholar] [CrossRef]
  75. Ashraf, T.; NiazKhan, Y. Weed density classification in rice crop using computer vision. Comput. Electron. Agric. 2020, 175, 105590. [Google Scholar] [CrossRef]
  76. Abdulhamid, U.F.; Aminu, M.A.; Daniel, S. Detection of Soya Beans Ripeness Using Image Processing Techniques and Artificial Neural Network. Asian J. Phys. Chem. Sci. 2018, 5, 1–9. [Google Scholar] [CrossRef]
  77. Hadfi, I.H.; Yusoh, Z.I.M. Banana ripeness detection and servings recommendation system using artificial intelligence techniques. J. Telecommun. Electron. Comput. Eng. 2018, 10, 83–87. [Google Scholar]
  78. Nagvanshi, S.; Goswami, T.K. Development of a system to measure color in fresh and microwave dried banana slices. J. Food Sci. Technol. 2020, 41, 1673–1681. [Google Scholar]
  79. Rizam, S.; Yasmin, F.; Ihsan, A.; Shazana, K. Non-destructive Watermelon Ripeness Determination Using Image Processing and Artificial Neural Network (ANN). Int. J. Comput. Inf. Eng. 2009, 3, 332–336. [Google Scholar]
  80. Abdullah, N.E.; Madzhi, N.K.; Yahya, A.M.A.A.; Rahim, A.A.A.; Rosli, A.D. Diagnostic System for Various Grades of Yellow Flesh Watermelon based on the Visible light and NIR properties. In Proceedings of the 4th International Conference on Electrical, Electronics and System Engineering (ICEESE), Kuala Lumpur, Malaysia, 8–9 November 2018. [Google Scholar]
  81. Syazwan, N.A.; Rizam, M.S.B.S.; Nooritawati, M.T. Categorization of watermelon maturity level based on rind features. Procedia Eng. 2012, 41, 1398–1404. [Google Scholar] [CrossRef]
  82. Skolik, P.; Morais, C.L.M.; Martin, F.L.; McAinsh, M.R. Determination of developmental and ripening stages of whole tomato fruit using portable infrared spectroscopy and Chemometrics. BMC Plant Biol. 2019, 19, 236. [Google Scholar] [CrossRef] [PubMed]
  83. Du, D.; Wang, J.; Wang, B.; Zhu, L.; Hong, X. Ripeness Prediction of Postharvest Kiwifruit Using a MOS E-Nose Combined with Chemometrics. Sensors 2019, 19, 419. [Google Scholar] [CrossRef] [PubMed]
  84. Ramos, P.J.; Avendaño, J.; Prieto, F.A. Measurement of the ripening rate on coffee branches by using 3d images in outdoor environments. Comput. Ind. 2018, 99, 83–95. [Google Scholar] [CrossRef]
  85. Costa, A.G.; De Sousa, D.A.G.; Paes, J.L.; Cunha, J.P.B.; De Oliveira, M.V.M. Classification of Robusta Coffee Fruits at Different Maturation Stages Using Colorimetric Characteristics. Eng. Agrícola Jaboticabal 2020, 40, 518–525. [Google Scholar] [CrossRef]
  86. Castro, W.; Oblitas, J.; De-La-Torre, M.; Cotrina, C.; Bazan, K.; Avila-George, H. Classification of Cape Gooseberry Fruit According to its Level of Ripeness Using Machine Learning Techniques and Different Color Spaces. IEEE Access 2019, 7, 27389–27400. [Google Scholar] [CrossRef]
  87. De-la-Torre, M.; Avila-George, H.; Oblitas, J.; Castro, W. Selection and Fusion of Color Channels for Ripeness Classification of Cape Gooseberry Fruits. In Trends and Applications in Software Engineering; Part of the Advances in Intelligent Systems and Computing Book Series; Mejia, J., Muñoz, M., Rocha, Á., Calvo-Manzano, A.J., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; Volume 1071, pp. 219–233. [Google Scholar]
  88. Brosnan, T.; Sun, D.-W. Improving quality inspection of food products by computer vision—A review. J. Food Eng. 2004, 61, 3–16. [Google Scholar] [CrossRef]
  89. Chithra, P.L.; Henila, M. Defect Identification in the Fruit Apple Using K-Means Color Image Segmentation Algorithm. Int. J. Adv. Res. Comput. Sci. 2017, 8, 381–388. [Google Scholar] [CrossRef]
  90. Riese, F.M.; Keller, S.; Hinz, S. Supervised and Semi-Supervised Self-Organizing Maps for Regression and Classification Focusing on Hyperspectral Data. Remote Sens. 2020, 12, 7. [Google Scholar] [CrossRef]
  91. Alfatni, M.S.M.; Shariff, A.R.M.; Abdullah, M.Z.; Marhaban, M.H.; Shafie, S.B.; Bamiruddin, M.D.; Saaed, O.M.B. Oil palm fresh fruit bunch ripeness classification based on rule-based expert system of ROI image processing technique results. IOP Conf. Ser. Earth Environ. Sci. 2014, 20, 12018. [Google Scholar] [CrossRef]
  92. Guo, R.; Liu, J.; Li, N.; Liu, S.; Chen, F.; Cheng, B.; Duan, J.; Li, X.; Ma, C. Pixel-Wise Classification Method for High Resolution Remote Sensing Imagery Using Deep Neural Networks. ISPRS Int. J. Geo-Inf. 2018, 7, 110. [Google Scholar] [CrossRef]
  93. Dimililer, K.; Bush, I.J. Automated Classification of Fruits: Pawpaw Fruit as a Case Study. In Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2017; pp. 365–374. [Google Scholar]
  94. Mekhalfa, F.; Nacereddine, N. Gentle Adaboost algorithm for weld defect classification. In Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA); IEEE: Poznan, Poland, 2017. [Google Scholar]
  95. Iqbal, S.M.; Gopal, A.; Sankaranarayanan, P.; Nair, A.B. Classification of Selected Citrus Fruits Based on Color Using Machine Vision System. Int. J. Food Prop. 2016, 19, 272–288. [Google Scholar] [CrossRef]
  96. Sudheer, K.P.; Gowda, P.; Chaubey, I.; Howell, T. Artificial Neural Network Approach for Mapping Contrasting Tillage Practices. Remote Sens. 2010, 2, 579–590. [Google Scholar] [CrossRef]
  97. Yang, C.-C.; Prasher, S.O.; Landry, J.-A.; Ramaswamy, H.S.; Ditommaso, A. Application of artificial neural networks in image recognition and classification of crop and weeds. Can. Agric. Eng. 2000, 42, 147–152. [Google Scholar]
  98. Unay, D. Multispectral Image Processing and Pattern Recognition Techniques for Quality Inspection of Apple Fruits. In Facult’e Polytechnique de Mons in Applied Sciences; Facult’e Polytechnique de Mons: Mons, Belgium, 2006; p. 159. [Google Scholar]
  99. Ranjbarardestani, M. Determining the ripeness of fruit juices based on image processing technology and neural network classification. Eur. Online J. Nat. Soc. Sci. 2016, 5, 846–850. [Google Scholar]
  100. Rafiq, M.Y.; Bugmann, G.; Easterbrook, D.J. Neural Network Design for Engineering Applications. Comput. Struct. 2001, 79, 1541–1552. [Google Scholar] [CrossRef]
  101. Deck, S.H.; Morrow, C.T.; Heinemann, P.H.; Sommer, H.J., III. Comparison of a neural network and traditional classifier for machine vision inspection of potatoes. Appl. Eng. Agric. 1995, 11, 319–326. [Google Scholar] [CrossRef]
  102. Schmoldt, D.L.; Li, P.; Abbott, A.L. Machine vision using artificial neural networks with local 3D neighbourhoods. Comput. Electron. Agric. 1997, 16, 225–271. [Google Scholar] [CrossRef]
  103. Timmermans, A.J.M.; Hulzebosch, A.A. Computer vision system for on-line sorting of pot plants using an artificial neural network classifier. Comput. Electron. Agric. 1996, 15, 41–55. [Google Scholar] [CrossRef]
  104. Wang, D.; Dowell, F.E.; Lacey, R.E. Single wheat kernel color classification using neural networks. Trans. ASAE 1999, 42, 233–240. [Google Scholar] [CrossRef]
  105. Kotsiantis, S.B. Supervised Machine Learning: A Review of Classification Techniques. Informatica 2007, 31, 249–268. [Google Scholar]
  106. Khan, M.A.M. Fast Distance Metric Based Data Mining Techniques Using P-trees: K-Nearest-Neighbor Classification and k-Clustering. In Computer Science; North Dakota State University of Agriculture and Applied Science: Fargo, ND, USA, 2001; p. 67. [Google Scholar]
  107. Sudha, L.R.; Bhavani, R. Gait based Gender Identification using Statistical Pattern Classifiers. Int. J. Comput. Appl. 2012, 40, 30–35. [Google Scholar] [CrossRef]
  108. Vapnik, V.N. (Ed.) Statistical Learning Theory. Adaptive and Learning Systems for Signal Processing, Communications, and Control; Wiley: New York, NY, USA, 1998; p. 768. [Google Scholar]
  109. Nayak, J.; Naik, B.; Behera, H.S. A Comprehensive Survey on Support Vector Machine in Data Mining Tasks: Applications & Challenges. Int. J. Database Theory Appl. 2015, 8, 169–186. [Google Scholar]
  110. Arabameri, A.; Roy, J.; Saha, S.; Blaschke, T.; Ghorbanzadeh, O.; Bui, D.T. Application of Probabilistic and Machine Learning Models for Groundwater Potentiality Mapping in Damghan Sedimentary Plain, Iran. Remote Sens. 2019, 11, 3035. [Google Scholar] [CrossRef]
  111. Nashat, S.; Abdullah, M.Z. Multi-class colour inspection of baked foods featuring support vector machine and Wilk’s k analysis. J. Food Eng. 2010, 101, 370–380. [Google Scholar] [CrossRef]
  112. Zemmour, E.; Kurtser, P.; Edan, Y. Automatic Parameter Tuning for Adaptive Thresholding in Fruit Detection. Sensors 2019, 19, 2130. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  113. Brodersen, K.H.; Ong, C.S.; Stephan, K.E.; Buhmann, J.M. The binormal assumption on precision-recall curves. In Proceedings of the International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 4263–4266. [Google Scholar]
  114. Gönen, M. Receiver Operating Characteristic (ROC) Curves. In Proceedings of the SAS Users Group International 31 (SUGI 31), San Francisco, CA, USA, 26–29 March 2006. [Google Scholar]
  115. Mustapha, A.; Hussain, A.; Samad, S.A. A new approach for noise reduction in spine radiograph images using a non-linear contrast adjustment scheme based adaptive factor. Sci. Res. Essays 2011, 6, 4246–4258. [Google Scholar]
  116. Brown, C.D.; Davis, H.T. Receiver operating characteristics curves and related decision measures: A tutorial. Chemom. Intell. Lab. Syst. 2006, 80, 24–38. [Google Scholar] [CrossRef]
  117. Dimopoulos, T.; Bakas, N. Sensitivity Analysis of Machine Learning Models for the Mass Appraisal of Real Estate. Case Study of Residential Units in Nicosia, Cyprus. Remote Sens. 2019, 11, 3047. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Excursion to MPOB Kluang: (a) transport, (b) MPOB oil palm field, (c) research group, (d) harvest method, (e) FFB sample and (f) FFB fruitlet.
Figure 2. FFB ripening classes of oil palm: (a) Nigrescens, (b) Oleifera, (c) Virescens.
Figure 3. Process of acquiring materials and images: (a) oil palm grading system, (b) lighting system, (c) captured image, (d) camera and RGB cable.
Figure 4. The three levels of the image processing algorithm for the external grading system.
Figure 5. Steps and image processing techniques for the FFB real-time oil palm ripeness classification system.
Figure 6. Three different regions of interest (ROI1, ROI2, ROI3) for the FFB ripeness of oil palm.
Figure 7. Multi-layer ANN structure operating directly with a single port of the FFB oil palm grading system.
Figure 8. K-nearest neighbor (KNN) example with k = 6.
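To make the KNN decision rule of Figure 8 concrete, the following minimal Python sketch classifies a query image by majority vote among its k = 6 nearest training samples. It is not the authors' implementation: the random feature vectors, the three ripeness labels and the use of scikit-learn are assumptions for illustration only.

```python
# Minimal KNN sketch (k = 6, Euclidean distance), mirroring Figure 8.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.random((60, 45))        # placeholder: e.g. 45 texture features per FFB image
y_train = rng.integers(0, 3, 60)      # placeholder labels: 0 = under-ripe, 1 = ripe, 2 = over-ripe

knn = KNeighborsClassifier(n_neighbors=6, metric="euclidean")
knn.fit(X_train, y_train)

X_test = rng.random((5, 45))
print(knn.predict(X_test))            # each prediction is the majority class of the 6 nearest neighbours
```

The value of k and the distance metric are exactly the parameters swept in Figure 14.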
Figure 9. Support vector machine (SVM) classification system.
Figure 10. Training and testing stages of the oil palm ripeness grading system: (a) training stage and (b) testing stage.
Figure 11. Image processing algorithm levels for FFB ripeness classification.
Figure 12. (a) Graphs of sensitivity versus threshold and specificity versus threshold; (b) ROC curve with a hypothetical example [114,115].
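The curves in Figure 12 are produced by sweeping the decision threshold over the classifier scores. The short sketch below shows one way to compute an ROC curve and its AUC; the binary "ripe vs. not ripe" labels and the scores are synthetic placeholders, not the paper's data, and scikit-learn is an assumed toolchain.

```python
# Illustrative ROC/AUC computation of the kind summarised in Figure 12 and Figure 13.
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true  = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])                          # ground-truth labels (synthetic)
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.85, 0.3, 0.65, 0.2])   # classifier scores (synthetic)

fpr, tpr, thresholds = roc_curve(y_true, y_score)   # sweep the threshold: FPR = 1 - specificity, TPR = sensitivity
print("AUC =", auc(fpr, tpr))                        # area under the ROC curve
```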
Figure 13. AUC score of FFB maturity based on feature extraction and ANN: (a) statistical, (b) histogram, (c) GLCM, (d) BGLAM and (e) Gabor.
Figure 14. K-value and distance metric for KNN and feature extraction techniques: (a) statistical, (b) histogram, (c) GLCM, (d) BGLAM and (e) Gabor.
Figure 15. RBF parameters for SVM with feature extraction techniques: (a) statistical feature extraction, (b) histogram feature extraction, (c) GLCM feature extraction, (d) BGLAM feature extraction and (e) Gabor feature extraction.
Figure 16. Oil palm FFB type classification based on BGLAM and ANN classifier.
Table 1. Literature on crop quality assessment using machine learning techniques based on a variety of color spaces. Adapted from Ref. [51] and updated with new implementation results.
Item | Color Space | Classification Technique | Accuracy (%) | Ref.
Oil palm | UV + RGB + NIR | KNN and SVM | 93.80 | [25]
Dates | JPG | CNN | 99.32 | [26]
Banana | RGB and GLCM | CNN and MLP | 98.45 | [27]
Apple | HSI | SVM | 95.00 | [52]
Apple | L*a*b* | MDA | 100.00 | [53]
Apple | RGB | SVM | 96.81 | [54]
Apple, pears and peaches | RGB | ANN | 98.90 | [55]
Papaya | LBP, HOG and GLCM | KNN, SVM and Naïve Bayes | 100.00 | [56]
Avocado | RGB | K-Means | 82.22 | [57]
Dragon fruit | HSV + RGB | Naïve Bayes | 86.60 | [58]
Banana | L*a*b* | LDA | 98.00 | [59]
Banana | RGB | ANN | 96.00 | [60]
Blueberry | RGB | KNN and SK-Means | 85.00–98.00 | [61]
Date | RGB | K-Means | 99.60 | [62]
Lime | RGB | ANN | 100.00 | [63]
Mango | RGB | SVM | 96.00 | [64]
Mango | L*a*b* | MDA | 90.00 | [65]
Mango | L*a*b* | LS-SVM | 88.00 | [66]
Oil palm | L*a*b* | ANN | 91.67 | [67]
Pepper | HSV | SVM | 93.89 | [68]
Paper | HIS + RGB | SIS | 99.00 | [69]
Persimmon | RGB + L*a*b* | QDA | 90.24 | [70]
Tomato | HSV | SVM | 90.80 | [71]
Tomato | RGB | DT | 94.29 | [72]
Tomato | RGB | LDA | 81.00 | [73]
Tomato | L*a*b* | ANN | 96.00 | [74]
Rice | Texture features (gray) | SVM | 86.00 | [75]
Soya | HSI | ANN | 95.70 | [76]
Banana | RGB | Fuzzy logic | NA | [77]
Banana | RGB + CIE L*a*b* | ANN | NA | [78]
Banana | RGB | CNN | 87.00 | [77]
Watermelon | YCbCr | ANN | 86.51 | [79]
Watermelon | VIS/NIR | ANN | 80.00 | [80]
Watermelon | RGB | ANN | 73.33 | [81]
Tomato | FTIR | SVM | 99.00 | [82]
Kiwi | Chemometrics MOS E-nose | PLSR, SVM and RF | 99.40 | [83]
Coffee | RGB + L*a*b* + Luv + YCbCr + HSV | SVM | 92.00 | [84]
Coffee | RGB, HIS and L*a*b* | PCA and K-Means | 100.00 | [85]
Cape gooseberry | RGB + HSV + L*a*b* | ANN, DT, SVM and KNN | 93.02 | [86,87]
L* indicates lightness, and a* and b* are chromaticity coordinates.
Table 2. Confusion matrix.
Test | Actual Positive | Actual Negative
Predicted Positive | TP | FP
Predicted Negative | FN | TN
Total | P | N
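The counts in Table 2 are the basis of the sensitivity and specificity curves in Figure 12. As a minimal sketch (the counts below are made up for illustration, not taken from the paper's results):

```python
# Metrics derived from the Table 2 confusion matrix (illustrative counts).
TP, FP, FN, TN = 90, 8, 10, 92
P, N = TP + FN, FP + TN                  # total actual positives / negatives

sensitivity = TP / P                     # true-positive rate (recall)
specificity = TN / N                     # true-negative rate
accuracy    = (TP + TN) / (P + N)
print(sensitivity, specificity, accuracy)
```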
Table 3. Comparison of MSE and efficiency results in the training and testing stages for the FFB feature extraction techniques.
FET | Model | Training MSE | Training Eff | Testing MSE | Testing Eff
Statistical | [40 × 10 × 1] | 0.0080 | 0.9523 | 0.0190 | 0.8865
Statistical | [40 × 20 × 1] | 0.0072 | 0.9568 | 0.0197 | 0.8820
Statistical | [40 × 30 × 1] | 0.0038 * | 0.9775 * | 0.0182 * | 0.8914 *
Histogram | [25 × 10 × 1] | 9.9402 × 10^−5 * | 0.9994 * | 0.0136 * | 0.9189 *
Histogram | [25 × 15 × 1] | 9.8412 × 10^−5 | 0.9994 | 0.0163 | 0.9024
Histogram | [25 × 20 × 1] | 9.5647 × 10^−5 | 0.9994 | 0.0144 | 0.9140
GLCM | [40 × 10 × 1] | 9.9991 × 10^−5 | 0.9994 | 0.0291 | 0.8263
GLCM | [40 × 20 × 1] | 1.2126 × 10^−4 | 0.9993 | 0.0330 | 0.8030
GLCM | [40 × 30 × 1] | 9.9943 × 10^−5 * | 0.9994 * | 0.0278 * | 0.8338 *
BGLAM | [45 × 11 × 1] | 8.9770 × 10^−5 | 0.9995 | 0.0242 | 0.8556
BGLAM | [45 × 22 × 1] | 9.6629 × 10^−5 * | 0.9994 * | 0.0177 * | 0.8942 *
BGLAM | [45 × 33 × 1] | 9.5860 × 10^−5 | 0.9994 | 0.0215 | 0.8712
Gabor | [40 × 10 × 1] | 9.9573 × 10^−5 | 0.9994 | 0.0922 * | 0.4489 *
Gabor | [40 × 20 × 1] | 9.9895 × 10^−5 * | 0.9994 * | 0.0966 | 0.4228
Gabor | [40 × 30 × 1] | 9.9873 × 10^−5 | 0.9994 | 0.1330 | 0.2048
Notes: FET = Feature extraction techniques, MSE = Mean square error, Eff = Efficiency, * = The best result.
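The [input × hidden × output] models in Table 3 are feed-forward networks compared by their training- and testing-stage MSE. The sketch below reproduces the idea for one architecture (e.g., [45 × 22 × 1] on BGLAM features); the random data, scikit-learn estimator and logistic activation are assumptions for illustration, not the authors' MATLAB-style workflow.

```python
# Sketch: one-hidden-layer network and its training/testing MSE, as compared in Table 3.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X_train, y_train = rng.random((120, 45)), rng.random(120)   # placeholder: 45 BGLAM features, scalar target
X_test,  y_test  = rng.random((40, 45)),  rng.random(40)

net = MLPRegressor(hidden_layer_sizes=(22,), activation="logistic",
                   max_iter=2000, random_state=1)           # 45 inputs -> 22 hidden -> 1 output
net.fit(X_train, y_train)

print("training MSE:", mean_squared_error(y_train, net.predict(X_train)))
print("testing  MSE:", mean_squared_error(y_test,  net.predict(X_test)))
```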
Table 4. Best results of the RBF kernel function based on sigma and C values for FFB ripeness grading.
FET | RBF Sigma | C | ROI1 Accuracy (%) | ROI2 Accuracy (%) | ROI3 Accuracy (%)
Statistical | 1 | 1000 | 90 | 90 | 82
Histogram | 50 | 100 | 89 | 90 | 91
GLCM | 1 | 500 | 75 | 78 | 79
BGLAM | 10 | 500 | 92 | 90 | 92
Gabor | 10 | 500 | 76 | 87 | 89
Note: FET = Feature extraction techniques.
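The sigma and C values in Table 4 are the tunable parameters of an RBF-kernel SVM (Figure 15). As a hedged sketch of how such a grid could be searched: scikit-learn parameterises the RBF kernel by gamma rather than sigma, and the mapping gamma = 1 / (2·sigma²) used below, like the random placeholder data, is an assumption for illustration rather than the paper's exact procedure.

```python
# Illustrative grid search over RBF-SVM sigma (via gamma) and C, in the spirit of Table 4.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(2)
X, y = rng.random((90, 45)), rng.integers(0, 3, 90)     # placeholder features and ripeness classes

sigmas = [1, 10, 50]
param_grid = {"C": [100, 500, 1000],
              "gamma": [1.0 / (2 * s**2) for s in sigmas]}   # assumed sigma-to-gamma mapping

search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```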
Table 5. Test results of FFB type classification based on GLCM and BGLAM using ANN, KNN and SVM.
Image Size | Classifier | GLCM: Testing Accuracy (%) | GLCM: Time (s) | BGLAM: Testing Accuracy (%) | BGLAM: Time (s)
ROI1 | ANN | 89.00 | 4.08 | 91.00 | 1.02
ROI1 | KNN | 87.00 | 4.06 | 75.00 | 0.98
ROI1 | SVM | 79.00 | 4.7 | 90.00 | 1.74
ROI2 | ANN | 86.00 | 0.553 | 89.00 | 0.43
ROI2 | KNN | 84.00 | 0.50 | 76.00 | 0.38
ROI2 | SVM | 67.00 | 1.13 | 77.00 | 1.14
ROI3 | ANN | 86.00 | 0.56 | 93.00 ** | 0.44 **
ROI3 | KNN | 77.00 | 0.51 | 89.00 | 0.40
ROI3 | SVM | 69.00 | 1.22 | 82.00 | 1.32
Note: ** = The best result.
Table 6. Test results of FFB ripeness classification based on statistical and histogram features using ANN, KNN and SVM.
T | C | Image Size | Statistical Color Feature: Testing Accuracy (%) | Statistical Color Feature: Time (s) | Color Histogram: Testing Accuracy (%) | Color Histogram: Time (s)
T1 | ANN | ROI1 | 93.00 | 3 | 92.00 | 2.7
T1 | ANN | ROI2 | 92.00 | 2.5 | 92.00 | 1.4
T1 | ANN | ROI3 | 92.00 | 2.65 | 94.00 ** | 1.6 **
T1 | KNN | ROI1 | 81.00 | 2.3 | 81.00 | 2.5
T1 | KNN | ROI2 | 82.00 | 1.2 | 82.00 | 1.2
T1 | KNN | ROI3 | 82.00 | 1.02 | 82.00 | 1.3
T1 | SVM | ROI1 | 81.00 | 5.3 | 81.00 | 7
T1 | SVM | ROI2 | 81.00 | 4.5 | 82.00 | 6
T1 | SVM | ROI3 | 80.00 | 5 | 80.00 | 7
T2 | ANN | ROI1 | 93.00 | 3 | 93.00 | 2.7
T2 | ANN | ROI2 | 92.00 | 2.5 | 93.00 | 1.4
T2 | ANN | ROI3 | 91.00 | 2.65 | 94.00 ** | 1.6 **
T2 | KNN | ROI1 | 92.00 | 2.3 | 93.00 | 2.5
T2 | KNN | ROI2 | 91.00 | 1.2 | 90.00 | 1.2
T2 | KNN | ROI3 | 92.00 | 1.02 | 92.00 | 1.3
T2 | SVM | ROI1 | 90.00 | 5.3 | 90.00 | 7
T2 | SVM | ROI2 | 90.00 | 4.5 | 90.00 | 6
T2 | SVM | ROI3 | 82.00 | 5 | 91.00 | 7
T3 | ANN | ROI1 | 91.00 | 3 | 90.00 | 2.7
T3 | ANN | ROI2 | 88.00 | 2.5 | 93.00 ** | 1.4 **
T3 | ANN | ROI3 | 90.00 | 2.65 | 92.00 | 1.6
T3 | KNN | ROI1 | 74.00 | 2.3 | 78.00 | 2.5
T3 | KNN | ROI2 | 74.00 | 1.2 | 79.00 | 1.2
T3 | KNN | ROI3 | 87.00 | 1.02 | 79.00 | 1.3
T3 | SVM | ROI1 | 69.00 | 5.3 | 78.00 | 7
T3 | SVM | ROI2 | 73.00 | 4.5 | 78.00 | 6
T3 | SVM | ROI3 | 72.00 | 5 | 79.00 | 7
Notes: T = Types, T1 = Nigrescens, T2 = Oleifera, T3 = Virescens, C = Classifier, ** = The best result.
Table 7. Test results of FFB ripeness classification based on GLCM, BGLAM and Gabor features using ANN, KNN and SVM.
T | C | Image Size | GLCM: Testing Accuracy (%) | GLCM: Time (s) | BGLAM: Testing Accuracy (%) | BGLAM: Time (s) | Gabor: Testing Accuracy (%) | Gabor: Time (s)
T1 | ANN | ROI1 | 91.00 | 3.6 | 89.00 | 1 | 86.00 | 1.03
T1 | ANN | ROI2 | 89.00 | 1.7 | 90.00 | 0.40 | 81.00 | 0.43
T1 | ANN | ROI3 | 91.00 | 2.2 | 92.00 ** | 0.43 ** | 80.00 | 0.44
T1 | KNN | ROI1 | 79.00 | 2.8 | 80.00 | 1 | 74.00 | 1.04
T1 | KNN | ROI2 | 77.00 | 1.5 | 82.00 | 0.39 | 77.00 | 0.39
T1 | KNN | ROI3 | 78.00 | 1.6 | 81.00 | 0.40 | 77.00 | 0.41
T1 | SVM | ROI1 | 80.00 | 7.7 | 80.00 | 1.9 | 76.00 | 1.97
T1 | SVM | ROI2 | 76.00 | 5.5 | 81.00 | 1 | 82.00 | 1.79
T1 | SVM | ROI3 | 79.00 | 6 | 81.00 | 0.85 | 74.00 | 2.00
T2 | ANN | ROI1 | 92.00 | 3.6 | 92.00 | 1 | 83.00 | 1.03
T2 | ANN | ROI2 | 91.00 | 1.7 | 93.00 ** | 0.40 ** | 79.00 | 0.43
T2 | ANN | ROI3 | 92.00 | 2.2 | 93.00 | 0.43 | 83.00 | 0.44
T2 | KNN | ROI1 | 79.00 | 2.8 | 90.00 | 1 | 78.00 | 1.04
T2 | KNN | ROI2 | 80.00 | 1.5 | 91.00 | 0.39 | 90.00 | 0.39
T2 | KNN | ROI3 | 91.00 | 1.6 | 92.00 | 0.40 | 79.00 | 0.41
T2 | SVM | ROI1 | 75.00 | 7.7 | 92.00 | 1.9 | 76.00 | 1.97
T2 | SVM | ROI2 | 79.00 | 5.5 | 90.00 | 1 | 88.00 | 1.79
T2 | SVM | ROI3 | 80.00 | 6 | 92.00 | 0.85 | 89.00 | 2.00
T3 | ANN | ROI1 | 87.00 | 3.6 | 88.00 | 1 | 82.00 | 1.03
T3 | ANN | ROI2 | 89.00 | 1.7 | 93.00 ** | 0.40 ** | 81.00 | 0.43
T3 | ANN | ROI3 | 86.00 | 2.2 | 91.00 | 0.43 | 77.00 | 0.44
T3 | KNN | ROI1 | 76.00 | 2.8 | 77.00 | 1 | 74.00 | 1.04
T3 | KNN | ROI2 | 72.00 | 1.5 | 79.00 | 0.39 | 73.00 | 0.39
T3 | KNN | ROI3 | 72.00 | 1.6 | 90.00 | 0.40 | 78.00 | 0.41
T3 | SVM | ROI1 | 65.00 | 7.7 | 76.00 | 1.9 | 72.00 | 1.97
T3 | SVM | ROI2 | 64.00 | 5.5 | 79.00 | 1 | 67.00 | 1.79
T3 | SVM | ROI3 | 67.00 | 6 | 80.00 | 0.85 | 68.00 | 2.00
Notes: T = Types, T1 = Nigrescens, T2 = Oleifera, T3 = Virescens, C = Classifier, ** = The best result.
Table 8. Test results of FFB ripeness classification with statistical, histogram, GLCM, BGLAM and Gabor features using ANN, KNN and SVM.
T | Technique | ROI1: Testing Accuracy (%) | ROI1: Time (s) | ROI2: Testing Accuracy (%) | ROI2: Time (s) | ROI3: Testing Accuracy (%) | ROI3: Time (s)
T1 | Statistical and KNN | 79.00 | 4.06 | 78.00 | 2.3 | 79.00 | 2.5
T1 | Histogram and SVM | 80.00 | 9 | 81.00 | 8 | 81.00 | 8.5
T1 | GLCM and ANN | 87.00 ** | 3.7 ** | 84.00 | 2.6 | 85.00 | 2.9
T1 | BGLAM and ANN | 76.00 | 1.6 | 79.00 | 0.8 | 78.00 | 0.83
T1 | Gabor and KNN | 78.00 | 1.6 | 74.00 | 0.78 | 74.00 | 0.84
T2 | Statistical and KNN | 87.00 | 4.06 | 90.00 | 2.3 | 79.00 | 2.5
T2 | Histogram and SVM | 88.00 | 9 | 80.00 | 8 | 79.00 | 8.5
T2 | GLCM and ANN | 91.00 | 3.7 | 89.00 | 2.6 | 88.00 | 2.9
T2 | BGLAM and SVM | 91.00 | 2.5 | 89.00 | 1.36 | 91.00 ** | 1.20 **
T2 | Gabor and KNN | 89.00 | 1.6 | 82.00 | 0.78 | 86.00 | 0.84
T3 | Statistical and ANN | 84.00 | 4.06 | 86.00 | 2.3 | 87.00 | 2.5
T3 | Histogram and SVM | 73.00 | 9 | 77.00 | 8 | 79.00 | 8.5
T3 | GLCM and ANN | 88.00 | 3.7 | 84.00 | 2.6 | 87.00 | 8.5
T3 | BGLAM and ANN | 84.00 | 1.6 | 82.00 | 0.8 | 87.00 ** | 0.83 **
T3 | Gabor and KNN | 74.00 | 1.6 | 73.00 | 0.78 | 78.00 | 0.84
Notes: T = Types, T1 = Nigrescens, T2 = Oleifera, T3 = Virescens, ** = The best result.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
