Review

Application of Machine Vision Technology in Citrus Production

Kaiqian Peng, Wei Ma, Jinzhu Lu, Zhiwei Tian and Zhiming Yang

1 Modern Agricultural Equipment Research Institute, Xihua University, Chengdu 610039, China
2 School of Mechanical Engineering, Xihua University, Chengdu 610039, China
3 Institute of Urban Agriculture, Chinese Academy of Agricultural Sciences, Chengdu 610213, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(16), 9334; https://doi.org/10.3390/app13169334
Submission received: 20 July 2023 / Revised: 9 August 2023 / Accepted: 14 August 2023 / Published: 17 August 2023

Abstract:
The construction of standardized citrus orchards is the main trend in the future development of modern agriculture worldwide. As one of the most widely used and mature technologies in the agricultural field, machine vision has greatly promoted the industrialized development of the citrus industry. This paper summarizes the application of machine vision technology in citrus production, including pest and disease detection, harvesting identification and localization, and fruit grading. We compare the advantages and disadvantages of the relevant research and analyze the existing problems and prospects for future work. Because the in-field environment is complex and changeable, robots may experience unpredictable interference during recognition, which leads to errors in target fruit localization. The lack of datasets also affects the accuracy and stability of the algorithms; in addition to expanding the datasets, further research on the algorithms themselves is necessary. Moreover, existing research focuses on indoor monitoring methods, which are impractical in the changeable outdoor environment. Therefore, diversifying sample datasets, designing agricultural robots suited to complex environments, developing high-quality image processing hardware and intelligent parallel algorithms, and adding dynamic monitoring methods are the main directions for future research. Although machine vision has certain limitations, it remains a technology with strong development potential.

1. Introduction

Citrus is the fruit with the highest yield and the most extensive planting area in the world and is one of the most important cash crops; it is grown in tropical and subtropical regions. Citrus is rich in vitamin C, which helps meet people's daily nutritional needs and supports the immune system [1]. Global citrus production has continued to grow steadily in recent years, with total production from 2010 to 2023 exceeding 130 million tons, a growth of about 125 percent. Every link of citrus production is crucial and directly or indirectly affects yield. With the development of agricultural modernization, the efficiency and accuracy of citrus production are constrained by shortages of expert labor, time, and effort.
In recent years, machine vision has been applied throughout agricultural machinery, including picking robots, crop seed and fruit sorting, crop growth monitoring, and the detection of diseases, insect pests, and weeds, and it plays an important role in automated agricultural production. Machine vision builds on computer vision and consists of a hardware system and a software system. The hardware includes the light source, the image processing card, auxiliary actuators, and so on; the software mainly comprises image processing software and image processing methods. A camera captures the target under suitable conditions; after the image information is obtained, the vision software processes the acquired image, and a signal is then transmitted through the corresponding control device to set the actuator to work.
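To make this capture-process-actuate loop concrete, the following minimal Python/OpenCV sketch captures frames, segments orange-colored pixels, and stands in for an actuator command with a print statement; the camera index, HSV thresholds, and trigger threshold are illustrative assumptions rather than settings from any cited system.

```python
# Minimal sketch of a machine vision capture-process-actuate loop.
# Camera index, HSV thresholds, and the "actuate" step are illustrative assumptions.
import cv2
import numpy as np

def segment_fruit(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of orange-colored pixels using a rough HSV threshold."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, (10, 120, 80), (30, 255, 255))

def main():
    cap = cv2.VideoCapture(0)                 # assumed camera index
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = segment_fruit(frame)
            if cv2.countNonZero(mask) > 500:  # arbitrary trigger threshold
                # In a real system this is where a control signal would be sent
                # to the actuator (gripper, sorter, sprayer, ...).
                print("target detected")
            cv2.imshow("mask", mask)
            if cv2.waitKey(1) == 27:          # Esc to quit
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```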
Applying machine vision technology in the citrus production process can effectively improve efficiency in links such as disease and pest detection, harvesting identification and positioning, and fruit grading. The technology has good application prospects, and, driven by it, many aspects of citrus production can be developed further. This article reviews the application of machine vision in citrus disease and pest detection, picking recognition and positioning, and fruit grading; compares the advantages and disadvantages of the relevant research; and analyzes current problems and prospects for future work. This can help improve the accuracy and efficiency of the various stages of citrus production and provide a reference for the development of machine vision technology in agriculture.

2. Pest and Disease Detection

The quality of citrus is affected not only by climate, environment, and planting conditions but also by diseases and pests [2]. Citrus is grown in huge quantities, and the many types of diseases and pests that attack it are highly harmful: diseased plants can quickly spread pathogens to healthy plants, leading to a significant reduction in yield or even the death of citrus trees. Citrus diseases and pests hinder the development of the agricultural economy and are among the main reasons for reduced citrus yield. It is therefore very important to help fruit farmers judge and prevent citrus diseases effectively and quickly before harvest, and to control diseases and reduce losses in a timely manner. At present, the main citrus diseases include huanglongbing, canker, anthracnose, sooty mold, greasy spot, scab, foot rot, and resin disease [3], and the main insect pests include red spider mites, psyllids, stink bugs, scale insects, leaf miners, and long-horned beetles [4]. Diseases occur mainly on leaves and fruits, so both parts are discussed here.

2.1. Pest and Disease Identification Based on RGB Image

2.1.1. Traditional Machine Learning Disease Recognition

The traditional machine learning approach first uses mobile devices such as phones or cameras to capture RGB images of plant diseases and build a dataset. Preprocessing is then applied to remove complex backgrounds or segment away irrelevant regions so that the diseased area stands out, after which hand-designed image features are extracted to describe the disease. Finally, supervised or unsupervised classifiers are used to classify the images. Feature extraction can draw on the color space, texture, and shape characteristics of the image. Figure 1 shows the traditional identification process.
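A minimal sketch of such a hand-crafted-feature pipeline is given below, assuming OpenCV and scikit-learn; the specific features (a coarse HSV histogram plus simple Laplacian texture statistics) and the SVM classifier are illustrative choices, not the exact descriptors used in the cited studies.

```python
# Sketch of the traditional pipeline: hand-crafted color/texture features + SVM.
# Feature choices and data loading are assumptions for illustration.
import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def color_texture_features(img_bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    # Color feature: coarse 3D HSV histogram
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 4, 4], [0, 180, 0, 256, 0, 256])
    hist = cv2.normalize(hist, None).flatten()
    # Texture feature: Laplacian response statistics as a simple stand-in for GLCM
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    return np.concatenate([hist, [lap.mean(), lap.std()]])

def train(images, labels):
    """images: list of BGR arrays, labels: disease class per image (loaded elsewhere)."""
    X = np.array([color_texture_features(im) for im in images])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))
    return clf
```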
Wei Jiaxi [5] proposed a disease image segmentation method based on naive Bayes for citrus canker on leaves. The method was compared with a threshold-based method and a support-vector-machine-based segmentation method and was evaluated by combining subjective scatter plot analysis with the image mis-segmentation rate; although the naive-Bayes-based segmentation takes longer, its mis-segmentation rate is the smallest. Zhang Jianmin et al. [6] proposed an improved image retrieval algorithm based on the HSV color histogram for citrus disease monitoring, which uses the color space to display color changes and applies non-uniform quantization to leaf color. The proposed HSV non-uniform quantization scheme (16:4:4) reflects the color characteristics of citrus diseases and the global features of the image more clearly than the traditional scheme (8:3:3). Zhang, M. [7] detected citrus canker in images collected outdoors, introduced a classification strategy, proposed a canker feature descriptor combining leaf color and texture information, and used the AdaBoost algorithm to select the most important features; experiments show that this method achieves high classification accuracy. H. Ali et al. [8] used Delta E, RGB, HSV, and LBP as descriptors for citrus disease images. Compared with other classifiers, the bagged tree ensemble classifier discriminated color features better, with an overall classification accuracy of 99%, an area under the curve of 1.0, and a sensitivity of 99.7%. Deng, X. [9] proposed a huanglongbing detection method based on visible-spectrum image processing and C-SVC; in addition to LBP features, feature vectors were constructed from statistics of the hue, saturation, intensity, and gray-image histograms, and dimensionality was reduced by PCA. Gavhale, K.R. [10] proposed a method for detecting diseased areas of citrus leaves: RGB images were converted into the YCbCr and Lab spaces, k-means clustering was used to segment the diseased pixels on the leaves, and the candidate regions were finally classified using gray-level co-occurrence matrix features.
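One possible reading of the 16:4:4 non-uniform quantization in [6] is sketched below; the exact bin edges used by the authors are not stated, so uniform bins over OpenCV's H, S, and V ranges are assumed.

```python
# Assumed reading of an HSV 16:4:4 quantized color histogram as a global color feature.
import cv2
import numpy as np

def hsv_quantized_histogram(img_bgr, bins=(16, 4, 4)):
    h, s, v = cv2.split(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV))
    hq = (h.astype(np.int32) * bins[0]) // 180        # OpenCV hue range is 0-179
    sq = (s.astype(np.int32) * bins[1]) // 256
    vq = (v.astype(np.int32) * bins[2]) // 256
    idx = hq * bins[1] * bins[2] + sq * bins[2] + vq  # one combined bin index per pixel
    hist = np.bincount(idx.ravel(), minlength=bins[0] * bins[1] * bins[2])
    return hist / hist.sum()                          # normalized global color feature
```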
Traditional machine learning methods identify citrus diseases by, for example, studying the color space of diseased leaves, changing the disease segmentation method, or selecting prominent features. Because the diseased areas in crop disease images are complex and irregular, it is difficult for traditional methods to select and extract the best features for disease recognition, and preprocessing requires considerable time and manpower. Manual extraction of disease features is slow, subjective, and limited, which leads to low recognition accuracy.

2.1.2. Deep Learning for Pest and Disease Recognition

As a branch of machine learning, deep learning has better effect on feature extraction. The current common deep learning networks are shown in Figure 2.
Building on the traditional methods, deep-learning-based citrus disease recognition has been studied. Figure 3 shows the image recognition process for citrus pests and diseases under deep learning methods. To improve recognition accuracy, Luo Jun [11] proposed a ResNet34-based method for multi-category citrus disease classification in complex natural environments and, at the same time, combined unsupervised clustering image segmentation, depthwise separable convolution, and transfer learning to identify the degree of disease. Based on a convolutional neural network and hyperspectral technology, Wang Jiantao et al. [12] collected 81 bands of diseased citrus leaves at 480–900 nm and input them into the VGG-16 model for classification; the average accuracy of the model was 98.75%. Shi Yu et al. [13] added the SE attention module to the Res2Net algorithm, producing an E-Res2Net algorithm with enhanced feature extraction that classifies citrus leaf diseases into eight categories. Hu Dingyi [14] used the data augmentation strategy ORGAA and constructed an ORGNet classification model based on the PC-DARTS neural architecture to improve recognition accuracy and address the small sample size of the pre-harvest lesion citrus dataset; to improve the classification and detection accuracy of post-harvest defective citrus, a ResNet18 deep residual network was used as the feature extraction network for SSD, retaining the C4 and C5 feature maps, and comparative experiments on input image resolution settled on 768 × 768 pixels. Song Zhongshan et al. [15] proposed a Faster R-CNN regional detection network based on binarization, transforming the fully connected layers of the original model into a binarized fully convolutional network; the improved model is lighter, and its recognition time is reduced by 0.53 s compared with the original. Hu Jiapei et al. [16] improved the neck network of the YOLOv4-Tiny model, using the detailed information of the shallow network to improve recognition of small citrus psyllid targets; with Mosaic data augmentation, the average recognition precision of the new model for citrus psyllids reached 96.16%. Su Hong et al. [17] used an R-CNN model with a 33-layer ResNet backbone to identify citrus huanglongbing, red spider mite infestation, and canker; although relatively few convolutional layers were used, the model still achieved high recognition accuracy. Zhang, X. et al. [18] proposed a deep learning pipeline composed of a detection network and a classification network to automatically identify orchard citrus diseases, using an optimized YOLO-V4 network for detection and an EfficientNet network for classification.
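For readers unfamiliar with how such classifiers are typically trained, the sketch below fine-tunes an ImageNet-pretrained ResNet34 on a folder of citrus disease images with PyTorch/torchvision (version 0.13 or later assumed); the dataset path, class count, and hyperparameters are illustrative assumptions rather than settings from the cited papers.

```python
# Minimal transfer-learning sketch in the spirit of the ResNet34-based classifiers above.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 8  # e.g. eight citrus leaf disease categories (assumed)

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/citrus_diseases/train", transform=tfm)  # assumed layout
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the ImageNet head

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```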
Generally, a single specific network model sometimes cannot accurately extract and identify disease characteristics, so model fusion has been proposed. Huang Ping et al. [19] designed an improved VGG19-INC model based on VGG19 for highly similar citrus diseases that a single model cannot identify well, adding an Inception high-dimensional feature fusion module so that the new model has higher recognition precision and strong generalization ability. Tie Jun et al. [20] obtained an S-ResNet model by discarding the identity mapping in part of the residual structure and an M-ResNet model by using a 7 × 7 convolution kernel to express deep features; the F-ResNet model obtained by fusing S-ResNet and M-ResNet overcomes the weak generalization ability and poor robustness of a single model and improves the accuracy of citrus disease image recognition. Li Hao [21] studied a YOLOv4-based citrus leaf detection algorithm and, on the basis of convolutional neural networks, fused features extracted by the DenseNet and Xception networks to classify red spider mite damage, canker, leaf miner damage, and healthy leaves; transfer learning was introduced to compensate for the insufficient dataset. The proposed fusion convolutional classification model reduced training time and improved detection accuracy.
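A minimal reading of such feature-level fusion is sketched below: features from two backbones are concatenated and passed to a shared classification head. Because Xception is not available in torchvision, ResNet18 stands in for the second branch; the backbone choices, head size, and class count are assumptions rather than the architecture of [21].

```python
# Sketch of feature-level fusion of two CNN backbones for classification.
import torch
import torch.nn as nn
from torchvision import models

class FusedClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        dense = models.densenet121(weights=None)
        res = models.resnet18(weights=None)
        # Branch A: DenseNet121 feature extractor -> global average pool -> 1024-dim vector
        self.branch_a = nn.Sequential(dense.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Branch B: ResNet18 without its final fc layer -> 512-dim vector
        self.branch_b = nn.Sequential(*list(res.children())[:-1], nn.Flatten())
        self.head = nn.Linear(1024 + 512, num_classes)

    def forward(self, x):
        fused = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        return self.head(fused)

# usage sketch: logits = FusedClassifier()(torch.randn(2, 3, 224, 224))
```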
Some international scholars have also carried out related research. Sharif, M. et al. [22] proposed a hybrid method based on optimized weighted segmentation and feature selection to automatically detect and classify six citrus diseases. In the preprocessing stage, a top-hat filter and a Gaussian function are used to improve the contrast of the input image, which helps segment the disease spots and improves segmentation accuracy; feature selection based on PCA score, entropy, and the skewness covariance vector retains the most prominent features, which are then classified by a multi-class SVM. Khanramaki, M. et al. [23] proposed an ensemble of deep convolutional neural network classifiers for identifying citrus pests, including citrus leaf miner, sooty mold, and Pulvinaria, considering diversity at the data, feature, and classifier levels to increase the diversity of the base classifiers.
In summary, deep learning methods can achieve relatively ideal results in disease identification. Table 1 compares the advantages and disadvantages of the above-mentioned deep learning methods. Improving individual modules of a model, using small-sample transfer learning, and fusing models can enrich the network, improve its robustness, and raise recognition accuracy.

2.2. Identification of Pests and Diseases Based on Spectral Technology

With the development of modern agriculture, spectral imaging technology has been gradually applied to the identification of crop diseases, instead of the traditional detection methods. By combining spectral analysis technology and image analysis technology, spectral imaging technology can conduct qualitative and quantitative analysis on the tested objects and realize the rapid detection of citrus diseases [24].
Compared with traditional recognition technology, hyperspectral imaging can achieve non-destructive, real-time, and highly accurate detection; it can acquire image information of targets in any waveband and spectral information for every pixel in the image, and it can detect changes in internal physiological information at early stages of crop infection [25]. Figure 4 shows the test diagram of the hyperspectral imaging system.
Hyperspectral signatures are distinctive, and there are obvious differences between the spectra of healthy and infected tissue, so early identification of citrus diseases based on hyperspectral characteristics is possible. Wu Yelan et al. [26] used a hyperspectral imager to collect the reflectance of regions of interest within 478–900 nm from healthy leaves, canker leaves, herbicide-damaged leaves, red spider mite leaves, and sooty-mold-diseased leaves as spectral information, and preprocessed it with three methods: first-order derivative, multiplicative scatter correction, and standard normal variate transformation. PCA was used to extract characteristic wavelengths from the preprocessed data, and SVM and random forest models were established to identify the diseases. In 2023, Wu Yelan et al. [27] proposed a bidirectional gated recurrent unit recurrent neural network classification model integrating an attention mechanism (Att-BiGRU-RNN), which can extract deep features from spectral information and increase the contribution of important spectral features to the classification model; the overall classification accuracy reached 98.21%. Deng et al. [28] proposed a non-destructive field detection method for huanglongbing based on hyperspectral reflectance; a feature band extraction method based on entropy distance and sequential backward selection provides dimensionality reduction, and machine learning algorithms such as logistic regression, decision tree, support vector machine, k-nearest neighbor, linear discriminant analysis, and ensemble learning are used to classify the disease. Tian et al. [29] used image processing methods such as principal component analysis, pseudo-color image conversion, and an improved watershed segmentation algorithm (IWSA) to analyze the feasibility of detecting citrus decay from hyperspectral transmittance images (325–1098 nm) of sound and decayed citrus.
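As a rough illustration of this spectra-to-classifier workflow, the sketch below reduces ROI-averaged reflectance spectra with PCA and evaluates SVM and random forest classifiers with scikit-learn; the data files, band count, and component count are assumptions, and the spectral preprocessing steps used in the cited studies (derivatives, MSC, SNV) are omitted for brevity.

```python
# Sketch of the hyperspectral workflow: PCA dimensionality reduction + SVM / random forest.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# spectra: (n_samples, n_bands) mean ROI reflectance; labels: disease class per sample.
spectra = np.load("citrus_roi_spectra.npy")  # assumed data file
labels = np.load("citrus_roi_labels.npy")    # assumed data file

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Random forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    scores = cross_val_score(pipe, spectra, labels, cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```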
Citrus diseases are complex and changeable, and some similar diseases are difficult to distinguish. In addition to the above processing of spectral information to improve classification accuracy, some researchers use supervised partial least squares discriminant analysis to better classify and identify different diseases. Liu Yande et al. [30] collected hyperspectral images of five kinds of citrus leaves in the 380–1080 nm range and established two discriminative models, least squares support vector machine (LS-SVM) and partial least squares discriminant analysis (PLS-DA), for non-destructive detection and disease grade discrimination of citrus huanglongbing; the results show that the latter has better prediction ability, with a misjudgment rate of only 5.6%. Yao Weixuan et al. [31] extracted regions of interest from hyperspectral images of citrus leaf surfaces, computed the average spectral data, calculated vegetation indices, and identified and classified the diseases with a PLS-DA discriminative model. Luo et al. [32] collected visible and near-infrared hyperspectral images (325–1000 nm) of three types of citrus tissue; the spectral variables were optimized using a combination of bootstrap soft shrinkage (BOSS) and the BOSS sequential projection algorithm (BOSS-SPA), PLS-DA models based on three tissue types and two tissue types were constructed, and a fast multispectral image processing algorithm combined with global threshold theory was used to detect decayed citrus.
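PLS-DA has no dedicated estimator in scikit-learn; a common workaround, assumed in the sketch below, is PLS regression against one-hot encoded class labels followed by an arg-max over the predicted columns. The data files and component count are placeholders.

```python
# Sketch of PLS-DA via PLS regression on one-hot labels (multi-class case assumed).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer

X = np.load("leaf_spectra.npy")   # (n_samples, n_bands), assumed data
y = np.load("leaf_classes.npy")   # disease grade labels, assumed data

lb = LabelBinarizer()
Y = lb.fit_transform(y)           # one-hot targets, one column per class
X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(X, Y, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8).fit(X_tr, Y_tr)
pred = lb.classes_[np.argmax(pls.predict(X_te), axis=1)]  # class with the largest response
print("misjudgment rate:", np.mean(pred != y_te))
```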
Table 2 compares the three approaches: the traditional method, the deep learning method, and the hyperspectral imaging technique.

3. Fruit Grading

Grading and testing are key steps in the commercialization of citrus. The improvement in citrus yield raises the requirements for fruit quality to a certain extent. Early fruit quality inspection relied on manual labor, and machinery later replaced most of this labor; however, because the accuracy of such machinery is low, machine-vision-based fruit detection systems are now used to process images and realize fruit classification, recognition, and quality detection [33]. In this section, the ripeness, defect, and shape grading of citrus are reviewed. Figure 5 shows the flow chart of the citrus grading system.

3.1. Maturity Grading

Identifying citrus ripeness is one aspect of assessing citrus quality and a prerequisite for selective harvesting, since citrus of different ripeness has different uses. Zhang Xiaohua et al. [34] converted acquired citrus RGB images into the Lab color space, selected the component most distinct from the background color to eliminate the background, and applied the Hough circle transform to recognize and count mature citrus. Hu Youcheng et al. [35] used the color features of RGB images to generate feature vectors, performed feature dimensionality reduction to obtain the best ROI for segmentation, and used an SVM algorithm for segmentation and recognition. In the RGB color space, Wang Jianhei et al. [36] used the ratio transformation between R and G to obtain images that were orthogonal and weakly correlated with the fused H and S images; combined with a mask method, object features were effectively extracted and citrus fruits of different maturity levels were effectively recognized. Chen, S. et al. [37] proposed a citrus fruit maturity method combining visual saliency and a convolutional neural network to identify three maturity levels. The improved visual saliency algorithm MSSS shows that mature fruits appear bright and uniform, semi-mature fruits less so, and immature fruits very dark; after comparing networks, a four-channel ResNet34 was selected to combine the RGB image with the saliency map to determine the maturity level. Wajid, A. et al. [38] proposed a method to distinguish ripe, unripe, and scaled or rotten oranges, extracted image features based on boundary/internal pixel classification (BIC), including RGB color and gray values, and compared the performance and applicability of three algorithms: naive Bayes, artificial neural network, and decision tree. The results show that the decision tree is more effective than the other techniques. Momeny, M. et al. [39] developed a more robust deep convolutional neural network to detect citrus maturity levels and black spot disease using an effective noise-based data augmentation method that includes Gaussian, speckle, Poisson, and salt-and-pepper noise. Li Lang et al. [40] used an industrial camera combined with a flipping mechanism to obtain the complete surface information of citrus in motion; after preprocessing, the two-dimensional coloring ratio was obtained and its arithmetic mean was used to calculate the coloring rate for discrimination and classification. Yang Zhangpeng [41] converted citrus images from the RGB color space into the HSV color space, obtained the H component, and detected color consistency with a hue-interval statistical method; the fruits were divided into four grades (orange, yellow, light green, and dark green), and the overall classification accuracy reached 93.75%. Lu Jun et al. [42] used probabilistic neural networks to automatically grade citrus based on color moments and statistical texture features. In general, extracting and converting the fruit color space can effectively grade citrus maturity; studying skin brightness and noise can also achieve the same purpose.
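A minimal sketch of hue-interval maturity grading in the spirit of [41] is given below; the hue interval boundaries and grade names are assumptions, not the thresholds used by the author.

```python
# Sketch of maturity grading by dominant hue interval of the fruit pixels.
import cv2
import numpy as np

# Assumed OpenCV hue intervals (0-179) for the four color grades.
GRADES = [("orange", 0, 22), ("yellow", 22, 34), ("light green", 34, 50), ("dark green", 50, 90)]

def maturity_grade(img_bgr: np.ndarray, fruit_mask: np.ndarray) -> str:
    """fruit_mask: binary mask of fruit pixels (same height/width as the image)."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0][fruit_mask > 0]                      # hue of fruit pixels only
    counts = [np.sum((hue >= lo) & (hue < hi)) for _, lo, hi in GRADES]
    return GRADES[int(np.argmax(counts))][0]                # grade with the most pixels
```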

3.2. Defect Grading

External defects affect the storage and internal quality of citrus fruits [43], so defect detection is of great significance for ensuring citrus quality and value. Cao Leping et al. [44] calculated the distribution probability of pixels in different color intervals for pest- and disease-affected citrus fruits and used the computed complexity measure C(Y) and Shannon information entropy H(Y) as search terms in a pest and disease lookup table to perform machine recognition of defective fruit. Wang Xu et al. [45] used the illuminance-reflectance model [46] to perform brightness correction on citrus images with uneven contrast; the defect areas on the fruit surface can then be fully segmented from the corrected image using threshold segmentation. Hu, W. et al. [47] used a dual-light system to capture images of invisible citrus defects and optimized the YOLOv5 model by integrating the CBAM attention mechanism and using the DIoU loss function, improving average precision by 5.8% over the original model.
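The sketch below illustrates the general idea of illuminance-reflectance brightness correction followed by threshold segmentation; the Gaussian estimate of the illumination component, the Otsu threshold, and the mask handling are assumptions rather than the exact procedure of [45,46].

```python
# Sketch: divide out a low-frequency illumination estimate, then threshold dark defects.
import cv2
import numpy as np

def detect_defects(img_bgr: np.ndarray, fruit_mask: np.ndarray) -> np.ndarray:
    """fruit_mask: 8-bit binary mask of the fruit region."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) + 1.0
    illumination = cv2.GaussianBlur(gray, (0, 0), sigmaX=25)   # low-frequency estimate
    reflectance = gray / illumination                          # brightness-corrected image
    norm = cv2.normalize(reflectance, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu threshold; dark pixels inside the fruit region are candidate defects
    _, defects = cv2.threshold(norm, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return cv2.bitwise_and(defects, defects, mask=fruit_mask)
```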

3.3. Shape Grading

Quantifying fruit shape can add value to citrus. Zhu [48] compared the performance of different invariant moments, used Zernike moments to describe the shape features of citrus, calculated high-order Zernike moments of citrus images, and used k-means clustering for shape recognition. Iqbal, S. et al. [49] used four approaches, namely, a radius feature method, an area method, a perimeter method, and a LiDAR-sensor-based method, to estimate the diameter of citrus fruits and classify them into three size categories, improving existing citrus fruit size measurement methods based on light detection and ranging sensors.
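As a simple illustration of geometric size and shape grading along the lines of the radius, area, and perimeter methods in [49], the sketch below measures contour geometry with OpenCV; the pixel thresholds are placeholders, since a real system would calibrate pixels to millimeters.

```python
# Sketch of size/shape grading from the fruit contour (assumes a non-empty binary mask).
import cv2
import numpy as np

def size_grade(fruit_mask: np.ndarray):
    contours, _ = cv2.findContours(fruit_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)            # largest contour = the fruit
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, closed=True)
    radius = np.sqrt(area / np.pi)                    # equivalent-circle radius in pixels
    roundness = 4 * np.pi * area / perimeter ** 2     # 1.0 for a perfect circle
    if radius > 140:                                  # placeholder pixel thresholds
        grade = "large"
    elif radius > 100:
        grade = "medium"
    else:
        grade = "small"
    return grade, roundness
```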

4. Harvesting

With the sustainable development of modern agriculture, automation technology is increasingly important in agricultural production. The traditional production methods of citrus orchards have many problems that hinder large-scale development, for example, high cost and high labor intensity, which seriously constrain the profitability of the citrus industry. An intelligent, automated harvesting method is therefore needed. Rapid and accurate identification and positioning of outdoor citrus, together with improved path-planning algorithms for picking robots, can improve harvesting efficiency. Based on machine vision technology, some scholars have carried out corresponding research. Figure 6 shows the picking diagram of the citrus-picking robot.
Currently, citrus harvesting is mainly done by hand, and the large volume of fruit results in high labor intensity and low productivity, so accurate identification of picking points on citrus plants is particularly important. Zhang Lu [50] proposed a citrus picking point recognition system based on deep learning and feature analysis, together with a fruit positioning system based on visual-servo laser directional ranging, by separating citrus recognition from fruit positioning; this system addresses light changes, bright spots, and shadow occlusion during harvesting. Bi Song et al. [51] proposed a citrus picking point recognition method based on the Hough transform, which effectively improves recognition accuracy by collecting citrus images in the natural environment, constructing citrus recognition models, and segmenting valid fruit regions. Tang Yang et al. [52] proposed a network model based on an improved YOLOv3-tiny, replacing the original loss function with the DIoU loss, using MobileNetv3-Small as the backbone with a new residual structure, and adding a simplified spatial pyramid pooling structure to the feature extraction network; the average recognition accuracy of the improved model is 96.52%.
Path planning for the picking robot can save time and improve picking efficiency. Huang Xubin [53] used a depth camera to collect citrus images, input them into an improved Mask R-CNN algorithm for target detection, converted the pixel coordinates of the citrus into world coordinates, and fed them into an improved ant colony algorithm for picking path planning. Liu Dun [54] built a simulation platform based on the ROS system to reconstruct the citrus harvesting environment and proposed the PGI-RRT* picking motion planning algorithm, based on a pre-picking guide point, to realize obstacle-avoiding motion planning for a citrus-picking robotic arm in unstructured environments. Chen Xin et al. [55] formulated citrus-picking path planning as a traveling salesman problem; their improved ant colony algorithm introduces an adaptive, time-varying pheromone concentration update mechanism that can realize the optimal picking path.
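The cited works solve this picking-order problem with improved ant colony algorithms; as a much simpler stand-in, the sketch below orders detected fruit positions with a nearest-neighbor heuristic to show what such a planner consumes and produces.

```python
# Simplified stand-in for picking-order planning: nearest-neighbor tour over fruit positions.
import numpy as np

def picking_order(points: np.ndarray, start: int = 0):
    """points: (n, 3) fruit coordinates in the robot frame; returns a visiting order."""
    n = len(points)
    unvisited = set(range(n)) - {start}
    order, current = [start], start
    while unvisited:
        # Visit the closest remaining fruit next (greedy heuristic, not optimal).
        nxt = min(unvisited, key=lambda j: np.linalg.norm(points[j] - points[current]))
        order.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return order

# usage sketch: picking_order(np.random.rand(8, 3))
```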
In the natural environment, the growth position of citrus is random and complex, and branches, leaves, light, and other external factors interfere with and occlude the fruit, which can cause the picking manipulator to make mistakes during its movement. To make the manipulator reach the target point accurately, its path planning must be studied. Jiang Kun [56] converted the pixel coordinates of citrus fruit into world coordinates, built a three-dimensional target positioning model, adopted the D-H method to establish the kinematic model of the picking manipulator, and derived quintic B-spline trajectory planning for simulation experiments. Liu Dun et al. [57] proposed an improved motion planning algorithm for a citrus-picking manipulator based on Informed RRT*; adding a pre-picking guide point between the start and the target allows a motion path meeting the picking requirements to be planned in a single pass, and simulation experiments show that planning time is reduced by 46%. Xiong Chunyuan et al. [58] proposed a path-planning method for citrus-picking robot arms in unstructured environments based on deep reinforcement learning and an artificial potential field; by introducing long short-term memory (LSTM) structures, they designed LSTM-Actor and LSTM-Critic networks and comprehensively compared three algorithms, LSTM-SAC, LSTM-DDPG, and LSTM-connect, showing that making full use of neural networks allows feasible paths in complex environments to be found faster and more accurately.
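The pixel-to-world conversion step mentioned above can be illustrated with the pinhole camera model plus a depth value from an RGB-D sensor, as in the sketch below; the intrinsic parameters and the extrinsic transform are placeholders that would normally come from camera and hand-eye calibration.

```python
# Sketch of back-projecting an image pixel to 3D using a depth value (pinhole model).
import numpy as np

fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0  # assumed camera intrinsics, in pixels

def pixel_to_camera(u: float, v: float, depth_m: float) -> np.ndarray:
    """Back-project pixel (u, v) with depth (meters) to camera-frame coordinates."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

def camera_to_world(p_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply the extrinsic / hand-eye transform (rotation R, translation t)."""
    return R @ p_cam + t

# usage sketch: camera_to_world(pixel_to_camera(350, 260, 0.8), np.eye(3), np.zeros(3))
```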
In addition to recognizing citrus in a static state, research on disturbed (swinging) fruit is another development direction. Xiong Juntao et al. [59] carried out dynamic analysis of citrus in a disturbed state and used an improved k-means clustering segmentation method combined with an optimized Hough circle fitting method to segment the selected citrus images; Hough line fitting was then performed on the binarized images to determine effective picking points. Ning Zhigang et al. [60] separated two collected citrus images from the background and used the interframe difference method, the horizontal minimum bounding rectangle method, and a competition function to identify the oscillation area of the segmented orange image. Although some progress has been made, recognition currently targets only fruit in a single state of motion, and, overall, this research is not yet mature.
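A minimal sketch of the interframe-difference idea for locating a swinging fruit is shown below; the difference threshold and morphological clean-up are assumptions rather than the settings of [60].

```python
# Sketch: difference two consecutive frames and bound the changed (oscillating) region.
import cv2
import numpy as np

def oscillation_region(frame_prev_bgr, frame_curr_bgr, thresh: int = 25):
    g1 = cv2.cvtColor(frame_prev_bgr, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame_curr_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g2)
    _, motion = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    motion = cv2.morphologyEx(motion, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Bounding box (x, y, w, h) of the largest moving region approximates the swing area.
    return cv2.boundingRect(max(contours, key=cv2.contourArea))
```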

5. Existing Problems and Prospect

With the rapid development of computer hardware and image processing technology, machine vision is being applied ever more widely in citrus production, mainly in recognition and localization for picking robots, pest and disease monitoring, and fruit quality evaluation, gradually moving toward industrialization and intelligence. In recent years, research on citrus production based on machine vision has made great progress, but some problems remain. Future research and application will focus on the following aspects:
(1)
Achieve diversity of dataset samples
At present, citrus research datasets based on machine vision mainly consist of RGB color images taken with cameras or mobile phones and spectral images collected by spectral imagers, while 3D image datasets containing richer information are rare. Subsequent studies can use RGB-D cameras to capture color and depth information simultaneously to improve the accuracy of citrus target recognition. In addition, fewer open-source citrus image datasets exist than for other fruit crops, and the disease types covered are mostly huanglongbing and canker, which affects the accuracy and stability of identification to a certain extent. In the future, the number of datasets and disease types should be increased to meet more research needs.
(2)
Developing agricultural robots suitable for complex environments
Machine vision is an emerging modern technology with considerable prospects and potential. Compared with standardized, idealized industrial environments, irregular natural agricultural environments are far more complex. When citrus is affected by unpredictable factors such as different growth states, light intensity, and leaf occlusion, a working robot may receive wrong signals because of these environmental changes and thus make wrong judgments or fail to make the next choice immediately, which seriously interferes with image extraction and recognition. How to improve on existing agricultural robots and overcome the complexity and irregularity of the natural environment is therefore a key point of future development.
(3)
Develop high-quality image processing hardware and intelligent parallel algorithms
In visual image processing, computers are needed for a series of follow-up tasks. To achieve fast and accurate processing, efficient servers and better hardware are needed to reduce the burden on the host. Deep learning network models have large numbers of parameters, occupy a lot of memory, and take a long time to run, which demands high-quality hardware. At present, the most commonly used algorithms are serial, whose processing speed cannot meet the work requirements, and some algorithms suit only a particular crop and lack generality. Developing parallel algorithms, or combined parallel-serial algorithms, would improve the analysis and processing efficiency of the image system.
(4)
Add dynamic monitoring mode
In citrus production, whether for quality monitoring or for picking recognition and positioning, the target objects are mostly static, and there are few studies on monitoring fruit in motion. In reality, not all objects are absolutely static; when affected by the external environment, citrus swings to a certain extent, and most existing studies have been carried out under artificially idealized conditions, so dynamic recognition performance is poor. Considering the factors encountered in real environments, increasing the dynamic monitoring of target objects is therefore also a major direction for future research.

6. Conclusions

This paper reviewed the literature on pest and disease detection, harvesting, and fruit grading in citrus production. For pest and disease detection, traditional machine learning methods, deep learning methods, and hyperspectral imaging technology were discussed, and the common points and advantages of the relevant studies were compared; as the most mainstream approach at present, the importance of deep learning cannot be underestimated. Research on ripeness, defect, and shape grading was described. In harvesting, agricultural robots are commonly used to recognize and position the target so that the robot arm can grasp it. The problems in the above research and the prospects for future work were analyzed. The lack of open-source citrus datasets means that models cannot be trained adequately or extract image features well, resulting in low recognition accuracy. Because outdoor environments are highly variable, citrus-picking robots make errors in fruit localization and recognition in complex natural environments, reducing picking accuracy and efficiency. In network model training, computer hardware also plays an indispensable role: good hardware is a basic condition for success, and high-precision, efficient algorithms are equally important. These problems restrict research on machine vision in citrus production to some extent. Even so, machine vision technology still has good development prospects, and its role in agriculture is becoming irreplaceable. In the future, whether for citrus or for other fruits and crops, machine vision technology will play an invaluable role.

Author Contributions

Conceptualization, K.P. and W.M.; writing—original draft preparation, K.P. and Z.T.; writing—review and editing, W.M., J.L. and Z.Y.; visualization, K.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Agricultural Science and Technology Innovation Program (ASTIP2023-34-IUA-10); the Chengdu Agricultural Science and Technology Center local financial special fund project (NASC2022KR08); Research and Application of Sky-Ground Integrated Hydroponic System for Smart Farms (NASC2023ST03); Development and Application of AI-Based Intelligent Equipment for Picking Fresh Tea Leaves in Hilly Terrain (Sichuan Science and Technology Plan Project 2022YFG0147); and Factory Agriculture Technology and Equipment (ASTIP2022-34-IUA-10).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhu, Q. A Study on Citrus Production Efficiency in China. Master’s Thesis, Zhongnan University of Economics and Law, Wuhan, China, 2020. [Google Scholar]
  2. Li, Z.; Wang, Z. Influential factors on the fruit quality of citrus. Guangxi Agric. Sci. 2006, 307–310. [Google Scholar]
  3. Liao, H. Occurrence and control of common diseases of citrus. Mod. Agric. Sci. Technol. 2022, 818, 60–61. [Google Scholar]
  4. Liu, Q. Technical points of citrus pest control. World Trop. Agric. Inf. 2022, 541, 61–62. [Google Scholar]
  5. Wei, J. The Study of Image Segmentation of Citrus Canker Based on Computer Vision. Master’s Thesis, Guangxi University, Nanning, China, 2019. [Google Scholar]
  6. Zhang, J.; Yu, D. Improved Algorithm of Color Space for Citrus Leaf Disease Monitoring. J. Agric. Mech. Res. 2019, 41, 38–42+47. [Google Scholar] [CrossRef]
  7. Zhang, M.; Meng, Q. Automatic citrus canker detection from leaf images captured in field. Pattern Recognit. Lett. 2011, 32, 2036–2046. [Google Scholar] [CrossRef]
  8. Ali, H.; Lali, M.I.; Nawaz, M.Z.; Sharif, M.; Saleem, B.A. Symptom based automated detection of citrus diseases using color histogram and textural descriptors. Comput. Electron. Agric. 2017, 138, 92–104. [Google Scholar] [CrossRef]
  9. Deng, X.; Lan, Y.; Hong, T.; Chen, J. Citrus greening detection using visible spectrum imaging and CSVC. Comput. Electron. Agric. 2016, 130, 177–183. [Google Scholar] [CrossRef]
  10. Gavhale, K.R.; Gawande, U.; Hajari, K.O. Unhealthy region of citrus leaf detection using image processing techniques. In Proceedings of the International Conference for Convergence for Technology 2014, Pune, India, 6–8 April 2014; pp. 1–6. [Google Scholar]
  11. Luo, J. Research on Citrus Disease Recognition Based on Deep Learning. Master’s Thesis, South Central University for Nationalities, Wuhan, China, 2021. [Google Scholar]
  12. Wang, J.; Wu, Y.; Liao, Y.; Chen, Y. Hyperspectral classification of citrus diseased leaves based on convolutional neural network. Inf. Technol. Informatiz. 2020, 240, 84–87. [Google Scholar]
  13. Shi, Y.; Li, M. Convolutional Neural Network Based Classification Algorithm for Citrus Disease Leaves. Inf. Comput. 2022, 34, 47–51. [Google Scholar]
  14. Hu, D. Research on Recognition and Detection Method of Defective Citrus Based on Deep Learning. Master’s Thesis, Huazhong Agricultural University, Wuhan, China, 2021. [Google Scholar]
  15. Song, Z.; Wang, J.; Zheng, L.; Tie, J.; Zhu, Z. Research on citrus pest identification based on RCNN. J. Chin. Agric. Mech. 2022, 43, 150–158. [Google Scholar] [CrossRef]
  16. Hu, J.; Li, Z.; Huang, H.; Hong, T.; Jiang, S.; Zeng, J. Citrus psyllid detection based on improved YOLOv4-Tiny model. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2021, 37, 197–203. [Google Scholar]
  17. Su, H.; Wen, G.; Xie, W.; Wei, M.; Wang, X. Research on Citrus Pest and Disease Recognition Method in Guangxi Based on Regional Convolutional Neural Network Model. Southwest China J. Agric. Sci. 2020, 33, 805–810. [Google Scholar] [CrossRef]
  18. Zhang, X.; Xun, Y.; Chen, Y. Automated identification of citrus diseases in orchards using deep learning. Biosyst. Eng. 2022, 223, 249–258. [Google Scholar] [CrossRef]
  19. Huang, P.; Bi, L.; Mo, Y.; Qin, B.; Lin, L.; Wan, H. Image Recognition Method of Citrus Diseases and Pests Based on Multiscale Feature Fusion. Radio Eng. 2022, 52, 407–416. [Google Scholar]
  20. Tie, J.; Luo, J.; Zheng, L.; Mo, H.; Long, J. Citrus disease recognition based on improved residual network. J. South Cent. Univ. Natl. (Nat. Sci. Ed.) 2021, 40, 621–630. [Google Scholar]
  21. Li, H. Research on Intelligent Online Monitoring System of Citrus Diseases Based on Deep Learning. Master’s Thesis, University of Electronic Science and Technology of China, Chengdu, China, 2021. [Google Scholar]
  22. Sharif, M.; Khan, M.A.; Iqbal, Z.; Azam, M.F.; Lali, M.I.U.; Javed, M.Y. Detection and classification of citrus diseases in agriculture based on optimized weighted segmentation and feature selection. Comput. Electron. Agric. 2018, 150, 220–234. [Google Scholar] [CrossRef]
  23. Khanramaki, M.; Askari Asli-Ardeh, E.; Kozegar, E. Citrus pests classification using an ensemble of deep learning models. Comput. Electron. Agric. 2021, 186, 106192. [Google Scholar] [CrossRef]
  24. Chen, B.; Yao, L. Advances in the application of spectral detection in the diagnosis of citrus Huanglongbing. J. Gannan Norm. Univ. 2018, 39, 69–72. [Google Scholar] [CrossRef]
  25. Hu, Z.; Zhang, Y.; Shang, J.; Zhang, K. Research progress of hyperspectral images in detection and identification of crop diseases. Jiangsu Agric. Sci. 2022, 50, 49–55. [Google Scholar] [CrossRef]
  26. Wu, Y.; Chen, Y.; Lian, X.; Liao, Y.; Gao, C.; Guan, H.; Yu, C. Study on the identification method of citrus leaves based on hyperspectral imaging technique. Spectrosc. Spectr. Anal. 2021, 41, 3837–3843. [Google Scholar]
  27. Wu, Y.; Guan, H.; Lian, X.; Yu, C.; Liao, Y. Classification of citrus diseased leaves based on hyperspectral and Att-BiGRU-RNN. Trans. Chin. Soc. Agric. Mach. 2023, 54, 216–223. [Google Scholar]
  28. Deng, X.; Huang, Z.; Zheng, Z.; Lan, Y.; Dai, F. Field detection and classification of citrus Huanglongbing based on hyperspectral reflectance. Comput. Electron. Agric. 2019, 167, 105006. [Google Scholar] [CrossRef]
  29. Tian, X.; Fan, S.; Huang, W.; Wang, Z.; Li, J. Detection of early decay on citrus using hyperspectral transmittance imaging technology coupled with principal component analysis and improved watershed segmentation algorithms. Postharvest Biol. Technol. 2020, 161, 111071. [Google Scholar] [CrossRef]
  30. Liu, Y.; Xiao, H.; Sun, X.; Zeng, T.; Zhang, Z.; Liu, W. Non-destructive Detection of Citrus Huanglong Disease Using Hyperspectral Image Technique. Trans. Chin. Soc. Agric. Mach. 2016, 47, 231–238+277. [Google Scholar]
  31. Yao, W.; Dai, F.; Yang, K.; Deng, X.; Li, S.; Zhou, K.; Li, P. Citrus huanglongbing nondestructive testing and classification model construction based on spectral information. Guangdong Agric. Sci. 2014, 41, 65–69+237. [Google Scholar] [CrossRef]
  32. Luo, W.; Fan, G.; Tian, P.; Dong, W.; Zhang, H.; Zhan, B. Spectrum classification of citrus tissues infected by fungi and multispectral image identification of early rotten oranges. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2022, 279, 121412. [Google Scholar] [CrossRef]
  33. Chen, B.; Chen, G. Review of fruit classification recognition and maturity detection technology. Comput. Era. 2022, 361, 62–65. [Google Scholar] [CrossRef]
  34. Zhang, X.; Ma, R.; Wu, Z.; Huang, Z.; Wang, J. Fast Detection and Yield Estimation of Ripe Citrus Fruit Based on Machine Vision. Guangdong Agric. Sci. 2019, 46, 156–161. [Google Scholar] [CrossRef]
  35. Hu, Y.; Xu, H.; Huang, L.; Liu, S.; Yang, C. Segmentation and recognition of mature citrus and foliage based on regional features. Mod. Manuf. Eng. 2019, 464, 70–76. [Google Scholar] [CrossRef]
  36. Wang, J.; Ouyang, Q.; Chen, Q.; Cai, J.; Wang, F.; Lv, Q. Adaptive recognition of different maturity citrus in natural scenes. Opt. Optoelectron. Technol. 2009, 7, 56–58+62. [Google Scholar]
  37. Chen, S.; Xiong, J.; Jiao, J.; Xie, Z.; Huo, Z.; Hu, W. Citrus fruits maturity detection in natural environments based on convolutional neural networks and visual saliency map. Precis. Agric. 2022, 23, 1515–1531. [Google Scholar] [CrossRef]
  38. Wajid, A.; Singh, N.K.; Junjun, P.; Mughal, M.A. Recognition of ripe, unripe and scaled condition of orange citrus based on decision tree classification. In Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 3–4 March 2018; pp. 1–4. [Google Scholar]
  39. Momeny, M.; Jahanbakhshi, A.; Neshat, A.A.; Hadipour-Rokni, R.; Zhang, Y.-D.; Ampatzidis, Y. Detection of citrus black spot disease and ripeness level in orange fruit using learning-to-augment incorporated deep networks. Ecol. Inform. 2022, 71, 101829. [Google Scholar] [CrossRef]
  40. Li, L.; Wen, T.; Dai, X.; Wang, Z. Online detection and grading system for citrus full-surface color. Food Mach. 2022, 38, 121–126. [Google Scholar] [CrossRef]
  41. Yang, Z. Research on Grading Technology and Control System of Citrus Surface Color. Master’s Thesis, Central South University of Forestry and Technology, Changsha, China, 2019. [Google Scholar]
  42. Lu, J.; Fu, X.; Miao, C.; Zhang, W.; Ding, R. Citrus automatic grading by using color and texture features. J. Huazhong Agric. Univ. 2012, 31, 783–786. [Google Scholar] [CrossRef]
  43. Sun, R.; Song, J.; Zhang, M.; Li, P.; Lv, Q. Application of Machine Vision in External Quality Detection and Grading of Citrus in China. Agric. Eng. 2019, 9, 47–51. [Google Scholar]
  44. Cao, L.; Wen, Z. Citrus quality grading on statistical complexity measurement and multifractal spectrum method. J. Zhejiang Univ. (Agric. Life Sci.) 2015, 41, 309–319. [Google Scholar]
  45. Wang, X.; Zhao, Z. Research on the Classification Technology of Citrus Based on Machine Vision. J. Huaihua Univ. 2016, 35, 60–63. [Google Scholar] [CrossRef]
  46. Li, J.; Rao, X.; Ying, Y. Correction algorithm of illumination nonuniformity on fruit surface and defects extraction using single threshold values. Trans. Chin. Soc. Agric. Mach. 2011, 42, 159–163. [Google Scholar] [CrossRef]
  47. Hu, W.; Xiong, J.; Liang, J.; Xie, Z.; Liu, Z.; Huang, Q.; Yang, Z. A method of citrus epidermis defects detection based on an improved YOLOv5. Biosyst. Eng. 2023, 227, 19–35. [Google Scholar] [CrossRef]
  48. Zhu, L. Orange Grading Technology Based on DSP. Master’s Thesis, Zhejiang University of Technology, Hangzhou, China, 2009. [Google Scholar]
  49. Iqbal, S.; Gopal, A.; Sankaranarayanan, P.E.; Nair, A. Estimation of size and shape of citrus fruits using image processing for automatic grading. In Proceedings of the 2015 3rd International Conference on Signal Processing, Communication and Networking (ICSCN), Chennai, India, 26–28 March 2015; pp. 1–8. [Google Scholar]
  50. Zhang, L. The Key Technologies of Automatic Picking for Citrus in the Natural Environment. Master’s Thesis, North China University of Technology, Beijing, China, 2020. [Google Scholar]
  51. Bi, S.; Zhang, L. Research on Method of Citrus Picking Point Recognition in Natural Environment. Comput. Simul. 2021, 38, 227–231. [Google Scholar]
  52. Tang, Y.; Yang, G.; Wang, Y. Improved YOLOv3-tiny lightweight citrus recognition method for picking robot. Sci. Technol. Eng. 2022, 22, 13824–13832. [Google Scholar]
  53. Huang, X. Research on Target Recognition and Path Planning Algorithm of Citrus Picking Based on Machine Vision. Master’s Thesis, Guangdong University of Technology, Guangzhou, China, 2020. [Google Scholar]
  54. Liu, D. Research on Improved of Motion Planning Algorithm for Citrus Harvesting Manipulator in Unstructured Environment. Master’s Thesis, Chongqing University of Technology, Chongqing, China, 2022. [Google Scholar]
  55. Chen, X.; Wang, H.; Luo, Q.; Wang, C.; Qian, W. Optimal path of Citrus sinensis L. picking based on improved ant colony algorithm. J. Anhui Univ. (Nat. Sci. Ed.) 2022, 46, 68–74. [Google Scholar]
  56. Jiang, K. Research on Target Recognition, Location and Picking of Citrus Picking Robot. Master’s Thesis, Jiangsu University, Zhenjiang, China, 2022. [Google Scholar]
  57. Liu, D.; Wang, Y. Motion Path Planning of Citrus Picking Robot Arm Based on Improved Informed-RRT* Algorithm. J. Chongqing Univ. Technol. (Nat. Sci.) 2021, 35, 158–165. [Google Scholar]
  58. Xiong, C.; Xiong, J.; Yang, Z.; Hu, W. Path planning method for citrus picking manipulator based on deep reinforcement learning. J. South China Agric. Univ. 2023, 44, 473–483. [Google Scholar]
  59. Xiong, J.; Zou, X.; Peng, H.; Chen, W.; Lin, G. Real-time Identification and Picking Point Localization of Disturbance Citrus Picking. Trans. Chin. Soc. Agric. Mach. 2014, 45, 38–43. [Google Scholar]
  60. Ning, Z.; Cheng, H.; Yang, H.; Cheng, X. Dynamic recognition of oscillating orange for harvesting robot. J. Jiangsu Univ. (Nat. Sci. Ed.) 2015, 36, 53–58. [Google Scholar]
Figure 1. Traditional method identification process.
Figure 2. Common deep learning networks.
Figure 3. Flow chart of citrus pest and disease identification using deep learning methods.
Figure 4. Test diagram of the hyperspectral imaging system.
Figure 5. Citrus grading system.
Figure 6. Harvesting diagram of the citrus-picking robot.
Table 1. Comparison of deep learning methods in citrus disease recognition.

| Ref | Year | Purpose | Methods | Category | Merits | Demerits | Accuracy |
|---|---|---|---|---|---|---|---|
| [11] | 2021 | Improve recognition accuracy | ResNet34 model to classify diseases; depthwise separable convolutional network TDSC-ResNet to identify disease degree | Huanglongbing, scab, black spot | Lightweight model with reduced computation | Lacks multi-leaf, multi-fruit recognition | 88.79% |
| [12] | 2020 | Achieve early diagnosis of citrus pests and diseases | Hyperspectral images collected and input into VGG-16 for classification | Canker, red spider, sooty mold, glyphosate damage | High accuracy |  | 98.75% |
| [13] | 2022 | Classify eight diseases | E-Res2Net obtained by adding the SE attention module to Res2Net | Huanglongbing, gray mould, canker, sooty mold, lichen, resinosis, sharpie, aschersonia | E-Res2Net is 3.91% more accurate than Res2Net-50 | Low accuracy | 86.04% |
| [14] | 2021 | Solve the small-sample problem of pre-harvest lesions and improve classification of post-harvest defects | Dataset amplified with the ORGAA augmentation strategy; SSD-ResNet18 model proposed | Canker, resin disease, sunburn, anthracnose, huanglongbing | Detection speed of SSD-ResNet18 greatly improved | Classification accuracy not improved over the original model | mAP 87.89% |
| [15] | 2022 | Identify citrus diseases in natural settings | Faster R-CNN regional detection network based on binarization | Huanglongbing, black spot, canker, scab | Recognition time reduced by 0.53 s compared with Faster R-CNN | Only single leaves are identified | 87.5% |
| [16] | 2021 | Identify citrus psyllid | Improved YOLOv4-Tiny model | Psyllid | Improved recognition ability and detection accuracy |  | 96.16% |
| [17] | 2020 | Classify citrus diseases and improve detection efficiency | R-CNN based on a 33-layer ResNet backbone | Huanglongbing, red spider, canker | Better classification than small neural network models |  | 95.31%, 90.23%, and 99.2%, respectively |
| [18] | 2022 | Identify five common citrus diseases | Optimized YOLO-V4 network (detection) combined with EfficientNet (classification) | Canker, anthracnose, sunburn, greening, melanosis | High recognition efficiency | Outdoor accuracy lower than in the laboratory | 89% |
| [19] | 2022 | Identify citrus pests and diseases | Improved VGG19-INC model | Huanglongbing, leaf miner, nematode | Smaller weight footprint and faster model training | Insufficient dataset; some misidentification | 95.25% |
| [20] | 2021 | Improve citrus disease identification accuracy | F-ResNet obtained by fusing S-ResNet and M-ResNet | Huanglongbing, black spot, canker, scab | Better generalization and robustness than a single model | Few images of black spot and scab, so the model tends to overfit | 93.6% |
| [21] | 2021 | Rapid, intelligent identification of citrus pests and diseases | Improved YOLOv4 for leaf detection; fused DenseNet and Xception models for classification | Red spider, canker, leaf miner | High classification accuracy | Few disease varieties | Detection mAP 85.10%; classification accuracy 96.69% |
| [22] | 2018 | Detect and classify six diseases | Disease spot detection followed by classification | Anthracnose, black spot, canker, greening, melanosis | Feature fusion improves segmentation accuracy |  | 95.8% |
| [23] | 2021 | Identify three common pests | Ensemble classifier | Leaf miner, sooty mold, pulvinaria | High accuracy | Few pest species | 99.04% |
Table 2. Comparison of the traditional method, the deep learning method, and the hyperspectral imaging technique.

| Name | Dataset | Feature Extraction | Merits | Demerits |
|---|---|---|---|---|
| Traditional method | RGB image | Manually extracted features | Mature theory, fast calculation speed | Poor performance on complex problems |
| Deep learning | RGB image | Automatic extraction by the network | Adaptable, good learning capacity, good portability | Large amount of computation, model complexity |
| Hyperspectral imaging technique | Hyperspectral image | Feature band extraction | High resolution, non-destructive examination, wide range of information | High instrument cost, low sensitivity, large sample quantity |