Review

The Application Status and Trends of Machine Vision in Tea Production

1 Modern Agricultural Equipment Research Institute, Xihua University, Chengdu 610039, China
2 School of Mechanical Engineering, Xihua University, Chengdu 610039, China
3 Institute of Urban Agriculture, Chinese Academy of Agricultural Sciences, Chengdu 610213, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(19), 10744; https://doi.org/10.3390/app131910744
Submission received: 15 August 2023 / Revised: 23 September 2023 / Accepted: 25 September 2023 / Published: 27 September 2023

Abstract:
The construction of standardized tea gardens is the main trend in the development of modern agriculture worldwide. As one of the most important economic crops, tea faces increasingly stringent requirements for planting capacity and quality. The application of machine vision technology has gradually moved tea production toward intelligence and informatization. In recent years, research on tea production based on machine vision has received widespread attention, as it can greatly improve production efficiency and reduce labor costs. This article reviews the current application status of machine vision in tea pest monitoring, intelligent harvesting, quality evaluation, and classification, and analyzes and discusses the specific challenges facing machine vision in tea production; for example, the lack of standard databases, weather interference, model training errors, and differences in hardware computing speed all pose challenges. Based on the current research and application of machine vision in various fields, this article also looks ahead to its development prospects and future trends in tea production, such as further integration of multiple types of sensors, improvements in the quality and usability of datasets, optimization of model algorithms for existing problems, dissemination of research results, and intelligent management of tea production through machine vision technology.

1. Introduction

Tea is the second most popular beverage after water; according to statistics, global tea production exceeds USD 17 billion annually, while the global tea trade is valued at approximately USD 9.5 billion [1]. As a result, tea is widely grown and consumed throughout the world, especially in Asian countries such as China [2] and India [3]. At the same time, driven by the globalized consumer market, the United States [4] and Europe [5] also have substantial consumer markets and suitable planting conditions. Tea is mainly divided into five types: white, green, oolong, black, and pu-erh. At present, black tea (fermented) and green tea (non-fermented) are the most popular globally. Black tea accounts for over 90% of all tea sold in Western countries. The top-grade black teas of the world include Qi Men black tea in China, Darjeeling and Assam black tea in India, and Uva black tea in Sri Lanka. By contrast, all of the world's top-ten famous green teas are produced in China, with Xi Hu Long Jing the most famous among them [6]. An impressive number of scientific publications have focused on green tea and its major compounds, the flavan-3-ols ("catechins") [7]. Simulations of the impact of climate change on global tea production also indicate that production in China, India, and Vietnam will increase in the future and that the tea industry will continue to expand [8]. As one of the world's most important cash crops, tea is subject to ever-increasing demands for planting capacity and quality. Tea is a high-value-added cash crop whose production requires careful management. Building standardized tea gardens is therefore the main trend in modern agricultural development. Traditional field management and postproduction processing of tea are labor-intensive, with an insufficient labor force [9] and a limited harvesting window. Adopting intelligent means to achieve accurate and efficient production is therefore urgent.
Machine vision has been widely applied in tea production in recent years, covering almost all production processes, including disease and pest monitoring, identification and picking, yield estimation, and quality grading. Based on computer vision, a machine vision system consists of hardware and software; it relies on RGB (red, green, blue) and multispectral cameras, among others, to capture images of tea gardens and extracts key information from the obtained images, providing a basis for subsequent decision making. It extracts information from images by simulating human vision and then analyzes it to guide actual production [10]; the machine vision system is shown in Figure 1.
Machine vision systems generally include three parts: image acquisition, data processing, and deployment and execution [11]. Image acquisition comprises lighting systems, imaging systems, and other equipment, mainly relying on optical components to capture images and pass them to the next stage. The data processing part extracts and analyzes information from the images and makes decisions based on the learning results; the system is then deployed to control and execute operations on the equipment. As this technology spreads, many aspects of tea production can be further developed. This article compares the advantages and disadvantages of relevant research on the application of machine vision in tea pest detection, picking recognition and positioning, production management, and other areas. Analyzing current problems and future research prospects can effectively improve the accuracy and efficiency of each stage of tea production, providing a reference for the development of machine vision technology in agriculture.

2. Pest and Disease Detection

Tea plants prefer high humidity and high temperature, conditions that also suit crop diseases and insect pests, resulting in a wide variety of tea diseases. According to statistics, there are about 140 types of tea tree diseases in China, infecting the leaves, stems, roots, and flowers of tea trees [12]. Because tea gardens are densely and centrally planted, any disease outbreak must be handled in a timely manner; otherwise, it leads to irreversible economic losses. Given the wide planting area and high density of tea gardens, manual observation and early warning of crop diseases and pests is both time-consuming and labor-intensive. In addition, observation results are easily influenced by subjective judgment, because individuals define the degree of infection inconsistently [13]. Moreover, the initial symptoms of pests and diseases are often not obvious, so by the time they are noticed, a large area of plants has already been infected. In recent studies, researchers have used machine vision to monitor the health of tea. The schematic diagram of such a system is shown in Figure 2, and the research results are summarized in Table 1.
The process of disease diagnosis through machine vision generally includes (1) image acquisition; (2) image preprocessing; (3) data processing; (4) network training; and (5) hardware deployment and use.
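The five stages above can be sketched end to end as a minimal pipeline. Everything here is a hypothetical illustration: the function names, the per-channel-mean "feature", and the nearest-centroid "model" are placeholders standing in for the real cameras, preprocessing, and SVM/CNN classifiers used in the cited studies.

```python
import numpy as np

# Hypothetical sketch of the five-stage diagnosis pipeline described above.
# All names and the nearest-centroid "model" are illustrative placeholders,
# not any specific system from the literature.

def acquire_image(h=64, w=64, seed=0):
    """Stage 1: stand-in for camera capture -- returns a synthetic RGB image."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, size=(h, w, 3), dtype=np.uint8)

def preprocess(img):
    """Stage 2: normalize to [0, 1] floats (denoising/resizing would go here)."""
    return img.astype(np.float64) / 255.0

def extract_features(img):
    """Stage 3: per-channel mean color as a toy 3-D feature vector."""
    return img.reshape(-1, 3).mean(axis=0)

def train_centroids(features, labels):
    """Stage 4: 'training' reduced to class centroids (placeholder for SVM/CNN)."""
    classes = sorted(set(labels))
    return {c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in classes}

def predict(model, feature):
    """Stage 5: deployment-time inference -- nearest centroid wins."""
    return min(model, key=lambda c: np.linalg.norm(model[c] - feature))
```

In a deployed system, stages 1 and 5 run on the field hardware while stages 3 and 4 are replaced by a trained network, but the data flow is the same.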

2.1. Disease Detection

In the development of tea diseases, images of diseased leaves are more distinctive and easier to obtain than those of insect pests, and diseases have a deeper impact on tea quality. In 2017, Wang [14] extracted 34 feature components of tea diseases through adaptive filtering, color space transformation, Otsu image segmentation, and other methods, then trained BP neural network, support vector machine (SVM), and random forest models and selected the best overall model. In 2019, Lin et al. [15] used the Canny edge detection algorithm to extract the contours of tea disease spots. They combined the Otsu method and a construction index method with contour information to extract features from images of three tea tree diseases: tea round brown spot, tea red leaf spot, and tea white spot. They achieved the best efficiency and recognition performance but did not consider color changes across disease stages. Sun et al. [16] proposed an algorithm combining simple linear iterative clustering (SLIC) with SVM: multidimensional features are extracted from the pixel blocks segmented by SLIC, an SVM classifies them, and the image is then repaired and segmented, effectively extracting tea disease images under complex backgrounds. Sun et al. [17] used multiple preprocessing combinations, extracting lesions with k-means clustering and a minimum N × N method and performing super-resolution reconstruction, with the highest accuracy reaching 93.3% in the AlexNet network model; however, the training images were unevenly distributed. Hu et al. [18] proposed applying few-shot learning to tea disease recognition to address the difficulty of acquiring disease images and the small sample size of the dataset. After C-DCGAN data augmentation, support vector machines were used for training, reaching an accuracy of up to 90% and significantly improving the small-sample results. In 2020, Mukhopadhyay et al. [19] proposed a clustering recognition method for tea disease area images based on the non-dominated sorting genetic algorithm (NSGA-II), which effectively improved detection and provided an online detection system. In 2021, Hu et al. [20] aimed to detect and grade the severity of leaf blight against natural backgrounds. First, the Retinex algorithm was used to enhance the original image, and a region-based fast convolutional neural network was used for detection; the detected image was then imported into the VGG16 network for training. The experimental results showed that, compared with classical machine learning methods, the average detection accuracy and severity classification accuracy improved by more than 6% and 9%, respectively. Similarly, to address the inability of traditional convolutional neural networks to recognize tea diseases accurately and quickly given small and unevenly distributed samples, in 2022, Li et al. [21] proposed a tea disease recognition method based on transfer learning, obtained a pretrained model from the PlantVillage dataset, and trained it in an improved DenseNet model. With small and unevenly distributed samples, the recognition accuracy for tea diseases reached 92.66%. This model has high recognition accuracy and strong robustness and can serve as a reference for the intelligent diagnosis of tea diseases.
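Several of the studies above rely on Otsu's maximum inter-class variance method to separate lesions from healthy leaf tissue. A minimal pure-NumPy version of that step is sketched below; it is a generic textbook implementation, not any cited author's exact code.

```python
import numpy as np

# Minimal Otsu threshold on an 8-bit grayscale image, pure NumPy.
# A sketch of the "maximum inter-class variance" segmentation step used in
# several of the studies above.

def otsu_threshold(gray):
    """Return the intensity t (0-255) maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                 # class-0 probability up to each t
    mu = np.cumsum(prob * np.arange(256))   # cumulative mean
    mu_t = mu[-1]                           # global mean
    # Between-class variance: (mu_t*omega - mu)^2 / (omega*(1-omega))
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.inf              # avoid divide-by-zero at the ends
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    return int(np.argmax(sigma_b2))

def segment(gray):
    """Binary lesion mask: pixels brighter than the Otsu threshold."""
    return gray > otsu_threshold(gray)
```

In practice this is what `cv2.threshold(..., cv2.THRESH_OTSU)` computes; the papers combine it with filtering, color space transforms, or contour analysis before classification.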
In the above methods, color space segmentation models select particular components for further analysis, such as the G component of RGB, the H or S component of HSV (hue, saturation, value), or the a and b components of the Lab model. However, because light intensity varies widely in the natural environment, segmentation based on fixed thresholds cannot meet practical field needs. With the development of neural network models, deep learning models trained on diverse datasets have gradually become mainstream in research and achieved better results.
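Extracting the H and S components that these threshold-based methods segment on can be illustrated with the standard library's `colorsys` (which works on single pixels in [0, 1]; real pipelines would vectorize the conversion, e.g. with OpenCV's `cvtColor`). The example pixel values below are illustrative.

```python
import colorsys

# Sketch: pulling the H and S components that several threshold-based
# segmentation methods operate on, one pixel at a time.

def rgb_to_hs(r, g, b):
    """Return (hue, saturation) in [0, 1] for one 8-bit RGB pixel."""
    h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h, s

# A saturated green leaf pixel has hue near 1/3 and high saturation,
# while a washed-out, grayish lesion pixel has low saturation.
leaf_h, leaf_s = rgb_to_hs(30, 200, 40)
lesion_h, lesion_s = rgb_to_hs(120, 110, 105)
```

The appeal of H and S is that they partially decouple color from brightness; the limitation noted above remains, since fixed H/S thresholds still drift with illumination.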
With the development of modern agriculture, spectral imaging technology has gradually matured and been applied to crop disease identification. Spectral imaging combines spectral analysis with image analysis to conduct qualitative and quantitative analysis of the tested object. In recent years, research applying hyperspectral and multispectral methods to tea diseases has gradually increased. Zhang et al. [22] proposed a tea spot recognition method based on the fusion of hyperspectral imaging and image processing. The relative spectral reflectance of sensitive bands in the region of interest was extracted as the spectral feature, and the second principal component image from principal component analysis was taken as the feature image. Color and texture features of the feature image were extracted from color moments and the gray-level co-occurrence matrix. A BP neural network optimized with a genetic algorithm then raised the recognition rate to 94%, which can also serve as a reference for low-altitude disease detection by plant protection drones. Lu et al. [23] studied tea red leaf disease at different stages, collected fluorescence spectra via fluorescence transmission, and used an extreme learning machine (ELM) to build prediction models combining feature spectra with gray-level co-occurrence matrix texture and LBP operator texture, improving the recognition rate across disease stages. Liu et al. [24] applied chlorophyll fluorescence spectral feature analysis to tea disease detection; among the compared models, principal component analysis (PCA) combined with linear discriminant analysis (LDA) performed best, with a recognition rate of 98.9%.
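The "reduce spectra with PCA, then classify" pattern in these spectral studies can be sketched in pure NumPy. This is a toy illustration: the nearest-centroid rule stands in for LDA, and the synthetic spectra and all parameters are made up, not data from the cited papers.

```python
import numpy as np

# Toy sketch of the PCA-then-classify pattern used in the spectral studies
# above. Nearest-centroid classification is a simple stand-in for LDA.

def fit_pca(X, k=2):
    """Return (mean, components) for a top-k PCA of the rows of X."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data; rows of vt are principal directions.
    _u, _s, vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, vt[:k]

def project(X, mean, comps):
    """Project spectra onto the top principal components."""
    return (X - mean) @ comps.T

def nearest_centroid_fit(Z, y):
    """Per-class centroids in the reduced space."""
    return {c: Z[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, z):
    """Assign the class whose centroid is closest."""
    return min(centroids, key=lambda c: np.linalg.norm(centroids[c] - z))
```

With real hyperspectral data, the dimensionality reduction is what makes a classifier feasible: hundreds of correlated bands collapse to a handful of informative components.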
Integrating hyperspectral and multispectral technologies into machine vision for disease identification can reduce the impact of light and enable the use of drones and other platforms to reduce damage to tea.
Table 1. Studies on different methods for identification of tea diseases.
Ref. | Year | Disease Type | Task | Color Model | Method | Advantage | Disadvantage
Wang [14] | 2017 | Tea white spot, tea brown leaf spot, tea cloud leaf blight | Identification and classification of three diseases | HSV | Original image processed by filtering and Otsu segmentation; 34 components extracted; optimal scheme obtained by training a BP network | Applicable to recognition under complex backgrounds | The three algorithms each have their own advantages but are not well combined
Lin et al. [15] | 2019 | Tea red leaf spot, tea white spot, tea round red spot | Identification and classification of three diseases | HSV | Threshold iteration algorithm, maximum inter-class variance (Otsu) method, K-nearest neighbor algorithm | Recognition rate of 93.33% | Influence of color characteristics needs further study
Sun et al. [16] | 2019 | Five diseases such as tea anthracnose, tea brown leaf blight, and tea net bubble blight | Segmentation of disease area | None | Image segmented via simple linear iterative clustering (SLIC), then trained and segmented with SVM | Good segmentation; background eliminated quickly and effectively | Room for improvement in the SVM parameters
Sun et al. [17] | 2019 | Tea ring spot, tea anthracnose, tea cloud leaf blight | Identification and classification of three diseases | L*a*b* | Seven preprocessing methods, then AlexNet network training | Accuracy up to 93.3% with the best combination | Sample size too small
Hu et al. [18] | 2019 | Tea red scab, tea red spot, tea blight | Identification and classification of three diseases | RGB, 2R-G-B | C-DCGAN used to augment the training samples, then VGG16 network for training | Usable with small samples | Only support vector machines are used
Mukhopadhyay et al. [19] | 2020 | Five diseases such as red rust, red spider, and thrips | Automatic detection and identification of diseases | HIS | Clustering recognition of tea disease region images based on the non-dominated sorting genetic algorithm (NSGA-II) | Good detection effect; a cloud system is provided | Room for improvement in the clustering algorithm
Hu et al. [20] | 2021 | Tea blight | Detect, identify, and estimate disease severity | None | Original image enhanced with the Retinex algorithm, then the VGG16 network | Average detection accuracy and severity classification accuracy improved by more than 6% and 9%, respectively, over classical machine learning | Only tea blight was studied
Li et al. [21] | 2022 | Five diseases, such as tea white spot and tea ring spot | Recognition and classification with small, unevenly distributed samples | None | Pretrained model obtained from the PlantVillage dataset, then trained in an improved DenseNet model | Alleviates the effect of uneven sample distribution and improves model accuracy | Disease severity grades not studied
Zhang et al. [22] | 2017 | Anthracnose, red leaf spot, tea white spot | Searching for optimal spectral characteristics for disease recognition | None | Image features extracted from color moments and the gray-level co-occurrence matrix, then a BP neural network optimized via a genetic algorithm | Relative spectral reflectance at 560, 640, and 780 nm is highly effective for classifying tea diseases | Low recognition efficiency under natural light and background
Lu et al. [23] | 2019 | Red leaf disease | Prediction of tea red leaf disease by fluorescence transmission | None | Prediction models combining feature spectra with gray-level co-occurrence matrix texture and LBP operator texture, built with an extreme learning machine (ELM) | Improved recognition across disease stages | Laboratory environment only
Liu et al. [24] | 2021 | Tea algal spot | Chlorophyll fluorescence spectroscopy combined with a chemometric recognition model | None | Principal component analysis (PCA) combined with linear discriminant analysis (LDA) | Fast recognition; accuracy up to 98.9% | Laboratory environment only

2.2. Pest Detection

At present, the detection and classification of crop pests in China is still in its early stages, and there is a lack of dedicated tea disease and pest datasets for research. Pest monitoring of field crops still relies on manual experience: farmers and managers can often classify captured pests based on experience, but rare pests are difficult for them to identify. Tea trees have six main pests, namely, the tea sesame beetle, tea geometrid, tea geometrid larvae, false-eyed small green leafhopper, copper-green golden turtle, and tea horned bug. Among them, the tea geometrid and grey tea geometrid are the main pests of Chinese tea trees, seriously endangering tea yield and quality and causing huge economic losses to tea production [25].
With the continuous development of machine vision algorithms such as support vector machines and neural networks, pest monitoring technology has also made great progress, although its application in the tea industry remains in its early stages. In 2009, Zhang et al. [26] extracted 17 morphological features from binary images of stored grain pests and normalized them. They then used the ant colony algorithm to automatically select seven optimal feature subspaces and chose a support vector machine classifier to classify nine types of grain pests, with a recognition rate of over 95%. Luo et al. [27] proposed a method for identifying rice stem borer pests based on a convolutional neural network: after image preprocessing, images containing rice stem borer adults, larvae, eggs, or pupae were selected, and a 10-layer convolutional neural network was designed with an accuracy of 89.14%. Yan [28] used an improved YOLOv3 to detect 12 types of tea pests, improving the model in terms of lightweight design and sample imbalance; compared with the original model, accuracy improved by 3 to 6 percentage points. Xu et al. [29] ran a comparative experiment with a small green leafhopper detection and reporting system and a precise sound-and-light pest control system, as shown in Figure 3. The results showed that, using the color-plate trapping and counting mode, recognition accuracy reached over 89.29%, and in the tea garden used for the precise sound-and-light control experiment, the small green leafhopper population declined by 76%. The model can predict tea garden pest dynamics in real time, achieve efficient and precise control of target pests, and enable zero application of chemical pesticides throughout the year in experimental tea gardens.
Tea disease and pest detection systems are easily affected by many factors, such as light intensity, background, and camera resolution. Therefore, combining multiple types of cameras and various intelligent sensor devices to achieve real-time, efficient, and comprehensive detection of plant diseases and pests is one of the future research directions that can promote agricultural modernization and intelligence. To segment pest and disease images in complex backgrounds, it is often necessary to combine several segmentation methods to achieve ideal results. Different types of pests and diseases often require different features to be extracted; multifeature fusion can obtain more comprehensive information and has great development potential. In addition, how to combine new algorithms and other deep learning structures to optimize and train models while improving accuracy and accelerating computation will remain a crucial and challenging research topic.
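The multifeature fusion idea mentioned above amounts to concatenating complementary descriptors into one feature vector before classification. The sketch below fuses a coarse color histogram with simple texture statistics; the specific features are placeholders for illustration, not a method from the literature.

```python
import numpy as np

# Illustration of multifeature fusion: concatenate a coarse intensity
# histogram (color information) with simple gradient statistics (texture
# information) into a single feature vector for a downstream classifier.

def color_hist(gray, bins=8):
    """Normalized 8-bin intensity histogram."""
    h, _ = np.histogram(gray, bins=bins, range=(0, 256))
    return h / h.sum()

def texture_stats(gray):
    """Crude texture cues: mean, spread, and mean gradient magnitudes."""
    g = gray.astype(np.float64)
    gx = np.abs(np.diff(g, axis=1)).mean()
    gy = np.abs(np.diff(g, axis=0)).mean()
    return np.array([g.mean() / 255.0, g.std() / 255.0, gx / 255.0, gy / 255.0])

def fuse(gray):
    """Fused 12-D feature vector: 8 histogram bins + 4 texture statistics."""
    return np.concatenate([color_hist(gray), texture_stats(gray)])
```

Because the fused vector carries both appearance and texture cues, a single classifier can discriminate pests that share color but differ in surface pattern, which is exactly the benefit the paragraph above attributes to multifeature fusion.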

3. Intelligent Tea Picking

With the gradual upgrading and development of the global tea industry, tea production, as a labor-intensive industry, is constrained by the scarcity of labor, and the development of mechanized tea picking has accelerated. Since Japan developed the world's first mobile tea picking machine in 1960, research on mechanized tea picking has become a hot topic and has gradually matured [30]. However, owing to differing dietary habits and cultural backgrounds across countries, mainstream tea-picking research worldwide pursues efficiency and speed without considering the integrity of buds and leaves. Traditional mechanical picking and extensive harvesting operations therefore cannot meet Chinese standards for high-quality tea picking. Famous, high-quality tea relies heavily on manual picking, with a low mechanization rate. The mechanization rate of tea garden management operations in China is less than 10% [31], and tea picking accounts for over 60% of the total tea garden management labor force, making it the most frequently performed and labor-intensive task in tea production [32]. With China's economic development, the efficiency and quality requirements for high-quality tea picking are also increasing. As an emerging discipline, machine vision has attracted more and more scholars to research its application in tea picking production.

3.1. Target Recognition and Localization

The effective picking of tea buds cannot be achieved without precise positioning. Scholars have used machine vision to study the target recognition and positioning of tea buds. Target recognition, the process of separating tea buds from the background, is a prerequisite for the robotic arm to pick automatically and complete the task. At the same time, transformations between the global, camera, and image coordinate systems yield the spatial coordinates of the target; the robotic arm then moves to that position and carries out harvesting. The process of applying machine vision to the robot is shown in Figure 4.
Most detection of tea shoots in China is based on image processing methods, which exploit the differences between tea shoots and the background of old leaves, branches, and trunks to extract color, shape, and texture features; a model is then built for analysis, detection, and localization. Yang et al. [33] conducted segmentation and detection experiments on tea buds against a single background, extracting the G component of the tea image in the RGB color space and using a dual-threshold method to segment the image. Based on the shape characteristics of the tea buds, bud edges were then detected, with a recognition accuracy of 94%. Wang et al. [34] transformed the original RGB image into the HIS (hue, intensity, saturation) color model and segmented it with the region-growing method using the H and S parameters; accuracy was 94.6% at a shooting angle of 45°, but the method was time-consuming. Wu et al. [35], based on the Lab color model and K-means clustering, extracted the a and b components for analysis and achieved an average recognition rate of about 94%, with good preservation of leaf integrity. After 2015, with the rise of deep learning, Sun [36] first applied a YOLOv3 algorithm improved with multiscale detection to tea bud detection. Before network training, tea bud images under complex backgrounds were segmented by combining super-green features with the Otsu algorithm, and detection accuracy under complex backgrounds was found to be high. Guo et al. [37] proposed an improved YOLOv4 object detection model to predict bud picking points against the complex background of other tea leaves, introducing an improved hybrid attention mechanism so that the model better adapts to the feature information of tea buds. Lv et al. [38] solved the missed detections caused by poor image contrast and weakened features under natural lighting by locally adapting gamma brightness based on the average grayscale value of the image; experiments showed that the model is robust to light intensity. In actual tea picking, the occlusion of tender leaves by other leaves and the difficulty of separating tender buds from a similarly colored background have always been key technical difficulties. Tao et al. [39] designed a segmented tea picking structure for collecting images of tender tea buds: by moving the image collection platform, the tender leaves are exposed from complex backgrounds, addressing occlusion and the color similarity between buds and background. The YOLOv4 network was then used to detect tea buds, with a final recognition accuracy of 90%. However, only image collection experiments have been conducted so far, with no in-depth research on the coordinate positioning of picking points.
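The dual-threshold segmentation on the G component attributed to Yang et al. [33] can be sketched in a few lines: keep only pixels whose green value falls inside a band where tender buds are brighter than mature leaves. The threshold values below are illustrative, not values from the paper.

```python
import numpy as np

# Sketch of dual-threshold segmentation on the G channel of an RGB image.
# t_low and t_high are hypothetical example thresholds; in practice they
# would be tuned (or adapted) for the lighting and cultivar at hand.

def dual_threshold_g(rgb, t_low=150, t_high=240):
    """Binary mask of pixels with t_low <= G <= t_high."""
    g = rgb[..., 1].astype(np.int32)
    return (g >= t_low) & (g <= t_high)
```

The upper threshold rejects specular highlights while the lower one rejects darker mature leaves, which is why a band (rather than a single cutoff) is used; as noted above, fixed thresholds still struggle under varying field illumination.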
The identification of tea buds is a prerequisite for intelligent picking, and extracting the picking position from the identified location is a key technology for achieving intelligent, precise, and efficient production. When collecting images in the natural field environment, it is inevitable that bud tips are occluded by branches and leaves or disturbed by wind and mechanical vibration. Natural light can also significantly affect the image, as shown in Figure 5, which poses great challenges for target positioning. In recent years, domestic and international scholars have proposed many effective methods for machine vision-based 3D positioning; widely used approaches include 3D positioning based on a monocular color camera, stereo vision matching, depth cameras, laser rangefinders, and time-of-flight (TOF) 3D cameras.
In research on tea picking positioning, the current mainstream approach is to first process RGB images, obtaining two-dimensional positions through deep learning networks, and then combine depth images to perform three-dimensional positioning of the picking points. One-stage deep learning object detectors, represented by YOLO and SSD, are fast and, having learned generalized object features, can avoid false positives caused by the background. Chen [40] extracted tea bud images using the PSO-SVM algorithm, used the YOLOv3-416 model to locate picking points and determine two-dimensional coordinates, and finally converted these to three-dimensional coordinates via binocular ranging, with a segmentation accuracy of over 94% and an average time of about 1 s. Xu et al. [41] used an improved YOLOv4 algorithm for coarse foreground extraction, then RGB-to-HSV color conversion to obtain the contour of the extracted bud leaf area, and located the picking point with morphological algorithms. Zhang [42] identified the bud leaf area in RGB images with a YOLOv4 bud leaf recognition model and extracted the main contour of the bud leaf using H-S manual threshold segmentation in the HSV color space. Using a "shoot corrosion method" to locate the two-dimensional pixel coordinates of the picking points within the main contour, the coordinates were mapped into the depth image to obtain depth distance data for the picking points; with the intrinsic parameters calibrated for the depth camera and the camera imaging matrix, the three-dimensional positions of the bud and leaf picking points could then be calculated. Three different methods were compared in the article, as shown in Table 2.
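The final localization step common to these pipelines is standard pinhole back-projection: once a picking point's pixel (u, v) and depth Z are known, its camera-frame 3D coordinates follow directly from the calibrated intrinsics. The intrinsic values used below (fx, fy, cx, cy) are illustrative, not from any cited paper.

```python
# Back-project a detected picking point from pixel coordinates plus depth
# to camera-frame 3-D coordinates using the pinhole camera model:
#   X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy.

def pixel_to_camera(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) at depth Z (meters) to (X, Y, Z)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m
```

This is why intrinsic calibration accuracy matters so much in the reported systems: any error in fx or cx scales linearly into the lateral positioning error handed to the robotic arm.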
Two-stage object detectors, represented by the R-CNN family, offer high accuracy (in localization and detection rate) and advantages such as shared computation. Chen et al. [43] used an improved Faster R-CNN to extract the foreground, then FCN-16 for image segmentation of the picking area, and finally a centroid algorithm to obtain the picking points; this method has high detection accuracy but does not handle occlusion and similar situations well. Wang et al. [44] used Mask R-CNN to build a recognition model for tea buds, tea leaves, and tea picking points. It adds a semantic segmentation branch to Faster R-CNN and directly segments the picking point area; overall accuracy and recall were 93.95% and 92.48%, respectively. However, the model's running efficiency still needs improvement, there are some errors in determining the selected area, and the optimal effect is only achieved at a specific angle. Yan et al. [45] established an improved Mask R-CNN model to identify the main shoot parts by calculating the areas of multiple connected domains of the mask; they then computed the minimum bounding rectangle of the main part, determined the tea axis, and obtained the position coordinates of the picking point. Yan et al. [46] eliminated background interference by identifying binary maps of a single bud, a single leaf, and two leaves of tea, then found corner points using the Shi-Tomasi algorithm and selected the lowest point as the picking point. Currently, however, these two improved Mask R-CNN models can only determine two-dimensional pixel coordinates in the image.
Zhang [47] mounted a CMOS image sensor together with a tea cutter at the end of a SCARA robotic arm and proposed a "one eye, multiple positions" 3D stereo vision system; after multiple displacements and shots, the matching similarity reached 90%, providing a theoretical basis and technical support for intelligent tea picking. Wang et al. [48] used a first-order radial distortion algorithm based on the multi-pose calibration method for binocular cameras [49] to obtain three-dimensional coordinates, and then used hand-eye calibration to identify the coordinate transformation. The experimental results show that the proposed algorithm has high recognition accuracy (97.5%) and low positioning error (a maximum of only 1.33 cm), and it can be further evaluated for autonomous harvesting operations. Compared with the above methods, deep learning models based on convolutional neural networks have good recognition performance; combined with depth cameras, they can meet the requirements of tea bud recognition and localization under natural light once the dataset is enriched, providing strong technical support for the development of smart agriculture. Point cloud technology is also effective for plant pose detection. To automatically pluck tea shoots in the field, Li et al. [50] developed an algorithm framework comprising detection and localization: a YOLOv3 model detected tea shoots, while the plucking position was obtained by combining growth characteristics with a sleeve scheme. The point cloud algorithm can effectively mitigate errors caused by occlusion, and by restricting analysis to the point cloud in the target region, the amount of data is reduced, greatly improving positioning speed. Chen et al. [51] combined their proposed detection model with OPVSM to provide a reliable and effective method for tea bud detection and pose estimation. This study has potential for use in tea picking automation and can be extended to other crops and objects in precision agriculture and robotic applications.
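The data-reduction idea behind restricting point cloud analysis to the detected target region can be illustrated with a simple axis-aligned crop. This is a generic sketch, not the specific scheme of [50]:

```python
import numpy as np

def crop_cloud(points, lo, hi):
    """Keep only the points of an (N, 3) cloud that fall inside the
    axis-aligned box [lo, hi] -- e.g. the back-projected bounding box
    of a detected tea shoot. Processing this subset instead of the full
    scene cloud is what reduces data volume and positioning time.
    """
    points = np.asarray(points, dtype=float)
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    return points[inside]

cloud = np.array([[0.0, 0.0, 0.4],   # inside the target box
                  [0.1, 0.1, 0.5],   # inside
                  [1.0, 1.0, 2.0]])  # background clutter, outside
roi = crop_cloud(cloud, lo=[-0.2, -0.2, 0.3], hi=[0.2, 0.2, 0.6])
print(len(roi))  # 2
```

Pose estimation (e.g. fitting a shoot axis) then runs only on the cropped region.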
From the above methods, in research on recognition and localization for tea picking, the combination of deep learning and depth cameras can achieve good results in complex outdoor environments. However, a complete picking operation is a complex endeavor requiring multidisciplinary integration: from control of the walking mechanism to the design of the picking end-effector and overall coordination, more research is needed to explore new options.

3.2. Tea Production Estimation

In agriculture, crop yield is the standard measure and direct indicator of agricultural output per unit area of land, and it is also the ultimate goal of agricultural production. Estimating crop yield is one of the important tasks in modern agriculture. Currently, commonly used yield estimation methods include field sampling surveys, agrometeorological model yield estimation [52], spectral-index-based crop yield estimation [53], and image-based crop yield estimation [54]. Li [55] found through comparative research that Faster R-CNN, based on the principle of object detection and counting, performs better overall in the recognition and counting of tea buds, which has application value for tea garden yield estimation in natural environments. However, the model still misses detections of tea buds when the shooting area is large. Xu et al. [56] studied yield estimation for small-scale tea gardens in hilly areas based on the YOLOv5 object detection algorithm and a field sampling survey method. By combining the sampling area, the average growth density of tender buds can be calculated; the relative error between the estimated tender tea bud yield and the actual harvest yield was 29.56%. This approach can conveniently estimate the yield of tea shoots in tea gardens, provide yield-related data support for farmers during the tea growth period, and facilitate the early management of tea production.
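The density-based scaling used in field-sampling yield estimation, as in [56], amounts to simple arithmetic. The sketch below uses entirely hypothetical numbers and an assumed average bud weight:

```python
def estimate_yield(bud_counts, plot_area_m2, garden_area_m2, grams_per_bud):
    """Field-sampling yield estimate: average the detected bud density
    over the sampled plots, scale to the whole garden area, and multiply
    by an assumed average fresh weight per bud. Returns kilograms.
    """
    density = sum(bud_counts) / (len(bud_counts) * plot_area_m2)  # buds / m^2
    return density * garden_area_m2 * grams_per_bud / 1000.0      # kg

# Hypothetical numbers: three sampled 1 m^2 plots, 0.3 g per bud.
kg = estimate_yield([120, 100, 110], plot_area_m2=1.0,
                    garden_area_m2=5000.0, grams_per_bud=0.3)
print(round(kg, 1))  # 165.0
```

In a real pipeline the `bud_counts` would come from detector outputs on images of the sampled plots, and the per-bud weight from weighing sampled harvests.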

4. Production Management of Tea

4.1. Quality Evaluation and Grading of Tea

In recent years, the application of machine vision to tea variety recognition and quality detection has gradually increased, and much has been achieved.
In the process of tea production, the grading of tea buds fundamentally determines the quality of the product. Currently, grading is mainly performed through artificial sensory evaluation [57], whose results are easily influenced by human subjectivity and environmental factors. Although computer vision technology continues to intersect with agricultural engineering, existing research on tea bud classification remains limited. In 2003, Borah et al. [58] first studied color images of tea liquor and analyzed them using traditional image processing methods such as histogram dissimilarity measurements and a single-layer artificial neural network classifier; the final classification rates reached 74.67% and 80%. In 2016, Gao [59] used a BP neural network to construct a tea classifier tested on three types of color and shape features, with an accuracy of over 90%. Kim et al. [60] classified the quality of green tea by first using the mean shift method to divide the image into multiple clustering regions and selecting representative regions from the larger ones; then, using region merging, an elliptical model was applied to effectively classify green tea leaves. In 2018, Song et al. [61] established a Keemun black tea classification model based on shape feature histograms and support vector machines: they extracted eight geometric features, including absolute and relative geometric features, generated shape feature histograms, and input the histogram distribution as a feature vector into the classifier. The combination with LSSVM had the best classification effect, with an accuracy of 95.71% for Keemun black tea. With the gradual adoption of spectral technology, Aboulwafa et al. [62] evaluated the quality of green teas from South and East Asia using ultraviolet (UV)-visible spectroscopy, Fourier transform infrared spectroscopy (FTIR), high-performance liquid chromatography (HPLC), and other methods, combined with chemometrics; UV spectral data combined with chemometrics can effectively authenticate and differentiate green tea samples from South Asia and East Asia. Chen et al. [63] combined multispectral images with chemometrics, using threshold segmentation and manual sampling to remove the image background and construct spectral features, and demonstrated a significant correlation between the spectral features of the fresh tea leaf canopy and tea quality parameters. Wickramasinghe et al. [64] introduced a reflectance-based multispectral imaging system to detect sugar adulteration in black tea in place of conventional methods, including titrimetric methods, which are destructive, time-consuming, and expensive. Huang et al. [65] built a lightweight tea bud grading model by embedding a multiscale convolutional block attention module into the ShuffleNet v2 0.5x network and introducing multidimensional scaling (MDS). By focusing on the feature information in the tea samples, the grading accuracy for the samples in the test set reached 100%, 92.70%, and 89.89%, demonstrating better comprehensive performance than more complex network models; this holds significant promise for deploying networks on low-performance or mobile devices. Identifying tea plant varieties is of great importance for quality inspection and resource protection in the tea industry. Cao et al. [66] discriminated 16 high-yield tea plant varieties using a multispectral camera, with the successive projection algorithm (SPA) used to analyze the original parameters. The accuracy of SPA based on fused data, combined with an SVM classification model, reached 97.00%, 90.52%, and 88.67% on the training, testing, and validation sets. This research also provides a new procedure for tea plant phenotype identification under field conditions.
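As an illustration of the kind of geometric shape features used in grading studies such as [61], the following minimal sketch computes a few descriptors from a binary leaf mask. The specific features and values are illustrative, not those of the cited work:

```python
import numpy as np

def shape_features(mask):
    """Extract simple geometric descriptors from a binary leaf mask, in
    the spirit of the absolute/relative shape features used for grading:
    area, bounding-box aspect ratio, and extent (area divided by
    bounding-box area). A real grader feeds such feature vectors to a
    classifier such as an SVM or LSSVM.
    """
    ys, xs = np.nonzero(mask)
    h = int(ys.max() - ys.min() + 1)
    w = int(xs.max() - xs.min() + 1)
    area = int(ys.size)
    return {"area": area, "aspect": h / w, "extent": area / (h * w)}

mask = np.zeros((10, 10), dtype=bool)
mask[2:8, 4:6] = True            # a 6x2 "leaf" region
f = shape_features(mask)
print(f)  # {'area': 12, 'aspect': 3.0, 'extent': 1.0}
```

Elongated high-grade buds and broader lower-grade leaves would produce distinguishable feature vectors, which is what makes such descriptors usable for grading.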

4.2. Farm Management Information System

Intelligent farming equipment is becoming a new trend in modern agriculture. Through intelligence and automation, planting is becoming data-driven, leading to more timely and cost-effective production and management of farms and improving the quality and yield of agricultural products. A range of modern, intelligent agricultural machines has largely replaced traditional manual planting and harvesting, alleviating the challenges to agricultural sustainability posed by labor shortages and rising production costs. Zhang et al. [67] proposed an intelligent tea picking machine based on active computer vision and Internet of Things (IoT) technology, which transmits real-time job status to the Internet for comprehensive data analysis and can thus support various quality and production evaluations. Lu et al. [68] proposed a machine-vision-based cloud platform method for tea bud segmentation and picking point positioning, which uses the cloud platform for data sharing and real-time calculation of tea bud coordinates, reducing the computational burden on picking robots.
The Internet of Things has made it possible to implement farm management information systems. Agriculture, especially arable farming, can be data-driven to achieve more timely and cost-effective agricultural production and management, while reducing environmental impacts [69].
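A cloud-connected system such as those in [67,68] ultimately needs to serialize job status into messages for the platform. The sketch below shows one hypothetical payload format; all field names are assumptions for illustration, not a published schema:

```python
import json
from datetime import datetime, timezone

def job_status_message(machine_id, picked_count, picking_points):
    """Serialize a real-time job status report of the kind a vision-
    guided picker could publish to a cloud platform (e.g. over MQTT or
    HTTP). Every field name here is illustrative only.
    """
    return json.dumps({
        "machine_id": machine_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "picked_count": picked_count,
        "picking_points_mm": picking_points,  # 3D coordinates from the vision system
    })

msg = job_status_message("picker-01", 42, [[12.5, 80.1, 430.0]])
print(json.loads(msg)["picked_count"])  # 42
```

The platform side can then aggregate such messages across machines for the quality and production evaluations described above.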

5. Application Challenges and Trends

Conducting research on the application of machine vision technology to tea production and gradually achieving intelligent control and refined management of the entire process of the tea industry is of great significance for ensuring the high yield, quality, efficiency, safety, and health of the tea industry, and achieving sustainable development of the industry. With the unremitting efforts of scholars and research institutions, breakthroughs have been made in research on intelligent tea production based on machine vision. A series of superior algorithms have been developed and used, and future research should mainly include the following focus areas:
1. Combination of multiple types of sensors.
At present, most machine vision applications in the tea industry are based on images in the visible light range. However, a single vision algorithm often cannot meet the information collection needs of the complex natural environment of tea planting, and machine vision systems that integrate information from multiple types of sensors will fill this gap. Hyperspectral and multispectral cameras have been widely tested and applied in tea disease, pest, and grading tasks, while technologies such as drones, satellite remote sensing, and 3D point clouds based on depth cameras have expanded applications such as growth monitoring and yield estimation. In the future, with the development of sensor technology, machine vision systems combining multiple types of sensors will become a research and application avenue with great potential.
2. Establishment of multilevel standard datasets.
In current research on tea production, datasets vary across tea varieties and research groups, so their generality is limited: algorithms and datasets often target a specific region or variety, and multi-type comparisons are lacking. Following examples in other areas of machine vision, future work should collect tea images of different varieties, production seasons, price grades, and production areas, under different lighting, to expand the dataset of tea image samples and enrich sample diversity. A database of multi-variety, multigrade tea bud leaves should be established to improve the universality of algorithms.
3. Research on identification and location strategies.
Because tea buds grow naturally at different heights and in different planes, occlusion often occurs at different shooting angles, and severe occlusion easily arises at the edges of an image. Meanwhile, tea buds are small targets growing on multiple branches, with complex positional information; combined with vibration caused by natural wind and mechanical picking, this poses great challenges for recognition and coordinate positioning during picking. Research addressing these recognition and target localization challenges will become a major topic in future work.
4. Research on joint application of multiple systems.
Beyond network construction and model training, installation and application in production environments are also important. In addition to being limited by the accuracy and efficiency of image analysis algorithms and by network transmission rates, performance differences across hardware devices affect the entire system, and the joint operation of multiple systems, from software feedback to the specific functional implementation in hardware, remains challenging. Most researchers focus on solving one part of the problem and improving its performance, which means practical application problems cannot be solved effectively. In the future, research on multisystem collaboration should be strengthened, and the combination of scientific research and actual production should be pursued further.

6. Conclusions

This article reviews research progress in the detection of diseases and pests in tea production, tea harvesting, and quality testing and grading. For disease and pest detection, traditional machine learning methods, deep learning methods, and hyperspectral imaging techniques were discussed, and the commonalities and advantages of related research were compared; as the most mainstream approach at present, the importance of deep learning cannot be underestimated. During harvesting, agricultural robots need to identify and accurately locate targets so that the robotic arm can grasp the target object. Finally, the problems and future research prospects in this field were pointed out. The lack of open-source standard tea bud datasets means that models cannot be trained effectively or transferred to extract image features, resulting in low recognition accuracy. Due to the diversity of outdoor environments, there may be errors in the positioning and recognition of tea-picking robots, which reduce picking accuracy and efficiency. In model training, the configuration of computer hardware also plays an indispensable role: good hardware is a fundamental condition for success, and high-precision, efficient algorithms are equally important. These issues have, to some extent, constrained research on machine vision technology in tea production. Nevertheless, machine vision technology still has great development prospects and is an irreplaceable trend in agriculture; in the future, whether for tea or other crops, machine vision will play an immeasurable role.

Author Contributions

Conceptualization, Z.Y. and W.M.; writing—original draft preparation, Z.Y. and Z.T.; writing—review and editing, W.M., J.L. and K.P.; visualization, Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Sichuan Provincial Science and Technology Plan Project (2022YFG0147) and the Agricultural Science and Technology Innovation Program (ASTIP-CAAS), grants ASTIP-Y2021XK08 and ASTIP2023-34-IUA-10.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Food and Agriculture Organization of the United Nations. International Tea Market: Market Situation, Prospects and Emerging Issues; Food and Agriculture Organization of the United Nations: Rome, Italy, 2022. [Google Scholar]
  2. Pan, R.; Zhao, X.; Du, J.; Tong, J.; Huang, P.; Shang, H.; Leng, Y. A Brief Analysis on Tea Import and Export Trade in China during 2021. China Tea 2022, 44, 6. [Google Scholar]
  3. Parida, B.R.; Mahato, T.; Ghosh, S. Monitoring tea plantations during 1990–2022 using multi-temporal satellite data in Assam (India). Trop. Ecol. 2023; Online ahead of print. [Google Scholar] [CrossRef]
  4. Clarke, C.; Richter, B.S.; Rathinasabapathi, B. Genetic and morphological characterization of United States tea (Camellia sinensis): Insights into crop history, breeding strategies, and regional adaptability. Front. Plant Sci. 2023, 14, 1149682. [Google Scholar] [CrossRef] [PubMed]
  5. Carloni, P.; Albacete, A.; Martínez-Melgarejo, P.A.; Girolametti, F.; Truzzi, C.; Damiani, E. Comparative Analysis of Hot and Cold Brews from Single-Estate Teas (Camellia sinensis) Grown across Europe: An Emerging Specialty Product. Antioxidants 2023, 12, 1306. [Google Scholar] [CrossRef] [PubMed]
  6. Pan, S.Y.; Nie, Q.; Tai, H.C.; Song, X.L.; Tong, Y.F.; Zhang, L.J.F.; Wu, X.W.; Lin, Z.H.; Zhang, Y.Y.; Ye, D.Y.; et al. Tea and tea drinking: China’s outstanding contributions to the mankind. Chin. Med. 2022, 17, 27. [Google Scholar] [CrossRef]
  7. Da Silva Pinto, M. Tea: A new perspective on health benefits. Food Res. Int. 2013, 53, 558–567. [Google Scholar] [CrossRef]
  8. Beringer, T.; Kulak, M.; Müller, C.; Schaphoff, S.; Jans, Y. First process-based simulations of climate change impacts on global tea production indicate large effects in the World’s major producer countries. Environ. Res. Lett. 2020, 15, 034023. [Google Scholar] [CrossRef]
  9. Zhen, D.; Qian, Y.; Zhou, W.; Xu, S.; Sun, B. Discussion on the shortage of Tea picking and Labor in Songyang. China Tea 2017, 39, 2. [Google Scholar]
  10. Wang, H.; Gu, J.; Wang, M. A review on the application of computer vision and machine learning in the tea industry. Front. Sustain. Food Syst. 2023, 7, 1172543. [Google Scholar] [CrossRef]
  11. Zhu, Y.; Ling, Z.; Zhang, Y. Research progress and prospect of machine vision technology. J. Graph. 2020, 41, 20. [Google Scholar]
  12. Chen, X. The Occurrence Trend and Green Control of Tea Diseases in China. China Tea 2022, 44, 7–14. [Google Scholar]
  13. Datta, S.; Gupta, N. A Novel Approach for the Detection of Tea Leaf Disease Using Deep Neural Network. Procedia Comput. Sci. 2023, 218, 2273–2286. [Google Scholar] [CrossRef]
  14. Wang, J. Research on The Identification Methods of Tea Leaf Disease Based on Image Characteristics. Master’s Thesis, Nanjing Agricultural University, Nanjing, China, 2017. [Google Scholar]
  15. Lin, B.; Qiu, X.; He, Y.; Zhu, X.; Zhang, Y. Research on Intelligent Diagnosis and Recognition Algorithms for Tea Tree Diseases. Jiangsu Agric. Sci. 2019, 47, 7. [Google Scholar]
  16. Sun, Y.; Jiang, Z.; Zhang, L.; Dong, W.; Rao, Y. SLIC_SVM based leaf diseases saliency map extraction of tea plant. Comput. Electron. Agric. 2019, 157, 102–109. [Google Scholar] [CrossRef]
  17. Sun, Y.; Jiang, Z.; Dong, W.; Zhang, L.i.; Rao, Y.; Li, S. Image recognition of tea plant disease based on convolutional neural network and small samples. Jiangsu J. Agric. Sci. 2019, 25, 48–55. [Google Scholar]
  18. Hu, G.; Wu, H.; Zhang, Y.; Wan, M. A low shot learning method for tea leaf’s disease identification. Comput. Electron. Agric. 2019, 163, 104852. [Google Scholar] [CrossRef]
  19. Somnath, M.; Munti, P.; Ramen, P.; Debashis, D. Tea leaf disease detection using multi-objective image segmentation. Multimed. Tools Appl. 2020, 80, 753–771. [Google Scholar]
  20. Gensheng, H.; Huaiyu, W.; Yan, Z.; Mingzhu, W. Detection and severity analysis of tea leaf blight based on deep learning. Comput. Electr. Eng. 2021, 90, 107023. [Google Scholar]
  21. Li, Z.; Xu, J.; Zheng, L.; Tie, J.; Tie, J. Small sample recognition method of tea disease based on improved DenseNet. Trans. Chin. Soc. Agric. Eng. 2022, 38, 182–190. [Google Scholar]
  22. Zhang, S.; Wang, Z.; Zou, X.; Qian, Y.; Yu, L. Recognition of tea disease spot based on hyperspectral image and genetic optimization neural network. Trans. Chin. Soc. Agric. Eng. 2017, 33, 8. [Google Scholar]
  23. Lu, B.; Sun, J.; Yang, N.; Wu, X.; Zhou, X. Prediction of Tea Diseases Based on Fluorescence Transmission Spectrum and Texture of Hyperspectral Image. Spectrosc. Spectr. Anal. 2019, 39, 7. [Google Scholar]
  24. Liu, Y.; Lin, X.; Gao, H.; Wang, S.; Gao, X. Research on Tea Cephaleuros Virescens Kunze Model Based on Chlorophy II Fluorescence Spectroscopy. Spectrosc. Spectr. Anal. 2021, 41, 6. [Google Scholar]
  25. Sun, Y. Study of Pest Information for Tea Plant Based on Electronic Nose. Ph.D. Thesis, Zhejiang University, Hangzhou, China, 2018. [Google Scholar]
  26. Zhang, H.; Mao, H.; Qiu, D. Feature extraction for the stored-grain insect detection system based on image recognition technology. Trans. Chin. Soc. Agric. Eng. 2009, 25, 126–130. [Google Scholar]
  27. Liang, W.; Cao, H. Rice Pest Identification Based on Convolutional Neural Network. Jiangsu Agric. Sci. 2017, 45, 4. [Google Scholar]
  28. Yan, Z. Research on Identification Technology of Tea pests Based on Deep Learning. Master’s Thesis, Chongqing University of Technology, Chongqing, China, 2022. [Google Scholar]
  29. Xu, R.; Jin, Z.; Luo, L.; Feng, H.; Fang, H.; Wang, X. Population Monitoring of the Empoasca onukii and Its Control with Audio-optical Emitting Technology. China Tea Process. 2022, 3, 28–33. [Google Scholar] [CrossRef]
  30. Wang, X.; Tang, D. Research Progress on Mechanical Tea Plucking. Acta Tea Sin. 2022, 63, 275–282. [Google Scholar] [CrossRef]
  31. Li, Y.T. Research on the Visual Detection and Localization Technology of Tea Harvesting Robot. Ph.D. Thesis, Zhejiang Sci-Tech University, Hangzhou, China, 2022. [Google Scholar]
  32. Lu, D.; Yi, J. The Significance and Implementation Path of Mechanized Picking of Famous Green Tea in China. China Tea 2018, 40, 1–4. [Google Scholar]
  33. Yang, F.; Yang, L.; Tian, Y.; Yang, Q. Recognition of the Tea Sprout Based on Color and Shape Features. Trans. Chin. Soc. Agric. Mach. 2009, 40, 119–123. [Google Scholar]
  34. Wang, J. Segmentation Algorithm of Tea Combined with the Color and Region Growing. J. Tea Sci. 2011, 31, 72–77. [Google Scholar] [CrossRef]
  35. Wu, X.; Tang, X.; Zhang, F.; Gu, J. Tea buds image identification based on lab color model and K-means clustering. J. Chin. Agric. Mech. 2015, 36, 161–164 + 179. [Google Scholar] [CrossRef]
  36. Sun, X. The Research of Tea Buds Detection and Leaf Diseases Image Recognition Based on Deep Learning. Master's Thesis, Shandong Agricultural University, Taian, China, 2019. [Google Scholar]
  37. Guo, S.; Yoon, S.-C.; Li, L.; Wang, W.; Zhuang, H.; Wei, C.; Liu, Y.; Li, Y. Recognition and Positioning of Fresh Tea Buds Using YOLOv4-lighted + ICBAM Model and RGB-D Sensing. Agriculture 2023, 13, 518. [Google Scholar] [CrossRef]
  38. Lyu, J.; Fang, M.; Yao, Q.; Wu, C.; He, Y.; Bian, L.; Zhong, X. Detection model for tea buds based on region brightness adaptive correction. Trans. Chin. Soc. Agric. Eng. 2021, 37, 278–285. [Google Scholar] [CrossRef]
  39. Gong, T.; Wang, Z.L. A tea tip detection method suitable for tea pickers based on YOLOv4 network. In Proceedings of the 2021 3rd International Symposium on Robotics & Intelligent Manufacturing Technology (ISRIMT), Online, 25–26 September 2021. [Google Scholar]
  40. Chen, M. Recognition and Location of High-Quality Tea Buds Based on Computer Vision. Master's Thesis, Qingdao University of Science and Technology, Qingdao, China, 2019. [Google Scholar]
  41. Xu, F.; Zhang, K.; Zhang, W.; Wang, R.; Wang, T.; Wan, S.; Liu, B.; Rao, Y. Identification and Localization Method of Tea Bud Leaf Picking Point Based on Improved YOLOv4 Algorithm. J. Fudan Univ. Nat. Sci. 2022, 61, 460–471. [Google Scholar] [CrossRef]
  42. Zhang, K. Three-Dimensional Localization of the Picking Point of Houkui Tea in the Natural Environment Based on YOLOv4. Master’s Thesis, Anhui Agricultural University, Hefei, China, 2022. [Google Scholar]
  43. Chen, Y.-T.; Chen, S.-F. Localizing plucking points of tea leaves using deep convolutional neural networks. Comput. Electron. Agric. 2020, 171, 105298. [Google Scholar] [CrossRef]
  44. Wang, T.; Zhang, K.; Zhang, W.; Wang, R.; Wan, S.; Rao, Y.; Jiang, Z.; Gu, L. Tea picking point detection and location based on Mask-RCNN. Inf. Process. Agric. 2023, 10, 267–275. [Google Scholar] [CrossRef]
  45. Yan, L.; Wu, K.; Lin, J.; Xu, X.; Zhang, J.; Zhao, X.; Tayor, J.; Chen, D. Identification and picking point positioning of tender tea shoots based on MR3P-TS model. Front. Plant Sci. 2022, 13, 962391. [Google Scholar] [CrossRef]
  46. Chunyu, Y.; Zhonghui, C.; Zhilin, L.; Ruixin, L.; Yuxin, L.; Hui, X.; Ping, L.; Benliang, X. Tea Sprout Picking Point Identification Based on Improved DeepLabV3+. Agriculture 2022, 12, 1594. [Google Scholar]
  47. Zhang, X. Research on Tea Recognition Method Based on Machine Vision Features for Intelligent Tea Picking Robot. Master’s Thesis, Shanghai Jiaotong University, Shanghai, China, 2020. [Google Scholar]
  48. Wang, F.; Cui, D.; Li, L. Target recognition and positioning algorithm of picking robot based on deep learning. Electron. Meas. Technol. 2021, 44, 162–167. [Google Scholar] [CrossRef]
  49. Yang, S.; Wang, Y.; Guo, H.; Wang, X.; Liu, N. Binocular camera multi-pose calibration method based on radial alignment constraint algorithm. J. Comput. Appl. 2018, 38, 2655–2659. [Google Scholar]
  50. Li, Y.; He, L.; Jia, J.; Lv, J.; Chen, J.; Qiao, X.; Wu, C. In-field tea shoot detection and 3D localization using an RGB-D camera. Comput. Electron. Agric. 2021, 185, 106149. [Google Scholar] [CrossRef]
  51. Chen, Z.; Chen, J.; Li, Y.; Gui, Z.; Yu, T. Tea Bud Detection and 3D Pose Estimation in the Field with a Depth Camera Based on Improved YOLOv5 and the Optimal Pose-Vertices Search Method. Agriculture 2023, 13, 1405. [Google Scholar] [CrossRef]
  52. Chen, D.; Han, W.; Zhou, X.; Wu, K.; Zhang, J. Tea Yield Prediction in Zhejiang Province Based on Adaboost BP Model. J. Tea Sci. 2021, 41, 564–576. [Google Scholar] [CrossRef]
  53. Wang, Y.; Chen, H.; Chen, J.; Wang, H.; Xing, Z.; Zhang, Z. Comparation of rice yield estimation model combining spectral index screening method and statistical regression algorithm. Trans. Chin. Soc. Agric. Eng. 2021, 37, 208–216. [Google Scholar]
  54. Huang, Z. Study on Wheat Yield Estimation Based on Deep Learning Ear Recognition. Master's Thesis, Shandong Agricultural University, Taian, China, 2022. [Google Scholar]
  55. Li, T. Research on Recognition and Counting Method of Tea Buds Based on Deep Learning. Master’s Thesis, Anhui University, Hefei, China, 2021. [Google Scholar]
  56. Xu, H.; Ma, W.; Tan, Y.; Liu, X.; Zheng, Y.; Tian, Z. Yield estimation method for tea based on YOLOv5 deep learning. J. China Agric. Univ. 2022, 27, 213–220. [Google Scholar]
  57. Li, S. Application of Computer Vision in Tea Grade Testing. J. Agric. Mech. Res. 2019, 41, 219–222. [Google Scholar] [CrossRef]
  58. Borah, S.; Bhuyan, M. Quality indexing by machine vision during fermentation in black tea manufacturing. In Proceedings of the Sixth International Conference on Quality Control by Artificial Vision, Gatlinberg, TN, USA, 19–22 May 2003. [Google Scholar]
  59. Gao, D. Research on the Tea Sorting Based on Characteristic of Color and Shape. Master’s Thesis, University of Science and Technology of China, Hefei, China, 2016. [Google Scholar]
  60. Kim, S.; Kwak, J.-Y.; Ko, B. Automatic Classification Algorithm for Raw Materials using Mean Shift Clustering and Stepwise Region Merging in Color. J. Broadcast Eng. 2016, 21, 425–435. [Google Scholar] [CrossRef]
  61. Song, Y.; Xie, H.; Ning, J.; Zhang, Z. Grading Keemun black tea based on shape feature parameters of machine vision. Trans. Chin. Soc. Agric. Eng. 2018, 34, 279–286. [Google Scholar]
  62. Aboulwafa, M.M.; Youssef, F.S.; Gad, H.A.; Sarker, S.D.; Ashour, M.L. Authentication and Discrimination of Green Tea Samples Using UV-Visible, FTIR and HPLC Techniques Coupled with Chemometrics Analysis. J. Pharm. Biomed. Anal. 2018, 164, 653–658. [Google Scholar] [CrossRef]
  63. Chen, L.; Xu, B.; Zhao, C.; Duan, D.; Cao, Q.; Wang, F. Application of Multispectral Camera in Monitoring the Quality Parameters of Fresh Tea Leaves. Remote Sens. 2021, 13, 3719. [Google Scholar] [CrossRef]
  64. Wickramasinghe, N.; Ekanayake, E.M.S.L.B.; Wijedasa, M.A.C.S.; Wijesinghe, A.; Madhujith, T.; Ekanayake, M.P.; Godaliyadda, G.M.R.; Herath, V. Validation of Multispectral Imaging for The Detection of Sugar Adulteration in Black Tea. In Proceedings of the 2021 10th International Conference on Information and Automation for Sustainability (ICIAfS), Negambo, Sri Lanka, 11–13 August 2021; pp. 494–499. [Google Scholar]
  65. Huang, H.; Chen, X.; Han, Z.; Fan, Q.; Zhu, Y.; Hu, P. Tea Buds Grading Method Based on Multiscale Attention Mechanism and Knowledge Distillation. Trans. Chin. Soc. Agric. Mach. 2022, 53, 399–407 + 458. [Google Scholar]
  66. Cao, Q.; Yang, G.; Wang, F.; Chen, L.; Xu, B.; Zhao, C.; Duan, D.; Jiang, P.; Xu, Z.; Yang, H. Discrimination of tea plant variety using in-situ multispectral imaging system and multi-feature analysis. Comput. Electron. Agric. 2022, 202, 107360. [Google Scholar] [CrossRef]
  67. Zhang, J.; Li, Z. Intelligent Tea-Picking System Based on Active Computer Vision and Internet of Things. Secur. Commun. Netw. 2021, 2021, 5302783. [Google Scholar] [CrossRef]
  68. Lu, J.; Yang, Z.; Sun, Q.; Gao, Z.; Ma, W. A Machine Vision-Based Method for Tea Buds Segmentation and Picking Point Location Used on a Cloud Platform. Agronomy 2023, 13, 1537. [Google Scholar] [CrossRef]
  69. Villa-Henriksen, A.; Edwards, G.T.C.; Pesonen, L.A.; Green, O.; Sørensen, C.A.G. Internet of Things in arable farming: Implementation, applications, challenges and potential. Biosyst. Eng. 2020, 191, 60–84. [Google Scholar] [CrossRef]
Figure 1. Machine vision system.
Figure 2. The principle of disease and pest detection using machine vision.
Figure 3. Different pest control devices: (a) pest monitoring system (PMS); (b) targeted pest control system (TPCS).
Figure 4. Application of machine vision in tea harvesting.
Figure 5. Tea bud images with different light regimes: (a) backlight; (b) direct sunlight; (c) occlusion.
Table 2. Comparison of three different methods.

| Camera Type | RGB Binocular Camera | Structured Light | TOF |
| --- | --- | --- | --- |
| Mainstream brands | Dajiang, Oni | Kinect V1, Realsense | Kinect V2, V3 |
| Working principle | Passive: triangulation based on matching of RGB image features | Active: projects known encoded light patterns to improve feature matching | Active: direct measurement based on the time of flight of infrared lasers |
| Measuring range | 0.1 m~20 m (the farther the distance, the lower the accuracy) | 0.1 m~20 m (the farther the distance, the lower the accuracy) | 0.1 m~20 m (the farther the distance, the lower the accuracy) |
| Measurement accuracy | 0.01 mm~1 mm | 0.01 mm~1 mm | Up to centimeter level |
| Environmental limitations | Sensitive to large changes in light intensity and object texture; cannot be used at night | Works well indoors; affected to some extent by strong outdoor light | Unaffected by changes in lighting and object texture, but affected by reflective objects |
| Resolution | Up to 2K | Up to 1280 × 720 | Usually 512 × 424 |
| Frame rate | 1 to 90 FPS | 1 to 30 FPS | Up to hundreds of FPS |
| Software complexity | Higher | Medium | Higher |
| Power consumption | Lower | Medium | High |

Share and Cite

MDPI and ACS Style

Yang, Z.; Ma, W.; Lu, J.; Tian, Z.; Peng, K. The Application Status and Trends of Machine Vision in Tea Production. Appl. Sci. 2023, 13, 10744. https://doi.org/10.3390/app131910744


