Article

Research on 3D Phenotypic Reconstruction and Micro-Defect Detection of Green Plum Based on Multi-View Images

Xiao Zhang, Lintao Huo, Ying Liu, Zilong Zhuang, Yutu Yang and Binli Gou
1 Jiangsu Co-Innovation Center of Efficient Processing and Utilization of Forest Resources, College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China
2 Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China
* Author to whom correspondence should be addressed.
Forests 2023, 14(2), 218; https://doi.org/10.3390/f14020218
Submission received: 1 December 2022 / Revised: 5 January 2023 / Accepted: 22 January 2023 / Published: 23 January 2023

Abstract

Rain spots on green plums are superficial micro-defects. Defect detection based on two-dimensional images is easily influenced by factors such as placement position and lighting and is prone to misjudgment and omission, which are the main problems affecting the accuracy of defect screening of green plums. In this paper, using computer vision technology, an improved structure from motion (SFM) and patch-based multi-view stereo (PMVS) algorithm based on similar graph clustering and graph matching is proposed to perform three-dimensional sparse and dense reconstruction of green plums. The results show that, compared with traditional algorithms, the running time of this algorithm is lower, at only 26.55 s, and the mean values of the camera optical center error and pose error are 0.019 and 0.631, respectively. This method achieves a reconstruction accuracy high enough to meet the subsequent plum micro-defect detection requirements. For the dense point cloud model of green plums, after point cloud preprocessing, an improved adaptive segmentation algorithm based on the Lab color space achieves effective segmentation of the green plum micro-defect point cloud. The experimental results show that the average running time of the improved adaptive segmentation algorithm is 2.56 s, giving a faster segmentation speed and better effect than the traditional K-means and K-means++ algorithms. After clustering the micro-defect point cloud, the micro-defect information of green plums is extracted on the basis of random sample consensus (RANSAC) plane fitting, which provides a theoretical model for further improving the accuracy of sorting green plums by appearance quality.

1. Introduction

Detecting the surface defects of small-sized fruits such as green plums, jujubes, and nectarines on the basis of two-dimensional images is easily influenced by factors such as image acquisition angle, light changes, and lens distortion. It is sometimes difficult to distinguish superficial micro-defects (rain spots, scars, pits, etc.), resulting in misjudgment and omission when sorting fruit by appearance quality, which is the main problem affecting the accuracy of fruit defect screening. Keresztes et al. [1] took apples as the research object, applied a particle filter algorithm for post-image processing, and constructed a pixel-based PLS-DA model; the experimental results showed that the intact recognition rate of apples was 100%, and the damage recognition rate was 98%. Lü, Wang, and other scholars [2,3] carried out nondestructive testing based on two-dimensional images for insect damage and fine damage to kiwifruit and jujube. The experimental results showed that the recognition rate of intact jujube samples was 98%, while that of pest-damaged samples was only 94%; the recognition error of fine scratches on kiwifruit was 14.5%, and the recognition rate of defective samples was much lower than that of intact samples. Three-dimensional reconstruction technology can display the three-dimensional structure of small fruit in detail. For example, Wang et al. [4] used RGB-D technology based on Lab color information to segment mango using the Otsu segmentation method and estimated mango size through depth information. Nguyen et al. [5] used RGB-D to collect 3D point cloud data of apples, segmented them using the Euclidean clustering method, and transformed them into two-dimensional images to recognize overlapping fruits. Wu [6] used a 3D laser scanner to collect point cloud information from four sides of jujube; after preprocessing such as point cloud filtering and outlier elimination, the point cloud model of jujube was established, and the volume and surface area of jujube were calculated using the convex hull method, triangular grid method, and projection slice method. When the rich information contained in a three-dimensional dense point cloud is used to detect micro-defects on the surface of small fruit, the errors caused by the 2D image acquisition angle, light changes, and lens distortion can largely be avoided, the three-dimensional information of the small fruit can be displayed in detail, and detailed information about the fruit surface can be extracted through point cloud segmentation. However, there is currently little research on the detection of micro-defects on small fruit surfaces based on a 3D point cloud reconstruction model. We therefore propose a feasible theoretical method to build a 3D model and extract the micro-defect information of a single green plum based on multi-angle images, which provides a research basis for further improving the accuracy of sorting small fruit by appearance quality.
At present, 3D reconstruction technology has been widely used in fields such as dynamic face verification, urban scene classification, forest environment detection, and 3D measurement of objects [7,8,9,10]. The SFM (structure from motion) algorithm is a three-dimensional reconstruction algorithm that determines the spatial and geometric relationship of the target from the movement of the camera shooting position; it is a very important algorithm in the field of 3D reconstruction [11,12,13]. Gomez-Donoso et al. [14] used SFM technology to accomplish 3D reconstruction, which overcame the overlapping errors of 2D image recognition and achieved the highest classification accuracy of 98% for actual pedestrians, billboards, printed advertisements, and posts. Gobbi et al. [15] used SFM technology to model forests in three dimensions and derive ecological indicators describing forest conditions. Li et al. [16] proposed a fast and integrated 3D reconstruction method for small static objects using laser scanning and the SFM algorithm; experimental results showed that this method improves the speed, accuracy, completeness, and visual effect of 3D reconstruction of small objects. Yang et al. [17] obtained the key features of a crop area using an algorithm based on a vegetation index and scale-invariant feature transform (SIFT) matching. The comparison between the obtained phenotypic parameters and the manual measurement results showed that the root-mean-square error (RMSE) values of the plant height, leaf number, leaf length, and leaf angle were 1.82, 1.57, 2.43, and 4.7, respectively, and the measurement accuracy of each index was greater than 80%; this study provides a convenient and fast method for extracting three-dimensional phenotypic parameters of crops. Institutions and scholars have also carried out research on 3D defect feature extraction, segmentation, and detection [18]. For example, Borsu et al. [19] used feature extraction and classification techniques to accurately locate the position and deformation type of defects in the three-dimensional grid of a car body surface and marked the deformation positions; experiments showed that the reliable marking of the deformation area had sufficient accuracy. Tang et al. [20] proposed three surface flatness defect detection algorithms and applied them to two different scanners based on the time of flight (TOF) and amplitude modulation continuous wave (AMCW) principles; the results showed that the scanning distance and angular resolution greatly affect the defect detection accuracy. Marani et al. [21] proposed a three-dimensional reconstruction system for high-precision surface defect detection, which identified surface defects of small objects such as coins and drills and solved the problem of occlusion; experiments showed that the system has high detection accuracy and can reach a resolution of 15 μm. The above studies show that such detection methods are accurate and effective, but there are few studies on fruit surface micro-defect detection based on a 3D point cloud reconstruction model.
To address the misjudgment and omission in micro-defect detection on the surface of small-sized fruits such as green plums caused by the two-dimensional image acquisition angle, light changes, and lens distortion, and to improve the accuracy of sorting green plums by appearance quality, this paper proposes an improved SFM and PMVS algorithm based on similar graph clustering and graph matching to perform three-dimensional sparse and dense reconstruction of fresh green plums from a multi-view two-dimensional image sequence. A reconstruction accuracy high enough to meet the requirements of subsequent green plum micro-defect (rain spot) detection is obtained. On the basis of the dense point cloud model of the green plum, an improved adaptive segmentation algorithm based on the Lab color space is proposed to achieve effective segmentation of micro-defects on the green plum; this method greatly reduces the running time of the algorithm and obtains good segmentation results. Based on random sample consensus (RANSAC) plane fitting, the rain spot defect information of the green plum is extracted, which provides a theoretical model for further improving the accuracy of sorting green plums by appearance quality.

2. Materials and Methods

2.1. Test Sample

In May 2022, 100 samples of green plums were purchased online from Yunnan Province, China. The surface defects of the green plums were diverse. According to severity, the green plums were divided into five categories: rot (the highest severity), crack, scar, rain spot, and intact. Among them, rot, crack, and scar defects were the most obvious, while rain spot defects were dispersed, shallow, and small, as shown in Figure 1. Green plums with typical rain spot defects were selected for multi-view image sequence acquisition so that the experimental samples were representative.

2.2. Experimental Platform

The research adopted a self-built green plum image acquisition platform, as shown in Figure 2; the test device consisted of an industrial camera, a rotary table, a light source, a black box, etc. The camera was an MV-EM510C industrial camera with a 4.0 mm focal length lens. The pixel size of the camera was 3.45 μm × 2.2 μm, the frame rate was 15 frames per second, and the exposure time was 30–5,000,000 μs. The camera was fixed to the dark box frame through a clamp, facing the center of the rotary table horizontally at a distance of about 150 mm from the measured object. The light source consisted of two LED lights. The rotary table speed was 60 s per circle, and the rotation angle was 360°. A steel needle with a length of 50 mm, painted white, was fixed at the center of the rotary table. The system was connected to a computer via Ethernet.

2.3. Imaging

Zhang Zhengyou's calibration method [22] was used to calibrate the camera; this method is simple to operate and highly precise. When collecting images, uniform illumination and sufficient light should be ensured. A preliminary test showed that, when collecting information on a single green plum, the monochromatic color and limited shape features of the green plum yielded very few detected feature points, so the subsequent SFM reconstruction performed extremely poorly and was prone to failure. Adding a feature-rich calibration plate gave the best reconstruction effect. Image capture used a soft trigger combined with timer control, and a whiteboard placed in the black box was used to reduce interference from the complex environment. The distance between the lens and the green plums was 150 mm, and the height of the main optical axis was 200 mm. During image acquisition, the fruit pedicle of the green plum sample was first placed on the steel needle tip, the rotation was started, and the light source and focal length were adjusted; when the sample was clearly displayed in front of the camera, acquisition was started. The acquisition interval angle was set to 10°, the angle range was 0–360°, and the speed of the rotary table was 30 s/r. An external MCU was used to send a pulse signal with a frequency of 1.2 Hz to trigger the camera to collect images, and the current frame data were saved in the camera buffer. Figure 3 shows a set of 36 rain spot green plum images taken from multiple angles. Each image was 3840 × 2160 pixels, and the image storage format was JPG.
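For readers who want to reproduce the calibration step, the sketch below shows how a chessboard-based calibration in the spirit of Zhang's method [22] is commonly performed with OpenCV. The board geometry, square size, and image paths are illustrative assumptions and are not taken from the paper.

```python
import glob
import cv2
import numpy as np

# Assumed chessboard geometry and image location (illustrative only).
BOARD_COLS, BOARD_ROWS = 9, 6          # inner corners per row/column
SQUARE_SIZE_MM = 5.0                   # physical square size

# 3D coordinates of the board corners in the board frame (Z = 0 plane).
objp = np.zeros((BOARD_ROWS * BOARD_COLS, 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_COLS, 0:BOARD_ROWS].T.reshape(-1, 2) * SQUARE_SIZE_MM

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (BOARD_COLS, BOARD_ROWS))
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix K, distortion coefficients, and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS error:", rms)
```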

2.4. Computing Environment

The computing environment was as follows: Windows 10 64-bit, an Intel Core i7-8750H 2.20 GHz CPU, 32 GB of RAM, and an NVIDIA GTX 1070 GPU with 8 GB of graphics memory. In terms of software, the OpenMVG platform was used to build the SFM framework, Ncut was used to cluster similar images, and the Ceres Solver library was used to optimize the bundle adjustment. Point cloud processing and segmentation were completed with the large cross-platform open-source C++ Point Cloud Library (PCL) under Microsoft Visual Studio and the Robot Operating System. The results were visualized with MeshLab and CloudCompare software.

3. Construction of the 3D Point Cloud of Green Plum

Because the imaged scene is small, the multi-view images of the green plum have low parallax between views and highly repetitive information. The coarse feature matching in traditional SFM reconstruction therefore produces a large number of repeated matches, and the mismatched points must be eliminated and optimized, resulting in a long running time. To exploit the low-parallax characteristics of the image dataset, this paper clustered similar graphs using the Fisher vector and reduced repetitive and unnecessary matching through parallel computation of intraclass matching and interclass fusion. On this basis, the dense reconstruction method was used to obtain the three-dimensional point cloud model of the green plum. In this paper, the traditional incremental SFM 3D point cloud reconstruction framework was improved, and the improved 3D reconstruction algorithm framework is shown in Figure 4.
After the feature points were detected using the SIFT operator, similarity graph clustering and graph matching were used. Firstly, the similarity was determined by Fisher vector coding to classify the images; then, the matching graph was generated in each image class, and the image sequence trajectory was constructed. According to the matching graph results, the appropriate initialization image pairs were selected, and the global optimal camera pose parameters were found under the iterative optimization of distributed bundle adjustment. Then, the feature points were triangulated to obtain the sparse model of the three-dimensional point cloud, and the PMVS [23,24,25] algorithm was used for dense reconstruction, thereby improving the efficiency of the three-dimensional reconstruction process of plum from coarse to fine, as well as reducing the running time of the algorithm and the memory occupied by the computer during operation.

3.1. The 3D Reconstruction of Plum Based on Similarity Graph Clustering

3.1.1. Feature Point Detection and Similarity Graph Clustering

To reconstruct green plums in three dimensions using computer vision technology, green plum images from different angles in a continuous sequence must be obtained. The key points on each image are called feature points, and image feature point detection is one of the cores of image processing. In this paper, SIFT was used to detect the feature points of the images; one green plum image is represented as a set of 128-dimensional feature vectors by the SIFT algorithm. The feature description algorithm mainly includes scale-space extremum detection, accurate key point localization, key point orientation assignment, and feature vector generation.
After using the SIFT operator to detect feature points and compute feature descriptors, this paper used a similarity judgment to divide the image set before constructing matches. Using the distance between Fisher vectors to judge similarity yields higher efficiency and better results [26]. A set of SIFT features $X = \{x_1, x_2, \ldots, x_T\}$ is extracted from each image; the number of feature points extracted from different pictures differs, so the sets have different sizes. Each descriptor is a uniform 128-dimensional SIFT feature vector, and normalization eliminates the impact of light changes, so the similarity of key points in two images can be judged in a unified form. Since the feature points are independent of each other and obey a Gaussian mixture distribution, the likelihood function can be defined by Equation (1), with the mixture model and Fisher vector given by Equations (2) and (3).
$$ L(X \mid \lambda) = \log P(X \mid \lambda) = \sum_{t=1}^{T} \log P(x_t \mid \lambda) \qquad (1) $$
Here, $P(x_t \mid \lambda)$ is a weighted mixture of $K$ Gaussian distributions.
$$ P(x_t \mid \lambda) = \sum_{i=1}^{K} w_i P_i(x_t \mid \lambda) \qquad (2) $$
The partial derivatives of the likelihood function with respect to the weight $w_i$, mean $\mu_i$, and variance $\sigma_i$ are calculated and then normalized to form the Fisher vector.
$$ \Phi_X = \left[ f_{w_i}^{-1/2} \frac{\partial L(X \mid \lambda)}{\partial w_i},\; f_{\mu_i^d}^{-1/2} \frac{\partial L(X \mid \lambda)}{\partial \mu_i^d},\; f_{\sigma_i^d}^{-1/2} \frac{\partial L(X \mid \lambda)}{\partial \sigma_i^d} \right] \qquad (3) $$
A smaller Fisher distance corresponds to a higher similarity between two images. Accordingly, a similarity graph $C_i = (v_i, \varepsilon_i)$ is constructed, where $\varepsilon_i$ represents the weight of the connection edge between image pairs, obtained as the difference between the global extremum of the Fisher distance and the distance of the image pair $v_i$, so that the weight is positively correlated with the similarity [27]. By using the multi-cluster image clustering method to construct similar image clusters [28], bad matching pairs can be detected through loop consistency, which avoids the cumulative error caused by wrong matches hidden in long paths [29].
$$ \{ C_k \mid C_k = (v_k, \varepsilon_k) \} \qquad (4) $$
When constructing an image cluster using Equation (4), two constraints need to be satisfied:
(a)
The number of images $|v_i|$ in each image cluster must not exceed the upper bound $\Delta_{up} = 12$;
(b)
Associated images between image clusters must have an overlap rate $\varepsilon_k \geq 0.7$.
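The following sketch illustrates one way to realize the Fisher-vector coding of Equations (1)–(3) and the Fisher-distance similarity used to build the graph, with a diagonal-covariance GMM fitted by scikit-learn. The GMM size, the restriction to mean gradients, and the normalization details are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(all_descriptors, k=16):
    """Fit a diagonal-covariance GMM to pooled SIFT descriptors (T x 128)."""
    return GaussianMixture(n_components=k, covariance_type="diag",
                           max_iter=200, random_state=0).fit(all_descriptors)

def fisher_vector(descriptors, gmm):
    """Mean-gradient Fisher vector of one image's SIFT set (in the spirit of Eqs. (1)-(3))."""
    X = np.atleast_2d(descriptors).astype(np.float64)
    T = X.shape[0]
    gamma = gmm.predict_proba(X)                              # soft assignments, T x K
    mu, sigma, w = gmm.means_, np.sqrt(gmm.covariances_), gmm.weights_
    # Normalized gradient of the log-likelihood w.r.t. the Gaussian means.
    diff = (X[:, None, :] - mu[None, :, :]) / sigma[None, :, :]   # T x K x D
    G_mu = (gamma[:, :, None] * diff).sum(axis=0) / (T * np.sqrt(w)[:, None])
    fv = G_mu.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                    # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)                  # L2 normalization

def fisher_distance(fv_a, fv_b):
    """Smaller distance -> more similar image pair."""
    return np.linalg.norm(fv_a - fv_b)
```

Pairs with a small Fisher distance receive a large edge weight in the similarity graph, which is then partitioned into clusters subject to constraints (a) and (b).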

3.1.2. Incremental SFM Sparse Reconstruction

The incremental structure from motion was computed for each image class and its matching graph obtained from the similarity step, which mainly included (1) calculating the fundamental matrix, (2) merging tracks, (3) finding the initial image pair, (4) calculating the camera pose and triangulating the coordinates, and (5) minimizing the reprojection error. The computation was repeated using distributed bundle adjustment to obtain more 3D points and globally better camera poses $t$, $R$. The loop was iterated until all images in the trajectory had been processed, completing the incremental motion recovery structure. A minimum spanning tree was used to fuse the $t$ and $R$ corresponding to different image classes so as to obtain the global sparse reconstruction result of the green plum.
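A minimal sketch of steps (3) and (4), the initial-pair pose recovery and triangulation, is given below using OpenCV. It assumes the intrinsic matrix K from calibration and matched pixel coordinates from the ratio-test matching, and recovers the relative pose via the essential matrix; it is not a reproduction of the authors' full incremental pipeline.

```python
import cv2
import numpy as np

def initialize_pair(pts1, pts2, K):
    """Recover the relative pose (R, t) of an initial image pair and triangulate points.

    pts1, pts2: matched pixel coordinates (N x 2, float) from the two views.
    K: 3 x 3 intrinsic matrix obtained from calibration.
    """
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # Projection matrices: first camera at the origin, second at (R, t).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T        # homogeneous -> Euclidean
    return R, t, pts3d[mask.ravel() > 0]
```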

3.1.3. Dense Point Cloud Reconstruction

In order to meet the requirements of the subsequent analysis of the defect point cloud data and of defect segmentation, the sparse point cloud needs to be densified; the dense 3D point cloud model was obtained using the patch-based PMVS algorithm. Each feature point in the sparse point cloud was projected onto the source images, and the geometric information of that feature point was then diffused to the surrounding area; points that were repeatedly diffused were screened out by consistency comparison.

3.2. Green Plum Point Cloud Reconstruction Results and Analysis

3.2.1. Comparison of Feature Point Matching Results with Different Thresholds

SIFT was selected for feature point detection. The feature points were matched according to the Euclidean distance between the descriptors of two feature points: a matching pair was accepted when it had the shortest distance and the ratio of the nearest distance to the second-nearest distance was less than the set threshold H. The number of matches should be moderate: too many matches result in a longer computation time and a larger reconstruction error, while too few result in poor model accuracy. The error rate of matching should also be as small as possible, so the error rates of matching pairs under different threshold values H were compared to select the best threshold. As shown in Table 1, a threshold value of 0.7 gave the best results across all indicators combined.
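The ratio criterion described here corresponds to Lowe's nearest/second-nearest distance test; a minimal OpenCV sketch is given below, with the default H = 0.7 matching the threshold selected in Table 1.

```python
import cv2

def match_with_ratio_test(img_a, img_b, H=0.7):
    """SIFT matching with the nearest/second-nearest distance ratio threshold H."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)     # two nearest neighbours per descriptor

    # Keep a match only when it is clearly better than its second-nearest alternative.
    good = [m for m, n in knn if m.distance < H * n.distance]
    return kp_a, kp_b, good
```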

3.2.2. Comparison of the Results of Different SFM Reconstruction Algorithms

In order to verify the improved algorithm proposed in this paper, the green plum image sequence (36 images) acquired above was input into Bundler [30], Visual SFM [31,32], and the improved algorithm proposed in this paper, and the total running time of the algorithm $T_{total}$, the mean camera range (optical center) error $\overline{RE}$, and the mean camera pose error $\overline{PE}$ were taken as the experimental indices for evaluating efficiency and accuracy. Following Section 3.1.1, $\Delta_{up} = 12$ and $\varepsilon_k \geq 0.7$; the experimental results are shown in Table 2. From the experimental results, it can be seen that the algorithm proposed in this paper had the lowest running time and the best results in terms of $\overline{PE}$ and $\overline{RE}$.
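The paper does not spell out the exact formulas behind $\overline{RE}$ and $\overline{PE}$; one plausible computation, assuming reference camera centers and orientations are available, is the mean Euclidean distance between optical centers and the mean rotation angle between orientations, as sketched below.

```python
import numpy as np

def mean_center_error(C_est, C_ref):
    """Mean Euclidean distance between estimated and reference camera centers."""
    C_est, C_ref = np.asarray(C_est), np.asarray(C_ref)
    return float(np.mean(np.linalg.norm(C_est - C_ref, axis=1)))

def mean_pose_error(R_est, R_ref):
    """Mean rotation angle (radians) between estimated and reference orientations."""
    errs = []
    for Re, Rr in zip(R_est, R_ref):
        cos_theta = (np.trace(Re @ Rr.T) - 1.0) / 2.0
        errs.append(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return float(np.mean(errs))
```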

3.2.3. Green Plum Sparse and Dense Reconstruction Results

The sparse reconstruction result obtained with the improved SFM is shown in Figure 5b, and the dense reconstruction result obtained with the PMVS algorithm is shown in Figure 5c. A feature-rich calibration plate was added to ensure the best reconstruction effect. A comparison of the sparse and dense reconstructed point cloud models of the green plum is shown in Table 3.

4. Adaptive Segmentation for Green Plum Micro-Defect (Rain Spot) Detection Based on the Dense Point Cloud

In this paper, the point cloud was preprocessed to remove the background and other interference areas, an improved adaptive segmentation method based on the Lab color space was used to achieve effective segmentation and clustering of the green plum rain spot point cloud, and the defect information of the green plum rain spot was extracted on the basis of RANSAC plane fitting. The details are shown in Figure 6.

4.1. Point Cloud Preprocessing

In order to extract the green plum point cloud, the point cloud was first converted from the RGB color space to the Lab color space, which corresponds more closely to human visual perception. Due to the influence of the experimental light environment, the color intensity values of the reconstructed point cloud fluctuated greatly. The distributions of the background point cloud and the green plum point cloud in the RGB color space are shown in Figure 7a, and their distributions in the Lab color space are shown in Figure 7b. There was a large overlap between the two, so it was impossible to accurately distinguish them using threshold segmentation based on color features alone.
A support vector machine (SVM) was used as the binary classifier, with the radial basis function (RBF) kernel selected to handle the nonlinear case; its hyperparameters can be adjusted more simply and quickly than those of a polynomial kernel. The normalized data matrix was used as the input to the SVM model, and the point categories were used as the output. The training and test results are shown in Table 4.
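A compact sketch of this classification step is shown below: per-point RGB colors are converted to Lab and an RBF-kernel SVM separates green plum points from background points. The use of scikit-image/scikit-learn and a StandardScaler as the normalization step are assumptions for illustration, not the authors' exact tooling.

```python
import numpy as np
from skimage.color import rgb2lab
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def rgb_points_to_lab(colors_rgb):
    """Convert per-point RGB colours (N x 3, values 0-255) to Lab."""
    rgb = np.asarray(colors_rgb, dtype=np.float64).reshape(-1, 1, 3) / 255.0
    return rgb2lab(rgb).reshape(-1, 3)

def train_background_classifier(colors_rgb, labels):
    """labels: 1 = green plum point, 0 = background point (annotated subset)."""
    lab = rgb_points_to_lab(colors_rgb)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(lab, labels)
    return clf
```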
In the training set, the classification accuracy of the SVM in the RGB color space was 99.7%, while that in the Lab color space was 98.7%; thus, better results were obtained in the RGB color space. However, in the test set, the classification accuracy in the RGB space was 99.16%, while the classification accuracy in the Lab color space was 99.4%. The results indicate that converting from RGB to the Lab color space provides a wider color gamut and better color representation. From the test results, it can be seen that the misclassified points mainly involved green plum points being misclassified as background points. Given the rich color variation on the surface of green plums, the Lab color space had a higher generalization ability than RGB. The results before and after background segmentation are shown in Figure 8a,b.

4.2. Rain Spot Defect Information Extraction

An improved adaptive segmentation algorithm based on the Lab color space is proposed to effectively segment the point clouds of rain spot defects of the green plum. To prevent the initial clustering centers randomly selected by the K-means algorithm from being too close to each other, this paper iteratively obtained global initial clustering centers with an improved maximum-minimum distance method. Firstly, the two data points farthest apart are selected; however, the traditional maximum-minimum distance method needs to traverse the Euclidean distances between every pair of points in the dataset $X(n)$ before the two most distant points can be selected as the initial clustering centers [33,34]. In this paper, two methods with time complexity $O(N^2)$ were used to determine the farthest initial points.
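A sketch of a maximum-minimum distance initialization is given below. To keep it short, the farthest initial pair is approximated by scanning from the centroid instead of an exhaustive pairwise search; this shortcut is an assumption and not necessarily the authors' exact improvement.

```python
import numpy as np

def max_min_distance_init(X, k):
    """Select k well-separated initial centers by the maximum-minimum distance rule.

    The first two centers approximate the farthest point pair; each further
    center maximizes its minimum distance to the centers already chosen.
    Assumes k >= 2.
    """
    X = np.asarray(X, dtype=np.float64)
    # First center: the point farthest from the data centroid.
    c0 = X[np.argmax(np.linalg.norm(X - X.mean(axis=0), axis=1))]
    # Second center: the point farthest from the first one.
    c1 = X[np.argmax(np.linalg.norm(X - c0, axis=1))]
    centers = [c0, c1]
    while len(centers) < k:
        d_min = np.min(
            np.stack([np.linalg.norm(X - c, axis=1) for c in centers]), axis=0)
        centers.append(X[np.argmax(d_min)])     # farthest from all chosen centers
    return np.array(centers)
```

The returned centers can then seed K-means (for example, scikit-learn's KMeans with init set to these centers) so that the subsequent iterations start from well-separated seeds.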
Using the global initial clustering centers, with the clustering evaluated by the Davies–Bouldin index (DBI), the points on the surface of the green plum were classified to realize the segmentation of the rain spot defects. The DBI combines the inter-class distance and intra-class dispersion, i.e., the ratio of the sum of intra-cluster distances to the inter-cluster distance [35,36].
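One way to apply the DBI is to sweep candidate cluster counts and keep the one with the lowest index; the sketch below uses scikit-learn's davies_bouldin_score and plain K-means for brevity, whereas the paper combines the DBI with the initialization described above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def choose_k_by_dbi(X, k_range=range(2, 8)):
    """Pick the cluster count that minimizes the Davies-Bouldin index."""
    best_k, best_dbi = None, np.inf
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        dbi = davies_bouldin_score(X, labels)   # lower = better separated, more compact
        if dbi < best_dbi:
            best_k, best_dbi = k, dbi
    return best_k
```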
(1)
Plane fitting based on RANSAC
RANSAC was used to perform plane fitting on each cluster of rain spot defects, with the number of iterations $k$ and the error tolerance range $dis_{error}$ set in advance. The number of iterations ensures that the RANSAC algorithm performs enough rounds to find the best model estimate, and the error tolerance range limits how far points may lie from the predicted model [37,38,39,40].
The two parameters, the iteration number $k$ and the error tolerance range $dis_{error}$, were determined using the control variable method. According to a comprehensive analysis of segmentation accuracy, completeness, and segmentation quality, the optimal error tolerance threshold of the model was 0.007. With the threshold fixed at 0.007, the number of iterations was then varied, and the fitting results were found to stabilize once the number of iterations reached eight. After this iterative optimization, RANSAC was applied to perform plane fitting on each cluster of rain spots, as shown in Figure 9a,b, and the point clouds corresponding to each cluster were projected onto the fitted plane along the normal direction, as shown in Figure 9c,d.
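A self-contained RANSAC plane fit over a rain-spot cluster might look as follows; the defaults reflect the parameters selected in the text (eight iterations, tolerance 0.007), while the sampling and scoring details are generic RANSAC rather than the authors' specific implementation.

```python
import numpy as np

def ransac_plane(points, k=8, dis_error=0.007, rng=np.random.default_rng(0)):
    """Fit a plane n.p + d = 0 to a rain-spot cluster with RANSAC.

    k: number of iterations; dis_error: point-to-plane tolerance.
    Returns the best (normal, d) model and a boolean inlier mask.
    """
    pts = np.asarray(points, dtype=np.float64)
    best_inliers, best_model = None, None
    for _ in range(k):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(pts @ normal + d)       # point-to-plane distances
        inliers = dist < dis_error
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```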
The point clouds were decentered and rotated to the XOY plane, as shown in Figure 10a,b. The experimental results show that the average error of the Z-axis coordinates of the point cloud after rotation to the XOY plane was 0.0152, which meets the actual needs.
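The decentering and rotation step can be sketched as follows, with the rotation that aligns the fitted plane normal with the Z-axis built from Rodrigues' formula; the residual Z values after the transform then play the role of the reported average Z-axis coordinate error.

```python
import numpy as np

def rotate_to_xoy(points, normal):
    """Decentre a rain-spot cluster and rotate it so the fitted plane normal
    aligns with the Z-axis; the residual Z values then measure how far the
    points deviate from the fitted plane."""
    pts = np.asarray(points, dtype=np.float64)
    pts = pts - pts.mean(axis=0)                   # decentre
    n = np.asarray(normal, dtype=np.float64)
    n = n / np.linalg.norm(n)

    # Rotation mapping the plane normal onto the Z-axis (Rodrigues' formula).
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)
    s, c = np.linalg.norm(v), float(n @ z)
    if s < 1e-12:                                  # normal already along Z
        return pts
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    R = np.eye(3) + vx + vx @ vx * ((1.0 - c) / s ** 2)
    return pts @ R.T

# Illustrative usage (cluster_points and plane_normal are hypothetical inputs):
# flat = rotate_to_xoy(cluster_points, plane_normal)
# print(np.mean(np.abs(flat[:, 2])))   # average Z-axis coordinate error
```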
(2)
Green Plum Rain Spot Defect Roundness and Area
Rain spot defects are usually small, roughly circular patches, while cracks and damage defects are irregular; to prevent other types of defects from being extracted, screening is needed. In this paper, the B-spline interpolation method was used to fit a polygon to the defect contour. According to the definition of roundness, a perfect circle has a roundness of 1, and the closer the contour value is to 1, the closer the contour is to a circle. The curve contour was converted into sampling points with a set sampling density, and the distances between consecutive sampling points were summed to approximate the length of the curve. The areas of the triangles formed by the sampling points were calculated using Heron's formula and summed to obtain the total area enclosed by the curve contour.
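A sketch of the contour fitting and measurement is given below using SciPy's periodic B-spline routines. Note two assumptions: splprep controls curve complexity through its smoothing factor rather than an explicit control-point count, and roundness is computed as 4πA/P², a standard circularity definition that reproduces the roundness values in Table 5 from the reported perimeters and areas.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def contour_metrics(boundary_xy, order=3, n_samples=80):
    """Fit a closed B-spline to the defect contour, then estimate perimeter,
    area (triangle fan + Heron's formula), and roundness = 4*pi*A / P**2."""
    xy = np.asarray(boundary_xy, dtype=np.float64)
    # Periodic (closed) B-spline through the contour points; the smoothing
    # factor s is an assumption controlling curve complexity.
    tck, _ = splprep([xy[:, 0], xy[:, 1]], per=True, k=order, s=len(xy))
    u = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    px, py = splev(u, tck)
    pts = np.column_stack([px, py])

    # Perimeter: sum of distances between consecutive sampling points.
    nxt = np.roll(pts, -1, axis=0)
    perimeter = float(np.linalg.norm(nxt - pts, axis=1).sum())

    # Area: fan of triangles from the centroid, each evaluated with Heron's formula.
    c = pts.mean(axis=0)
    area = 0.0
    for p, q in zip(pts, nxt):
        a, b, d = np.linalg.norm(p - q), np.linalg.norm(p - c), np.linalg.norm(q - c)
        s = 0.5 * (a + b + d)
        area += np.sqrt(max(s * (s - a) * (s - b) * (s - d), 0.0))

    roundness = 4.0 * np.pi * area / perimeter ** 2
    return perimeter, area, roundness
```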

4.3. Extraction Results and Analysis of Plum Rain Spot Defect Information

(1)
Adaptive segmentation results and analysis of plum rain spots
The improved adaptive segmentation algorithm was compared with the traditional K-means and K-means++ [41,42] algorithms. The average running time of the improved adaptive segmentation algorithm was 2.56 s, while the average running times of the traditional K-means and K-means++ algorithms were 6.91 s and 4.06 s, respectively. The segmentation comparison is shown in Figure 11, where the red areas denote the segmented rain spot defects and the red boxes mark regions of defect segmentation error. The proposed algorithm effectively improved the adaptability of segmentation, reduced the number of subsequent K-means iterations, and greatly reduced the computational complexity. Therefore, the improved adaptive segmentation algorithm showed a faster segmentation speed and better results. Euclidean clustering of the rain spot defect point clouds, in which different colors represent different clusters, provided the basis for extracting rain spot defect information. The clustering result is shown in Figure 12.
(2)
Comparison of different B-spline curve fitting results
The point cloud contour points were extracted, and the polygon was fitted using the B-spline interpolation method. The micro-defect fitting results for different numbers of curve control points and curve orders were compared, as shown in Figure 13. It can be seen that the fitted curve was best when the number of curve control points was 10 and the order of the spline curve was 3.
(3)
Comparison of green plum rain spot defect area calculation results
After segmentation, extraction, and fitting of the three-dimensional rain spot defects, the areas of the triangles formed by the sampling points were calculated in turn and summed to obtain the total area enclosed by the contour. During the experiment, each sample parameter was measured manually three times, and the mean value was taken as the final manual measurement result and compared with the surface area obtained from the three-dimensional rain spot defect segmentation and extraction. The experimental results show that the relative error gradually decreased as the sampling point density increased; the optimum was reached at a sampling density of 80, beyond which the relative error no longer changed. The results are shown in Table 5.

5. Conclusions

(1) For the point cloud reconstruction of green plums, a multi-view two-dimensional image sequence of green plums was obtained using the established image acquisition platform. An improved structure-from-motion (SFM) and PMVS algorithm based on similar graph clustering and graph matching was proposed to perform three-dimensional sparse and dense reconstruction of the green plums. Compared with the traditional algorithms, the running time of the proposed algorithm was the lowest, at only 26.55 s, and the mean camera optical center error and mean pose error were also the smallest, at 0.019 and 0.631, respectively. A reconstruction accuracy high enough to meet the subsequent green plum micro-defect (rain spot) detection requirements was obtained.
(2) For the detection of green plum micro-defects (rain spots), an improved adaptive segmentation algorithm based on the Lab color space was proposed for the dense point cloud model of green plums to realize the effective segmentation of green plum micro-defects (rain spots). The experimental results show that the average running time of the improved adaptive segmentation algorithm was 2.56 s, while the average running times of the traditional K-means and K-means++ algorithms were 6.91 s and 4.06 s, respectively, demonstrating a faster segmentation speed and better effect than the traditional K-means and K-means++ algorithms. After segmentation and clustering of the micro-defect point cloud, the micro-defect (rain spot) information of the green plums was extracted on the basis of random sample consensus plane fitting, providing a theoretical model for further improving the accuracy of sorting green plums by appearance quality.
(3) Shortcomings: misjudgment in micro-defect detection on the surface of small fruits such as green plums is caused by the 2D image acquisition angle, light changes, lens distortion, and similar factors. In view of these problems, we proposed a feasible theoretical method to build a 3D model and extract the micro-defect information of a single green plum from multi-angle images; the model performance basically met the requirements, but the processing time was long. On this basis, the reconstruction algorithm, image processing, and other aspects will be optimized to further reduce the time while still meeting the detection accuracy requirements. It is also possible to carry out 3D reconstruction and micro-defect detection on groups of green plum samples so as to further improve detection efficiency and provide a reference for practical sampling applications.

Author Contributions

Conceptualization, X.Z. and Y.L.; data curation, X.Z.; formal analysis, Z.Z.; funding acquisition, Y.L.; investigation, Y.L. and B.G.; methodology, X.Z.; project administration, Y.L.; resources, Y.L.; software, X.Z., L.H. and Z.Z.; supervision, L.H.; validation, L.H.; visualization, X.Z.; writing—original draft, X.Z.; writing—review & editing, Y.L. and Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Jiangsu Agricultural Science and Technology Innovation Fund Project (CX(18)3071), "Research on key technologies of intelligent sorting for green plum" (Liu Ying), and by the 2020 Jiangsu Graduate Research and Innovation Plan (KYCX20_0882), "Research on green plum defect sorting system based on artificial intelligence" (Liu Yang).

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to extend their sincere gratitude for the technical support from the Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Keresztes, J.C.; Goodarzi, M.; Saeys, W. Real-time pixel based early apple bruise detection using short wave infrared hyperspectral imaging in combination with calibration and glare correction techniques. Food Control 2016, 66, 215–226. [Google Scholar] [CrossRef]
  2. Lü, Q.; Tang, M. Detection of hidden bruise on kiwi fruit using hyperspectral imaging and parallelepiped classification. Procedia Environ. Sci. 2012, 11, 1172–1179. [Google Scholar] [CrossRef] [Green Version]
  3. Wang, J.; Nakano, K.; Ohashi, S. Detection of external insect infestations in jujube fruit using hyperspectral reflectance imaging. Biosyst. Eng. 2011, 108, 345–351. [Google Scholar] [CrossRef]
  4. Weilin, W.; Changying, L. Size Estimation of Sweet Onions Using Consumer-grade RGB-depth Sensor. J. Food Eng. 2014, 142, 231–234. [Google Scholar]
  5. Nguyen, T.T.; Vandevoorde, K.; Wouters, N.; Kayacan, E.; De Baerdemaeker, J.G.; Saeys, W. Detection of Red and Bicoloured Apples on Tree with an RGB-D Camera. Biosyst. Eng. 2016, 146, 156–159. [Google Scholar] [CrossRef]
  6. Wu, M.; Luo, H.; Li, C.; Yi, X.; Yousaf, K.; Soomro, S.A.; Ji, F.; Chen, K. Automatic measurement method of volume and surface area of jujube based on laser point cloud. Int. Agric. Eng. 2019, 28, 261–268. [Google Scholar]
  7. Li, J.B.; Hu, Z.W.; Xu, Z.D.; Guo, Y.Q. Three-dimensional dynamic analysis of ancient buildings with novel high damping isolation trenches. J. Vib. Control. 2022, 28, 2409–2420. [Google Scholar] [CrossRef]
  8. Li, Q.; Yuan, P.; Lin, Y.; Tong, Y.; Liu, X. Pointwise Classification of Mobile Laser Scanning Point Clouds of Urban Scenes Using Raw Data; Nanjing Forestry University, College of Mechanical and Electronic Engineering: Nanjing, China, 2021; p. 024523. [Google Scholar]
  9. Li, Z.; Zou, H.; Sun, X.; Zhu, T.; Ni, C. 3D Expression-Invariant Face Verification Based on Transfer Learning and Siamese Network for Small Sample Size. Electronics 2021, 10, 2128. [Google Scholar] [CrossRef]
  10. Che, J.; Sun, Y.; Jin, X.; Chen, Y. 3D Measurement of Discontinuous Objects with Optimized Dual-frequency Grating Profilometry. Meas. Sci. Rev. 2021, 21, 197–204. [Google Scholar] [CrossRef]
  11. Jing, R.; Gong, Z.; Zhao, W.; Pu, R.; Deng, L. Above-bottom biomass retrieval of aquatic plants with regression models and SfM data acquired by a UAV platform-a case study in Wild Duck Lake Wetland, Beijing, China. ISPRS J. Photogramm. Remote Sens. 2017, 134, 122–134. [Google Scholar] [CrossRef]
  12. Jordi, G.; Ricardo, S.; Joan, R.; Alexandre, E.; Eduard, G. Fuji-SfM dataset: A collection of annotated images and point clouds for Fuji apple detection and location using structure-from-motion photogrammetry. Data Brief 2020, 30, 105591. [Google Scholar]
  13. Schonberger, J.L.; Frahm, J.M. Structure-from-Motion Revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113. [Google Scholar]
  14. Gomez-Donoso, F.; Castano-Amoros, J.; Escalona, F.; Cazorla, M. Three-dimensional reconstruction using SFM for actual pedestrian classification. Expert Syst. Appl. 2023, 213, 119006. [Google Scholar] [CrossRef]
  15. Gobbi, B.; Van, R.A.; Gasparri, N.I.; Vanacker, V. Forest degradation in the Dry Chaco: A detection based on 3D canopy reconstruction from UAV-SfM techniques. For. Ecol. Manag. 2022, 526, 120554. [Google Scholar] [CrossRef]
  16. Li, S.; He, Y.; Li, Q.; Chen, M. Using Laser Measuring and SFM Algorithm for Fast 3D Reconstruction of Objects. J. Russ. Laser Res. 2018, 39, 591–599. [Google Scholar] [CrossRef]
  17. Yang, Z.; Han, Y. A Low-Cost 3D Phenotype Measurement Method of Leafy Vegetables Using Video Recordings from Smartphones. Sensors 2020, 20, 6068. [Google Scholar] [CrossRef]
  18. Zhou, Y.; Tuzel, O. Voxelnet: End-to-end learning for point cloud based 3d object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4490–4499. [Google Scholar]
  19. Borsu, V.; Yogeswaran, A.; Payeur, P. Automated surface deformations detection and marking on automotive body panels. In Proceedings of the 2010 IEEE International Conference on Automation Science and Engineering, Toronto, ON, Canada, 21–24 August 2010; pp. 551–556. [Google Scholar]
  20. Tang, P.; Akinci, B.; Huber, D. Characterization of three algorithms for detecting surface flatness defects from dense point clouds. In Three-Dimensional Imaging Metrology; Int. Soc. Opt. Photonics 2009, 7239, 197–208. [Google Scholar]
  21. Marani, R.; Roselli, G.; Nitti, M.; Cicirelli, G.; D’Orazio, T.; Stella, E. A 3D vision system for high resolution surface reconstruction. In Proceedings of the 2013 Seventh International Conference on Sensing Technology (ICST), Wellington, New Zealand, 3–5 December 2013; pp. 157–162. [Google Scholar]
  22. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  23. Furukawa, Y.; Ponce, J. Accurate, Dense, and Robust Multi-View Stereopsis. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; 32, pp. 1362–1376. [Google Scholar]
  24. Saravan, N.; Siddabattuni, V.N.S.K.; Ramachandran, K.I. A comparative study on classification of features by SVM and PMVS extracted using Morlet wavelet for fault diagnosis of spur bevel gear box. Expert Syst. Appl. 2008, 35, 1351–1366. [Google Scholar] [CrossRef]
  25. Wang, A.; An, N.; Zhao, Y.; Iwahori, Y.; Kang, R. 3D Reconstruction of Remote Sensing Image Using Region Growing Combining with CMVS-PMVS. Int. J. Multimed. Ubiquitous Eng. 2016, 11, 29–36. [Google Scholar] [CrossRef]
  26. Scharstein, D.; Szeliski, R. A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms. Int. J. Comput. Vis. 2002, 47, 7–42. [Google Scholar] [CrossRef]
  27. Florent, P.; Jorge, S.; Thomas, M. Improving the Fisher Kernel for Large-Scale Image Classification. In Proceedings of the European Conference on Computer Vision, Heraklion, Greece, 5–11 September 2010. [Google Scholar]
  28. Zhu, S.; Shen, T.; Zhou, L.; Zhang, R.; Wang, J.; Fang, T.; Quan, L. Parallel Structure from Motion from Local Increment to Global Averaging. In Proceedings of the ECCV, Tel Aviv, Israel, 23–27 October 2017. [Google Scholar]
  29. Shen, T.; Zhu, S.; Fang, T.; Zhang, R.; Quan, L. Graph-based consistent matching for structure-from-motion. In Proceedings of the ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
  30. Snavely, N.; Seitz, S.M.; Szeliski, R. Modeling the World from Internet Photo Collections; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2008. [Google Scholar]
  31. Johannes, L.S.; Enliang, Z.; Jan-Michael, F.; Pollefeys, M. Pixelwise view selection for unstructured multi-view stereo. In Proceedings of the ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; pp. 501–518. [Google Scholar]
  32. Moulon, P.; Monasse, P.; Marlet, R. Adaptive Structure from Motion with a Contrario Model Estimation. In Proceedings of the Asian Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2012; pp. 257–270. [Google Scholar]
  33. Wang, F.F.; Li, Q.; Zhang, M.J. Research on improvement of k-means clustering Algorithm. Gansu Sci. Technol. 2017, 46, 68–70. [Google Scholar]
  34. Pham, T.; Nguyenthihong, D.; Vovan, T. Improving the ANFIS Forecasting Model for Time Series Based on the Fuzzy Cluster Analysis Algorithm. Int. J. Fuzzy Syst. Appl. (IJFSA) 2022, 11, 1–20. [Google Scholar] [CrossRef]
  35. Wang, Q.L.; Qiao, F.; Jiang, Y.H. Improved K-means algorithm based on aggregation distance parameter. Comput. Appl. 2019, 39, 2586–2590. [Google Scholar]
  36. Kumar, A.; Kumar, S. Color image segmentation via improved K-means algorithm. Int. J. Adv. Comput. Sci. Appl. 2016, 7, 46–53. [Google Scholar] [CrossRef] [Green Version]
  37. Zhang, H.X.; Zhang, Y.E. K-means clustering color image segmentation method based on Lab space. J. Gannan Norm. Univ. 2019, 40, 44–48. [Google Scholar]
  38. Kumar, R.; Srivastava, R.; Srivastava, S. Image segmentation using hybrid color K-means approach. Int. J. Comput. Vis. Image Process. (IJCVIP) 2017, 7, 79–90. [Google Scholar] [CrossRef]
  39. Su, Y.L.; Ping, X.L.; Li, N. A planar extraction algorithm based on RANSAC 3D point cloud. Laser & Infrared 2019, 49, 780–784. [Google Scholar]
  40. Arslan, A.; Erteneinan, M. A comparative study for obtaining effective Leaf Area Index from single Terrestrial Laser Scans by removal of wood material. Measurement 2021, 178, 109262. [Google Scholar] [CrossRef]
  41. Yu, M.; Wenjuan, C. Optimization and Parallelization of Fuzzy Clustering Algorithm Based on the Improved Kmeans++ Clustering. IOP Conf. Ser. Mater. Sci. Eng. 2020, 768, 072106. [Google Scholar]
  42. Du, W.; Zhu, Y.; Li, S.; Liu, P. Spikelets detection of table grape before thinning based on improved YOLOV5s and Kmeans under the complex environment. Comput. Electron. Agric. 2022, 203, 385–392. [Google Scholar] [CrossRef]
Figure 1. Classification of green plum samples: (a) decay; (b) scar; (c) crack; (d) rain spot; (e) intact.
Figure 2. Green plum image acquisition platform: (a) experimental device; (b) experimental principle. 1. Dark box; 2. plum sample; 3. light source; 4. camera light source bracket; 5. camera; 6. rotary table; 7. calibration board; 8. steel needle.
Figure 3. Set of 36 multi-angle rain spot plum images.
Figure 4. Block schematic of the improved plum 3D reconstruction algorithm.
Figure 5. Comparison of sparse and dense point cloud reconstructions of green plum. (a) Rain spot green plum image. (b) Improved SFM sparse reconstruction. (c) Dense reconstruction.
Figure 6. Block schematic of the detection of micro-defects in green plums.
Figure 7. Distributions of the green plum and background point clouds in different color spaces: (a) RGB color space; (b) Lab color space.
Figure 8. Green plum background segmentation: (a) before segmentation; (b) after segmentation.
Figure 9. Plane fitting based on RANSAC. (a,b) Plane fitting on two different rain spot clusters. (c,d) The clusters projected onto the plane along the normal direction.
Figure 10. Point clouds translated to the XOY plane by decentering and rotation. (a,b) Two different point clouds decentered and rotated to the XOY plane.
Figure 11. Comparison of different algorithms for plum rain spot segmentation: (a) 3D point cloud; (b) traditional K-means algorithm; (c) K-means++ algorithm; (d) proposed method.
Figure 12. Plum rain spot clustering result: (a) 3D point cloud; (b) original Euclidean clustering; (c) Euclidean clustering shown with different colors for different clusters.
Figure 13. Fitting results of micro-defects with different curve control points and orders. (a) Fitting with different numbers of control points when the curve order was 3. (b) Fitting with different numbers of control points when the curve order was 4.
Table 1. Feature point matching results for different thresholds.

Threshold   Time (s)   Points   Error Points   Error Rate (%)
0.5         9.52       88       1              1.14
0.6         9.88       106      5              4.72
0.7         10.06      159      8              5.03
0.8         12.92      272      25             9.19
0.9         18.67      506      70             13.83
Table 2. Experimental results of different SFM reconstruction algorithms.

Algorithm                       Images   T_total (s)   Range Error RE (mean)   Pose Error PE (mean)
Bundler                         36       76.32         0.035                   0.973
Visual SFM                      36       38.93         0.021                   0.692
Baseline Version of OpenMVG     36       35.37         0.022                   0.675
Proposed Approach               36       26.55         0.019                   0.631
Table 3. Comparison of sparse and dense reconstructed models of the green plum point cloud.

Model                                Time (s)   Data Size (MB)   Points
Sparse point cloud reconstruction    37         8.32             221,405
Dense point cloud reconstruction     125        70.51            2,000,692
Table 4. Comparison of point cloud data classification results.

Color Space   Training Accuracy   Training Background Error   Training Green Plum Error   Test Accuracy   Test Background Error   Test Green Plum Error
Lab           98.7%               0.01%                       2.4%                        99.4%           0.0%                    1.3%
RGB           99.7%               0.1%                        0.7%                        98.6%           0.0%                    2.9%
Table 5. Comparison of green plum rain spot defect area calculation results.

Sampling Density   Circumference (mm)   Area (mm²)   Roundness   Actual Area (mm²)   Relative Error
1                  8.5585               5.2399       0.8989      6.0137              0.1289
5                  8.9922               5.9295       0.9215                          0.0143
10                 9.0122               5.9664       0.9231                          0.0081
20                 9.0160               5.9735       0.9234                          0.0069
40                 9.0169               5.9750       0.9235                          0.0067
80                 9.0170               5.9754       0.9235                          0.0066
100                9.0171               5.9754       0.9235                          0.0066
120                9.0171               5.9755       0.9235                          0.0066

