Article

Automatic Measurement of Seed Geometric Parameters Using a Handheld Scanner

1 School of Electronic Engineering, Chengdu Technological University, Chengdu 611730, China
2 Special Robot Application Technology Research Institute, Chengdu 611730, China
3 School of Geospatial Information, Information Engineering University, Zhengzhou 450001, China
4 School of Transportation Engineering, Shandong Jianzhu University, Jinan 250101, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(18), 6117; https://doi.org/10.3390/s24186117
Submission received: 27 May 2024 / Revised: 24 June 2024 / Accepted: 25 June 2024 / Published: 22 September 2024
(This article belongs to the Special Issue Novel Sensors for Precision Agriculture Application)

Abstract: Seed geometric parameters are important in yield trait scoring, quantitative trait loci (QTL) analysis, and species recognition and classification. A novel method for the automatic measurement of three-dimensional (3D) seed phenotypes is proposed. First, a handheld 3D laser scanner is employed to obtain seed point cloud data in batches. Second, a novel point cloud-based phenotyping method is proposed to obtain a single-seed 3D model and extract 33 phenotypes; it is organized as an automatic pipeline comprising single-seed segmentation, pose normalization, point cloud completion by an ellipse fitting method, Poisson surface reconstruction, and automatic trait estimation. Finally, two statistical models (one using 11 size-related phenotypes and the other using 22 shape-related phenotypes) based on the principal component analysis (PCA) method are built. A total of 3400 samples of eight kinds of seeds with different geometrical shapes are tested. Experiments show that: (1) a single-seed 3D model can be automatically obtained with a point cloud completion error of 0.017 mm; (2) the 33 phenotypes can be automatically extracted with high correlation against manual measurements (coefficient of determination (R2) above 0.9981 for size-related phenotypes and above 0.8421 for shape-related phenotypes); and (3) the two statistical models successfully achieve seed shape description and quantification.

1. Introduction

Seeds are fundamental to the survival, propagation, and reproduction of plants [1,2,3]. Seed phenotypes, including size- and shape-related phenotypes, are important in species recognition and classification [4], quality evaluation [5], breeding optimization [6], and yield evaluation [7]. The volume, surface area, length, width, thickness, and cross-sectional perimeter and area are size-related phenotypes [8,9]. The roundness, needle degree, flatness, shape factor, sphericity, elongation, circularity, and compactness are shape-related phenotypes [10,11,12,13]. Size-related phenotypes are often used for seed condition and quality detection. Shape-related phenotypes are dimensionless and insensitive to seed size, which is important in seed species recognition and classification. The traditional manual measurement of seed geometric parameters is no longer suitable for the needs of smart agriculture [14,15]. Therefore, it is meaningful to explore a novel method for the automatic measurement of seed geometric parameters.
High-throughput phenotyping is changing conventional agricultural measurement methods [16]. It promotes quantitative trait loci (QTL) studies [17] and taxonomic analysis [18]. Phenotypes can be measured using different methods for different applications. The conventional manual measurement method using a vernier caliper can obtain the seeds’ length, width, and thickness. Manual measurement is time-consuming, costly, and reliant on operator experience, which makes it suitable only for research involving a small number of samples [19]. Machine vision technology is often used for automatic measurement. The length, width, and projected perimeter and area can be obtained [9,13], and the elongation, circularity, and compactness can also be extracted from 2D images. It should be noted that these phenotypes are usually calculated from the seed’s main projection profile (the maximum principal component profile), while the other two profiles, which are also important, are rarely discussed. Since the thickness, 3D volume, and 3D surface area cannot be acquired from 2D digital images, 3D technology has become an active area of research for automatic seed measurement. Structure from motion (SFM) is a classic way to obtain a 3D point cloud [20]. However, data acquisition using the SFM method is usually time-consuming, and 3D laser scanning performs better in this respect. Li et al. [21] scanned single seeds from four viewpoints using a 3D laser scanning system and registered the obtained point clouds using Geomagic Studio software to obtain the complete 3D point cloud of a single rice seed. Length, width, thickness, and volume were automatically extracted. However, the acquisition of the seed point cloud was time-consuming, which made it unsuitable for batch data acquisition over a large number of samples. Yan [22] adopted a Konica Minolta Vivid 910 3D scanner to obtain single corn point clouds. The data were obtained in batches, while the 3D reconstruction was completed with the software Geomagic Studio; the level of automation still needed improvement. Liu et al. [8] used X-ray computed tomography (CT) scanning to automatically obtain seed point clouds in batches and extract the length, width, thickness, radius, volume, surface area, compactness, and sphericity. However, they ignored the problem that the obtained 3D models were incomplete because the bottom part of the seed facing the table was not scanned.
The current active areas of research for automatic seed phenotyping using 3D scanners are batch data acquisition, 3D reconstruction of a single seed, and phenotype estimation. The difficulty is to achieve automatic measurement and batch data processing simultaneously [23]. One of the key problems is to obtain the complete 3D model of a single seed from the obtained incomplete point cloud, namely, to explore an effective and robust approach for point cloud completion. Kazhdan et al. [24] adopted an implicit fitting strategy to obtain a complete shape surface. It is only suitable for cases where a small portion of the point cloud is missing, and it is prone to overfitting. Zhang et al. [25] used an object retrieval method to obtain the complete point cloud. It relies on a database with a huge number of models and rich model types, and it is difficult to complete models that are not in the database. Zhang et al. [26] introduced generative adversarial network (GAN) inversion for shape completion. It depends on the input database and is limited by the storage space of the graphics processing unit (GPU), which affects both the measurement accuracy and the data processing time. Thus, it is necessary to study an effective and practical point cloud completion method.
Size- and shape-related phenotypes can be calculated based on the single-seed 3D model. Seed shape description and quantification will be more adequate with more numbers and types of phenotypes. Recent methods for seed shape description and quantification use geometric models to represent seed shapes [23,27], which combine computer vision technologies with statistical algorithms. They may be efficient and robust for seed discrimination in a range of plant species and varieties. However, various geometric models are required for different types of seeds with different shapes. Typical geometric models are sphere, ellipse, oval, heart-shaped, kidney-shaped, cardioid, Fibonacci’s spiral contour, and lenses of varied proportions [28]. However, it is difficult for geometric models to describe irregular geometry seeds such as broad beans and peanuts. Thus, it is important to build a unified shape model to describe different seeds with different shapes. The statistical model is a useful tool to quantitatively describe an object’s shape [29]. Various 2D phenotypes have been used for building statistical shape models [30,31,32], while 3D phenotypes were rarely used, and the number and types of phenotypes were limited. Therefore, it is meaningful and practical to explore a unified seed shape description and quantification method using statistical shape models based on phenotypes derived from a single-seed 3D model.
In our previous work, a symmetry-based 3D reconstruction method was proposed, which is only suitable for seeds with symmetrical shapes [33]. A more effective and robust point cloud completion method is developed in this work. The purpose of this work is to achieve automatic seed phenotyping. Key problems in point cloud-based phenotyping research, namely, batch data acquisition and processing, point cloud completion, and automatic phenotype estimation, are handled. The batch data acquisition of seed point clouds is conducted by a handheld laser scanner (RigelScan Elite). An automatic point cloud-based phenotyping pipeline of single-seed segmentation, pose normalization, point cloud completion by a least-squares ellipse fitting method, Poisson surface reconstruction, and automatic trait estimation is proposed. Two statistical models (one using 11 size-related phenotypes and the other using 22 shape-related phenotypes) based on the principal component analysis method are built. Eight kinds of seeds with 3400 samples in total are tested: broad beans, peanuts, pinto beans, soybeans, peas, black beans, red beans, and mung beans. The main contribution of this paper is to propose an automatic phenotyping method and achieve shape description and quantification by size- and shape-related statistical models.

2. Materials and Methods

The flowchart for automatic measurement of seed geometric parameters based on a handheld scanner is shown in Figure 1. First, a handheld 3D laser scanner is used for batch 3D point cloud acquisition. Second, an automatic point cloud-based phenotyping pipeline, including point cloud processing and phenotype estimation, is proposed. Point cloud processing mainly includes single-seed segmentation, pose normalization, point cloud completion, and surface reconstruction. Specifically, single-seed segmentation is realized by combining the random sample consensus (RANSAC) segmentation method, the region-growing segmentation method, and the dimension feature detection method. Pose normalization adopts the principal component analysis (PCA) rotation method. Point cloud completion is implemented using an ellipse fitting-based point cloud completion method. Surface reconstruction is conducted using the Poisson surface reconstruction method. A single-seed 3D model and 33 phenotypes, including 11 size-related phenotypes and 22 shape-related phenotypes, can be automatically obtained. Finally, two statistical models, one using size-related phenotypes and the other using shape-related phenotypes, are established based on the PCA method to implement shape description and quantification. The weights of size- and shape-related phenotypes are discussed.

2.1. Data Acquisition

Data acquisition yielded the seeds’ 3D point cloud data. Dry broad bean, peanut, pinto bean, soybean, pea, black bean, red bean, and mung bean seeds were used as experiment objects. The materials were bought from the market and were of good quality, and each kind had uniform samples of similar size and shape. No shriveled seeds were involved. Data acquisition was performed using a handheld 3D laser scanner (RigelScan Elite, made by Zhongguan Automation Technology Co., Ltd., Wuhan, China) indoors, in Wuhan, China, in January 2022. The RigelScan Elite scanner works on the triangulation principle. It has a laser light-emitting diode and two visible-light cameras. The laser diode can emit three laser modes, namely an 11-line-pair mode for rough scanning, a 5-line-pair mode for small-area scanning, and a deep-hole mode for detailed scanning. The laser light is focused and projected onto the target through a lens, and the reflected or diffused laser light is imaged by the two visible-light cameras. The location and spatial information of the target are then obtained from the triangulation geometry.
The seeds of broad beans, peanuts, pinto beans, soybeans, peas, black beans, red beans, and mung beans were placed on the table and scanned in batches. A total of 300 samples each for broad beans, peanuts, and pinto beans and 500 samples each for soybeans, peas, black beans, red beans, and mung beans were used because of the limitation of the table area. A few samples stuck together, but there were no heavily overlapping or stacked seeds. Each kind was scanned once. The seeds were placed on the table and scanned using the RigelScan Elite (Figure 2a). The scanning angles were arbitrary and random. Some reflective markers were pasted on the table before scanning, which were used for point cloud stitching between multiple frames (Figure 2c). The measurement rate was 1,050,000 points/s and the depth of field was 550 mm. The scanning process was monitored from the computer in real time as it was performed.
The obtained point cloud can be directly output from the computer after scanning, as shown in Figure 2e (downsampled by 50% for effective visualization because of the 0.01 mm point cloud density). As shown in Figure 2d, the output point cloud has no color information, and the point cloud of the seed’s bottom part is incomplete because one side of each seed faces the table. The scanned point cloud includes both the table and seed data.

2.2. Point Cloud-Based Phenotyping

2.2.1. Point Cloud Processing

The point cloud processing should be handled before the extraction of phenotypes, which is designed to obtain a single-seed 3D model. It mainly includes single-seed segmentation, pose normalization, point cloud completion, and surface reconstruction.
(1) Single-seed segmentation
Single-seed segmentation extracts a single-seed point cloud from the background (the table point cloud). The RANSAC [34] plane detection method is adopted to remove the table points, with a distance threshold of 0.05 mm. Most table points are thereby removed, while some table side points remain, as shown in Figure 3b. Then the region-growing segmentation method [35], based on smoothness and curvature characteristics, is used to extract the single-seed point clouds. Here the smoothness threshold is 150/180 × π rad and the curvature threshold is 0.4, both set by experience. A series of clusters is obtained, in which some clusters of table edge points remain, as shown in Figure 3c (details in Figure 3e). Note that the remaining table point clusters are planar, while the single-seed point clouds are three-dimensional. The dimensional features [36] are therefore used to remove these remaining table points. Performing principal component analysis on each cluster yields the eigenvalues of the three principal component dimensions λ1, λ2, and λ3 (λ1 ≥ λ2 ≥ λ3). The dimension features of each point cloud are calculated as follows:
$$a_{1\mathrm{D}} = \frac{\lambda_1 - \lambda_2}{\lambda_1}, \quad a_{2\mathrm{D}} = \frac{\lambda_2 - \lambda_3}{\lambda_1}, \quad a_{3\mathrm{D}} = \frac{\lambda_3}{\lambda_1},$$
where a1D, a2D, and a3D are the one-dimensional linear feature, two-dimensional planar feature, and three-dimensional scattered-point feature, respectively. The sum of a1D, a2D, and a3D is 1. The point clouds are classified as linear, planar, or three-dimensional based on the relationship of λ1, λ2, λ3, a1D, a2D, and a3D. A point cloud is linear when a1D is larger than a2D and a3D, and λ1 >> λ2 ≈ λ3. It is planar when a2D is larger than a1D and a3D, and λ1 ≈ λ2 >> λ3. It is three-dimensional when a3D is the largest among a1D, a2D, and a3D, and λ1 ≈ λ2 ≈ λ3. The point clouds with the planar feature are removed and those with the three-dimensional scattered-point feature are retained. These retained point clouds are the seed point clouds, as shown in Figure 3d (details in Figure 3f).
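As an illustration (not the authors' code), the dimension features and the resulting cluster label can be computed from a cluster's covariance eigenvalues in a few lines of NumPy; the function name and label strings below are ours:

```python
import numpy as np

def dimension_features(points):
    """Classify a cluster as 'linear', 'planar', or '3D' from its PCA eigenvalues."""
    cov = np.cov(points.T)                         # 3x3 covariance of the cluster
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]   # lambda1 >= lambda2 >= lambda3
    a1d = (lam[0] - lam[1]) / lam[0]               # linear feature
    a2d = (lam[1] - lam[2]) / lam[0]               # planar feature
    a3d = lam[2] / lam[0]                          # scattered (3D) feature
    labels = ['linear', 'planar', '3D']
    return labels[int(np.argmax([a1d, a2d, a3d]))], (a1d, a2d, a3d)
```

In this scheme, the remaining table-edge clusters come out as 'planar' and are dropped, while seed clusters come out as '3D' and are retained.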
(2) Pose normalization
It is necessary to normalize the measurement pose of individual seeds in the world coordinate system to simplify the calculation of the seeds’ kernel traits. Here, the coordinate rotation is conducted using the PCA method:
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} \mathbf{e}_{v1} & \mathbf{e}_{v2} & \mathbf{e}_{v3} \end{bmatrix} \begin{bmatrix} x_0 \\ y_0 \\ z_0 \end{bmatrix},$$
where ev1, ev2, and ev3 are the three eigenvectors, in descending order of eigenvalue, obtained by performing PCA on the seed point cloud. The measurement poses of individual seeds in the world coordinate system are the same after the coordinate rotation: (1) the geometric center of the scanned seed point cloud overlaps the origin of the world coordinate system; and (2) the seed’s length, width, and thickness directions are the X-, Z-, and Y-axis directions of the world coordinate system, respectively, as shown in Figure 3g,h. Therefore, the first, second, and third principal component profiles are the three most important profiles for seed shape description and quantification in plant research.
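A minimal sketch of this PCA rotation is given below. It is our illustration under stated assumptions (eigenvectors as matrix columns sorted by descending eigenvalue; axes simply ordered by variance rather than the paper's X/Z/Y naming):

```python
import numpy as np

def normalize_pose(points):
    """Center a seed point cloud and align its principal axes with the world axes."""
    centered = points - points.mean(axis=0)       # move the centroid to the origin
    cov = np.cov(centered.T)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]              # ev1, ev2, ev3: descending variance
    rotation = eigvec[:, order]                   # columns are the eigenvectors
    return centered @ rotation                    # axis 0 = length direction, etc.
```

After this transform the centroid sits at the origin and the direction of greatest extent (the length) lies along the first axis, matching the normalization the paper describes.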
(3) Point cloud completion
It can be observed that the obtained single-seed point cloud is incomplete, lacking the data of the side facing the table, as shown in Figure 2d. The challenge is to obtain a complete 3D model from the incomplete scanned point cloud, namely, to explore an effective and robust point cloud completion method. Seeds are rigid and their longitudinal profiles are approximately elliptical [18,37]. Figure 4 shows the seed longitudinal profile (YOZ) contour fitted by a B-spline curve, a circle, and a least-squares ellipse. For seeds with irregular geometric shapes, such as broad beans, peanuts, and kidney beans, the least-squares ellipse fitting based on the longitudinal profile points is constructive and robust, while the B-spline curve and circle fitting fail. For seeds with regular geometric shapes, such as soybeans, peas, black beans, red beans, and mung beans, all three fitting methods are applicable, and the ellipse fitting results outperform those of the B-spline curve and circle fitting. Therefore, this paper exploits the seed’s elliptical profile characteristic to conduct 3D point cloud completion.
The 3D point cloud completion method proposed in this paper mainly includes four steps. Let the single-seed point cloud be denoted PC. First, a series of sliced point clouds, namely the scanned incomplete profile point clouds PC1, PC2, …, PC20, is obtained by dividing PC into 20 pieces along the X-axis, as shown in Figure 5a. Here, PC = {PC1, PC2, …, PC20}. Second, an ellipse is fitted to each sliced point cloud PCi (Figure 5b) using the least-squares ellipse fitting method [38], as shown in Figure 5c. This fitting method incorporates the ellipticity constraint into the normalization factor by minimizing the algebraic distance subject to the constraint 4ac − b2 = 1. Since the method is ellipse-specific, even heavily scattered points will always return an ellipse. In addition, it can be solved effectively and practically via a generalized eigensystem. Third, a complete profile point cloud PRi is assembled from PCi (red points in Figure 5d) and part of the points of the fitted ellipse, PEi (blue points in Figure 5d). Here PRi = {PCi, PEi}. A series of complete profile point clouds PR1, PR2, …, PR20 is thus automatically obtained, and the complete single-seed point cloud PR = {PR1, PR2, …, PR20} is reconstructed, as shown in Figure 5e–g. Finally, the reconstructed point cloud is filtered by the Gaussian filtering method [39] to obtain a uniform point cloud density, as shown in Figure 5h. The Gaussian filter uses the principle of weighted averaging to filter point clouds, which can effectively smooth and denoise them.
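The direct least-squares ellipse fit with the 4ac − b² = 1 constraint is Fitzgibbon's method; the sketch below follows the numerically stable Halír–Flusser formulation of that method and is an illustrative implementation, not the authors' code:

```python
import numpy as np

def fit_ellipse(x, y):
    # Direct least-squares ellipse fit (Fitzgibbon et al.), in the numerically
    # stable form of Halir & Flusser; enforces 4ac - b^2 = 1 up to scale, so
    # the result is always an ellipse, even for noisy, scattered points.
    D1 = np.column_stack([x * x, x * y, y * y])         # quadratic monomials
    D2 = np.column_stack([x, y, np.ones_like(x)])       # linear monomials
    S1, S2, S3 = D1.T @ D1, D1.T @ D2, D2.T @ D2
    T = -np.linalg.solve(S3, S2.T)                      # eliminates the linear part
    M = S1 + S2 @ T
    M = np.array([M[2] / 2, -M[1], M[0] / 2])           # apply C1^{-1} analytically
    _, eigvec = np.linalg.eig(M)
    eigvec = np.real(eigvec)
    cond = 4 * eigvec[0] * eigvec[2] - eigvec[1] ** 2   # > 0 only for the ellipse
    a1 = eigvec[:, np.argmax(cond)]
    return np.concatenate([a1, T @ a1])                 # conic [a, b, c, d, e, f]
```

Applied to each sliced profile PCi, the returned conic coefficients can then be sampled to generate the missing bottom-arc points PEi.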
(4) Surface reconstruction
The classic Poisson surface reconstruction method [40], which has the advantages of both global and local fitting, is adopted to reconstruct the seed surface for measuring the seeds’ volume and surface area. It is based on the Poisson equation, which is derived from the Laplace equation, namely the potential equation
$$\frac{\partial^2 \varphi}{\partial x^2} + \frac{\partial^2 \varphi}{\partial y^2} + \frac{\partial^2 \varphi}{\partial z^2} = 0,$$
where x, y, and z are the coordinate values of the points, and φ is a real-valued function that is twice differentiable in x, y, and z. The Poisson equation introduces a concept related to gravity, that is, if there is an observation point P in a space filled with gravitational medium, the Laplace equation can be changed to
$$\frac{\partial^2 \varphi}{\partial x^2} + \frac{\partial^2 \varphi}{\partial y^2} + \frac{\partial^2 \varphi}{\partial z^2} = 4 \pi G \rho,$$
where ρ is the mass density and G is the gravitational constant. The Laplace equation can be transformed as follows through a mathematical identity transformation:
$$\Delta \varphi = 4 \pi G \rho,$$
where Δ represents the Laplace operator and the Poisson equation is modified:
$$\Delta \varphi = f,$$
where f is the mass distribution of the gravitational field.
Figure 5i,j show the triangle mesh, and surface visualization of one peanut’s 3D model fitted by the Poisson surface reconstruction method.

2.2.2. Phenotype Estimation

Seed volume (V) and surface area (S) are important 3D size-related phenotypes in shape description and quantification [8,10,11,17]. V can be regarded as the volume of the closed space enclosed by the triangular mesh, namely the sum of the projected volumes of all signed triangular patches, and S can be regarded as the total surface area of the triangular mesh (Figure 6a). Length (L), width (W), and thickness (T) are three direct size-related phenotypes for size description and quantification [3,9,10,11,29]. Here L, W, and T are computed using the axis-aligned bounding box (AABB) algorithm; specifically, they can be seen as the length, width, and height of the AABB (Figure 6a). The perimeter (C) and area (A) of the cross-sections are important size-related phenotypes for seed profile shape description and quantification [3,8,9,10]. The three principal component profiles, detected by performing PCA on the reconstructed point cloud, are the three most important profiles in seed shape analysis. These profiles are perpendicular to each other and pass through the seed center (Figure 6b). Specifically, C1, C2, C3, A1, A2, and A3 are the perimeters and areas of the first, second, and third profiles. The details of the calculation algorithms for V, S, L, W, T, C1, C2, C3, A1, A2, and A3 can be seen in our previous work [33].
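For a closed, consistently oriented triangular mesh, V and S can be computed as described above; the following NumPy sketch (ours, not the authors' implementation) sums signed tetrahedron volumes via the divergence theorem and plain triangle areas:

```python
import numpy as np

def mesh_volume_and_area(vertices, faces):
    # V: sum of signed tetrahedra spanned by the origin and each triangle
    # (divergence theorem); requires a closed, consistently oriented mesh.
    # S: plain sum of triangle areas.
    v0, v1, v2 = (vertices[faces[:, k]] for k in range(3))
    signed = np.einsum('ij,ij->i', v0, np.cross(v1, v2)) / 6.0
    volume = abs(signed.sum())
    area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()
    return volume, area
```

The absolute value makes the result independent of the mesh's global winding direction, provided the winding is consistent across faces.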
According to the related research [9,11,41,42], 11 size-related phenotypes and 22 shape-related phenotypes are automatically calculated in this paper, as listed in Table 1.
Shape-related phenotypes are dimensionless and insensitive to region size. The radius ratio (RR) is an important feature for identifying the seed shape [10]. Here, RR = dMAX/dMIN, where dMAX and dMIN are the maximum and minimum distances from points on the seed surface to the center of mass. Roundness (R) describes the sharpness of the corners [11]. The needle degree (ND), flatness (F), shape factor (SF), and sphericity (SP) of seeds are quantitative shape indexes [10]. Elongation (E), circularity (CR), and compactness (CP) are used to describe profile shape features. Elongation is the difference between the lengths of the major and minor axes of the best-fit ellipse divided by their sum; it is zero for a circle and approaches one for a long, narrow ellipse. Circularity indicates the compactness of a region to a certain degree. Compactness describes the resemblance of the object to a square shape; its value is one for a perfect square. The ratio of the bounding rectangle to the perimeter (BR) indicates the convexity of the profile; its value equals one for a convex object. The geometric mean (D) [11] is calculated by considering the spherical shape of a seed. Here, D = (LWT)1/3.
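Several of these shape-related phenotypes reduce to one-line formulas; the helpers below are illustrative sketches of the definitions given above (function names are ours):

```python
import numpy as np

def elongation(major, minor):
    # E = (major - minor) / (major + minor): 0 for a circle,
    # approaching 1 for a long, narrow ellipse.
    return (major - minor) / (major + minor)

def geometric_mean_diameter(L, W, T):
    # D = (L * W * T) ** (1/3), treating the seed as roughly spherical.
    return (L * W * T) ** (1.0 / 3.0)

def radius_ratio(points):
    # RR = d_max / d_min of surface-point distances from the center of mass.
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    return d.max() / d.min()
```

Being ratios, all three are dimensionless: scaling a seed uniformly leaves E and RR unchanged, while D scales with the seed size.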

2.3. Shape Description and Quantification Based on Statistical Models

The statistical shape model based on the PCA method is a useful tool to quantitatively describe an object’s size and shape [43]. Principal component analysis recombines multiple variables with strong linear correlation (X1, X2, …, Xi) into a few mutually uncorrelated variables (F1, F2, …, Fi) so that as much information as possible is extracted from the original variables. F1, F2, …, Fi are called the principal components: the first, second, …, and i-th principal component, respectively. The process of building a statistical shape model based on the PCA method is as follows:
(1) Construct a sample matrix
$$X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1p} \\ x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{np} \end{bmatrix},$$
where xij represents the value of the j-th variable in the i-th group of sample data.
(2) Transform the sample matrix
$$Y = [y_{ij}]_{n \times p},$$
where
$$y_{ij} = \begin{cases} x_{ij}, & \text{for a positive index} \\ -x_{ij}, & \text{for a negative index.} \end{cases}$$
(3) Perform a standardized transformation on Y so that Y becomes a standardized matrix Z
$$Z = \begin{bmatrix} z_{11} & z_{12} & \cdots & z_{1p} \\ z_{21} & z_{22} & \cdots & z_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ z_{n1} & z_{n2} & \cdots & z_{np} \end{bmatrix},$$
where $z_{ij} = (y_{ij} - \bar{y}_j)/s_j$, and $\bar{y}_j$ and $s_j$ are the mean and standard deviation of the j-th column of the Y matrix, respectively.
(4) Calculate the sample correlation coefficient matrix of the standardized matrix Z
$$R = \frac{Z^{\mathsf{T}} Z}{n - 1},$$
(5) Find the eigenvalues
$$\left| R - \lambda I_p \right| = 0,$$
where Ip is the p × p identity matrix, λ denotes the eigenvalues, and λ1 > λ2 > … > λp > 0.
(6) Calculate the value of m to make sure the utilization rate of information reaches more than 80%.
$$\frac{\sum_{j=1}^{m} \lambda_j}{\sum_{j=1}^{p} \lambda_j} \geq 0.8 \quad (j = 1, 2, \ldots, p),$$
(7) Find the principal components of $z_i = (z_{i1}, z_{i2}, \ldots, z_{ip})^{\mathsf{T}}$
$$u_{ij} = z_i^{\mathsf{T}} b_j^0 \quad (j = 1, 2, \ldots, m),$$
where $b_j^0$ is the unit eigenvector corresponding to the j-th principal component. Then calculate the decision matrix U
$$U = \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1m} \\ u_{21} & u_{22} & \cdots & u_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ u_{p1} & u_{p2} & \cdots & u_{pm} \end{bmatrix},$$
where uij is the j-th principal component vector of the i-th variable.
(8) Assuming that the number of indicators with undetermined weights is wi, establish a primary weight model, namely the principal component model
$$\begin{cases} F_1 = u_{11} w_1 + u_{21} w_2 + \cdots + u_{L1} w_L \\ F_2 = u_{12} w_1 + u_{22} w_2 + \cdots + u_{L2} w_L \\ \qquad \vdots \\ F_m = u_{1m} w_1 + u_{2m} w_2 + \cdots + u_{Lm} w_L, \end{cases}$$
where F1, F2, …, and Fm are the m principal components obtained after analysis; uij are the coefficients in the decision matrix. Establish a comprehensive evaluation function
$$F_z = a_1 v_1 + a_2 v_2 + \cdots + a_L v_L,$$
where a1, a2, …, aL are the comprehensive importances of the indicators and v1, v2, …, vL are the features. Fz is the statistical shape model based on the PCA method. Here, two statistical shape models, one for seed size based on morphological traits and the other for seed shape based on shape features, are built for the quantitative description of seeds. It should be noted that the sum of a1, a2, …, aL may exceed 1. To normalize the weights, the comprehensive value of the original index score is calculated as
$$V_{zi} = \sum_{j=1}^{L} a_j p_{ij},$$
where pij is the comprehensive score model coefficient and i = 1, 2, …, h. Therefore, the weight of each indicator is
$$w_i = \frac{V_{Zi}}{\sum_{i=1}^{h} V_{Zi}}.$$
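Steps (3)–(6) of the procedure above can be sketched as follows. This is an illustrative NumPy version (not the authors' code) that standardizes the sample matrix, forms the correlation matrix, eigendecomposes it, and keeps the smallest m whose cumulative variance ratio reaches the 80% threshold:

```python
import numpy as np

def pca_components(X, target=0.8):
    # Steps (3)-(6): standardize to Z, form R = Z^T Z / (n - 1),
    # eigendecompose, and keep the smallest m components whose
    # cumulative variance ratio reaches the target (80% in the paper).
    n, p = X.shape
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    R = Z.T @ Z / (n - 1)                     # sample correlation matrix
    eigval, eigvec = np.linalg.eigh(R)
    order = np.argsort(eigval)[::-1]          # descending eigenvalues
    eigval, eigvec = eigval[order], eigvec[:, order]
    ratio = np.cumsum(eigval) / eigval.sum()
    m = int(np.searchsorted(ratio, target) + 1)
    return eigval, eigvec, m
```

Strongly correlated phenotype columns collapse into a shared component, so m is typically well below p, which is exactly what makes the statistical shape model compact.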

2.4. Accuracy Analysis

Data scanning, segmentation, 3D point cloud completion, surface reconstruction, and trait estimation are the main factors affecting measurement accuracy. The scanning accuracy R_scan is measured by the ratio of the number of successfully scanned seeds N2 to the total number of seeds N1: R_scan = N2/N1 × 100%. The segmentation accuracy R_seg is measured by the ratio of the number of extracted seeds N3 to the number of input (i.e., scanned) seeds N2: R_seg = N3/N2 × 100%.
The reconstruction error, defined as the average distance between the true point and the closest reconstructed point, is used to verify the accuracy of the 3D point cloud completion. The detail of the reconstruction error can be seen in the paper [33].
There will be a certain error between the completed point cloud and the real point cloud of the seed. This paper compares the completed point cloud with the real seed point cloud to calculate the point cloud completion error. To obtain the real point clouds, 20 seeds of each kind were selected, fixed on the desktop with a needle, and scanned through 360 degrees with the RigelScan Elite; each seed was scanned separately to obtain a complete real seed point cloud. Er is used to represent the completion error:
$$E_r = \frac{1}{n_p} \sum_{i=1}^{n_p} d(P_{ci}, P_{gj}),$$
where np is the number of points in the completed point cloud and d(Pci, Pgj) is the distance between point Pci (a point in the completed point cloud) and its closest point Pgj (a point in the real point cloud).
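The completion error Er can be sketched as a nearest-neighbour average. The function below is our illustration, averaging over the points of the completed cloud; a brute-force distance matrix is adequate at seed-scale cloud sizes:

```python
import numpy as np

def completion_error(completed, real):
    # Er: mean distance from each completed point to its nearest
    # neighbour in the reference (real) 360-degree scan.
    diff = completed[:, None, :] - real[None, :, :]      # pairwise differences
    nearest = np.linalg.norm(diff, axis=2).min(axis=1)   # closest real point
    return nearest.mean()
```

For large clouds a k-d tree would replace the O(n²) distance matrix, but the formula computed is the same.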
Regression analysis between the automatically measured phenotypes and the manually measured values is conducted to illustrate the trait measurement accuracy. A total of 20 samples of each kind (the same seeds for 3D reconstruction accuracy verification) were measured manually. The ground truths of length, width, and thickness were obtained using a vernier caliper. The other traits were measured by the software Geomagic Studio based on the real 3D point cloud. All the ground truths were manually measured three times by three people, and the average was adopted.

3. Results

3.1. Data Scanning and Segmentation Results

The seed point clouds can be obtained in batches using the handheld 3D laser scanner (RigelScan Elite), and the single-seed point clouds can be automatically extracted with our proposed segmentation method. Figure A1 shows the scanning and segmentation results for broad beans, peanuts, pinto beans, soybeans, black beans, red beans, and mung beans. The obtained point clouds are rendered for effective visualization (Figure A1b), although they contain no color information. They are incomplete, with no data for the part facing the table (Figure A1d). The results illustrate that the seeds are well scanned, and that single seeds, including sticking seeds (some samples in Figure A1e), can be successfully extracted.
Table 2 lists the numbers of total, successfully scanned, and segmented seeds and the corresponding scanning and segmentation accuracies. It shows that 3396 of the 3400 samples were successfully scanned. The scanning accuracies of broad beans, peanuts, soybeans, peas, black beans, and mung beans are 100%, while those of pinto beans and red beans are 99.67% and 99.40%, respectively. The average scanning accuracy over all kinds of seeds is 99.88%, and the corresponding average segmentation accuracy is 100%. The high scanning accuracy proves that the handheld 3D laser scanner (RigelScan Elite) is effective for point cloud data acquisition in batches. The high segmentation accuracy verifies that the proposed segmentation method is practical and robust for obtaining single-seed point clouds.

3.2. Point Cloud Completion Results

The complete 3D model of a single seed can be directly obtained from the incomplete scanned point cloud using the ellipse-based point cloud completion method. Figure 7 shows the 3D point cloud completion results. The obtained seed point clouds are incomplete, whereas our reconstructed point clouds are complete and well reconstructed, and the corresponding 3D mesh models are well fitted. The sizes of broad beans, peanuts, pinto beans, soybeans, peas, black beans, red beans, and mung beans are in descending order. Broad beans, peanuts, and pinto beans have irregular geometrical shapes, whereas soybeans, peas, and black beans are spherical, and red beans and mung beans are ellipsoidal.
Figure 8 presents the 3D point cloud completion errors. The mean 3D reconstruction errors for broad beans, peanuts, pinto beans, soybeans, peas, black beans, red beans, and mung beans are 0.023, 0.021, 0.018, 0.014, 0.016, 0.016, 0.013, and 0.012 mm, respectively. The corresponding standard deviations are 0.013, 0.011, 0.01, 0.007, 0.007, 0.007, 0.004, and 0.003 mm, respectively. The average 3D reconstruction error and standard deviation over the eight kinds of seeds are 0.017 mm and 0.008 mm, respectively. The visualization of the 3D modeling results and the small completion errors show that the proposed point cloud completion method is constructive and robust for seed 3D modeling.

3.3. Phenotype Estimation Results

Eleven size-related and 22 shape-related phenotypes are automatically extracted from the reconstructed 3D seed models by the trait estimation algorithms. Figure A2 shows the phenotype results for broad beans, peanuts, pinto beans, soybeans, peas, black beans, red beans, and mung beans, listing the range, mean, and standard deviation of each phenotype in detail. Different types of seeds have distinct phenotype values, and the shape-related phenotypes show smaller deviations than the size-related phenotypes. The size-related phenotype values decrease from broad beans through peanuts, pinto beans, soybeans, peas, black beans, and red beans to mung beans, while the shape-related phenotype values vary with the phenotype type.
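Length, width, and thickness can be read off an axis-aligned bounding box after PCA-based pose normalization, which is the kind of estimator the pipeline uses for the size-related traits (Figure 6). The sketch below is our own minimal version, verified on a synthetic rotated ellipsoid; the semi-axis values are hypothetical.

```python
import numpy as np

def size_phenotypes(points):
    """Length, width, thickness of a seed point cloud: PCA pose normalization
    followed by an axis-aligned bounding box (AABB)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    aligned = centered @ vt.T                    # principal axes -> x, y, z
    extent = aligned.max(axis=0) - aligned.min(axis=0)
    return tuple(extent)                          # sorted by decreasing variance

# demo: a rotated synthetic ellipsoid cloud with semi-axes 5 > 3 > 2
rng = np.random.default_rng(1)
u = rng.normal(size=(5000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)     # uniform points on a sphere
cloud = u * np.array([5.0, 3.0, 2.0])             # stretch into an ellipsoid
theta = 0.7                                       # arbitrary in-plane rotation
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
L, W, T = size_phenotypes(cloud @ R.T)
print(round(L, 1), round(W, 1), round(T, 1))      # ~ 10, 6, 4 (full axis lengths)
```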
Figure A3 compares the automatically and manually measured seed phenotypes using simple linear regression. Size-related phenotypes show higher correlations (R2 above 0.9981) than shape-related phenotypes (R2 above 0.8421), and the measurement accuracy reaches the sub-millimeter level.
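Agreement with manual measurements is summarized by the coefficient of determination of a simple linear regression; a sketch on synthetic data (the measurement values are hypothetical, not the paper's):

```python
import numpy as np

def r_squared(auto, manual):
    """R^2 of a simple linear regression of automatic vs. manual measurements."""
    slope, intercept = np.polyfit(manual, auto, 1)
    pred = slope * manual + intercept
    ss_res = np.sum((auto - pred) ** 2)
    ss_tot = np.sum((auto - auto.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# demo: hypothetical manual lengths (mm) vs automatic lengths with small noise
rng = np.random.default_rng(2)
manual = rng.uniform(5.0, 25.0, size=200)
auto = manual + rng.normal(scale=0.05, size=200)   # sub-millimeter agreement
print(round(r_squared(auto, manual), 4))           # close to 1
```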

3.4. Shape Description and Quantification Results

Size- and shape-related statistical models based on the PCA method are obtained using the proposed method. F1m–F8m are the statistical shape models built from the 11 size-related phenotypes for broad beans, peanuts, pinto beans, soybeans, peas, black beans, red beans, and mung beans, respectively; F1s–F8s are the corresponding models built from the 22 shape-related phenotypes.
F1m = 0.1045X01 + 0.096X02 + 0.0893X03 + 0.1033X04 + 0.0846X05 + 0.0683X06 + 0.1053X07 + 0.0656X08 + 0.0777X09 + 0.1022X010 + 0.1033X011
F2m = 0.1118X01 + 0.1039X02 + 0.1013X03 + 0.1189X04 + 0.0819X05 + 0.1208X06 + 0.0524X07 + 0.0513X08 + 0.1227X09 + 0.1087X010 + 0.0263X011
F3m = 0.1042X01 + 0.1128X02 + 0.0906X03 + 0.1119X04 + 0.084X05 + 0.0797X06 + 0.077X07 + 0.05X08 + 0.1005X09 + 0.0873X010 + 0.102X011
F4m = 0.106X01 + 0.1051X02 + 0.1119X03 + 0.1174X04 + 0.0745X05 + 0.0743X06 + 0.0585X07 + 0.1159X08 + 0.0442X09 + 0.0925X010 + 0.0997X011
F5m = 0.1055X01 + 0.1051X02 + 0.1072X03 + 0.0837X04 + 0.1102X05 + 0.0802X06 + 0.074X07 + 0.0765X08 + 0.0943X09 + 0.0953X010 + 0.0679X011
F6m = 0.1083X01 + 0.1148X02 + 0.1043X03 + 0.122X04 + 0.0864X05 + 0.102X06 + 0.0569X07 + 0.0386X08 + 0.0561X09 + 0.0969X010 + 0.1137X011
F7m = 0.1048X01 + 0.1067X02 + 0.107X03 + 0.1047X04 + 0.0922X05 + 0.0819X06 + 0.0751X07 + 0.1121X08 + 0.0625X09 + 0.0647X010 + 0.0884X011
F8m = 0.1X01 + 0.0999X02 + 0.0993X03 + 0.0967X04 + 0.0941X05 + 0.0902X06 + 0.0901X07 + 0.0886X08 + 0.0879X09 + 0.0806X010 + 0.0726X011
F1s = 0.0654X1 + 0.0491X2 + 0.0263X3 + 0.0263X4 + 0.0683X5 + 0.0478X6 + 0.0845X7 + 0.0683X8 + 0.0791X9 + 0.0449X10 + 0.0686X11 + 0.0557X12 + 0.0109X13 + 0.0001X14 + 0.0348X15 + 0.0024X16 + 0.0127X17 + 0.0146X18 + 0.0792X19 + 0.0803X20 + 0.0406X21 + 0.0401X22
F2s = 0.0526X1 + 0.0422X2 + 0.0473X3 + 0.0411X4 + 0.0441X5 + 0.0488X6 + 0.0498X7 + 0.0397X8 + 0.0402X9 + 0.0104X10 + 0.0516X11 + 0.0746X12 + 0.0577X13 + 0.0717X14 + 0.0546X15 + 0.045X16 + 0.0711X17 + 0.0044X18 + 0.008X19 + 0.0332X20 + 0.0563X21 + 0.0558X22
F3s = 0.0669X1 + 0.0669X2 + 0.044X3 + 0.0869X4 + 0.0684X5 + 0.0686X6 + 0.0545X7 + 0.0559X8 + 0.0111X9 + 0.0105X10 + 0.0099X11 + 0.0581X12 + 0.0376X13 + 0.0549X14 + 0.0412X15 + 0.0035X16 + 0.0046X17 + 0.0803X18 + 0.0288X19 + 0.0202X20 + 0.036X21 + 0.091X22
F4s = 0.0649X1 + 0.0851X2 + 0.0538X3 + 0.0514X4 + 0.0487X5 + 0.0449X6 + 0.0241X7 + 0.0339X8 + 0.0446X9 + 0.0476X10 + 0.0472X11 + 0.0235X12 + 0.0253X13 + 0.0243X14 + 0.0959X15 + 0.0949X16 + 0.0009X17 + 0.0487X18 + 0.0503X19 + 0.0643X20 + 0.008X21 + 0.0177X22
F5s = 0.1141X1 + 0.1406X2 + 0.1418X3 + 0.0838X4 + 0.0025X5 + 0.0162X6 + 0.0105X7 + 0.02X8 + 0.0038X9 + 0.0744X10 + 0.0086X11 + 0.0137X12 + 0.0095X13 + 0.0075X14 + 0.0055X15 + 0.0173X16 + 0.1176X17 + 0.1165X18 + 0.0298X19 + 0.0327X20 + 0.0027X21 + 0.0309X22
F6s = 0.0532X1 + 0.0583X2 + 0.0472X3 + 0.0564X4 + 0.066X5 + 0.064X6 + 0.026X7 + 0.026X8 + 0.0882X9 + 0.0898X10 + 0.0907X11 + 0.0587X12 + 0.0085X13 + 0.0145X14 + 0.0449X15 + 0.0117X16 + 0.0137X17 + 0.0387X18 + 0.0413X19 + 0.036X20 + 0.0004X21 + 0.0658X22
F7s = 0.0398X1 + 0.0382X2 + 0.1072X3 + 0.0784X4 + 0.105X5 + 0.0269X6 + 0.0149X7 + 0.0136X8 + 0.0019X9 + 0.0582X10 + 0.0479X11 + 0.0999X12 + 0.1002X13 + 0.0331X14 + 0.0344X15 + 0.0058X16 + 0.0247X17 + 0.0323X18 + 0.0403X19 + 0.0184X20 + 0.0475X21 + 0.0311X22
F8s = 0.0689X1 + 0.0684X2 + 0.0538X3 + 0.0535X4 + 0.0312X5 + 0.0784X6 + 0.0343X7 + 0.0345X8 + 0.0717X9 + 0.0169X10 + 0.0514X11 + 0.0244X12 + 0.0002X13 + 0.0676X14 + 0.0676X15 + 0.0052X16 + 0.0012X17 + 0.0379X18 + 0.0685X19 + 0.0702X20 + 0.0738X21 + 0.0203X22
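The F models are linear combinations of standardized phenotypes whose coefficients are non-negative and sum to one (e.g., the F8m coefficients sum to 1.0). The paper does not spell out how the PCA results are turned into these weights; one plausible reconstruction, which we offer purely as an assumption, is to take the absolute principal component loadings weighted by explained variance and normalize them to unit sum:

```python
import numpy as np

def pca_weights(X):
    """Phenotype weights for a statistical model: absolute PCA loadings,
    weighted by explained variance and normalized to sum to one
    (an assumed reconstruction, not the paper's stated procedure)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize phenotypes
    cov = np.cov(Z, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]                # sort components by variance
    eigval, eigvec = eigval[order], eigvec[:, order]
    w = np.abs(eigvec) @ (eigval / eigval.sum())    # variance-weighted loadings
    return w / w.sum()

# demo on a hypothetical 300-sample x 11-phenotype matrix
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 11)) * rng.uniform(0.5, 2.0, size=11)
w = pca_weights(X)
print(np.round(w, 4), round(w.sum(), 4))            # 11 weights summing to 1
```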
Figure 9 and Figure 10 compare the weights of the different phenotypes in the size- and shape-related statistical shape models. The same phenotype carries different weights across species, and different phenotypes of the same species also carry different weights. The size- and shape-related statistical models are thus successfully built and provide a practical way to describe and quantify seed shape.

4. Discussion

A method for high-throughput seed phenotyping using a handheld 3D laser scanner is presented.
The degree of incompleteness of the data affects the point cloud completion results: generally, the smaller the incompleteness, the better the completion. Figure 11 shows the least-squares ellipse fits of the discrete contour points of a peanut seed under different degrees of incompleteness. The variation among the fits based on the original scanned point cloud, the point cloud with half of the data missing, and the point cloud with three-fourths of the data missing is small.
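This robustness can be reproduced on a synthetic contour. The sketch below uses a simplified axis-aligned least-squares fit (the paper uses the general direct fit [38]) on a noise-free contour with hypothetical semi-axes, dropping half and then three-fourths of the arc:

```python
import numpy as np

def fit_ellipse(pts):
    """Axis-aligned least-squares ellipse fit x^2/a^2 + y^2/b^2 = 1
    (a simplification of the direct fit used in the paper)."""
    coef, *_ = np.linalg.lstsq(pts**2, np.ones(len(pts)), rcond=None)
    return 1.0 / np.sqrt(coef)                        # semi-axes (a, b)

# noise-free contour of a hypothetical seed slice with semi-axes 6 x 3.5
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
contour = np.column_stack([6.0 * np.cos(t), 3.5 * np.sin(t)])

results = {}
for frac, label in [(1.0, "full"), (0.5, "half missing"), (0.25, "3/4 missing")]:
    a, b = fit_ellipse(contour[: int(400 * frac)])    # keep only part of the arc
    results[label] = (a, b)
    print(label, round(a, 2), round(b, 2))            # 6.0 3.5 in every case
```

Because the remaining quarter arc still constrains both axes, the fitted parameters barely change, mirroring the behaviour shown in Figure 11.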
The seed point clouds were acquired in batches with a 0.01 mm point cloud density using the handheld 3D laser scanner (RigelScan Elite). The ability to handle hundreds of samples per batch outperforms related works [8,10], which involve dozens of samples. The scanning angle in this work is arbitrary, which makes data acquisition more flexible. Data acquisition still requires manual operation; however, it is theoretically possible to mount the scanner on a robotic arm and acquire data automatically along several fixed scanning routes, which we plan to do in future work. The experiments show that 3396 of the 3400 samples were successfully scanned. All broad bean, peanut, soybean, pea, black bean, and mung bean seeds were scanned successfully; only a few pinto bean and red bean seeds failed, mainly because their surface reflection is stronger than that of the other species. Since the RigelScan Elite works on the triangulation principle, the background could in theory be changed to obtain more adequate scanning data for different species; in this work, the dark grey background was left unchanged because the scanning accuracy was already high. The experiments also show a segmentation accuracy of 100%, mainly because most seeds are isolated, only a few touch each other, and none overlap heavily.
The complete single-seed 3D model is obtained directly using the ellipse-based point cloud completion method with an average 3D reconstruction error of 0.017 mm. Figure 12 compares 3D models reconstructed by the screened Poisson surface reconstruction method [24], the symmetry-based 3D reconstruction method [33], and our method, all based on the incomplete scanned point cloud, with models built by commercial software (Geomagic Studio) from a manually completed scan. The models built by the screened Poisson surface reconstruction method tend to overfit and contain holes. The symmetry-based method only suits seeds with symmetrical shapes, such as soybeans, black beans, red beans, and mung beans, and fails on irregular shapes such as broad beans, peanuts, and pinto beans. The models built by our method are close to those from Geomagic Studio: they are smooth, well fitted, and represent the real seed shape well, which verifies that the proposed 3D modeling method is effective and robust and outperforms the related works.

5. Conclusions

The objective of the proposed method is automatic seed phenotyping for the ideal case, without heavily overlapping or stuck-together seeds. This goal has been achieved through batch data acquisition using a handheld 3D laser scanner with 0.01 mm point cloud density (99.88% scanning accuracy) and an automatic data processing pipeline comprising single-seed extraction (100% segmentation accuracy), pose normalization, ellipse-based point cloud completion (0.017 mm completion error), trait estimation (R2 above 0.9981 for size-related phenotypes and above 0.8421 for shape-related phenotypes), and the establishment of two PCA-based statistical models (one using size-related and the other using shape-related phenotypes). Experiments on eight kinds of seeds with different shapes show that the batch acquisition of hundreds of samples, the well-fitted single-seed 3D models, the number and types of extracted phenotypes, and the unified statistical models for seed shape description and quantification outperform related works. These capabilities indicate that the proposed method has potential applications in precision agriculture, such as high-throughput seed phenotyping, seed species recognition and classification, and yield trait scoring.
This study can be improved by combining the handheld 3D laser scanner with a robotic arm for automatic data acquisition. Since no heavily overlapping or stuck-together seeds were present in our experiments, further research will explore a more effective segmentation method for such cases. More experiments on seed species recognition and classification using the built statistical models will also be conducted to verify the potential applications of the proposed method.

Author Contributions

Conceptualization, X.H. and B.Z.; methodology, X.H. and F.Z.; validation, X.H. and B.Z.; formal analysis, X.H. and F.Z.; writing—original draft preparation, X.H. and B.Z.; writing—review and editing, X.H. and B.Z.; visualization, X.H. and X.W.; supervision, X.H. and X.W.; funding acquisition, X.H. and B.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (42101446), the China Postdoctoral Science Foundation (2022T150488), the Special Scientific Research Fund for Doctoral Programs in Colleges and Universities (2023RC009), and the Key Laboratory of China-ASEAN Satellite Remote Sensing Applications of the Ministry of Natural Resources (GDMY202308).

Data Availability Statement

Data and code from this research are available upon request from the authors.

Acknowledgments

The authors sincerely thank anonymous reviewers and members of the editorial team for their comments. Thanks to Senior Engineer Liu for providing assistance in instrumentation for this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Scanning and Segmentation Results

Figure A1. Scanning and segmentation results: (a) seeds on the table ready for data scanning; (b) the obtained point clouds (rendered for effective visualization); (c) segmentation results; (d) the detailed display of the obtained point clouds in the red box area of (b), and (e) the detailed display of the obtained point clouds in the red box area of (c).
Figure A2. Seed phenotype measurement results.
Figure A3. Comparison between the automatically and manually measured phenotypes of seeds.

References

  1. Chen, Z.; Lancon-Verdier, V.; Le Signor, C.; She, Y.-M.; Kang, Y.; Verdier, J. Genome-wide association study identified candidate genes for seed size and seed composition improvement in M. truncatula. Sci. Rep. 2021, 11, 4224. [Google Scholar] [CrossRef] [PubMed]
  2. Hasan, S.; Furtado, A.; Henry, R. Analysis of domestication loci in wild rice populations. Plants 2023, 12, 489. [Google Scholar] [CrossRef] [PubMed]
  3. Yong, Q. Research on painting image classification based on transfer learning and feature fusion. Math. Probl. Eng. 2022, 2022, 5254823. [Google Scholar] [CrossRef]
  4. Loddo, A.; Loddo, M.; Di Ruberto, C. A novel deep learning based approach for seed image classification and retrieval. Comput. Electron. Agric. 2021, 187, 106269–106280. [Google Scholar] [CrossRef]
  5. Ashfaq, M.; Khan, A.S.; Ullah Khan, S.H.; Ahmad, R. Association of various morphological traits with yield and genetic divergence in rice (Oryza sativa). Int. J. Agric. Biol. 2012, 14, 55–62. [Google Scholar] [CrossRef]
  6. Sanwong, P.; Sanitchon, J.; Dongsansuk, A.; Jothityangkoon, D. High temperature alters phenology, seed development and yield in three rice varieties. Plants 2023, 12, 666. [Google Scholar] [CrossRef] [PubMed]
  7. Malinowski, D.P.; Rudd, J.C.; Pinchak, W.E.; Baker, J.A. Determining morphological traits for selecting wheat (Triticum aestivum L.) with improved early-season forage production. J. Adv. Agric. 2018, 9, 1511–1533. [Google Scholar] [CrossRef]
  8. Liu, W.; Liu, C.; Jin, J.; Li, D.; Fu, Y.; Yuan, X. High-throughput phenotyping of morphological seed and fruit characteristics using X-ray computed tomography. Front. Plant Sci. 2020, 11, 601475–601485. [Google Scholar] [CrossRef] [PubMed]
  9. Liang, X.; Wang, K.; Huang, C.; Zhang, X.; Yan, J.; Yang, W. A high-throughput maize kernel traits scorer based on line-scan imaging. Measurement 2016, 90, 453–460. [Google Scholar] [CrossRef]
  10. Feng, X.; He, P.; Zhang, H.; Yin, W.; Qian, Y.; Cao, P.; Hu, F. Rice seeds identification based on back propagation neural network model. Int. J. Agric. Biol. Eng. 2019, 12, 122–128. [Google Scholar] [CrossRef]
  11. Sharma, R.; Kumar, M.; Alam, M.S. Image processing techniques to estimate weight and morphological parameters for selected wheat refractions. Sci. Rep. 2021, 11, 20953–20965. [Google Scholar] [CrossRef] [PubMed]
  12. Chu, Y.; Chee, P.; Isleib, T.G.; Holbrook, C.C.; Ozias-Akins, P. Major seed size QTL on chromosome A05 of peanut (Arachis hypogaea) is conserved in the US mini core germplasm collection. Mol. Breed. 2020, 40, 6. [Google Scholar] [CrossRef]
  13. McDonald, L.; Panozzo, J. A review of the opportunities for spectral-based technologies in post-harvest testing of pulse grains. Legume Sci. 2023, 5, e175. [Google Scholar] [CrossRef]
  14. Juan, A.; Martín-Gómez, J.J.; Rodríguez-Lorenzo, J.L.; Janoušek, B.; Cervantes, E. New techniques for seed shape description in silene species. Taxonomy 2022, 2, 1–19. [Google Scholar] [CrossRef]
  15. Cervantes, E.; Martín Gómez, J.J. Seed shape description and quantification by comparison with geometric models. Horticulturae 2019, 5, 60. [Google Scholar] [CrossRef]
  16. Merieux, N.; Cordier, P.; Wagner, M.-H.; Ducournau, S.; Aligon, S.; Job, D.; Grappin, P.; Grappin, E. ScreenSeed as a novel high throughput seed germination phenotyping method. Sci. Rep. 2021, 11, 1404. [Google Scholar] [CrossRef] [PubMed]
  17. Zhu, Y.; Song, B.; Guo, Y.; Wang, B.; Xu, C.; Zhu, H.; Lizhu, E.; Lai, J.; Song, W.; Zhao, H. QTL analysis reveals conserved and differential genetic regulation of maize lateral angles above the ear. Plants 2023, 12, 680. [Google Scholar] [CrossRef]
  18. Cervantes, E.; Martin Gomez, J.J. Seed shape quantification in the order Cucurbitales. Mod. Phytomorphol. 2018, 12, 1–13. [Google Scholar] [CrossRef]
  19. Rahman, A.; Cho, B.-K. Assessment of seed quality using non-destructive measurement techniques: A review. Seed Sci. Res. 2016, 26, 285–305. [Google Scholar] [CrossRef]
  20. Zhang, Z.; Ma, X.; Guan, H.; Zhu, K.; Feng, J.; Yu, S. A method for calculating the leaf inclination of soybean canopy based on 3D point clouds. Int. J. Remote Sens. 2021, 42, 5721–5742. [Google Scholar] [CrossRef]
  21. Li, H.; Qian, Y.; Cao, P.; Yin, W.; Dai, F.; Hu, F.; Yan, Z. Calculation method of surface shape feature of rice seed based on point cloud. Comput. Electron. Agric. 2017, 142, 416–423. [Google Scholar] [CrossRef]
  22. Yan, H. 3D scanner-based corn seed modeling. Appl. Eng. Agric. 2016, 32, 181–188. [Google Scholar] [CrossRef]
  23. García-Lara, S.; Chuck-Hernandez, C.; Serna-Saldivar, S.O. Development and structure of the Corn Kernel. In Corn, 3rd ed.; Serna-Saldivar, S.O., Ed.; AACC International Press: Oxford, UK, 2019; pp. 147–163. [Google Scholar] [CrossRef]
  24. Kazhdan, M.; Chuang, M.; Rusinkiewicz, S.; Hoppe, H. Poisson surface reconstruction with envelope constraints. Proc. Comput. Graph. Forum 2020, 39, 173–182. [Google Scholar] [CrossRef]
  25. Zhang, W.; Xiao, C. PCAN: 3D attention map learning using contextual information for point cloud based retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12436–12445. [Google Scholar] [CrossRef]
  26. Zhang, J.; Chen, X.; Cai, Z.; Pan, L.; Zhao, H.; Yi, S.; Yeo, C.K.; Dai, B.; Loy, C.C. Unsupervised 3D shape completion through GAN inversion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 1768–1777. [Google Scholar] [CrossRef]
  27. Cervantes, E.; Martín, J.J.; Chan, P.K.; Gresshoff, P.M.; Tocino, Á. Seed shape in model legumes: Approximation by a cardioid reveals differences in ethylene insensitive mutants of Lotus japonicus and Medicago truncatula. J. Plant Physiol. 2012, 169, 1359–1365. [Google Scholar] [CrossRef] [PubMed]
  28. Chang, F.; Lv, W.; Lv, P.; Xiao, Y.; Yan, W.; Chen, S.; Zheng, L.; Xie, P.; Wang, L.; Karikari, B. Exploring genetic architecture for pod-related traits in soybean using image-based phenotyping. Mol. Breed. 2021, 41, 28. [Google Scholar] [CrossRef]
  29. Sun, X.; Wang, H.; Wang, W.; Li, N.; Hämäläinen, T.; Ristaniemi, T.; Liu, C. A statistical model of spine shape and material for population-oriented biomechanical simulation. IEEE Access 2021, 9, 155805–155814. [Google Scholar] [CrossRef]
  30. Martín-Gómez, J.J.; Rewicz, A.; Rodríguez-Lorenzo, J.L.; Janoušek, B.; Cervantes, E. Seed morphology in silene based on geometric models. Plants 2020, 9, 1787. [Google Scholar] [CrossRef] [PubMed]
  31. Feng, L.; Zhu, S.; Liu, F.; He, Y.; Bao, Y.; Zhang, C. Hyperspectral imaging for seed quality and safety inspection: A review. Plant Methods 2019, 15, 91. [Google Scholar] [CrossRef] [PubMed]
  32. Frangi, A.F.; Rueckert, D.; Schnabel, J.A.; Niessen, W.J. Automatic construction of multiple-object three-dimensional statistical shape models: Application to cardiac modeling. IEEE Trans. Med. Imaging 2002, 21, 1151–1166. [Google Scholar] [CrossRef]
  33. Huang, X.; Zheng, S.; Zhu, N. High-throughput legume seed phenotyping using a handheld 3D laser scanner. Remote Sens. 2022, 14, 431. [Google Scholar] [CrossRef]
  34. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for point-cloud shape detection. Proc. Comput. Graph. Forum 2007, 26, 214–226. [Google Scholar] [CrossRef]
  35. Li, D.; Yan, C.; Tang, X.S.; Yan, S.; Xin, C. Leaf segmentation on dense plant point clouds with facet region growing. Sensors 2018, 18, 3625. [Google Scholar] [CrossRef] [PubMed]
  36. Kim, M.; Lee, D.; Kim, T.; Oh, S.; Cho, H. Automated extraction of geometric primitives with solid lines from unstructured point clouds for creating digital buildings models. Autom. Constr. 2023, 145, 104642. [Google Scholar] [CrossRef]
  37. Cervantes, E.; Martín, J.J.; Saadaoui, E. Updated methods for seed shape analysis. Scientifica 2016, 2016, 5691825. [Google Scholar] [CrossRef] [PubMed]
  38. Fitzgibbon, A.; Pilu, M.; Fisher, R.B. Direct least square fitting of ellipses. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 476–480. [Google Scholar] [CrossRef]
  39. Kulikov, V.A.; Khotskin, N.V.; Nikitin, S.V.; Lankin, V.S.; Kulikov, A.V.; Trapezov, O.V. Application of 3-D imaging sensor for tracking minipigs in the open field test. J. Neurosci. Methods 2014, 235, 219–225. [Google Scholar] [CrossRef] [PubMed]
  40. Kazhdan, M.; Hoppe, H. Screened poisson surface reconstruction. ACM Trans. Graph. 2013, 32, 1–13. [Google Scholar] [CrossRef]
  41. Hu, W.; Zhang, C.; Jiang, Y.; Huang, C.; Liu, Q.; Xiong, L.; Yang, W.; Chen, F. Nondestructive 3D image analysis pipeline to extract rice grain traits using X-ray computed tomography. Plant Phenomics 2020, 3, 3414926. [Google Scholar] [CrossRef]
  42. Yalçın, İ.; Özarslan, C.; Akbaş, T. Physical properties of pea (Pisum sativum) seed. J. Food Eng. 2007, 79, 731–735. [Google Scholar] [CrossRef]
  43. Tateyama, T.; Foruzan, H.; Chen, Y. 2D-PCA based statistical shape model from few medical samples. In Proceedings of the 2009 Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Kyoto, Japan, 12–14 September 2009; pp. 1266–1269. [Google Scholar] [CrossRef]
Figure 1. Flowchart for automatic measurement of seed geometric parameters based on handheld scanners.
Figure 2. Data acquisition: (a) process of data scanning; (b) details of obtained point clouds monitoring in real time with rendering visualization; (c) details of peanut scanning (red laser crosses are laser beams, and the white points are marker points); (d) one sample of the obtained original peanut point cloud, and (e) the obtained point clouds of peanuts (filtering 50% for effective visualization).
Figure 3. The point cloud processing: (a) the scanned point cloud of peanuts; (b) the preserved point clouds after RANSAC plane detection; (c) the clusters of point clouds after region growing segmentation; (d) the single-seed segmentation result; (e) details of the scanned point clouds in blue box area in (b); (f) details of the single-seed segmentation result in red box area in (d), and (g,h) the single peanut seed point cloud in the world coordinate system before and after poses normalization.
Figure 4. Seed longitudinal profile (YOZ) contour fitted by: (a) B-spline curve; (b) circle, and (c) least-squares ellipse.
Figure 5. The point cloud processing: (a) a series of sliced point clouds; (b) one example of the sliced profile point cloud; (c) the fitted ellipse (in blue) based on the sliced point cloud; (d) the filled complete profile point cloud; (e–g) one example of the reconstructed peanut point cloud in three view angles, where the red point cloud is the incomplete scanned point cloud and the blue one is the completed point cloud using our proposed ellipse fitting-based point cloud completion method; (h) the filtered single peanut seed point cloud, and (i,j) the triangle mesh and surface visualization of the peanut’s 3D model built by the Poisson surface reconstruction method.
Figure 6. Visualization of size-related phenotypes of a peanut: (a) triangulated Poisson mesh, an AABB box of the single peanut 3D model; (b) a peanut divided by three perpendicular principal component profiles (the first in magenta, the second in yellow, and the third in green), and (c–e) the first, second, and third profiles.
Figure 7. 3D point cloud completion results: (a) original scanned point clouds; (b) completed point clouds, and (c) ground truth point clouds.
Figure 8. 3D point cloud completion errors.
Figure 9. Weight comparisons among 8 types of seeds in the size-related statistical model.
Figure 10. Weight comparisons among 8 types of seeds in the shape-related statistical model.
Figure 11. Ellipse fitting based on (a) the original scanned point cloud; (b) the point cloud with half the data missing; (c) the point cloud with three-fourths of the data missing.
Figure 12. Comparisons of 3D models reconstructed by: (a) screened Poisson surface reconstruction; (b) symmetry-based 3D reconstruction method; (c) our proposed method based on the incomplete scanning point cloud, and (d) commercial software (Geomagic Studio) based on the artificially obtained complete scanning point cloud.
Table 1. Phenotypes. NO.: the order of the phenotype. Sym.: symbol of the phenotype. Var.: variable.

Size-related phenotypes:
NO. | Traits | Sym. | Var.
1 | Volume | V | X01
2 | Surface area | S | X02
3 | Length | L | X03
4 | Width | W | X04
5 | Thickness | T | X05
6 | Horizontal cross-section perimeter | C1 | X06
7 | Transverse cross-section perimeter | C2 | X07
8 | Longitudinal cross-section perimeter | C3 | X08
9 | Horizontal cross-section area | A1 | X09
10 | Transverse cross-section area | A2 | X010
11 | Longitudinal cross-section area | A3 | X011

Shape-related phenotypes:
NO. | Traits | Sym. | Var.
12 | Radius ratio | RR | X1
13 | Geometric mean | D = (LWT)^(1/3) | X2
14 | Roundness | R = L/(WT)^(1/2) | X3
15 | Needle degree | ND = L/W | X4
16 | Flatness | F = T/W | X5
17 | Shape factor | SF = TL/W^2 | X6
18 | Sphericity | SP = (WT/L^2)^(1/3) | X7
19 | Elongation 1 | E1 = abs((L − W)/L) | X8
20 | Elongation 2 | E2 = abs((L − T)/L) | X9
21 | Elongation 3 | E3 = abs((T − W)/W) | X10
22 | Circularity 1 | CR1 = C1^2/(4πA1) | X11
23 | Circularity 2 | CR2 = C2^2/(4πA2) | X12
24 | Circularity 3 | CR3 = C3^2/(4πA3) | X13
25 | Compactness 1 | CP1 = 16A1/C1^2 | X14
26 | Compactness 2 | CP2 = 16A2/C2^2 | X15
27 | Compactness 3 | CP3 = 16A3/C3^2 | X16
28 | Bounding rectangle 1 | BR1 = A1/(LW) | X17
29 | Bounding rectangle 2 | BR2 = A2/(LT) | X18
30 | Bounding rectangle 3 | BR3 = A3/(WT) | X19
31 | Bounding rectangle to perimeter 1 | BP1 = C1/(2(L + W)) | X20
32 | Bounding rectangle to perimeter 2 | BP2 = C2/(2(L + T)) | X21
33 | Bounding rectangle to perimeter 3 | BP3 = C3/(2(T + W)) | X22
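The shape-related formulas in Table 1 reduce to a few lines of arithmetic once the size measurements are available; a minimal sketch with hypothetical peanut-like dimensions (the input values are illustrative, not measured data):

```python
import math

def shape_phenotypes(L, W, T, C1, A1):
    """A subset of Table 1's shape-related phenotypes from size measurements
    (L, W, T, perimeter C1 in mm; area A1 in mm^2)."""
    return {
        "roundness":      L / math.sqrt(W * T),          # R  = L/(WT)^(1/2)
        "flatness":       T / W,                          # F  = T/W
        "sphericity":     (W * T / L**2) ** (1.0 / 3.0),  # SP = (WT/L^2)^(1/3)
        "elongation1":    abs((L - W) / L),               # E1
        "circularity1":   C1**2 / (4 * math.pi * A1),     # CR1 (1 for a circle)
        "bounding_rect1": A1 / (L * W),                   # BR1 (< 1 by definition)
    }

# demo with hypothetical dimensions
p = shape_phenotypes(L=14.0, W=8.0, T=7.5, C1=35.0, A1=88.0)
for name, value in p.items():
    print(name, round(value, 4))
```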
Table 2. Scanning and segmentation accuracies.

Seed | N1 | N2 | N3 | Scanning Accuracy (R_scan) | Segmentation Accuracy (R_seg1)
Broad bean | 300 | 300 | 300 | 100.00% | 100.00%
Peanut | 300 | 300 | 300 | 100.00% | 100.00%
Pinto bean | 300 | 299 | 299 | 99.67% | 100.00%
Soybean | 500 | 500 | 500 | 100.00% | 100.00%
Pea | 500 | 500 | 500 | 100.00% | 100.00%
Black bean | 500 | 500 | 500 | 100.00% | 100.00%
Red bean | 500 | 497 | 497 | 99.40% | 100.00%
Mung bean | 500 | 500 | 500 | 100.00% | 100.00%

Share and Cite

MDPI and ACS Style

Huang, X.; Zhu, F.; Wang, X.; Zhang, B. Automatic Measurement of Seed Geometric Parameters Using a Handheld Scanner. Sensors 2024, 24, 6117. https://doi.org/10.3390/s24186117
