Article

Data-Weighted Multivariate Generalized Gaussian Mixture Model: Application to Point Cloud Robust Registration

Concordia Institute for Information Systems Engineering, Concordia University, 1515 St. Catherine Street West, Montreal, QC H3G 2W1, Canada
*
Author to whom correspondence should be addressed.
J. Imaging 2023, 9(9), 179; https://doi.org/10.3390/jimaging9090179
Submission received: 31 July 2023 / Revised: 24 August 2023 / Accepted: 28 August 2023 / Published: 31 August 2023
(This article belongs to the Special Issue Feature Papers in Section AI in Imaging)

Abstract

In this paper, a weighted multivariate generalized Gaussian mixture model combined with stochastic optimization is proposed for point cloud registration. The mixture model parameters of the target scene and the scene to be registered are updated iteratively by the fixed point method under the framework of the EM algorithm, and the number of components is determined based on the minimum message length (MML) criterion. The KL divergence between these two mixture models is used as the loss function of a stochastic optimization that finds the optimal parameters of the transformation model. Self-built point clouds are used to evaluate the performance of the proposed algorithm on rigid registration. Experiments demonstrate that the algorithm dramatically reduces the impact of noise and outliers and effectively extracts the key features of data-intensive regions.

1. Introduction

The purpose of point cloud registration is to extract the key points or features shared by the target point set and the point set to be registered, and to find the transformation that maps one point set onto the other [1,2,3,4,5]. This task, which involves image processing, data analysis, and computer vision, has essential applications in many practical scenarios.
For instance, point cloud registration is essential for driverless technology. Various hardware sensors, such as lidars, short-wave radars, and depth-of-field cameras, can be mounted on an unmanned vehicle, and point cloud registration is used to fuse the data collected from these sensors [6,7,8], providing fundamental functions such as scene stitching, vehicle positioning [9], and the recognition and matching of typical scenes for vehicle control strategies. For example, the authors in [10] proposed a framework for unmanned vehicles based on an end-to-end point cloud registration deep network. They obtained the corresponding relationship through learned matching probabilities (LMP) among a group of candidate points related to static characteristics instead of using existing points. Point cloud registration is also applied in medical imaging [11,12]. To facilitate diagnosis, several medical images from different instruments, such as Positron Emission Tomography (PET), Computed Tomography (CT), and Magnetic Resonance Imaging (MRI), need to be combined [13]. For example, the authors in [14] improved the popular Iterative Closest Point (ICP) algorithm by combining it with the 3D scale-invariant feature transform to register 3D free-form closed surfaces (human skull models). In another work, the authors in [15] used a Gaussian mixture model (GMM) with a semi-supervised EM algorithm and geometric constraints to achieve retinal image registration. Moreover, 3D reconstruction makes extensive use of point cloud registration technology [16,17]. For large buildings, for example, general scanning equipment cannot cover the whole object in a single pass because its range is limited; multiple parts must be scanned and the resulting point clouds spliced together [18]. In other cases, the objects to be observed may be dynamic or have complex surface characteristics. The accuracy of these features plays a crucial role in modelling and analysis, and repeated scanning followed by fusion can improve the level of detail [19].
Point cloud registration is a challenging task because of the acquisition viewpoint, the noise and outliers generated during acquisition, and the deformations and missing parts of the point set caused by other factors [1]. Various methods have been proposed to enhance the robustness and accuracy of point cloud registration. For pairwise registration, which considers only two point sets, there are three main categories: distance-based methods, including ICP [20] and Graph Matching (GM) [21]; filter-based methods; and probability-based methods [22].
However, most point-to-point methods are prone to fall into local optima, especially if there are similar point structure blocks in the point set. To improve this situation, registration based on mixture models (most of which are GMM-based) has proven effective [22,23,24,25]. The core idea is to describe the probability distribution of each point set with a parameterized mixture model and to determine the correspondence between point sets by finding the closest match between the mixture models. These models perform well even if the two point clouds have different sampling rates.
Nonetheless, the GMM has two evident deficiencies. First, it cannot effectively describe certain non-Gaussian distributions, such as the typical peaked, heavy-tailed distributions encountered in signal processing [26,27,28,29]. In a point cloud, intuitively speaking, regions with dense data carry more information. These high-density areas may represent the crucial feature structures in the point cloud, yet the GMM cannot fit these high-density blocks effectively, and its results tend to be averaged out. Secondly, the GMM is easily disturbed by noise: different noise levels yield divergent model parameters, which can compromise the final registration accuracy [30].
The goal of this paper is to address the difficulties above with a point cloud registration method based on a weighted multivariate generalized Gaussian mixture model (WMGGMM) that we develop here. The generalized Gaussian distribution (GGD) belongs to the family of elliptic distributions. Thanks to its additional shape parameter, it can describe a wide variety of data distributions, particularly peaked ones [31,32]. Its special cases include the Gaussian and Laplace distributions, and it is therefore widely used in feature extraction [33,34,35] and texture retrieval [36,37,38]. Mixture models of GGDs have also been applied in several applications, such as image processing and segmentation [39,40,41] and human movement recognition [42,43].
We show that the generalized Gaussian mixture model (GGMM) is a worthy alternative for point cloud registration when real-time analysis is not required. Although most parameters of the GGMM have no closed-form solutions, it offers high registration accuracy and robustness. In addition, we introduce weights to reinforce the model's attention to dense areas and to reduce the influence of noise and outliers on the parameter estimation process. After obtaining the GGMM models for the target scene and the scene to be registered, the approximate Kullback–Leibler divergence (KLD) is computed to measure the difference between the models. This divergence is used as a loss function to find the optimal registration parameters through a stochastic optimization algorithm.
The paper is organized as follows. Section 2 tackles the WMGGMM parameter estimation and gives the complete learning algorithm. Section 3 presents experimental results on synthetic data sets that verify the algorithm's performance and robustness; it also describes how the optimal registration parameters are obtained using stochastic optimization and presents the final registration results for rigid transformations. Conclusions and future work are reported in Section 4.

2. Weighted Multivariate Generalized Gaussian Mixture Model

In the majority of existing works, a feature independence assumption is used to simplify the modeling of high-dimensional data, which results in a distribution that is a product of one-dimensional generalized Gaussians [41,42]. Unlike these works, we use here the PDF as defined in [31]:
p(x; \mu, \Sigma, \beta, m) = \frac{\Gamma\left(\frac{d}{2}\right)\beta}{\Gamma\left(\frac{d}{2\beta}\right)\pi^{\frac{d}{2}}\, 2^{\frac{d}{2\beta}}\, m^{\frac{d}{2}}\, |\Sigma|^{\frac{1}{2}}}\; e^{-\frac{1}{2m^{\beta}}\left[(x-\mu)^{T}\Sigma^{-1}(x-\mu)\right]^{\beta}}    (1)
where Γ(·) denotes the Gamma function; x, μ ∈ R^d, with μ the mean vector; Σ is a d × d real symmetric positive definite matrix called the scatter matrix; and m > 0 and β > 0 are the scale and shape parameters of the MGGD. The shape parameter controls the sharpness of the peak and the extension of the tails of the probability density function. It is worth noting that when β = 1, the MGGD becomes the multivariate Gaussian distribution. If β < 1, the distribution has a sharper peak and heavier tails, whereas when β tends to infinity, the MGGD approaches a multivariate uniform distribution [31].
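To make Equation (1) concrete, here is a minimal NumPy/SciPy sketch that evaluates the MGGD density on the log scale; the function and variable names are our own illustrative choices, not part of the paper. Working on the log scale avoids underflow when the quadratic form is large.

```python
import numpy as np
from scipy.special import gammaln

def mggd_logpdf(x, mu, sigma, beta, m):
    """Log-density of the MGGD in Equation (1).

    x     : (d,) or (N, d) array of points
    mu    : (d,) mean vector
    sigma : (d, d) scatter matrix (symmetric positive definite)
    beta  : shape parameter (> 0)
    m     : scale parameter (> 0)
    """
    x = np.atleast_2d(x)
    d = mu.shape[0]
    diff = x - mu
    # quadratic form (x - mu)^T Sigma^{-1} (x - mu) for every point
    y = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(sigma), diff)
    log_norm = (gammaln(d / 2.0) + np.log(beta)
                - gammaln(d / (2.0 * beta))
                - (d / 2.0) * np.log(np.pi)
                - (d / (2.0 * beta)) * np.log(2.0)
                - (d / 2.0) * np.log(m)
                - 0.5 * np.linalg.slogdet(sigma)[1])
    return log_norm - y**beta / (2.0 * m**beta)
```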

2.1. Data Weighting

Recent research has shown that data weighting can improve modeling capabilities (see, for example, [44] and references therein). Here, we follow these works via an extension to GGMM-based modeling. As proposed in [44], each sample receives a corresponding weight w greater than 0. If we consider the likelihood of a specific sample x, we can write it as p(x; μ, Σ, β, m)^w. This is not a probability distribution because its integral is not equal to one. However, we notice that p(x; μ, Σ, β, m)^w ∝ p(x; μ, Σ, β, m w^{−1/β}), and we can therefore obtain a PDF with the weight as a parameter:
\hat{p}(x; \theta, w) = p\left(x; \mu, \Sigma, \beta, m\, w^{-1/\beta}\right)    (2)
where θ = {μ, Σ, β, m}. The individual weight directly influences m in this PDF; however, during parameter estimation, the weights simultaneously affect the scale and shape of the final mixture. Having the new distribution in Equation (2) in hand, we can obtain the K-component mixture:
\tilde{p}(x; \Theta, w) = \sum_{k=1}^{K}\pi_{k}\, p\left(x; \mu_{k}, \Sigma_{k}, \beta_{k}, m_{k}\, w^{-1/\beta_{k}}\right)    (3)
where Θ = {π_1, ..., π_K, θ_1, ..., θ_K} denotes the parameter set of the model, and (π_1, ..., π_K) are the mixing coefficients, which satisfy π_k > 0 and Σ_{k=1}^{K} π_k = 1. θ_k = (μ_k, Σ_k, β_k, m_k) are the parameters of the kth component. Let X = {x_1, ..., x_N} represent the whole data set and W = {w_1, ..., w_N} be the weight set; then the log-likelihood function is given by:
\mathcal{L}(X \mid \Theta, W) = \sum_{i=1}^{N}\ln\sum_{k=1}^{K}\pi_{k}\, p\left(x_{i}; \mu_{k}, \Sigma_{k}, \beta_{k}, m_{k}\, w_{i}^{-1/\beta_{k}}\right)    (4)

2.2. MGGD with Fixed Weights

For maximum likelihood estimation, the missing variables Z = { z 1 , . . . , z N } are introduced. If x i is generated by the kth component, then z i = k . We assume that the weights are already known by prior knowledge, so the expected complete data log-likelihood (Q function) can be written as:
Q_{c}\left(\Theta, \Theta^{(r)}\right) = E_{p\left(Z \mid X; W, \Theta^{(r)}\right)}\left[\ln P(X, Z; W, \Theta)\right] \overset{\Theta}{=} \sum_{i=1}^{N}\sum_{k=1}^{K}\eta_{ik}\left[\ln\pi_{k} + \ln\beta_{k} + \frac{d}{2\beta_{k}}\ln w_{i} - \ln\Gamma\left(\frac{d}{2\beta_{k}}\right) - \frac{d}{2\beta_{k}}\ln 2 - \frac{d}{2}\ln m_{k} - \frac{1}{2}\ln|\Sigma_{k}| - \frac{w_{i}\left[(x_{i}-\mu_{k})^{T}\Sigma_{k}^{-1}(x_{i}-\mu_{k})\right]^{\beta_{k}}}{2 m_{k}^{\beta_{k}}}\right]    (5)
where E_P[·] is the expectation with respect to the distribution P, and \overset{\Theta}{=} indicates that only the terms related to Θ are retained. Subsequently, the optimal parameters Θ* can be obtained by the EM algorithm. In the expectation step, the posteriors are updated with:
\eta_{ik} = p\left(z_{i}=k \mid x_{i}; \Theta, w_{i}\right) = \frac{\pi_{k}\,\hat{p}\left(x_{i}; \theta_{k}, w_{i}\right)}{\sum_{l=1}^{K}\pi_{l}\,\hat{p}\left(x_{i}; \theta_{l}, w_{i}\right)}    (6)
By taking the derivatives of the complete data log-likelihood with respect to the parameters and making the results equal to zero, we can obtain the parameter update formulas:
\pi_{k} = \frac{1}{N}\sum_{i=1}^{N}\eta_{ik}    (7)
m_{k} = \left[\frac{\beta_{k}}{d\sum_{i=1}^{N}\eta_{ik}}\sum_{i=1}^{N}\eta_{ik}\, w_{i}\left[(x_{i}-\mu_{k})^{T}\Sigma_{k}^{-1}(x_{i}-\mu_{k})\right]^{\beta_{k}}\right]^{\frac{1}{\beta_{k}}}    (8)
\mu_{k} = \frac{\sum_{i=1}^{N}\eta_{ik}\, w_{i}\left[(x_{i}-\mu_{k})^{T}\Sigma_{k}^{-1}(x_{i}-\mu_{k})\right]^{\beta_{k}-1} x_{i}}{\sum_{i=1}^{N}\eta_{ik}\, w_{i}\left[(x_{i}-\mu_{k})^{T}\Sigma_{k}^{-1}(x_{i}-\mu_{k})\right]^{\beta_{k}-1}}    (9)
\Sigma_{k} = \frac{\beta_{k}}{m_{k}^{\beta_{k}}\sum_{i=1}^{N}\eta_{ik}}\sum_{i=1}^{N}\eta_{ik}\, w_{i}\,(x_{i}-\mu_{k})(x_{i}-\mu_{k})^{T}\left[(x_{i}-\mu_{k})^{T}\Sigma_{k}^{-1}(x_{i}-\mu_{k})\right]^{\beta_{k}-1}    (10)
After substituting the result of Equation (8) for m_k in Equation (10), we find that Σ_k is independent of m_k. If we let y_i = (x_i − μ_k)^T Σ_k^{−1} (x_i − μ_k), we have:
\Sigma_{k} = \frac{d}{\sum_{i=1}^{N}\eta_{ik}\, w_{i}\, y_{i}^{\beta_{k}}}\sum_{i=1}^{N}\eta_{ik}\, w_{i}\,(x_{i}-\mu_{k})(x_{i}-\mu_{k})^{T}\, y_{i}^{\beta_{k}-1}    (11)
Furthermore, it is worth noting that μ_k and Σ_k do not have closed-form solutions; they are both solved through the fixed point (FP) method. According to Banach's fixed point theorem [45], if (S, d) is a non-empty complete metric space with a contraction mapping T: S → S, then there exists a unique fixed point S* in S. The authors in [31] proved the convergence of Σ_k and explained the existence and uniqueness of the fixed point, based on the fact that β ∈ (0, 1] and Σ is a positive definite real symmetric matrix. This is also consistent with our assumptions about β. We use the Frobenius norm defined in Equation (12) to measure the difference between S_n and S_{n−1} in the fixed point iteration, and the process stops when the approximate solution satisfies the preset precision. Furthermore, to ensure the convergence of the FP equation, the weights should also lie between zero and one.
\left\|S_{n} - S_{n-1}\right\|_{F} = \sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}\left|s_{n,ij} - s_{n-1,ij}\right|^{2}}    (12)
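As an illustration of the fixed-point update of Σ_k in Equation (11) with the stopping rule of Equation (12), the following NumPy sketch (our own naming, assuming the responsibilities and weights are already available) may help:

```python
import numpy as np

def update_sigma_fixed_point(X, mu, eta, w, beta, sigma0, eps=1e-6, max_iter=100):
    """Fixed-point iteration for the scatter matrix of one component (Eq. (11)).

    X      : (N, d) data
    mu     : (d,) current component mean
    eta    : (N,) responsibilities eta_ik
    w      : (N,) data weights
    beta   : shape parameter of the component
    sigma0 : (d, d) initial scatter matrix
    """
    d = X.shape[1]
    sigma = sigma0.copy()
    diff = X - mu
    for _ in range(max_iter):
        y = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(sigma), diff)   # y_i
        outer = np.einsum("ij,ik->ijk", diff, diff)                      # (x_i - mu)(x_i - mu)^T
        num = ((eta * w * y**(beta - 1.0))[:, None, None] * outer).sum(axis=0)
        sigma_new = d * num / np.sum(eta * w * y**beta)
        if np.linalg.norm(sigma_new - sigma, ord="fro") < eps:           # stopping rule of Eq. (12)
            return sigma_new
        sigma = sigma_new
    return sigma
```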
Then parameter β can be estimated using Newton–Raphson iterations [42,43,44]:
\beta_{k}^{(t+1)} = \beta_{k}^{(t)} - \xi\,\frac{f\left(\beta_{k}^{(t)}\right)}{f'\left(\beta_{k}^{(t)}\right)}    (13)
where ξ is a learning rate used to prevent oscillation and overflow in the iterative process; it is usually around 0.1. If necessary, exponential decay can be applied to make the convergence more stable. f(β_k^{(t)}) and f′(β_k^{(t)}) are given as follows:
f(\beta_{k}) = \frac{d\sum_{i=1}^{N}\eta_{ik}}{2\sum_{i=1}^{N}\eta_{ik} w_{i} y_{i}^{\beta_{k}}}\sum_{i=1}^{N}\eta_{ik} w_{i} y_{i}^{\beta_{k}}\ln y_{i} + \frac{d\sum_{i=1}^{N}\eta_{ik}\ln w_{i}}{2\beta_{k}} - \frac{d\sum_{i=1}^{N}\eta_{ik}}{2\beta_{k}}\left[\Psi\left(\frac{d}{2\beta_{k}}\right) + \ln 2\right] - \sum_{i=1}^{N}\eta_{ik} - \frac{d\sum_{i=1}^{N}\eta_{ik}}{2\beta_{k}}\ln\frac{\beta_{k}\, d\sum_{i=1}^{N}\eta_{ik}}{\sum_{i=1}^{N}\eta_{ik} w_{i} y_{i}^{\beta_{k}}}    (14)
f'(\beta_{k}) = \frac{d\sum_{i=1}^{N}\eta_{ik}}{2\left(\sum_{i=1}^{N}\eta_{ik} w_{i} y_{i}^{\beta_{k}}\right)^{2}}\left[\sum_{i=1}^{N}\eta_{ik} w_{i} y_{i}^{\beta_{k}}\sum_{i=1}^{N}\eta_{ik} w_{i} y_{i}^{\beta_{k}}\left(\ln y_{i}\right)^{2} - \left(\sum_{i=1}^{N}\eta_{ik} w_{i} y_{i}^{\beta_{k}}\ln y_{i}\right)^{2}\right] - \frac{d\sum_{i=1}^{N}\eta_{ik}\ln w_{i}}{2\beta_{k}^{2}} + \frac{d\sum_{i=1}^{N}\eta_{ik}}{2\beta_{k}^{2}}\left[\Psi\left(\frac{d}{2\beta_{k}}\right) + \ln 2\right] + \frac{d^{2}\sum_{i=1}^{N}\eta_{ik}}{4\beta_{k}^{3}}\Psi'\left(\frac{d}{2\beta_{k}}\right) + \frac{d\sum_{i=1}^{N}\eta_{ik}}{2\beta_{k}^{2}}\left[\ln\frac{\beta_{k}}{dN} + \ln\sum_{i=1}^{N}\eta_{ik} w_{i} y_{i}^{\beta_{k}} - \frac{1}{\beta_{k}} - \frac{\sum_{i=1}^{N}\eta_{ik} w_{i} y_{i}^{\beta_{k}}\ln y_{i}}{\sum_{i=1}^{N}\eta_{ik} w_{i} y_{i}^{\beta_{k}}}\right]    (15)
where Ψ(·) is the digamma function and Ψ′(·) is its derivative (the trigamma function).
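Because f and f′ are lengthy, the sketch below only illustrates the damped Newton–Raphson update of Equation (13); f and f′ are passed in as callables, and the optional exponential decay of the learning rate mentioned above is exposed as a parameter (names are our own, not the authors' code):

```python
def update_beta_newton(beta0, f, f_prime, xi=0.1, decay=1.0, eps=1e-6, max_iter=50):
    """Damped Newton-Raphson iterations for the shape parameter (Eq. (13)).

    beta0   : initial shape parameter
    f       : callable, f(beta)  as in Eq. (14)
    f_prime : callable, f'(beta) as in Eq. (15)
    xi      : learning rate (around 0.1) to avoid oscillation and overflow
    decay   : multiplicative decay applied to xi at every iteration (1.0 = no decay)
    """
    beta = beta0
    for _ in range(max_iter):
        beta_new = beta - xi * f(beta) / f_prime(beta)
        if abs(beta_new - beta) < eps:
            return beta_new
        beta = beta_new
        xi *= decay  # optional exponential decay of the learning rate
    return beta
```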

2.3. Weights Considered as Random Variables

Above, we derived the WMGGMM parameter updating formulas with fixed weights. However, it is pointed out in [44] that the Bayesian formalism is more inclined to treat parameters as random variables and to update their posterior by combining a prior with the observed samples. Under this framework, the limitation of insufficient prior knowledge about the accuracy of the weights is reduced, and the inference of the weights becomes more interpretable. As mentioned before, the generalized Gaussian distribution belongs to the family of elliptic distributions, and thus we select the same prior distribution as in [44], namely the Gamma distribution. With this choice, the prior and posterior of the weight have the same form. The advantage is that when we make a new observation, we do not have to recompute the whole process: the posterior distribution is obtained directly by updating the parameters, which simplifies the weight updating process. The posterior then becomes the prior in the next calculation. Therefore, we can write:
p(w; \phi) = \mathcal{G}(w; a, b) = \frac{b^{a}}{\Gamma(a)}\, w^{a-1} e^{-b w}    (16)
where G ( w ; a , b ) is the Gamma distribution, and ϕ = { a , b } are the parameters of the prior distribution of w. The mean and variance of random variable w are given by:
E[w] = a/b    (17)
Var[w] = a/b^{2}    (18)
Due to the addition of the prior parameters, the expected complete-data log-likelihood takes the following form:
Q_{c}\left(\Theta, \Theta^{(r)}\right) = E_{p\left(Z, W \mid X; \Theta^{(r)}, \Phi\right)}\left[\ln P(X, Z, W; \Theta, \Phi)\right]    (19)
where Φ = { ϕ 1 , . . . , ϕ N } and ϕ i = { a i , b i } . The posterior distribution factorizes on i as follows:
P\left(Z, W \mid X; \Theta^{(r)}, \Phi\right) = \prod_{i=1}^{N} p\left(z_{i}, w_{i} \mid x_{i}; \Theta^{(r)}, \phi_{i}\right)    (20)
Each of these terms can be expressed as a product of two factors:
p\left(z_{i}, w_{i} \mid x_{i}; \Theta^{(r)}, \phi_{i}\right) = p\left(w_{i} \mid z_{i}, x_{i}; \Theta^{(r)}, \phi_{i}\right)\, p\left(z_{i} \mid x_{i}; \Theta^{(r)}, \phi_{i}\right)    (21)
According to the above formula, the expectation step in the EM algorithm is divided into two parts (E-Z step and E-W step). In the E-Z step, the marginal posterior distribution of z i is obtained by integrating over w i :
\eta_{ik} = \int p\left(z_{i}=k, w_{i} \mid x_{i}; \Theta^{(r)}, \phi_{i}\right) dw_{i} \propto \pi_{k}\int p\left(x_{i} \mid z_{i}=k, w_{i}; \Theta^{(r)}\right) p\left(w_{i}; \phi_{i}\right) dw_{i} = \pi_{k}\int \hat{p}\left(x_{i}; \theta_{k}, w_{i}\right)\mathcal{G}\left(w_{i}; a_{i}, b_{i}\right) dw_{i} = \pi_{k}\,\bar{p}\left(x_{i}; \mu_{k}, \Sigma_{k}, \beta_{k}, m_{k}, a_{i}, b_{i}\right)    (22)
where p ¯ x ; μ , Σ , β , m , a , b is given as:
\bar{p}(x; \mu, \Sigma, \beta, m, a, b) = \frac{\beta\,\Gamma\left(\frac{d}{2}\right)\Gamma\left(a+\frac{d}{2\beta}\right)}{(m\pi)^{\frac{d}{2}}(2b)^{\frac{d}{2\beta}}|\Sigma|^{\frac{1}{2}}\,\Gamma(a)\,\Gamma\left(\frac{d}{2\beta}\right)}\left[\frac{\left[(x-\mu)^{T}\Sigma^{-1}(x-\mu)\right]^{\beta}}{2 b m^{\beta}} + 1\right]^{-\left(a+\frac{d}{2\beta}\right)}    (23)
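For later use in the E-Z step, p̄ of Equation (23) can be evaluated on the log scale; a minimal NumPy/SciPy sketch under our own naming:

```python
import numpy as np
from scipy.special import gammaln

def marginal_logpdf(x, mu, sigma, beta, m, a, b):
    """Log of p_bar(x; mu, Sigma, beta, m, a, b) from Equation (23)."""
    x = np.atleast_2d(x)
    d = mu.shape[0]
    diff = x - mu
    y = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(sigma), diff)
    log_norm = (np.log(beta) + gammaln(d / 2.0) + gammaln(a + d / (2.0 * beta))
                - (d / 2.0) * np.log(m * np.pi)
                - (d / (2.0 * beta)) * np.log(2.0 * b)
                - 0.5 * np.linalg.slogdet(sigma)[1]
                - gammaln(a) - gammaln(d / (2.0 * beta)))
    return log_norm - (a + d / (2.0 * beta)) * np.log1p(y**beta / (2.0 * b * m**beta))
```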
In the E-W step, using the conjugacy of the Gamma prior, we obtain:
p\left(w_{i} \mid z_{i}=k, x_{i}; \Theta, \phi_{i}\right) \propto p\left(x_{i} \mid z_{i}=k, w_{i}; \Theta\right) p\left(w_{i}; \phi_{i}\right) = \hat{p}\left(x_{i}; \theta_{k}, w_{i}\right)\mathcal{G}\left(w_{i}; a_{i}, b_{i}\right) \propto \mathcal{G}\left(w_{i}; a_{ik}^{(r+1)}, b_{ik}^{(r+1)}\right)    (24)
Thus, the updating formulas of prior parameters can be obtained:
a_{ik}^{(r+1)} = a_{i}^{(0)} + \frac{d}{2\beta_{k}}    (25)
b_{ik}^{(r+1)} = b_{i}^{(0)} + \frac{\left[(x_{i}-\mu_{k})^{T}\Sigma_{k}^{-1}(x_{i}-\mu_{k})\right]^{\beta_{k}}}{2 m_{k}^{\beta_{k}}}    (26)
\bar{w}_{ik} = E_{p\left(w_{i} \mid z_{i}=k, x_{i}, \Theta^{(r)}, \phi_{i}\right)}\left[w_{i}\right] = \frac{a_{ik}^{(r+1)}}{b_{ik}^{(r+1)}}    (27)
This explains the outlier-shielding feature of the weighted algorithm: since an outlier is by definition far from the centers of all components, it receives a low posterior weight w̄_ik for each component and thus a low mean posterior weight w̄_i. By expanding Equation (19), we can obtain a result similar to the Q function with fixed weights:
Q_{c}\left(\Theta, \Theta^{(r)}\right) = \sum_{i=1}^{N}\sum_{k=1}^{K}\eta_{ik}\int_{w_{i}}\ln\left[\pi_{k}\,\hat{p}\left(x_{i}; \theta_{k}, w_{i}\right)\right] p\left(w_{i} \mid x_{i}, z_{i}=k, \Theta^{(r)}, \phi_{i}\right) dw_{i} \overset{\Theta}{=} \sum_{i=1}^{N}\sum_{k=1}^{K}\eta_{ik}\left[\ln\pi_{k} + \ln\beta_{k} + \frac{d}{2\beta_{k}}\ln \bar{w}_{ik}^{(r+1)} - \ln\Gamma\left(\frac{d}{2\beta_{k}}\right) - \frac{d}{2\beta_{k}}\ln 2 - \frac{d}{2}\ln m_{k} - \frac{1}{2}\ln|\Sigma_{k}| - \frac{\bar{w}_{ik}^{(r+1)}\left[(x_{i}-\mu_{k})^{T}\Sigma_{k}^{-1}(x_{i}-\mu_{k})\right]^{\beta_{k}}}{2 m_{k}^{\beta_{k}}}\right]    (28)
Therefore, w_i is simply replaced with w̄_ik in all the parameter update formulas of the mixture model. Since the resulting equations are very similar, they are not repeated here.
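Since the E-W step of Equations (25)–(27) is a simple conjugate Gamma update, it can be sketched in a few lines (names are ours, and the responsibilities are assumed to come from the E-Z step):

```python
import numpy as np

def e_w_step(X, mu, sigma, beta, m, a0, b0):
    """Posterior Gamma parameters and mean weights for one component (Eqs. (25)-(27)).

    X      : (N, d) data
    a0, b0 : (N,) initial prior parameters of the weights
    Returns a_post, b_post, w_bar, each of shape (N,).
    """
    d = X.shape[1]
    diff = X - mu
    y = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(sigma), diff)
    a_post = a0 + d / (2.0 * beta)                 # Eq. (25)
    b_post = b0 + y**beta / (2.0 * m**beta)        # Eq. (26)
    w_bar = a_post / b_post                        # Eq. (27): posterior mean weight
    return a_post, b_post, w_bar
```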

2.4. Automatic Determination of the Number of Components

The model selection problem is tackled using the minimum message length (MML) criterion as proposed in [46]:
\Theta_{MML} = \arg\min_{\Theta}\left\{-\log P(\Theta) - Q_{R}\left(\Theta, \Theta^{(r+1)}\right) + \frac{1}{2}\log\left|I_{c}(\Theta)\right| + \frac{D(\Theta)}{2}\left(1 + \log\frac{1}{12}\right)\right\}    (29)
where I_c(Θ) denotes the expected complete-data Fisher information matrix (FIM) and D(Θ) is the dimensionality of the model. Using a similar process as in [46], we can show that
\Theta_{MML} = \arg\min_{\Theta}\left\{\frac{M}{2}\sum_{k\in K_{+}}\log\pi_{k} - Q_{R}\left(\Theta, \Theta^{(r+1)}\right) + \frac{|K_{+}|(M+1)}{2}\left(1 + \log\frac{n}{12}\right)\right\}    (30)
where K_+ is the set of non-empty components, |K_+| is the number of elements in it, and M is the number of parameters specifying each component. Moreover, we can rewrite the formula for calculating π_k in the maximization step of the EM algorithm:
\pi_{k} = \frac{\max\left(0,\; \sum_{i=1}^{N}\eta_{ik} - \frac{M|K_{+}|}{2}\right)}{\sum_{l=1}^{K}\max\left(0,\; \sum_{i=1}^{N}\eta_{il} - \frac{M|K_{+}|}{2}\right)}    (31)
At the beginning, when the number of non-empty components is large, the minimum-support threshold is high, so weakly supported components can be removed quickly. As the components are updated one by one and the number of non-empty components decreases, the threshold gradually approaches the situation in [46].
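A sketch of this annihilation-style update of Equation (31), assuming the adaptive threshold M|K_+|/2 described above and that M, the number of parameters per component, is supplied by the caller (naming is ours):

```python
import numpy as np

def update_mixing_weights(eta, M):
    """MML-style mixing-weight update with component annihilation (Eq. (31)).

    eta : (N, K) responsibilities
    M   : number of parameters per component
    """
    support = eta.sum(axis=0)                       # sum_i eta_ik for each component
    k_plus = np.count_nonzero(support > 0)          # current number of non-empty components
    trimmed = np.maximum(0.0, support - M * k_plus / 2.0)
    total = trimmed.sum()
    return trimmed / total if total > 0 else trimmed
```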

2.5. Complete Algorithm

The complete steps are outlined in Algorithm 1. We use k-means to initialize π k , μ k , Σ k . The initial parameter β k is specified as 0.5, and the parameter m k is calculated according to the pre-clustering results and the formula in [31]:
m_{k} = \left[\frac{\beta_{k}}{d N_{k}}\sum_{i=1}^{N_{k}}\left[(x_{i}-\mu_{k})^{T}\Sigma_{k}^{-1}(x_{i}-\mu_{k})\right]^{\beta_{k}}\right]^{\frac{1}{\beta_{k}}}    (32)
where x_i ∈ X_k is the ith sample of the kth cluster and N_k is the number of samples in that cluster.
We adopt the same data similarity measurement method based on the Gaussian kernel as in [44] for the initialization of weights. However, due to the constraint of the weight range, we modify it as follows:
w_{i} = \frac{1}{q}\sum_{j\in S_{i}^{q}}\exp\left(-\frac{d^{2}(x_{i}, x_{j})}{\sigma}\right)    (33)
where d²(x_i, x_j) denotes the squared Euclidean distance, S_i^q is the set containing the q nearest neighbors of x_i, and σ is a positive scale. The default setting for q is 20. We can then calculate the initial prior parameters of the weights through Equations (17) and (18), i.e., a_i = w_i² and b_i = w_i.
Figure 1 shows the impact of different values of σ on the weights; σ controls how strongly the weights are differentiated. With a small σ, the difference in the weights between dense and sparse areas becomes more pronounced; conversely, with a large σ, the weight distribution becomes relatively flat. In other words, a smaller σ is more effective at removing outliers and noise. However, when σ is too small, the weighting is equivalent to removing most of the points that lie away from the cluster centers, so the number of points actually involved in parameter estimation is reduced, leading to inadequate support for the components, and the resulting mixture model parameters deviate considerably from the original data distribution. Therefore, the selection of σ is a balance between model robustness and model accuracy. We set it to 25 in our case.
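A sketch of the weight initialization of Equation (33) with a brute-force q-nearest-neighbour search (our own naming; a KD-tree would be preferable for large point clouds):

```python
import numpy as np

def init_weights(X, q=20, sigma=25.0):
    """Initial weights from the Gaussian-kernel similarity of Eq. (33).

    X     : (N, d) point cloud
    q     : number of nearest neighbours
    sigma : positive kernel scale (balances robustness vs. accuracy)
    Returns w in (0, 1], plus the Gamma prior parameters a = w**2 and b = w.
    """
    sq_dist = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)  # pairwise squared distances
    np.fill_diagonal(sq_dist, np.inf)                              # exclude the point itself
    nearest = np.sort(sq_dist, axis=1)[:, :q]                      # q smallest squared distances
    w = np.mean(np.exp(-nearest / sigma), axis=1)
    return w, w**2, w
```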
Algorithm 1 Proposed WMGGMM algorithm with component-wise EM procedure.
Input:  X = { x i } i = 1 N ; Φ ( 0 ) = { a i ( 0 ) , b i ( 0 ) } i = 1 N ; K m a x
Θ ( 0 ) = { π k ( 0 ) , μ k ( 0 ) , Σ k ( 0 ) , β k ( 0 ) , m k ( 0 ) } k = 1 K m a x ;
Output: Optimal mixture model parameters: Θ *
 Set: r = 0, MML = +∞
repeat 
     for  k = 1 To K m a x  do
            E-Z step using (22):
             \eta_{ik}^{(r+1)} = \frac{\pi_{k}^{(r)}\,\bar{p}\left(x_{i}; \mu_{k}^{(r)}, \Sigma_{k}^{(r)}, \beta_{k}^{(r)}, m_{k}^{(r)}, a_{ik}^{(r)}, b_{ik}^{(r)}\right)}{\sum_{l=1}^{K_{max}}\pi_{l}^{(r)}\,\bar{p}\left(x_{i}; \mu_{l}^{(r)}, \Sigma_{l}^{(r)}, \beta_{l}^{(r)}, m_{l}^{(r)}, a_{il}^{(r)}, b_{il}^{(r)}\right)}
            Compute the number of non-empty components: |K_{+}|
            E-W step using (25)–(27):
                  a_{ik}^{(r+1)} = a_{i}^{(0)} + \frac{d}{2\beta_{k}}
                  b_{ik}^{(r+1)} = b_{i}^{(0)} + \frac{\left[(x_{i}-\mu_{k})^{T}\Sigma_{k}^{-1}(x_{i}-\mu_{k})\right]^{\beta_{k}}}{2 m_{k}^{\beta_{k}}}
                  \bar{w}_{ik} = a_{ik}^{(r+1)} / b_{ik}^{(r+1)}
             M step:
                  \pi_{k}^{(r+1)} = \frac{\max\left(0,\; \sum_{i=1}^{N}\eta_{ik}^{(r+1)} - \frac{M|K_{+}|}{2}\right)}{\sum_{l=1}^{K_{max}}\max\left(0,\; \sum_{i=1}^{N}\eta_{il}^{(r+1)} - \frac{M|K_{+}|}{2}\right)}
             if  π k ( r + 1 ) > 0  then
                 Update μ k using (9):
                  repeat
                       \mu_{k}^{new} = T\left(\mu_{k}^{old}\right)
                  until  \left\|\mu_{k}^{new} - \mu_{k}^{old}\right\|_{F} < \epsilon
                   \mu_{k}^{(r+1)} = \mu_{k}^{new}
                  Update Σ k using (11):
                  repeat
                       \Sigma_{k}^{new} = T\left(\Sigma_{k}^{old}\right)
                  until  \left\|\Sigma_{k}^{new} - \Sigma_{k}^{old}\right\|_{F} < \epsilon
                   \Sigma_{k}^{(r+1)} = \Sigma_{k}^{new}
                  Update β k using (13)–(15):
                  repeat
                       \beta_{k}^{new} = \beta_{k}^{old} - \xi\,\frac{f\left(\beta_{k}^{old}\right)}{f'\left(\beta_{k}^{old}\right)}
                  until  \left|\beta_{k}^{new} - \beta_{k}^{old}\right| < \epsilon
                    \beta_{k}^{(r+1)} = \beta_{k}^{new}
                  Update m k ( r + 1 ) using (8)
            end if
        end for
        Compute M M L ( r + 1 ) using (30)
         r = r + 1
    until  | Δ M M L ( r ) | < ϵ
    Return the parameters Θ of non-empty components as optimal mixture model parameters Θ *

3. Experimental Results

3.1. Synthetic Data

First, we demonstrate the performance of the WMGGMM model on synthetic data. The method for generating data points that follow an MGGD comes from [31]:
x \in \mathbb{R}^{d} \overset{d}{=} \tau\, C^{1/2}\, u    (34)
where x is a random vector that follows an MGGD with scatter matrix C = mΣ and shape parameter β, \overset{d}{=} denotes equality in distribution, u is a random vector uniformly distributed on the unit sphere, and τ is a positive scalar random variable that satisfies τ^{2β} ∼ Γ(d/(2β), 2).
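A sketch of this stochastic representation (our own naming): draw u uniformly on the unit sphere, draw τ^{2β} from a Gamma(d/(2β), 2) distribution, and map the result through a square root of C. A mean vector is added here for convenience, although Equation (34) is stated for the centered case.

```python
import numpy as np

def sample_mggd(n, mu, scatter_c, beta, rng=None):
    """Draw n samples from an MGGD via Eq. (34): x = tau * C^{1/2} u, shifted by the mean.

    mu        : (d,) mean vector
    scatter_c : (d, d) matrix C = m * Sigma
    beta      : shape parameter
    """
    rng = np.random.default_rng(rng)
    d = mu.shape[0]
    u = rng.standard_normal((n, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)               # uniform on the unit sphere
    tau = rng.gamma(shape=d / (2.0 * beta), scale=2.0, size=n) ** (1.0 / (2.0 * beta))
    c_half = np.linalg.cholesky(scatter_c)                      # one valid square root of C
    return mu + tau[:, None] * (u @ c_half.T)
```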
Table 1 and Table 2 show the parameter estimation results for two two-dimensional data sets generated from 4- and 3-component mixture models, respectively, where ρ_k = cov(X, Y)/√(var(X) var(Y)) is the correlation coefficient used to measure the slope (orientation) of the scatter matrix.
We can see that the estimated parameters are close to the real ones. According to the correlation coefficient comparisons, the slopes of the real and estimated scatter matrices are also close. Because of the weights, the points actually involved in parameter estimation are concentrated in data-intensive areas, leading to a reduced scale parameter m. The change in parameter β, however, is not necessarily a decrease. Suppose β is small in the default setting; then the distribution has a long tail in which data points are sparse. The weight operation removes most of these points, so the shape of the data changes and the final estimated β becomes larger instead.
Figure 2 presents the parameter estimation results at different noise levels. Parameter estimation is carried out after the proportional addition of random uniform noise in the primary distribution area ([−5.25, 5.25]) of the original data. When σ is appropriately selected, the weights remove most of the noise and outliers, and the points actually involved in parameter estimation lie mostly within the original data distribution. The final results show that the mixture models of the three different cases are very similar, which demonstrates the WMGGMM algorithm's robustness.
The results of the mixture model for overlapping data are shown in Figure 3. The introduction of the shape parameter β enables the MGGD to better identify the peaks of the data distribution. Although the overall distributions of the two components overlap, the WMGGMM algorithm can still accurately identify the parameters of each component when their centers do not coincide.
Figure 4 displays the execution time of the WMGGMM algorithm under different conditions (the default number of components is four). The average of five independent runs is taken as the final result, reported in seconds. Because most of the algorithm's parameters have to be estimated iteratively (fixed-point equations and the Newton–Raphson method), the running time is much higher than that of the standard EM algorithm; the anti-interference performance of the model is improved at the cost of some efficiency. The times reported here can only serve as a relative reference, since the run time is influenced by computer performance, hardware and software acceleration, and the randomness of parameter initialization.

3.2. Point Cloud Registration Using WMGGMM

We adopt the approach proposed in [23] for GMMs to perform WMGGMM-based registration: two mixture models describe the target scene and the scene to be registered, and the difference between them drives the update of the registration parameters. Nevertheless, the algorithm proposed in [23] is still essentially a point-to-point method. No data pre-clustering is considered; instead, each point in the data set is used as the mean of one mixture component, which amounts to converting the entire data set into a mixture model with many simple components. This way of initializing the mixture model is not suitable for point clouds with a large data volume. The L2 divergence used in [23] simplifies the derivation for a given transformation model, but the derivative-based optimization is closely tied to the expression of that specific model: if the transformation model changes, the whole optimization process has to be re-derived. Therefore, we propose a WMGGMM-based method that pre-clusters the target scene and the scene to be registered and extracts their key features to form the mixture models. The optimal registration parameters are then found by stochastic optimization of the KLD between the mixture models. The KLD between two statistical models is defined as follows:
KLD(f \,\|\, g) = \int_{x} f(x)\log\frac{f(x)}{g(x)}\, dx    (35)
Unfortunately, the KLD between mixture models has no analytical expression, so several approximations have been proposed [47,48,49]. Compared with the GMM case, approximating the KLD of generalized Gaussian mixture models through the inner products of their components is too complicated because of the shape parameters. Even though a method for calculating the KLD between two MGGDs is proposed in [38], the matched bound and variational approximations are also infeasible here, because that approach assumes all MGGDs have zero mean, whereas the component means are essential in the point cloud registration scenario. Therefore, we finally use Monte Carlo sampling to calculate the KLD between two GGMMs:
KLD_{MC}\left(P_{S} \,\|\, P_{M}\right) = \frac{1}{n}\sum_{i=1}^{n}\left[\log P_{S}(x_{i}) - \log P_{M}(x_{i})\right]    (36)
where P_S(x) and P_M(x) are the probability density functions of the GGMMs obtained from the target scene and the scene to be registered, respectively, and {x_i}_{i=1}^{n} denotes n samples drawn from P_S(x). In this formula, the average over the samples replaces the integral, and the Monte Carlo method is the only one that can approximate the actual value of the KLD arbitrarily well when there are enough sample points. Gibbs sampling, one of the Markov Chain Monte Carlo (MCMC) techniques, is applied in this process with n = 1000.
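A sketch of the Monte Carlo estimate of Equation (36); the two mixture log-densities are passed in as callables, and the samples are assumed to have been drawn from the target-scene model beforehand (naming is ours):

```python
import numpy as np

def kld_monte_carlo(samples, log_p_scene, log_p_model):
    """Monte Carlo estimate of KLD(P_S || P_M), Eq. (36).

    samples     : (n, d) points drawn from the target-scene GGMM P_S
    log_p_scene : callable returning log P_S(x) for an (n, d) array
    log_p_model : callable returning log P_M(x) for an (n, d) array
    """
    return np.mean(log_p_scene(samples) - log_p_model(samples))
```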
We assume that the point cloud transformation model is:
X = f_{T}\left(X_{0}, \Omega\right)    (37)
where X 0 and X are the point sets before and after the transformation, and Ω = { ω 1 , . . . , ω m } is the parameter set of the transformation model. Then, we can express the point cloud registration problem in the following form:
\Omega^{*} = \arg\min_{\Omega} KLD_{MC}\left(ggmm(S) \,\|\, ggmm\left(f_{T}(M, \Omega)\right)\right)    (38)
where S is the target scene point set, M is the point set to be registered, and ggmm(·) denotes the PDF of the GGMM obtained from a data set using the WMGGMM algorithm. The complete point cloud registration procedure using the WMGGMM is shown in Algorithm 2. It is only a general framework and does not specify a concrete stochastic optimization method; the technique can be selected as needed, as long as the loss function defined in (38) is used and the optimization domain is Ω. The simulated annealing (SA) algorithm is used in this paper.
Algorithm 2 Point cloud registration algorithm based on WMGGMM and stochastic optimization.
Input: 
Scene set: S, Model set: M, Initial transformation parameter set: \Omega^{(0)}
Output:
Optimal transformation parameter set: \Omega^{*}
 Set: KLD_{old} = 0, KLD_{new} = 0, \Omega_{old} = \Omega^{(0)}
 ggmm_{s} = ggmm(S)
 Gibbs sampling on ggmm_{s} to get: X_{sample} = \{x_{i}\}_{i=1}^{1000}
repeat
     ggmm_{m}^{old} = ggmm\left(f_{T}(M, \Omega_{old})\right)
     KLD_{old} = KLD_{MC}\left(ggmm_{s} \,\|\, ggmm_{m}^{old}\right)
     \Omega_{new} = RandomGeneration(\Omega_{old})
     ggmm_{m}^{new} = ggmm\left(f_{T}(M, \Omega_{new})\right)
     KLD_{new} = KLD_{MC}\left(ggmm_{s} \,\|\, ggmm_{m}^{new}\right)
     if  KLD_{new} < KLD_{old}  then
         \Omega_{old} = \Omega_{new}
     end if
until some stopping criterion is satisfied
 Return \Omega^{*} = \Omega_{old}
When the transformation model is rigid, i.e., X = R_α X_0 + t, where R_α is the rotation matrix with angle α and t is a translation vector, the optimization process can be further simplified. Transforming the data set is equivalent to applying the rigid transformation to the corresponding mixture model: we obtain μ'_k = R_α μ_k + t and Σ'_k = R_α Σ_k R_α^T, while the shape parameter β and the scale parameter m are not affected. In other words, the mixture models of the target scene and the scene to be registered are computed with the WMGGMM only once at the beginning, and the parameter estimation does not need to be repeated during optimization. The transformation models used in the experiments below are all rigid transformations, and all test data are generated with the method of the previous section.
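A sketch of this shortcut for the 2D rigid case (our own naming): the component means are rotated and translated, the scatter matrices are rotated, and β and m are left untouched.

```python
import numpy as np

def transform_ggmm_rigid(mus, sigmas, alpha, t):
    """Apply a 2D rigid transform to GGMM component parameters.

    mus    : (K, 2) component means
    sigmas : (K, 2, 2) scatter matrices
    alpha  : rotation angle in radians
    t      : (2,) translation vector
    """
    c, s = np.cos(alpha), np.sin(alpha)
    R = np.array([[c, -s], [s, c]])
    new_mus = mus @ R.T + t                                 # mu' = R mu + t
    new_sigmas = np.einsum("ij,kjl,ml->kim", R, sigmas, R)  # Sigma' = R Sigma R^T
    return new_mus, new_sigmas
```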
A mismatch in sampling rate is a common challenge in point cloud registration. In general, the original scene has a higher sampling rate to achieve more accurate modeling, while the scene to be registered may have a lower sampling rate due to the limitations of the sampling environment and task. Figure 5 shows the registration results of point sets with different sampling rates. The target scene (blue) has 1200 samples, while the scene to be registered (red) has 800. Although the sampling rates differ, the data of the two scenes follow similar statistical distributions, so the GGMM-based method can still register the two sets effectively, with rotational and translational errors of 1.5% and 4.3%, respectively.
The registration results at different noise levels are given in Figure 6. The results show that our method effectively removes the interference of noise and outliers and extracts the main features of the point cloud for matching; the algorithm is robust, with average rotational and translational errors of 3.5% and 6.3%, respectively.
In Figure 7, we designed a point cloud shaped like the Chinese character "zhi", which has an approximate center of symmetry (rotational and translational errors of 6.1% and 9.3%, respectively). After a rotation of 180 degrees, it differs from the original shape at only one "point", so it is easy for the registration optimization to fall into a local optimum. However, over several experiments, only 6% of the runs fell into local optima. Compared with derivative-based optimization, stochastic optimization provides the ability to escape local optima.

4. Conclusions

This paper proposes a weighted multivariate generalized Gaussian mixture model and combines it with a stochastic optimization algorithm for rigid point cloud registration. The method requires enough samples in the registration scene to meet the minimum support of the components, and it extracts the data's mass features rather than edge features (contour and shell), so it is suited to dense point clouds with a large data volume. The introduction of weights and the ability of generalized Gaussian distributions to describe peaked data effectively reduce the influence of noise and outliers and capture the critical features of data-intensive regions. Experimental results attest that the algorithm is sufficiently robust. The stochastic optimization algorithm reduces the coupling between the algorithm's modules, improves its extensibility, and provides a stronger global optimization capability. However, in the mixture model's parameter estimation, almost all parameters must be learned iteratively; some efficiency is therefore sacrificed to enhance the algorithm's accuracy. In addition, this work only considered 2D scenarios. Future work will focus on improving the parameter estimation approach to boost the algorithm's performance and on extending the approach to 3D scenarios.

Author Contributions

Conceptualization, B.G., F.N. and N.B.; methodology, B.G., F.N. and N.B.; software, B.G.; validation, B.G. and F.N.; resources, N.B.; writing—original draft preparation, B.G.; writing—review and editing, F.N. and N.B.; supervision, N.B.; funding acquisition, N.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by NSERC grant number RGPIN-6656-2017.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The completion of this research was made possible thanks to the Natural Sciences and Engineering Research Council of Canada (NSERC).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhu, H.; Guo, B.; Zou, K.; Li, Y.; Yuen, K.V.; Mihaylova, L.; Leung, H. A review of point set registration: From pairwise registration to groupwise registration. Sensors 2019, 19, 1191. [Google Scholar] [CrossRef] [PubMed]
  2. Hirose, O. A Bayesian Formulation of Coherent Point Drift. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 2269–2286. [Google Scholar] [CrossRef] [PubMed]
  3. Hirose, O. Geodesic-Based Bayesian Coherent Point Drift. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 5816–5832. [Google Scholar] [CrossRef] [PubMed]
  4. Zhang, Z.; Dai, Y.; Sun, J. Deep learning based point cloud registration: An overview. Virtual Real. Intell. Hardw. 2020, 2, 222–246. [Google Scholar] [CrossRef]
  5. Min, Z.; Wang, J.; Pan, J.; Meng, M.Q.H. Generalized 3-D Point Set Registration With Hybrid Mixture Models for Computer-Assisted Orthopedic Surgery: From Isotropic to Anisotropic Positional Error. IEEE Trans. Autom. Sci. Eng. 2021, 18, 1679–1691. [Google Scholar] [CrossRef]
  6. De Silva, V.; Roche, J.; Kondoz, A. Fusion of LiDAR and camera sensor data for environment sensing in driverless vehicles. arXiv 2018, arXiv:1710.06230. [Google Scholar]
  7. Giering, M.; Venugopalan, V.; Reddy, K. Multi-modal sensor registration for vehicle perception via deep neural networks. In Proceedings of the 2015 IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, USA, 15–17 September 2015; pp. 1–6. [Google Scholar]
  8. Mastin, A.; Kepner, J.; Fisher, J. Automatic registration of LIDAR and optical images of urban scenes. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 2639–2646. [Google Scholar]
  9. Rosas-Cervantes, V.; Lee, S.G. 3D Localization of a Mobile Robot by Using Monte Carlo Algorithm and 2D Features of 3D Point Cloud. Int. J. Control. Autom. Syst. 2020, 18, 1–11. [Google Scholar] [CrossRef]
  10. Lu, W.; Wan, G.; Zhou, Y.; Fu, X.; Yuan, P.; Song, S. Deepvcp: An end-to-end deep neural network for point cloud registration. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 12–21. [Google Scholar]
  11. Rundo, L.; Tangherloni, A.; Militello, C.; Gilardi, M.C.; Mauri, G. Multimodal medical image registration using particle swarm optimization: A review. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; pp. 1–8. [Google Scholar]
  12. Chen, Y.; He, F.; Li, H.; Zhang, D.; Wu, Y. A full migration BBO algorithm with enhanced population quality bounds for multimodal biomedical image registration. Appl. Soft Comput. 2020, 93, 106335. [Google Scholar] [CrossRef]
  13. Collignon, A.; Vandermeulen, D.; Suetens, P.; Marchal, G. 3D multi-modality medical image registration using feature space clustering. In Proceedings of the International Conference on Computer Vision, Virtual Reality, and Robotics in Medicine, Nice, France, 3–6 April 1995; Springer: Berlin/Heidelberg, Germany, 1995; pp. 195–204. [Google Scholar]
  14. Sinko, M.; Kamencay, P.; Hudec, R.; Benco, M. 3D registration of the point cloud data using ICP algorithm in medical image analysis. In Proceedings of the 2018 IEEE ELEKTRO, Mikulov, Czech Republic, 21–23 May 2018; pp. 1–6. [Google Scholar]
  15. El-Hakim, S.F.; Beraldin, J.A.; Picard, M.; Godin, G. Detailed 3D reconstruction of large-scale heritage sites with integrated techniques. IEEE Comput. Graph. Appl. 2004, 24, 21–29. [Google Scholar] [CrossRef]
  16. Zhao, X.; Zhang, C.; Wang, Y.; Yang, B. A hybrid approach based on MEP and CSP for contour registration. Appl. Soft Comput. 2011, 11, 5391–5399. [Google Scholar] [CrossRef]
  17. Bermejo, E.; Cordón, O.; Damas, S.; Santamaría, J. Quality time-of-flight range imaging for feature-based registration using bacterial foraging. Appl. Soft Comput. 2013, 13, 3178–3189. [Google Scholar] [CrossRef]
  18. Ma, J.; Jiang, J.; Liu, C.; Li, Y. Feature guided Gaussian mixture model with semi-supervised EM and local geometric constraint for retinal image registration. Inf. Sci. 2017, 417, 128–142. [Google Scholar] [CrossRef]
  19. Keller, M.; Lefloch, D.; Lambers, M.; Izadi, S.; Weyrich, T.; Kolb, A. Real-time 3D reconstruction in dynamic scenes using point-based fusion. In Proceedings of the 2013 IEEE International Conference on 3D Vision-3DV, Seattle, WA, USA, 29 June–1 July 2013; pp. 1–8. [Google Scholar]
  20. Zhang, Z. Iterative point matching for registration of free-form curves and surfaces. Int. J. Comput. Vis. 1994, 13, 119–152. [Google Scholar] [CrossRef]
  21. Conte, D.; Foggia, P.; Sansone, C.; Vento, M. Thirty years of graph matching in pattern recognition. Int. J. Pattern Recognit. Artif. Intell. 2004, 18, 265–298. [Google Scholar] [CrossRef]
  22. Gao, W.; Tedrake, R. Filterreg: Robust and efficient probabilistic point-set registration using gaussian filter and twist parameterization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 11095–11104. [Google Scholar]
  23. Jian, B.; Vemuri, B.C. Robust point set registration using gaussian mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 1633–1645. [Google Scholar] [CrossRef] [PubMed]
  24. Tao, W.; Sun, K. Asymmetrical Gauss mixture models for point sets matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1598–1605. [Google Scholar]
  25. Ravikumar, N.; Gooya, A.; Çimen, S.; Frangi, A.F.; Taylor, Z.A. Group-wise similarity registration of point sets using Student’s t-mixture model for statistical shape models. Med Image Anal. 2018, 44, 156–176. [Google Scholar] [CrossRef] [PubMed]
  26. Bouguila, N.; Fan, W. Mixture Models and Applications; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  27. Sefidpour, A.; Bouguila, N. Spatial color image segmentation based on finite non-Gaussian mixture models. Expert Syst. Appl. 2012, 39, 8993–9001. [Google Scholar] [CrossRef]
  28. Hu, C.; Fan, W.; Du, J.; Bouguila, N. A novel statistical approach for clustering positive data based on finite inverted Beta–Liouville mixture models. Neurocomputing 2019, 333, 110–123. [Google Scholar] [CrossRef]
  29. Bouguila, N.; Elguebaly, T. A fully Bayesian model based on reversible jump MCMC and finite Beta mixtures for clustering. Expert Syst. Appl. 2012, 39, 5946–5959. [Google Scholar] [CrossRef]
  30. Fan, J.; Yang, J.; Ai, D.; Xia, L.; Zhao, Y.; Gao, X.; Wang, Y. Convex hull indexed Gaussian mixture model (CH-GMM) for 3D point set registration. Pattern Recognit. 2016, 59, 126–141. [Google Scholar] [CrossRef]
  31. Pascal, F.; Bombrun, L.; Tourneret, J.Y.; Berthoumieu, Y. Parameter estimation for multivariate generalized Gaussian distributions. IEEE Trans. Signal Process. 2013, 61, 5960–5971. [Google Scholar] [CrossRef]
  32. Elguebaly, T.; Bouguila, N. Model-based approach for high-dimensional non-Gaussian visual data clustering and feature weighting. Digit. Signal Process. 2015, 40, 63–79. [Google Scholar] [CrossRef]
  33. Franchini, S.; Charogiannis, A.; Markides, C.N.; Blunt, M.J.; Krevor, S. Calibration of astigmatic particle tracking velocimetry based on generalized Gaussian feature extraction. Adv. Water Resour. 2019, 124, 1–8. [Google Scholar] [CrossRef]
  34. Kubo, Y.; Takamune, N.; Kitamura, D.; Saruwatari, H. Blind Speech Extraction Based on Rank-Constrained Spatial Covariance Matrix Estimation With Multivariate Generalized Gaussian Distribution. IEEE/ACM Trans. Audio Speech Lang. Process. 2020, 28, 1948–1963. [Google Scholar] [CrossRef]
  35. Hristova, H.; Le Meur, O.; Cozot, R.; Bouatouch, K. Transformation of the multivariate generalized Gaussian distribution for image editing. IEEE Trans. Vis. Comput. Graph. 2017, 24, 2813–2826. [Google Scholar] [CrossRef]
  36. Verdoolaege, G.; Scheunders, P. Geodesics on the manifold of multivariate generalized Gaussian distributions with an application to multicomponent texture discrimination. Int. J. Comput. Vis. 2011, 95, 265. [Google Scholar] [CrossRef]
  37. Rami, H.; Belmerhnia, L.; El Maliani, A.D.; El Hassouni, M. Texture retrieval using mixtures of generalized Gaussian distribution and Cauchy–Schwarz divergence in wavelet domain. Signal Process. Image Commun. 2016, 42, 45–58. [Google Scholar] [CrossRef]
  38. Verdoolaege, G.; Rosseel, Y.; Lambrechts, M.; Scheunders, P. Wavelet-based colour texture retrieval using the Kullback–Leibler divergence between bivariate generalized Gaussian models. In Proceedings of the 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 265–268. [Google Scholar]
  39. Channoufi, I.; Bourouis, S.; Bouguila, N.; Hamrouni, K. Image and video denoising by combining unsupervised bounded generalized gaussian mixture modeling and spatial information. Multimed. Tools Appl. 2018, 77, 25591–25606. [Google Scholar] [CrossRef]
  40. Channoufi, I.; Bourouis, S.; Bouguila, N.; Hamrouni, K. Color image segmentation with bounded generalized gaussian mixture model and feature selection. In Proceedings of the 4th IEEE International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), Sousse, Tunisia, 21–24 March 2018; pp. 1–6. [Google Scholar]
  41. Allili, M.S.; Bouguila, N.; Ziou, D. Finite generalized Gaussian mixture modeling and applications to image and video foreground segmentation. In Proceedings of the 4th IEEE Canadian Conference on Computer and Robot Vision (CRV’07), Montreal, QC, Canada, 28–30 May 2007; pp. 183–190. [Google Scholar]
  42. Najar, F.; Bourouis, S.; Bouguila, N.; Belghith, S. Unsupervised learning of finite full covariance multivariate generalized Gaussian mixture models for human activity recognition. Multimed. Tools Appl. 2019, 78, 18669–18691. [Google Scholar] [CrossRef]
  43. Najar, F.; Bourouis, S.; Bouguila, N.; Belghith, S. A fixed-point estimation algorithm for learning the multivariate GGMM: Application to human action recognition. In Proceedings of the 2018 IEEE Canadian Conference on Electrical & Computer Engineering (CCECE), Quebec, QC, Canada, 13–16 May 2018; pp. 1–4. [Google Scholar]
  44. Gebru, I.D.; Alameda-Pineda, X.; Forbes, F.; Horaud, R. EM algorithms for weighted-data clustering with application to audio-visual scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 2402–2415. [Google Scholar] [CrossRef]
  45. Agarwal, P.; Jleli, M.; Samet, B. Banach Contraction Principle and Applications. In Fixed Point Theory in Metric Spaces; Springer: Berlin/Heidelberg, Germany, 2018; pp. 1–23. [Google Scholar]
  46. Figueiredo, M.A.T.; Jain, A.K. Unsupervised learning of finite mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 381–396. [Google Scholar] [CrossRef]
  47. Hershey, J.R.; Olsen, P.A. Approximating the Kullback Leibler divergence between Gaussian mixture models. In Proceedings of the 2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP’07, Honolulu, HI, USA, 15–20 April 2007; Volume 4, pp. IV−317–IV−320. [Google Scholar]
  48. Durrieu, J.L.; Thiran, J.P.; Kelly, F. Lower and upper bounds for approximation of the Kullback–Leibler divergence between Gaussian mixture models. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; pp. 4833–4836. [Google Scholar]
  49. Cui, S.; Datcu, M. Comparison of Kullback–Leibler divergence approximation methods between Gaussian mixture models for satellite image retrieval. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 3719–3722. [Google Scholar]
Figure 1. The influence of different Gaussian kernel parameter σ on the weights. (a) σ = 1; (b) σ = 5; (c) σ = 10; (d) σ = 50.
Figure 2. Estimation results of the mixture model parameters with different noise levels: (a) 10% noise, (b) 20% noise, (c) 30% noise.
Figure 3. The results of parameter estimation in the case of component overlap.
Figure 4. Performance analysis of WMGGMM for 2-D data.
Figure 5. Registration results of point clouds with different sampling rates. (a) original; (b) centralized; (c) registration.
Figure 6. Registration results with 10% (top), 20% (middle), and 30% (bottom) noise levels. (a) original; (b) centralized; (c) registration.
Figure 7. The registration results when locally optimal solution exists. (a) original; (b) centralized; (c) registration.
Table 1. Estimation of the mixture model parameters for the first synthetic data set. (C_k is written row-wise as [c11, c12; c21, c22].)

Default mixture model parameters
  π_k (N_k)     μ_k          β_k     C_k                          ρ_k
  0.25 (300)    (1, 1)       0.85    [3, 1; 1, 5]                 0.35
  0.25 (300)    (15, 2)      0.85    [2, 0; 0, 2]                 0.00
  0.25 (300)    (1, 18)      0.85    [3, −2; −2, 4]               −0.58
  0.25 (300)    (16, 16)     0.85    [3, −1; −1, 3]               −0.33

Estimated mixture model parameters
  π_k           μ_k              β_k     C_k                          ρ_k
  0.2444        (0.98, 0.90)     0.72    [1.43, 0.57; 0.57, 2.94]     0.28
  0.2586        (15.11, 2.05)    0.70    [0.60, −0.04; −0.04, 0.67]   −0.06
  0.2482        (1.15, 17.99)    0.75    [1.39, −1.00; −1.00, 2.15]   −0.58
  0.2486        (15.98, 15.99)   0.77    [1.12, −0.3; −0.3, 1.05]     −0.27
Table 2. Estimation of the mixture model parameters for the second synthetic data set.

Default mixture model parameters
  π_k (N_k)     μ_k        β_k     C_k                  ρ_k
  0.25 (300)    (8, 16)    0.60    [3, −2; −2, 4]       −0.58
  0.25 (300)    (15, 2)    0.85    [2, 0; 0, 2]         0.00
  0.50 (600)    (1, 3)     0.85    [3, 1; 1, 5]         0.35

Estimated mixture model parameters
  π_k           μ_k              β_k     C_k                          ρ_k
  0.2596        (7.89, 16.05)    0.76    [2.27, −1.34; −1.34, 3.39]   −0.48
  0.2817        (14.90, 2.12)    0.67    [0.62, −0.17; −0.17, 0.78]   −0.24
  0.4587        (0.98, 3.11)     0.70    [0.91, 0.61; 0.61, 2.12]     0.44
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
