3.1. Scope
We want to reiterate that Medial Axes or Skeletons are essentially filters. Therefore, according to the characteristics of the filter, some parts of the original solid may be lost. It is not within the scope of this paper to quantify the effect of the filter on the reconstruction of the object based on the Skeleton.
It is also important to note that, in our approach, we frequently use local Principal Component Analysis (PCA) in order to identify Piecewise Linear approximations for curves and surfaces within a point cloud. We do not try to approximate a curve or a surface with only one straight edge or only one planar surface. In any case, the initial triangular meshes must be very dense in high-frequency neighborhoods in order to respect the Whittaker–Nyquist–Shannon theorem [36], regardless of any subsequent computation (e.g., Medial Axis, Resampling, Finite Element, etc.).
A glossary is presented at the end of this manuscript, before the references, providing definitions for terms used throughout the text.
3.2. Manifold Learning by Direct Palpation
Our algorithm probes local neighborhoods of the SDF-based point cloud, seeking locally planar surfaces or locally straight segments. That is, it tests the local neighborhoods of the SDF-based point cloud for compliance with the definition of a 2-manifold with border or a 1-manifold with border. Such definitions follow [37].
Definition 1 (k-manifold with border; k = 1 for 1-manifolds or curves, k = 2 for 2-manifolds or surfaces). A set M ⊂ R^3 is a k-manifold with border if for each point p ∈ M there exists a radius R > 0 such that, for all radii r with 0 < r < R, the neighborhood B(p, r) ∩ M is homeomorphic to either (1) the disk D^k = {x ∈ R^k : ||x|| < 1} or (2) the half-disk {x ∈ D^k : x_1 ≥ 0}.
Informally, this definition means that, locally, the neighborhoods look like (a) flat surfaces (2-manifolds) or (b) straight line segments (1-manifolds). Therefore, they can be identified with Principal Component Analysis (PCA).
Our algorithm attempts to identify all point cloud regions that are locally homeomorphic to a plane or to a straight segment. This is an instance of Manifold Learning, executed through PCA. The regions in which this probe fails (i.e., which are neither planes nor lines) are graded as “Gray” and processed differently, as they indicate high-frequency regions/junctions (Section 3.7).
PCA has been extensively used in processing and characterizing point cloud data. For example, in [38,39], PCA is used for plane fitting on local point cloud regions.
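As a minimal illustration of this idea (ours, not the implementation in the cited works), local PCA on a neighborhood sampled from a straight segment yields one dominant eigenvalue, whose eigenvector follows the line direction:

```python
import numpy as np

def local_pca(points):
    """PCA of a local neighborhood: returns the eigenvalues (descending)
    and corresponding eigenvectors (as columns) of the covariance matrix."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)   # auto-covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending order
    order = np.argsort(eigvals)[::-1]           # reorder to descending
    return eigvals[order], eigvecs[:, order]

# Points sampled along a straight segment in direction (1, 2, 0):
# one dominant eigenvalue, aligned eigenvector.
t = np.linspace(0.0, 1.0, 50)
line = np.stack([t, 2.0 * t, np.zeros_like(t)], axis=1)
vals, vecs = local_pca(line)
```

For a noiseless segment, the second and third eigenvalues vanish (up to floating-point error), which is exactly the 1D signature the probe looks for.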
3.6. Dimension Identification of Skeleton Subregions
Our approach first identifies points on the SRF point cloud that will serve as centers to perform Principal Component Analysis (PCA). To obtain said centers, the point cloud is downsampled by applying a 3D box grid filter to it.
On each center p, a PCA ball is attached, which is an open ball centered in p with radius r. The PCA Ball radius is computed as r = k · d, where d is the average edge length of M and k is a safety factor, required to be input by the user.
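The box grid downsampling step can be sketched as follows. This is an illustrative implementation, not the paper's: the cell size and the choice of cell centroids as representatives are our assumptions.

```python
import numpy as np

def box_grid_filter(points, cell_size):
    """Downsample a point cloud: keep one representative (the centroid)
    per occupied cell of a regular 3D grid."""
    cells = np.floor(points / cell_size).astype(np.int64)
    # Group the points by grid cell and average each group.
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    n_cells = inverse.max() + 1
    sums = np.zeros((n_cells, 3))
    counts = np.zeros(n_cells)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]

cloud = np.random.default_rng(0).uniform(0.0, 1.0, size=(1000, 3))
centers = box_grid_filter(cloud, cell_size=0.25)   # at most 4^3 = 64 cells
```

The surviving centroids are then used as the centers of the PCA balls.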
The procedure for the construction of the 1D and 2D Skeleton subsets (C and S, respectively) from the SRF point cloud is presented in Algorithm 1.
In the first 13 lines, we iterate over each PCA ball and check which points of the SRF point cloud it encloses. PCA is performed on this enclosed point subset, and its dimension is established by checking the proportions of the magnitudes of the three largest eigenvalues obtained by PCA. If there is no eigenvalue significantly larger or smaller than the others, the enclosed point subset is marked as neither 1D nor 2D. By the end of the first 13 lines, we obtain the following subsets: (1) 1D-identified points, (2) 2D-identified points, and (3) unidentified points, also called Gray Zones.
The threshold values t12 and t23 for the 1st-to-2nd and 2nd-to-3rd eigenvalue ratios are manually input. The higher these values, the more strictly the enclosed point subset needs to resemble a curve or a surface. Figure 8 shows an overview of how these parameters are used in the overall pipeline (more details in Section 3.8).
From lines 14 to 23, we cluster the 1D- and 2D-identified point clouds, in order to later identify separate curves and surfaces.
One-dimensional clusters. For each 1D point set cluster, we fit a sequence of linear segments (PL approximation [41]). Notice that if a set of points resembles a line, PCA will associate the eigenvector (local line vector) with the largest eigenvalue of the point set auto-covariance matrix.
Two-dimensional clusters. For each 2D point set cluster, we apply ball-based PCA identifications, which render (point, normal) pairs. We use this point and normal information to perform point cloud triangulation (Point Cloud Library, PCL [42]). Notice that if a set of points resembles a surface, PCA will associate the eigenvector (local surface normal) with the smallest eigenvalue of the point set auto-covariance matrix.
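The (point, normal) estimation for 2D clusters can be illustrated with a minimal sketch (not the PCL-based implementation used in the paper): the normal is the eigenvector paired with the smallest eigenvalue.

```python
import numpy as np

def estimate_normal(points):
    """Local surface normal: the eigenvector associated with the
    smallest eigenvalue of the auto-covariance matrix."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    return eigvecs[:, 0]                     # smallest-eigenvalue direction

# Noisy samples of the plane z = 0: the estimated normal approaches (0, 0, 1).
rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(300, 3))
pts[:, 2] = 0.01 * rng.normal(size=300)      # small out-of-plane noise
normal = estimate_normal(pts)
```

In a full pipeline, one such (point, normal) pair would be produced per PCA ball before handing the oriented cloud to the triangulation step.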
3.7. Junction of Disconnected Subregions
After obtaining all the definite 1D and 2D Skeleton subsets C and S, we still need to achieve a fully connected Skeleton, because all C and S are disjoint, as shown in Figure 3b.
We use information from the Gray Zones point cloud to help establish connections between Skeleton subsets. In the SRF point cloud, subsets lacking a clear 1D or 2D character correspond to neighborhoods of the Skeleton where transitions between 1D and 2D structures appear. These regions are called “Gray” in the manuscript. Thus, Gray points are located precisely in the missing Skeleton regions that bridge the various curves C and surfaces S.
Our objective is to create bridges or connections located in these Gray Zones. Bridges are surface meshes that serve as supports for connecting definite 1D/2D Skeleton subsets. The bridges do not reach the subsets that they connect. Instead, a blend surface is later used to fill the remaining gap between the bridge and the Skeleton subsets, as described in Section Mesh Stitching.
As shown in Algorithm 2, we first cluster the Gray Zones point cloud, and for every cluster, an optimal bounding box is computed and scaled. We scale the bounding box to identify which points from all the Skeleton subsets lie inside it and are thus involved in a joining process (Figure 4a). The disposition of these identified points dictates the type of union to be performed.
Case 1. Boundary-based bridges among MA subsets. In the first case, N Skeleton subsets are to be connected through their boundaries. Joining lines are established between all the points enclosed by the bounding box (see Algorithm 3), except between those belonging to the same subset, as shown in Figure 4b.
After joining the points, the joining lines are sampled, as shown in Figure 4c. We want to later triangulate these sampled points to form a surface mesh. Thus, points too close to the line endpoints are discarded to avoid intersections with the Skeleton subsets. Finally, the resulting sampled point cloud is triangulated (Figure 4d), following the same approach as the 2D Skeleton subset triangulation in Section 3.6.
If only two Skeleton subsets are to be joined, we still create joining lines and sample them as in Case 1. However, we check whether the sampled point cloud resembles a line by applying PCA to it [41]. If the point cloud is identified as 1D, we choose the joining line that is closest to the centroid of said point cloud and thus skip the process of creating a bridge mesh.
Case 2. Interior-based connections among MA subsets. When joining two Skeleton subsets and not all the candidate points lie on the boundaries, we do not create bridges and instead produce connections according to two main scenarios: (1) the candidate points resemble a polyline, and (2) the candidate points do not resemble a polyline. The polylines do not need to be closed; however, we do not handle self-intersecting polylines. A combination of possible scenarios can be seen in Figure 5. In Section Mesh Stitching, we explain the algorithm that creates the connections for said scenarios.
Mesh Stitching
Once a bridge is created, the next step is to fill the gap between the Skeleton subsets and the bridges in order to obtain a fully connected Skeleton. We first address the case where the bridge connects Skeleton subsets by its boundary.
We begin by identifying pairs of vertices between the Skeleton subset and the bridge that are within a specified proximity threshold (see Section 3.8). Using these matched vertices, we extract a sequence of connected points that forms a closed path spanning both the Skeleton subset and the bridge mesh.
This path contains points from both structures. To construct a blending structure in it, we apply Algorithm 4 as follows:
We first take the sequence of points from the Skeleton subset that lie on the path and iterate over them. For each pair of consecutive points, we attempt to form a triangle using those two points and an optimal third point selected from the bridge-side points along the path (lines 2–6). We then repeat the process symmetrically: iterating over the bridge-side points and selecting the third point from the Skeleton subset.
In other words, we extract a sequence of points P from the Skeleton subset and a sequence of points Q from the bridge, and pass these as inputs to Algorithm 4.
Algorithm 4: Blend Creation via Simple Loop Traversing.
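As a rough sketch of the idea behind Algorithm 4 (ours, not the paper's implementation; the greedy closest-point choice below stands in for the "optimal third point" selection):

```python
import numpy as np

def blend_triangles(P, Q):
    """Greedy blend between two point sequences P and Q: for each pair of
    consecutive points on one side, pick the closest third point on the
    other side; then repeat symmetrically (cf. Algorithm 4)."""
    tris = []
    for A, B, tag in ((P, Q, "q"), (Q, P, "p")):
        for i in range(len(A) - 1):
            mid = 0.5 * (A[i] + A[i + 1])
            j = int(np.argmin(np.linalg.norm(B - mid, axis=1)))
            tris.append((tag, i, i + 1, j))   # triangle (A_i, A_{i+1}, B_j)
    return tris

# Two parallel 5-point polylines one unit apart.
P = np.array([[x, 0.0, 0.0] for x in range(5)], dtype=float)
Q = np.array([[x, 1.0, 0.0] for x in range(5)], dtype=float)
tris = blend_triangles(P, Q)
```

Each side contributes one triangle per consecutive pair, so the two passes together close the strip of triangles spanning the gap between the polylines.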
Figure 6 illustrates an example in which the algorithm is applied twice to connect a bridge with two different Skeleton subsets.
In the first step, the black point cloud is partitioned into P and Q, where P consists of a sequence of points on the 1D Skeleton subset, and Q consists of the corresponding point sequence on the bridge. The blending mesh is computed using these inputs. In the second step, the red point cloud is similarly divided into P and Q, and the algorithm is applied again.
The resulting blending meshes connect the bridge to the Skeleton subsets, as shown in Figure 6b.
In essence, Algorithm 4 creates a blend between two polylines. Therefore, the same algorithm can be applied to join the identified points in Case 2 (Interior-based connections among MA subsets), where the candidate points do not lie on the boundary but do resemble a polyline (Figure 5a,b).
In the case where the candidate points in neither Skeleton subset resemble a polyline (Figure 5c), we choose the joining line that is closest to the centroid of the candidate points. An example of this type of union is presented in Figure 7.
3.8. Parameter Tuning
Currently, our approach requires the manual input of three parameters: the PCA Ball safety factor k and the two eigenvalue-ratio thresholds t12 and t23. To compute the PCA ball radius r = k · d, we opt for making the radius a function of a statistical value based on the edge lengths of the input shape M, in this case the average edge length d.
Since the two eigenvalue-ratio thresholds dictate the identified dimension of subsets, they can be tuned to obtain Medial Axes that are fully 1D or 2D. In our experiments, we found threshold values that allow for a good dimension identification process.
An overview of the parameter tuning process is shown in Figure 8, and it goes as follows: we first extract an average length value d from the set of edges of the input mesh. We use this value to compute each PCA ball radius r = k · d. We then use the two eigenvalue-ratio thresholds to identify Curves, Surfaces, and a Gray Zone point cloud. The Gray Zone point cloud is clustered by grouping points that are closer than a distance threshold to each other. Finally, with the clustered Gray Zones, we carry out the joining process of the Skeleton.
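The Gray-Zone clustering step (grouping points closer than a distance threshold) can be sketched as a flood fill over a proximity graph. This is an illustration only; the dense O(n²) distance matrix is an assumption acceptable just for small clouds:

```python
import numpy as np

def cluster_by_distance(points, threshold):
    """Group points into clusters: two points share a cluster when they
    are linked by a chain of neighbors closer than `threshold`."""
    n = len(points)
    # Pairwise adjacency under the distance threshold.
    diff = points[:, None, :] - points[None, :, :]
    adjacent = np.linalg.norm(diff, axis=2) < threshold
    labels = np.full(n, -1)
    current = 0
    for seed in range(n):
        if labels[seed] >= 0:
            continue
        stack = [seed]                     # flood fill one component
        labels[seed] = current
        while stack:
            i = stack.pop()
            for j in np.flatnonzero(adjacent[i]):
                if labels[j] < 0:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels

# Two well-separated clumps yield two cluster labels.
pts = np.array([[0.0, 0, 0], [0.1, 0, 0], [5.0, 0, 0], [5.1, 0, 0]])
labels = cluster_by_distance(pts, threshold=0.5)
```

Each resulting cluster then drives one bounding-box-based joining operation.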
3.9. Parameter Influence
The safety factor k directly influences the size of the PCA Balls used to classify local neighborhoods of the SRF point cloud. We compute the radii of the PCA Balls as r = k · d, where d is the average edge length of the input mesh. This means that bigger k values generate bigger PCA Balls.
Ideally, the PCA Balls must be large enough to correctly capture the underlying dimension of each local neighborhood of the SRF Point Cloud, but not so large that a single PCA Ball encloses unrelated regions. In Figure 9, an SRF point cloud that samples (with noise) a circumference is used to show how the value of k may change the underlying dimension of the enclosed point cloud subsets.
The k values discussed in the following paragraphs correspond to results from our experiments, where d was, on average, 1% of the length of the bounding box diagonal of the input mesh.
If k is too small, the enclosed point cloud subsets of the SRF point cloud will lack a clear tendency towards a curve or a surface. This case is shown in Figure 9a, where each enclosed point cloud subset resembles a solid sphere, causing both eigenvalue ratios to be close to 1, i.e., they resemble a solid.
Moderate k values create PCA Balls that enclose point cloud subsets with clearer curve or surface tendencies. This case is shown in Figure 9b, where each enclosed point cloud subset resembles a solid but tubular shape, causing a large 1st-to-2nd eigenvalue ratio, i.e., each point cloud subset potentially samples a curve. This is the desired behavior in this particular example, given that the SRF point cloud originally samples a 1D structure.
Too large k values can create PCA Balls that enclose very large portions of the SRF point cloud, which could cause loss of detail or misidentification of the underlying Skeleton structure. In Figure 9c, the resulting enclosed point subset is the entire SRF point cloud, causing a large 2nd-to-3rd eigenvalue ratio, i.e., the points potentially sample a surface. This is an undesired behavior, since the SRF point cloud originally samples a 1D structure.
Let λ1 ≥ λ2 ≥ λ3 be the eigenvalues obtained from applying PCA on a local neighborhood of the SRF point cloud (Algorithm 1, line 4). The threshold t12 is the minimum λ1/λ2 ratio that must be satisfied in order for the enclosed point cloud subset to be classified as belonging to a curve. The threshold t23 is the minimum λ2/λ3 ratio that must be satisfied in order for the enclosed point cloud subset to be classified as belonging to a surface.
Setting t12 = 1 means that all local neighborhoods of the SRF point cloud will pass the check of curve tendency, since the eigenvalues from each PCA Ball always hold that λ1/λ2 ≥ 1.
In the same way, setting t23 = 1 means that all point subsets of the SRF point cloud will pass the check of surface tendency, since the eigenvalues from each PCA Ball always hold that λ2/λ3 ≥ 1.
This means that, if t12 = t23 = 1, all the local neighborhoods within the SRF point cloud will be classified exclusively as 1D or 2D, depending on the order in which the eigenvalue ratios are checked (see Algorithm 1, lines 4 to 12).
Figure 10 shows how the final dimension of the Skeleton of an input mesh M changes depending on the values set for t12 and t23. If t12 and t23 are set to comparable, moderate values, then the Skeleton will likely have a mixture of 1D and 2D elements. This behavior can be seen in Figure 10b.
If t12 is set very high while t23 is kept low (e.g., t23 = 1), then all local neighborhoods of the SRF point cloud will likely fail the curve tendency test and pass the surface tendency test. This means that all the local neighborhoods can be used only to build 2D structures, leading to pure 2D Skeletons (Figure 10c).
If, instead, t23 is set very high while t12 is kept low (e.g., t12 = 1), then all local neighborhoods of the SRF point cloud will likely fail the surface tendency test and pass the curve tendency test. In this setting, all the local neighborhoods will be used to build 1D structures, leading to pure 1D Skeletons (Figure 10d).
Figure 10d also shows that forcing apparent 2D Skeleton regions to be 1D can lead to artifacts in the final Skeleton, such as loss of centrality.
Scale Invariance
Let X = {p_1, …, p_n} be an arbitrary 3D point cloud, and let p̄ denote its centroid. The covariance matrix K can be computed as

K = (1/n) Σ_{i=1..n} (p_i − p̄)(p_i − p̄)^T.

If the point cloud is uniformly scaled by a factor s, the new covariance matrix K′ becomes

K′ = (1/n) Σ_{i=1..n} (s p_i − s p̄)(s p_i − s p̄)^T = s² K.
This shows that uniformly scaling the input by a factor s results in the covariance matrix being scaled by a factor of s². Moreover, if λ is an eigenvalue of K, with corresponding eigenvector v, then

K′ v = s² K v = s² λ v.
Thus, if λ is an eigenvalue of the original covariance matrix K, then s² λ is an eigenvalue of the scaled covariance matrix K′. Additionally, the eigenvectors remain unchanged. In other words, scaling a point cloud by a factor s scales the eigenvalues produced by PCA by s², while preserving the eigenvectors.
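This scaling behavior can be checked numerically (a sanity check of the derivation, not part of the method):

```python
import numpy as np

def covariance(points):
    """Auto-covariance matrix of a point cloud."""
    centered = points - points.mean(axis=0)
    return centered.T @ centered / len(points)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))              # arbitrary 3D point cloud
s = 1000.0                                 # e.g., meters -> millimeters

eig = np.sort(np.linalg.eigvalsh(covariance(X)))
eig_scaled = np.sort(np.linalg.eigvalsh(covariance(s * X)))

# Eigenvalues scale by s^2, so all eigenvalue ratios are unchanged.
ratios = eig[1:] / eig[:-1]
ratios_scaled = eig_scaled[1:] / eig_scaled[:-1]
```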
In our method, the local structure of the SRF point cloud is classified using the ratios of the eigenvalues λ1 ≥ λ2 ≥ λ3 obtained from local PCA. Specifically, a region is classified as curve-like if λ1/λ2 ≥ t12, and as surface-like if λ2/λ3 ≥ t23. When the input mesh is scaled by a factor s, the SRF point cloud is also scaled accordingly. The new eigenvalue ratios become

(s² λ1)/(s² λ2) = λ1/λ2 and (s² λ2)/(s² λ3) = λ2/λ3.
This demonstrates that the eigenvalue ratios are invariant to the scale of the input data. Consequently, the thresholds t12 and t23 are independent of the scale of the point cloud. Similarly, the safety factor k is also scale-invariant, since it appears in the expression r = k · d, where d is the average edge length of the input mesh; d scales as s, so the PCA Ball radius scales with the geometry.
This scale invariance makes our method robust across datasets with different units or scale conventions (e.g., meters vs millimeters).