Article

3D Data Processing and Entropy Reduction for Reconstruction from Low-Resolution Spatial Coordinate Clouds in a Technical Vision System

by Ivan Y. Alba Corpus 1,2, Wendy Flores-Fuentes 1,*, Oleg Sergiyenko 3, Julio C. Rodríguez-Quiñonez 1, Jesús E. Miranda-Vega 2, Wendy Garcia-González 1 and José A. Núñez-López 3

1 Facultad de Ingeniería, Universidad Autónoma de Baja California, Mexicali 21280, Mexico
2 IT de Mexicali, Tecnológico Nacional de México, Mexicali 21376, Mexico
3 Instituto de Ingeniería, Universidad Autónoma de Baja California, Mexicali 21100, Mexico
* Author to whom correspondence should be addressed.
Entropy 2024, 26(8), 646; https://doi.org/10.3390/e26080646
Submission received: 20 May 2024 / Revised: 17 July 2024 / Accepted: 18 July 2024 / Published: 30 July 2024

Abstract:
This paper proposes an advancement in the application of a Technical Vision System (TVS), which integrates a laser scanning mechanism with a single light sensor to measure 3D spatial coordinates. Here, the system is combined with a rotating table to scan and digitize objects, exploring its potential for 3D scanning at reduced resolutions. The experiments searched for optimal scanning windows and applied statistical data filtering techniques and regression models to generate 3D scans that remain recognizable with the fewest possible 3D points, balancing the number of points scanned against scanning time while reducing effects caused by the particularities of the TVS, such as noise and entropy in the form of natural distortion in the resulting scans. The experimental results are evaluated with 3D point registration methods, joining multiple faces of the volume scanned by the TVS and aligning them to ground-truth point clouds obtained with a commercial 3D camera, to verify that the reconstructed 3D model retains substantial detail from the original object. This research finds that sufficiently detailed 3D models can be reconstructed from TVS data that are coarsely scanned, initially lack high definition, or are too noisy.

1. Introduction

The progress of 3D scanning technology has revolutionized the way we capture and replicate the physical world in digital form. From documentation of historical artifacts [1,2] to the precision required in industrial design, the applications of this technology are vast and varied. Of these systems, the versatility of laser scanning systems as remote sensing technology is the reason they are used across a wide array of fields. These systems are integral to various sectors, including healthcare, geospatial surveying, additive manufacturing, the mining industry, and reverse engineering, to highlight just a few. The application of laser scanning in the medical field enables precise mapping and modeling of physical forms, while in geospatial surveying, it facilitates the detailed assessment of land topographies and architectural structures. In the field of additive manufacturing, laser scanning contributes to the accuracy and fidelity of 3D-printed objects, and in mining, it aids in the volumetric analysis and planning of extraction sites. Reverse engineering processes benefit significantly from laser scanning by providing a means to recreate models of existing physical objects with high accuracy.
Multiple studies illustrate the influence of laser scanning technologies in both research and practical applications. For instance, ref. [3] explores the measurement of vegetation canopy structures using laser scanning, showcasing the method’s capability to capture detailed canopy data in situ without direct contact. The authors of [4] introduce innovative vegetation indices derived from terrestrial laser scanning, which effectively quantify the 3D spatial organization of plant communities. Another example is [5], which presents an analysis of a 3D laser tracking scanner system, focusing on its immunity to the effects of ambient sunlight and its geometric resolution. For a thorough exploration of the historical development and diverse applications of 3D laser scanners, ref. [6] offers an extensive overview, tracing the technology’s evolution and its impact across different industries. Over the years, the evolution of 3D scanning technology has led to the development of increasingly sophisticated devices that are adept at digitally capturing objects in their immediate surroundings, accurately registering even the smallest details and complex surfaces without the need for pre-defined reference points or markers. Additionally, many contemporary 3D laser scanners enhance their data collection capabilities by integrating GPS, providing further insights into the surfaces being scanned [7,8]. These technologies use light to amass substantial volumes of data—encompassing millions of discrete points—which are subsequently compiled into large point cloud files. This approach not only significantly improves the precision of digital representations but also broadens the scope for detailed examination and application of the gathered information. These devices have improved their underlying principles to not only become more efficient but also to capture highly accurate point clouds. The versatility of these 3D point clouds has led to their widespread application across various domains, including the inspection and mapping of structures [9] and landscapes [10], object recognition [11,12], and autonomous navigation [13], among others. Despite their utility, point clouds often exhibit minor variances in the coordinates of each scanned point, indicating that details finer than the scanner’s resolution may not be accurately captured. This limitation introduces a certain degree of imprecision to digital scanning, similar to the resolution constraints found in photographic scanners, thus establishing a limit to the proximity at which each laser track can accurately reflect the subject’s details.
In the domain of industrial applications, two primary types of scanners have emerged as leaders: contact and non-contact scanners. Contact scanners are known for their precision, while non-contact scanners boast quicker operation. The majority of non-contact 3D spatial coordinate measurement systems depend on optical sensors coupled with sophisticated signal processing techniques. These systems capture the surfaces of objects in discrete segments to ascertain their coordinates, facilitating the creation of a point cloud. This cloud can then be analyzed to accurately reconstruct an object’s shape and dimensions [14]. Such contactless technologies are primarily categorized into two main groups based on their operation: camera-based and laser-based. Additionally, they can be distinguished by their form, being either handheld or desktop devices. Handheld scanners often employ structured light technology, which projects a specific light pattern onto the target object. Dual cameras then capture and analyze the projected light patterns, discerning differences across various vision fields. Meanwhile, other camera-based scanners leverage an arranged array of sensors to gather 3D data from multiple angles around the object [15], enhancing the depth and accuracy of the data collected. Some devices enhance accuracy through the use of calibration patterns, enabling precise measurements from numerous perspectives around the object. In contrast, laser triangulation scanners use one or more laser beams directed at specific points on the object’s surface [16]. This method measures the distance between the scanner and these points, iteratively capturing spatial data to cover the entire area within the desired field of view. Certain laser scanners are equipped with level compensation mechanisms, designed to counterbalance any movements during the scanning process, thereby ensuring the integrity of the data collected. Moreover, specialized scanners might incorporate rotating platforms to adjust the positioning of the subject and the direction of light reception, allowing the 3D scanner to achieve measurements with enhanced precision [17]. The prevalence of optical sensor-based technologies derives from their non-invasive nature, eliminating the need for direct contact with the objects under study. Among their most valued attributes is the rapid rate at which measurements can be acquired. Nonetheless, notable limitations of these systems are their susceptibility to measurement inaccuracies caused by optical ambient noise and the complexities inherent in their design. Despite the advances in precision and accuracy, the data procured from 3D scanning processes frequently exhibit characteristics that can significantly undermine the quality of the resulting models. These imperfections manifest in various forms, including low resolution, noise, and the presence of outliers, each contributing to a reduction in the clarity and usability of the data. Moreover, while camera-based scanning technologies are known for producing data that may include a higher proportion of outliers [18], particularly under adverse lighting conditions or in environments with reduced visibility due to factors like rain or fog [19], laser-based scanners, though more precise, can be slower and may struggle with accurately detecting reflections from certain surfaces, potentially leading to additional outlier data. 
To address these problems, various methods for detecting and filtering outlier data in point clouds have been developed and applied across a range of fields, including bioengineering, reverse engineering, civil and archaeological engineering, as well as electronics and industrial automation. The implementation of point cloud outlier filtering techniques facilitates the creation and reconstruction of meshes and geometric models that closely resemble physical objects in appearance and dimensions. For instance, ref. [20] discusses several point cloud filtering methods, and the authors of [21] developed an algorithm that leverages surface variation factor segmentation, applying different filtering techniques to various sections of the point cloud for enhanced results across different regions. Additionally, ref. [22] explores the use of AI to design adaptive filters that consider the size and shape of the specific point cloud region being processed, further refining the accuracy and reliability of the resultant digital models. Extensive research on 3D scanning technologies and techniques was conducted prior to the experiments detailed in this paper, leading to the publication of a comprehensive review [23]. This review provides an in-depth analysis of various 3D scanning methods, their applications, and recent advancements.
This paper is structured as follows:
First, in the Materials and Methods section, the basic function of the TVS is explained and the procedure to scan objects using this system is described. An explanation of the methods used to reduce noise in the point clouds obtained from the TVS, as well as smoothing techniques and the algorithms used to compare the results are also included in this section. Second, the methodology and experimental results are presented, where details of the experimentation conducted, including the scanning of the original volumes, rotation of coordinates, reduction of outliers and entropy are described, as well as the results of these algorithms in the form of tables. In this section, a direct link to visualize the 3D point clouds results is included as well as flat images of the scanned faces. Third and last, the discussion and conclusions are presented, where the results of the experiments are summarized, and future possible work is discussed.

2. Materials and Methods

At the Autonomous University of Baja California, a method referred to as dynamic triangulation has been in use to measure spatial coordinates. This method involves adjusting the angles of the system’s laser emitter and receiver through motor control. The process allows the laser to target a specific area within the viewing field, reflecting off an object’s surface and being detected by a scanning aperture. This aperture rotates at a steady rate, with the objective of capturing the reflected laser light. Based on this principle, the Engineering Institute of the Autonomous University of Baja California has developed three prototypes of a “Technical Vision System” (TVS), designed to measure 3D coordinates within a given field of view. These prototypes use a single light sensor for gathering surface information from scanned objects, supporting a variety of applications. Uses for the system range from navigation and mapping in industrial settings [24], including pipelines [25], to structural monitoring and aiding in the medical field for the detection of spinal column issues. Some of the systems based on this principle use this technique in combination with cameras to adjust to the object’s position in real-time. Using this method, information on the energy distribution of the scanned object can also be obtained as well as information about the color of the material, alongside the angles of the actuators. With this information, the location in space of the object can be triangulated using basic trigonometry.

2.1. Technical Vision System Specifications and Object Scanning

TVS prototype number 2 (Figure 1), in particular, focuses on capturing the planar faces of objects to create point clouds. This system consists of three primary components: a laser positioner (LP), a scanning aperture (SA), and a positioning arm (PA). These elements work together using the principle of dynamic triangulation. This method measures the distances and angles of 3D points on the observed objects and converts them into rectangular coordinates to assemble the point clouds, while the system adjusts the angles between the system’s scanning aperture and the laser positioner to accurately measure the objects. A rotating base is used to obtain data from multiple faces of an object for later processing into a complete object, following the method described in Figure 2.
The TVS performs best when scanning 3D points at position 1 of its field of view; however, this had previously been determined with the field of view focused on a frontal view of the object. In this research, where multiple faces must be obtained to integrate the whole volume, it was identified that at some rotation angles of the table the laser was occluded by protuberances of the object (as represented in Figure 1 with the skull at position 1: at this table rotation angle the laser is occluded by the nose of the skull and cannot be reached by the scanning aperture; note that this occurs only for some of the 3D coordinates of that face view, not for all of them). This motivated the evaluation of measurements at position 2.
Figure 3 and Figure 4 illustrate the key components of the TVS prototype 2, detailing the construction and functionality of the system. Figure 3 focuses on the laser positioner’s elements, highlighting: (1) a stepper motor enabling rotation around its axis; (2) another stepper motor attached to a 45 ° mirror for directing the laser beam; (3) a gear-worm screw combination facilitating the rotation of the positioning arm; (4) a similar gear-worm mechanism for adjusting the 45 ° mirror’s position; and (5) the 45 ° mirror itself, which reflects the laser towards the target object.
Conversely, Figure 4 depicts the scanning aperture, comprising: (A) a 45 ° mirror for laser reflection; (B) lenses that focus the laser light; (C) a DC motor for mirror rotation; (D) a photosensor for detecting the laser light; and (E) Teflon bearings for smooth movement. The scanning aperture’s DC motor rotates freely, powered by a regulated voltage source to achieve the desired speed. However, due to this setup, the motor’s speed may vary, necessitating speed measurements at each revolution with an opto-coupler to maintain accuracy. The laser positioner employs a 10 mW red laser, aimed using stepper motors capable of 19,200 steps per revolution. The system’s photosensor, a phototransistor model BPW77N by VISHAY Semiconductor, captures the reflected laser light for further processing.
The scanning aperture acts as a receiver for the system. For the experiment, a specific scan window is programmed so that only 3D information of the object under measurement is obtained. Then, a rotatory base is used to scan a determined number of planar faces of the object. The scanning aperture then gathers data for each point, such as the energy distribution curve of each measured point, by reflecting the laser off the object onto a photosensor. The laser positioner, on the other hand, is responsible for directing the laser beam across different programmed points over the object in the defined scanning window, allowing the system to measure the reflection angles ( B i j ) via the scanning aperture.
After capturing the data, they are stored in various files for further processing. The next step is to identify the peak of the signal for each point, determine when this peak occurs, and relate it to the position of the scanning aperture at that instant. This step is applied to every point within the set scanning window, preparing the foundation for creating a point cloud. The next part of the process uses the angles of the laser positioner ($C_{ij}$), the angles measured by the scanning aperture ($B_{ij}$), and the tilt ($\beta$ angle) of the TVS itself. By applying basic trigonometry, as shown in Equations (1)–(3), where $a$ is the distance between the LP and the SA, the $(x, y, z)$ coordinates for each point are calculated, turning the collected data points into a structured point cloud.
x_{ij} = \frac{a \sin B_{ij} \sin C_{ij}}{\sin(B_{ij} + C_{ij})} \qquad (1)

y_{ij} = a \left( \frac{1}{2} - \frac{\cos B_{ij} \sin C_{ij}}{\sin(B_{ij} + C_{ij})} \right) \qquad (2)

z_{ij} = \frac{a \sin B_{ij} \sin C_{ij} \tan\beta}{\sin(B_{ij} + C_{ij})} \qquad (3)
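For illustration, the following minimal Python sketch evaluates Equations (1)–(3) for arrays of measured angles; the baseline a, the tilt β, and the example angle values are assumptions made for demonstration and are not the calibration constants of the TVS prototype.

import numpy as np

def triangulate(B, C, a=0.50, beta=0.0):
    """Convert TVS angles to (x, y, z) following Equations (1)-(3).

    B, C  -- scanning-aperture and laser-positioner angles in radians
    a     -- baseline distance between LP and SA (value assumed)
    beta  -- tilt angle of the TVS in radians
    """
    s = np.sin(B + C)                          # common denominator sin(B_ij + C_ij)
    x = a * np.sin(B) * np.sin(C) / s
    y = a * (0.5 - np.cos(B) * np.sin(C) / s)
    z = a * np.sin(B) * np.sin(C) * np.tan(beta) / s
    return np.stack([x, y, z], axis=-1)

# Example: a set of angle pairs produces one 3D point per laser position (values illustrative).
B = np.radians(np.linspace(40, 60, 5))
C = np.radians(np.linspace(70, 85, 5))
points = triangulate(B, C, a=0.50, beta=np.radians(10))
print(points.shape)  # (5, 3)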

2.2. Noise and Outliers

In the scanning process, various factors can introduce noise and outliers into the data captured by this system. These irregularities are often due to problems such as the scanning aperture mirror speed not being constant, difficulties in accurately detecting the energy center of the signal during data processing, and the tilt angle of the system. These elements can complicate the accuracy of the data, making it challenging to maintain the integrity of the results. Noise and outliers can also obscure important details in the point clouds, requiring filtering to improve the results. While triangulation errors are more common along the X-axis, outliers can also appear on other axes, making it difficult to maintain clarity in the point clouds. Addressing these issues is crucial for refining the data and ensuring that the point clouds produced are as precise and representative of the scanned object as possible.
Outliers refer to data points that significantly differ in value from the majority of a dataset, being much higher or lower than the surrounding data. Within the context of point clouds, these outliers can negatively impact both the visualization of the point clouds and any further applications of the data. Their presence can distort the overall representation of the object, leading to inaccuracies in analysis or challenges in utilizing the data for modeling or other purposes.

2.3. Data Processing and Outlier Removal

To handle the majority of the outlier data obtained, statistical filters are applied along different axes of the point clouds, depending on several factors; in particular, three methods are used for this purpose:

2.3.1. Interquartile Method

This method [26] applies to univariate data, which can be segmented into quartiles. The first quartile signifies that 25% of the data points fall below this value, the second quartile represents the median of the dataset, and the third quartile shows that 25% of the data points exceed that value. The interquartile method is employed to determine the data range that should be kept, given that:
\left[\, Q_1 - K(Q_3 - Q_1),\; Q_3 + K(Q_3 - Q_1) \,\right] \qquad (4)
where K is a given non-negative constant that controls how much data are flagged as outliers, while $Q_1$ and $Q_3$ are the first and third quartiles, used as the lower and upper thresholds for the data. Values outside the interval defined by Equation (4) are considered outliers.
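A minimal sketch of this filter applied along one axis of a point cloud is shown below; the constant K = 1.5 is a commonly used default and is assumed here for illustration only.

import numpy as np

def iqr_filter(points, axis=0, k=1.5):
    """Keep points whose coordinate along `axis` lies inside
    [Q1 - k*(Q3 - Q1), Q3 + k*(Q3 - Q1)], as in Equation (4)."""
    values = points[:, axis]
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    mask = (values >= q1 - k * iqr) & (values <= q3 + k * iqr)
    return points[mask]

# Synthetic example: outliers injected along the X-axis are removed.
cloud = np.random.randn(1000, 3)
cloud[::50, 0] += 10.0
filtered = iqr_filter(cloud, axis=0, k=1.5)
print(len(cloud), "->", len(filtered))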

2.3.2. Modified Thompson Tau Method

The modified Thompson Tau method [27] is an alternative strategy for identifying outliers within a dataset. Here, the approach requires dealing with a single variable dataset, assessing potential outliers one at a time and removing those found to be beyond two standard deviations from the dataset’s mean.
To apply this technique, one must first determine the absolute deviation $\delta_i$ of each value from the sample mean, followed by calculation of the Tau threshold $\tau$:
\tau = \frac{t_{\alpha/2}\,(n-1)}{\sqrt{n}\,\sqrt{n-2+t_{\alpha/2}^{2}}} \qquad (5)
where $t_{\alpha/2}$ is the critical value obtained from the inverse cumulative distribution function of Student's t distribution and n is the size of the dataset. A value is determined to be an outlier if $\delta_i > \tau S$, where S is the sample standard deviation.
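The following sketch illustrates one possible iterative implementation of the modified Thompson Tau rejection using SciPy's Student's t quantiles; the significance level and the use of n − 2 degrees of freedom are assumptions of this example rather than values fixed above.

import numpy as np
from scipy import stats

def thompson_tau_filter(values, alpha=0.05):
    """Iteratively remove the most extreme value while delta_i > tau * S (Equation (5))."""
    values = np.asarray(values, dtype=float)
    while len(values) > 2:
        mean, s = values.mean(), values.std(ddof=1)
        deltas = np.abs(values - mean)
        i = np.argmax(deltas)                        # most suspicious sample
        t = stats.t.ppf(1 - alpha / 2, df=len(values) - 2)
        tau = t * (len(values) - 1) / (np.sqrt(len(values)) *
                                       np.sqrt(len(values) - 2 + t ** 2))
        if deltas[i] > tau * s:
            values = np.delete(values, i)            # reject and re-test
        else:
            break
    return values

# Synthetic example: two gross outliers appended to a narrow distribution.
x = np.concatenate([np.random.normal(0, 0.1, 200), [3.0, -2.5]])
print(len(thompson_tau_filter(x, alpha=0.01)))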

2.3.3. Chi2 Distribution Quantiles

In this method [28], for each sample $X_i$ of dimension d, the Mahalanobis distance is computed as

D_i = \sqrt{(X_i - \bar{X})^{\top}\, \Sigma^{-1}\, (X_i - \bar{X})} \qquad (6)

where $\bar{X}$ is the sample mean and $\Sigma$ is the sample covariance matrix. Samples whose distance $D_i$ exceeds the critical value, taken as a quantile of the chi-square distribution with d degrees of freedom, are flagged as outliers.
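As an illustration, the following sketch computes the squared Mahalanobis distances of a point cloud and keeps the samples below an assumed 97.5% chi-square quantile.

import numpy as np
from scipy.stats import chi2

def mahalanobis_filter(points, quantile=0.975):
    """Keep points whose squared Mahalanobis distance is below the
    chi-square quantile with d degrees of freedom (see Equation (6))."""
    mean = points.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(points, rowvar=False))
    diff = points - mean
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)   # squared distances
    threshold = chi2.ppf(quantile, df=points.shape[1])
    return points[d2 <= threshold]

cloud = np.random.randn(500, 3)
print(mahalanobis_filter(cloud, quantile=0.975).shape)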

2.3.4. Smoothing of the Point Cloud

After the outlier processing, the Moving Least Squares (MLS) [29] algorithm is used. This technique is based on weighted linear regression. The MLS method is particularly effective for reconstructing surfaces from point sets, and is often employed to create 3D surfaces from point clouds through downsampling or upsampling. Its utility is notable in handling data that are irregularly spaced or contain noise, as it provides a flexible fitting approach that adapts to the local data structure.
S = \sum_{i=1}^{n} w_i \left( y_i - f(x_i) \right)^2 \qquad (7)
where S is the sum that needs to be minimized, n is the number of data points, and w i represents the weight for the i-th data point, often a function of the distance to emphasize points closer to the point of interest. y i and x i are the coordinates of the i-th data point and f ( x i ) is the value of the approximation function at x i .
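A simplified first-order MLS projection is sketched below: each point is projected onto a plane fitted to its Gaussian-weighted neighborhood, which locally minimizes a sum of the form of Equation (7). The search radius and weighting kernel are illustrative assumptions, not the parameters used in this study.

import numpy as np
from scipy.spatial import cKDTree

def mls_smooth(points, radius=0.02):
    """First-order MLS sketch: project each point onto a plane fitted to its
    neighbourhood with Gaussian weights."""
    tree = cKDTree(points)
    smoothed = points.copy()
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 4:
            continue                                   # not enough local support
        nbrs = points[idx]
        w = np.exp(-np.sum((nbrs - p) ** 2, axis=1) / radius ** 2)
        centroid = (w[:, None] * nbrs).sum(0) / w.sum()
        d = nbrs - centroid
        cov = (w[:, None, None] * np.einsum('ij,ik->ijk', d, d)).sum(0)
        normal = np.linalg.eigh(cov)[1][:, 0]          # smallest-eigenvalue direction
        smoothed[i] = p - np.dot(p - centroid, normal) * normal
    return smoothed

# Synthetic example: a thin, noisy slab becomes flatter after smoothing.
noisy = np.random.rand(2000, 3) * [0.1, 0.1, 0.002]
print(np.std(noisy[:, 2]), np.std(mls_smooth(noisy, radius=0.02)[:, 2]))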

2.4. Point Cloud Registration

Used in computer vision, robotics, and 3D reconstruction, point cloud registration is the process of aligning multiple point clouds, each captured from different positions within the same environment, into a unified coordinate framework. This process results in a consolidated dataset that accurately represents the scene being surveyed [30]. For this experiment, two similar point cloud registration algorithms were used, the Iterative Closest Point (ICP) for local registration and the Random Sample Consensus (RANSAC) for global registration via the Open3D library [31].

2.4.1. Iterative Closest Point Registration (ICP)

This process involves aligning two or more point clouds by identifying a spatial transformation (which may involve scaling, rotation, or translation) that reduces the disparities between them. The ultimate goal is to combine these multiple datasets into a cohesive, globally consistent model or coordinate framework. The process begins with two point clouds and an initial transformation that approximately positions the source point cloud in alignment with the target point cloud. The result is a transformation that closely and accurately aligns the two point clouds.
E(R, t) = \sum_{i=1}^{N} \left\| (R\,p_i + t) - q_i \right\|^2 \qquad (8)
where R is the rotation matrix, t is the translation vector, p i are the points from the source dataset, q i are the corresponding closest points in the target dataset, and N is the number of point pairs.
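A minimal example of point-to-point ICP with the Open3D library [31] is given below; the correspondence threshold and the synthetic test clouds are assumptions made only for demonstration.

import numpy as np
import open3d as o3d

def icp_align(source_pts, target_pts, threshold=0.01, init=np.eye(4)):
    """Point-to-point ICP (minimizing Equation (8)) via the Open3D registration pipeline."""
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_pts))
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_pts))
    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation, result.fitness

# Synthetic usage: align a cloud with a slightly displaced copy of itself.
pts = np.random.rand(500, 3)
shifted = pts + [0.005, -0.003, 0.002]
T, fitness = icp_align(pts, shifted, threshold=0.02)
print(fitness)   # fraction of matched points (the "fitness score")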

2.4.2. Random Sample Consensus Registration (RANSAC)

Another category of registration techniques, referred to as global registration, encompasses algorithms that do not require an initial alignment to start but typically yield less precise results. Here, RANSAC [32] is used for the initial alignment of the two point clouds, and ICP is then used for refinement and comparison.
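The sketch below outlines one way such a two-stage pipeline can be assembled with Open3D [31]: RANSAC over FPFH features provides the coarse global alignment, which ICP then refines. The voxel size, feature radii, and RANSAC criteria are illustrative assumptions, not the settings used in the experiments.

import open3d as o3d

def global_then_local(source, target, voxel=0.01):
    """Coarse RANSAC alignment over FPFH features, refined with point-to-point ICP."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    s_down, s_fpfh = preprocess(source)
    t_down, t_fpfh = preprocess(target)
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        s_down, t_down, s_fpfh, t_fpfh, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    fine = o3d.pipelines.registration.registration_icp(
        source, target, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return fine.transformation, fine.fitness

# Typical call: T, fitness = global_then_local(tvs_cloud, camera_cloud, voxel=0.01)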

2.5. Ground Truth

For comparison with the original objects’ dimensions in the experiment, an Intel RealSense camera was used to measure the proportions of the objects. This camera integrates an advanced stereoscopic vision system, capturing depth information of objects in their field of view in addition to traditional 2D images.

2.6. Reconstruction

Reconstruction uses the screened Poisson algorithm [33], which creates a smooth, continuous surface representing the scanned object or scene from a point cloud. It is based on the Poisson equation, which in a three-dimensional Cartesian coordinate system takes the form:
\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right) \varphi(x, y, z) = f(x, y, z) \qquad (9)
where φ and f are real or complex functions.
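For reference, a screened Poisson reconstruction can be obtained from an oriented point cloud with Open3D [31] as sketched below; the octree depth, the normal-estimation radius, and the file names in the usage comments are hypothetical values chosen for illustration.

import open3d as o3d

def reconstruct_surface(pcd, depth=8):
    """Screened Poisson reconstruction [33] of a triangle mesh from a point cloud.
    Normals are estimated first because the algorithm requires oriented points."""
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    return mesh

# Hypothetical usage:
# mesh = reconstruct_surface(o3d.io.read_point_cloud("skull.ply"))
# o3d.io.write_triangle_mesh("skull_mesh.ply", mesh)   # ready for editing or 3D printing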

3. Methodology and Experimental Results

This section outlines the method used to perform several 3D scans on different objects using two configurations. It describes how a 3D scan performed by the TVS 2 on different sides of an object is compared to scans from an Intel RealSense camera, using a point cloud registration algorithm for the comparison. The results of this process are then presented.

3.1. Design of Experiment

The main experiment involved conducting 3D scans to evaluate the performance of the Technical Vision System (TVS) under various scanning resolutions and window configurations. The objective was to compare the efficiency of 3D scans produced by the TVS 2 across multiple faces of an object, assessing its ability to maintain sufficient detail and accuracy against reference scans from an Intel RealSense camera, in order to minimize scanning time and scan windows when using the TVS; a point cloud registration algorithm was then employed for the analysis.
Factors considered during these experiments included the ambient lighting conditions in the laboratory, adjustments to the scanning aperture's (SA) mirror speed to test for consistency and noise reduction, and the spatial arrangement between the TVS 2 and the rotating table holding the object. Table 1 summarizes the experimental factors and their ranges and levels:
The experiment was conducted using a rotating base (to orient the object relative to the front of the TVS) controlled by a stepper motor and a reduction gear for precise manipulation. The objects chosen for the experiment included letter- and number-shaped figures and a polystyrene foam skull. The number of faces to be scanned for each object was defined, as well as the scanning resolution, determined by the number of points to be obtained, and the number of light sensor readings per scanned point. The object on the rotating base was positioned in two different locations in front of the TVS. These settings are listed in Table 2.
The TVS system scans each pre-defined face of the object and positions the laser at specific points defined as part of the scan window and resolution settings. It records the position data of each TVS component for later compilation, along with the voltage readings from the system’s photosensor, which employs a basic common-collector arrangement with a resistor. Once data collection from the scan window is complete, the rotating base moves a preset number of degrees, generating a file for each point defined within the scan window.
After the scanning process, the data are processed on a personal computer. Using Equations (1)–(3), point clouds for each face of the selected objects are generated. Due to the TVS design, these point clouds tend to include noise and outliers. Therefore, various statistical filters are employed to minimize these data anomalies. An example of this reduction is illustrated in Table 3, showcasing the standard deviation reduction on the X-axis for one face of a polystyrene object in the form of the letter “A”. The noise levels and outlier characteristics observed for the letter ‘A’ were representative of all other foam objects examined in this study. The data undergo smoothing through the Moving Least Squares (MLS) method, resulting in a more accurate and smoother representation of the object surface.
Following the data acquisition for each face, the point clouds of the captured faces are translated, using the center of the rotating table as the new origin for the point cloud. This involves defining two points, P and O, which represent spatial coordinates in three-dimensional space: $P = [P_1, P_2, P_3]^{\top}$ and $O = [O_1, O_2, O_3]^{\top}$.
The distance vector between these two points is $\vec{PO} = O - P$. The magnitudes of both vectors are computed, and the translation vector $V_\alpha$ to be applied is determined. Subsequently, a homogeneous transformation matrix is used to perform the point cloud translation:

P' = \begin{bmatrix} P'_1 \\ P'_2 \\ P'_3 \\ 1 \end{bmatrix} = T_\alpha P = \begin{bmatrix} 1 & 0 & 0 & V_{\alpha 1} \\ 0 & 1 & 0 & V_{\alpha 2} \\ 0 & 0 & 1 & V_{\alpha 3} \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} P_1 \\ P_2 \\ P_3 \\ 1 \end{bmatrix} = P + V_\alpha \qquad (10)
The data are then rotated using simple rotation matrices around the Z-axis.
\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_i \\ Y_i \\ Z_i \end{bmatrix} = \begin{bmatrix} X_f \\ Y_f \\ Z_f \end{bmatrix} \qquad (11)
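A compact sketch of this re-centering and Z-axis rotation applied to each scanned face is given below; the table-center coordinates and the angle step are illustrative values patterned after Table 2, not the exact calibration used.

import numpy as np

def to_table_frame(points, table_center, theta_deg):
    """Translate a face point cloud so the rotating-table center becomes the
    origin, then rotate it about the Z-axis by theta (homogeneous translation
    and Z-axis rotation described above)."""
    theta = np.radians(theta_deg)
    rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
    return (points - np.asarray(table_center)) @ rz.T

# Assembling a volume from faces scanned every 90 degrees (values illustrative).
faces = [np.random.rand(100, 3) for _ in range(4)]
volume = np.vstack([to_table_frame(f, table_center=(0.33, 0.30, 0.10),
                                   theta_deg=90.0 * k) for k, f in enumerate(faces)])
print(volume.shape)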
Entropy measurements are used to provide a quantitative assessment of the randomness or disorder within the point cloud data generated by the Technical Vision System (TVS). Entropy, a concept from information theory, offers a straightforward metric to assess the variability in the data, which can be indicative of noise levels or scanning inconsistencies. The approximate entropy and Shannon entropy measures can be seen in Table 4 and Table 5.
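As an illustration, Shannon entropy can be estimated from a histogram of one coordinate axis as sketched below; the bin count is an assumption of this example, since the estimator settings are not restated here.

import numpy as np

def shannon_entropy(values, bins=64):
    """Shannon entropy (in bits) of a 1D sample, estimated from a histogram."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

# Per-axis entropy of a synthetic cloud; filtered data should score lower.
cloud = np.random.randn(5000, 3)
print([round(shannon_entropy(cloud[:, k]), 3) for k in range(3)])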
After the rotation of the points, a “complete” cloud is generated and a 3D solid is created via the screened Poisson algorithm. This reproduction is ready to edit or 3D print (Figure 5).

3.2. Results of the Experiment

After obtaining a “complete” point cloud and mesh from all faces of the object, the ICP algorithm was used to compare the similarities between the point cloud obtained from the RealSense camera and the TVS 2. This section describes the process and results.

3.2.1. Evaluation of Similarities

Our examination focused on specific measurements of recognizable features of the polystyrene objects (the skull, the letter “A”, and the number “6”), all made from the same material. It was observed that distances critical to the identification of the objects, such as the gap between the eyes and nose on the skull or the space within the letter “A”, consistently fell within the tolerances allowed by the scanning window, as seen in Table 6. This outcome suggests that, due to the discrete nature of the scanning window in the TVS, perfect replication of corner measurements would require a continuous field of view. Moreover, a cloud registration algorithm was used to facilitate feature matching for global registration by first simplifying the camera-based point cloud for an effective comparison with the TVS-generated cloud, and then performing a manual alignment between the two point clouds.
It was observed that the raw data obtained from the TVS could not reach a fitness score greater than 0, which was attributed to an excessive amount of noise in the data prior to processing of the point clouds. It was also observed that the simpler geometric objects, such as the letter “A”, reached volumetric fitness scores as high as 0.96, as seen in Table 7, along with other results. In contrast, the more intricate features of the polystyrene skull posed a challenge, resulting in less precise scans with fitness scores peaking at 0.88 in volumetric form and 0.50 on some of the scanned faces, as shown in Table 8 and accompanied by representative images of the scanned faces of the skull in Table 9, despite certain faces meeting the criteria for key comparison measurements. This effect was attributed to deformations caused by the reflected laser light not reaching the scanning aperture when striking a surface with significant curvature, such as the representation of the zygomatic bones on the skull, which prevented the algorithm from recognizing features in some cases. An example of this deformation can be seen in Figure 6, where part of the TVS-based cloud was aligned with the camera-based point cloud to compare the similarities. It can be observed how the lines of the TVS scan, represented in black, follow the basic structure of the skull and are interrupted in the area representing the zygomatic bones of the object.

3.2.2. Comparison with Unfiltered Data

The benefits of post-processing with this system can be seen by comparing the filtered data with their unfiltered counterparts. In particular, for some of the unfiltered faces of the polystyrene skull, the excessive noise in the data made accurate key measurements impossible, and the global registration algorithms were ineffective, failing to produce a score greater than 0. This highlights the importance of the statistical filtering methods and data processing in mitigating inaccuracies of the TVS, thus enhancing the clarity and usability of the generated point clouds. The entropy reduction achieved on the filtered data can be seen in Table 4 for the approximate entropy and Table 5 for the Shannon entropy. The key measurements of some of the volumes are listed in Table 6, and the fitness scores for the faces of Letter A and Number 6 at position 1 are given in Table 10. Note that evaluation at position 2 was not required for these objects because their faces are mainly flat and produced no laser occlusion. The skull, however, is not flat, and some protuberances occlude the laser from the scanning aperture at position 1 (for example, the nose hiding the cheek), so its measurements were also evaluated at position 2; the corresponding fitness scores for the skull faces at positions 1 and 2 are shown in Table 8. Finally, Table 7 presents the fitness scores for the Letter A, Number 6, and Skull volumes. A live 3D view [34] link has also been provided.

3.2.3. Limitations and Uncertainties

The precision of the measurements and the effectiveness of the registration algorithm are bound by the resolution and fidelity of the scanning process, which is influenced by the material properties of the objects and the scanning environment, as well as the age of the system and the behavior of the laser scanning systems. Furthermore, the discrete nature of the scanning window introduces an unavoidable element of approximation in capturing the fine details of complex geometries. Such factors contribute to the variability in fitness scores and highlight the challenges of achieving high fidelity in 3D scanning and model comparison.

4. Discussion and Conclusions

In this study, the performance of the Technical Vision System (TVS) across various scanning resolutions and window configurations was explored with the aim of obtaining a 3D solid faithful to the original scanned object from the TVS-scanned faces captured using a rotating table, focusing on the accuracy of 3D scans in capturing key features of different polystyrene objects. The original aim of the study was to investigate the inconsistencies observed in the quality of the 3D scans produced by the system when scanning objects located at different positions within the system’s field of view. Significant challenges were posed by noise and other particularities of the TVS, such as the speed of the scanning aperture, which had to be addressed by different mitigation techniques. Among the techniques used, the statistical filters applied for data processing and the MLS algorithm greatly increased the fidelity of the 3D models, as well as the fitness scores obtained when comparing the point clouds. The study succeeded in identifying areas of the field of view that produce better results, yielding 3D solids that are comparable to the original in their key features and maintain a similar fitness score to the original object while reducing the approximate entropy of the data. The experiments were performed in a controlled environment because the original and main purpose of the research was to scan and digitally reconstruct objects using a rotating table. This setup allowed multiple scans of different faces of an object to be obtained for reverse engineering. However, after recognizing the potential of the TVS for 3D scanning at reduced resolutions and optimal scanning windows, we explored the use of statistical data filtering techniques and regression models. These methods enable the generation of a 3D scan that remains recognizable with the least amount of 3D points. This approach balances the need for accurate object representation with the efficiency required for practical applications, such as robotics navigation. By reducing the number of points needed for a reliable scan, a scanning system like the TVS could more quickly process and understand the scene in the robot’s field of view, supported by deep learning classification algorithms. A potential application in robotics navigation involves balancing the number of points scanned with the time required to comprehend the scene, optimizing both accuracy and processing efficiency.
In this study, it was possible to obtain recognizable solid figures from low-quality, high-noise point clouds created using the Technical Vision System, while also reducing the entropy in the data by up to 57%. The data obtained from the system represented multiple faces of a rotated object; each face was filtered using statistical methods, such as the modified Thompson Tau and chi-square distribution quantiles. The data were then smoothed using the Moving Least Squares algorithm and rotated to be assembled into a single point cloud representative of the original volume. To validate the accuracy of these algorithms, the data were compared to point clouds from an Intel RealSense camera using point cloud registration. The comparison was performed both for each face scanned by the TVS and for the assembled volumetric representations; these results are shown in Table 8 for the objects’ faces and in Table 7 for the volumes. The results show that the maximum fitness score obtained for an individual face was 50.6%, and once the faces were rotated and assembled into a single point cloud representation of the original volume, the maximum fitness score obtained was 96%. It was also shown that the first position selected within the field of view of the system was the optimal place to scan objects for this particular system.
However, the limitations of our experimental setup, including the choice of object materials and the controlled laboratory environment, suggest the need for further studies to assess the applicability of these techniques in varied scanning contexts. Future research should also consider the integration of alternative algorithms and the exploration of adaptive scanning configurations to address the nuanced challenges presented by different objects’ geometrical complexities. The experimental findings also confirmed that the quality of the scans varied significantly depending on the position, with Position 2 consistently yielding inferior results compared to Position 1. This variation can be attributed to factors such as the angle of incidence of the laser, the distance from the scanner, or potential obstructions that affect the scanner’s line of sight at different positions.
Ultimately, this study contributes to the evolution of the TVS, adding the capability to obtain complete 3D reconstruction of objects, while also offering insights into the optimization of the scanning processes and the potential for enhanced model reconstruction. By continuing to refine these methods, the intricacies of digital modeling can be integrated with the capabilities of the system, paving the way for broader applications in industrial design, heritage conservation, and beyond.
The findings of this work clearly indicate that Position 1 consistently delivers higher-quality scans compared to Position 2, highlighting the necessity of optimal scanner placement to ensure data accuracy. To address the challenges posed by less-favorable scanning positions, future development of the TVS should, therefore, focus on exploring adaptive scanning strategies that can mitigate the effects of sub-optimal positioning and increase the flexibility of the system. The use of more advanced data filtering and enhancement techniques will also be useful in improving the fidelity of 3D scans across varied operational environments, as well as in gaining the ability to compensate for positional disadvantages through more advanced data processing techniques. Additionally, ongoing improvements in the system’s design, operation, and hardware could further stabilize scanning outcomes, ensuring more consistent quality across different operational settings.

Author Contributions

Conceptualization, W.F.-F. and O.S.; methodology, W.F.-F. and I.Y.A.C.; software, I.Y.A.C., W.G.-G. and J.A.N.-L.; validation, J.C.R.-Q.; formal analysis, W.F.-F. and I.Y.A.C.; investigation, W.F.-F. and I.Y.A.C.; resources, W.F.-F. and O.S.; data curation, I.Y.A.C.; writing—original draft preparation, I.Y.A.C. and W.F.-F.; writing—review and editing, J.E.M.-V. and J.C.R.-Q.; visualization, W.F.-F.; supervision, W.F.-F.; project administration, W.F.-F.; funding acquisition, O.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The point cloud data are available at https://uabc.appliedphysics.work/datapointclouds2024/MDPIData.zip (accessed on 17 July 2024).

Acknowledgments

This work was made possible thanks to the support of the Universidad Autónoma de Baja California (UABC), Tecnológico Nacional de México, IT de Mexicali (ITM), and Consejo Nacional de Humanidades, Ciencias y Tecnologías (CONAHCYT).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
TVS        Technical Vision System
LP         Laser positioner
SA         Scanning aperture
MLS        Moving Least Squares
CONAHCYT   Consejo Nacional de Humanidades, Ciencias y Tecnologías

References

  1. Atik, M.E.; Duran, Z.; Yanalak, M.; Seker, D.Z.; Ak, A. 3D modeling of historical measurement instruments using photogrammetric and laser scanning techniques. Digit. Appl. Archaeol. Cult. Herit. 2023, 30, e00286. [Google Scholar] [CrossRef]
  2. Kuçak, R.A.; Dervişoğlu, A. The Heritage Building Information Modelling of the Church of St. Sophia in Ohrid, North Macedonia. Mediterr. Archaeol. Archaeom. 2023, 23, 31–43. [Google Scholar] [CrossRef]
  3. Li, X.; Zhao, H.; Liu, Y.; Jiang, H.; Bian, Y. Laser scanning based three dimensional measurement of vegetation canopy structure. Opt. Lasers Eng. 2014, 54, 152–158. [Google Scholar] [CrossRef]
  4. Richardson, J.; Moskal, M.; Bakker, J. Terrestrial Laser Scanning for Vegetation Sampling. Sensors 2014, 14, 20304–20319. [Google Scholar] [CrossRef]
  5. Blais, F.; Beraldin, J.A.; El-Hakim, S. Range Error Analysis of an Integrated Time-of-Flight, Triangulation, and Photogrammetric 3D Laser Scanning System. In Proceedings of the SPIE Proceedings, Orlando, FL, USA, 24–28 April 2000. [Google Scholar]
  6. Edl, M.; Mizerák, M.; Trojan, J. 3D laser scanners, history and applications. Acta Simulatio 2018, 4, 1–5. [Google Scholar] [CrossRef]
  7. Hake, F.; Lippmann, P.; Alkhatib, H.; Oettel, V.; Neumann, I. Automated damage detection for port structures using machine learning algorithms in heightfields. Appl. Geomat. 2023, 15, 349–357. [Google Scholar] [CrossRef]
  8. Moon, S.H.; Nam, O.w.; Choi, Y.S. Accuracy Comparison Analysis of Mobile Handheld and Backpack Laser Scanners for Indoor 3D Mapping. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2023, 41, 529–536. [Google Scholar] [CrossRef]
  9. Luo, R.; Zhou, Z.; Chu, X.; Ma, W.; Meng, J. 3D deformation monitoring method for temporary structures based on multi-thread LiDAR point cloud. Measurement 2022, 200, 111545. [Google Scholar] [CrossRef]
  10. Qi, J.; Lin, E.S.; Tan, P.Y.; Zhang, X.; Ho, R.; Sia, A.; Olszewska-Guizzo, A.; Waykool, R. Representing the landscape visual quality of residential green spaces in Singapore with 3D spatial metrics. Urban For. Urban Green. 2023, 90, 128132. [Google Scholar] [CrossRef]
  11. Hao, W.; Zhang, W.; Liang, W.; Xiao, Z.; Jin, H. Scene recognition for 3D point clouds: A review. Guangxue Jingmi Gongcheng/Opt. Precis. Eng. 2022, 30, 1988–2005. [Google Scholar] [CrossRef]
  12. Stjepandić, J.; Bondar, S.; Korol, W. Object Recognition Findings in a Built Environment. In DigiTwin: An Approach for Production Process Optimization in a Built Environment; Springer Series in Advanced Manufacturing; Springer: Cham, Switzerland, 2022; pp. 155–179. [Google Scholar] [CrossRef]
  13. Kang, L.; Payeur, P. Object Recognition for Autonomous Vehicles from Combined Color and LiDAR Data. Int. J. Comput. Their Appl. 2023, 30, 207–222. [Google Scholar]
  14. Intwala, A.M.; Magikar, A. A review on process of 3D Model Reconstruction. In Proceedings of the 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT), Chennai, India, 3–5 March 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 2851–2855. [Google Scholar]
  15. Kaczmarek, A.L.; Blaschitz, B. Equal baseline camera array—Calibration, testbed and applications. Appl. Sci. 2021, 11, 8464. [Google Scholar] [CrossRef]
  16. Siekański, P.; Magda, K.; Malowany, K.; Rutkiewicz, J.; Styk, A.; Krzesłowski, J.; Kowaluk, T.; Zagórski, A. On-Line Laser Triangulation Scanner for Wood Logs Surface Geometry Measurement. Sensors 2019, 19, 1074. [Google Scholar] [CrossRef]
  17. Dawei, T.; Hao, L.; Xi, Z. Digital three-dimensional reconstruction technology of cultural relics. Laser Optoelectron. Prog. 2019, 56, 191504. [Google Scholar] [CrossRef]
  18. Rehman, Y.; Uddin, H.M.A.; Siddique, T.H.M.; Haris; Jafri, S.R.U.N.; Ahmed, A. Comparison of Camera and Laser Scanner based 3D Point Cloud. In Proceedings of the 2019 4th International Conference on Emerging Trends in Engineering, Sciences and Technology (ICEEST), Karachi, Pakistan, 10–11 December 2019; pp. 1–5. [Google Scholar] [CrossRef]
  19. Zywanowski, K.; Banaszczyk, A.; Nowicki, M. Comparison of camera-based and 3D LiDAR-based place recognition across weather conditions. In Proceedings of the 2020 16th International Conference on Control, Automation, Robotics and Vision (ICARCV), Shenzhen, China, 13–15 December 2020; pp. 886–891. [Google Scholar] [CrossRef]
  20. Han, X.F.; Jin, J.S.; Wang, M.J.; Jiang, W.; Gao, L.; Xiao, L. A review of algorithms for filtering the 3D point cloud. Signal Process. Image Commun. 2017, 57, 103–112. [Google Scholar] [CrossRef]
  21. Jia, C.C.; Wang, C.J.; Yang, T.; Fan, B.H.; He, F.G. A 3D Point Cloud Filtering Algorithm based on Surface Variation Factor Classification. Procedia Comput. Sci. 2019, 154, 54–61. [Google Scholar] [CrossRef]
  22. Lee, S.H.; Kim, C.S. SAF-Nets: Shape-Adaptive Filter Networks for 3D point cloud processing. J. Vis. Commun. Image Represent. 2021, 79, 103246. [Google Scholar] [CrossRef]
  23. Flores-Fuentes, W.; Trujillo-Hernández, G.; Alba Corpus, I.Y.; Rodríguez-Quiñonez, J.C.; Miranda-Vega, J.E.; Hernández-Balbuena, D.; Murrieta-Rico, F.N.; Sergiyenko, O. 3D spatial measurement for model reconstruction: A review. Measurement 2023, 207, 112321. [Google Scholar] [CrossRef]
  24. Garcia-Gutierrez, X.D.J.; Alaniz-Plata, R.; Trujillo-Hernandez, G.; Alba Corpus, I.Y.; Flores-Fuentes, W.; Rodriguez-Quinonez, J.C.; Hernandez-Balbuena, D.; Sergiyenko, O.; Gonzalez-Navarro, F.F.; Mercorelli, P.; et al. Obstacle Coordinates Transformation from TVS Body-Frame to AGV Navigation-Frame. In Proceedings of the 2022 IEEE 31st International Symposium on Industrial Electronics (ISIE), Anchorage, AK, USA, 1–3 June 2022; Volume 2022, pp. 589–593. [Google Scholar] [CrossRef]
  25. Sepulveda-Valdez, C.; Sergiyenko, O.; Alaniz-Plata, R.; Nunez-Lopez, J.A.; Tyrsa, V.; Flores-Fuentes, W.; Rodriguez-Quinonez, J.C.; Mercorelli, P.; Kolendovska, M.; Kartashov, V.; et al. Laser Scanning Point Cloud Improvement by Implementation of RANSAC for Pipeline Inspection Application. In Proceedings of the IECON 2023-49th Annual Conference of the IEEE Industrial Electronics Society, Singapore, 16–19 October 2023. [Google Scholar] [CrossRef]
  26. Vinutha, H.P.; Poornima, B.; Sagar, B.M. Detection of Outliers Using Interquartile Range Technique from Intrusion Dataset. In Information and Decision Sciences; Satapathy, S.C., Tavares, J.M.R., Bhateja, V., Mohanty, J.R., Eds.; Springer: Singapore, 2018; pp. 511–518. [Google Scholar]
  27. Thompson, R. A Note on Restricted Maximum Likelihood Estimation with an Alternative Outlier Model. J. R. Stat. Society. Ser. B (Methodol.) 1985, 47, 53–55. [Google Scholar] [CrossRef]
  28. Filzmoser, P.; Reimann, C.; Garrett, R. A Multivariate Outlier Detection Method; Citeseer: University Park, PA, USA, 2004. [Google Scholar]
  29. Levin, D. The approximation power of moving least-squares. Math. Comput. 1998, 67, 1517–1532. [Google Scholar] [CrossRef]
  30. Brightman, N.; Fan, L. A Brief Overview of the Current State, Challenging Issues and Future Directions of Point Cloud Registration. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, X-3/W1-2022, 17–23. [Google Scholar] [CrossRef]
  31. Zhou, Q.Y.; Park, J.; Koltun, V. Open3D: A Modern Library for 3D Data Processing. arXiv 2018, arXiv:1801.09847. [Google Scholar]
  32. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  33. Kazhdan, M.; Bolitho, M.; Hoppe, H. Poisson surface reconstruction. In Proceedings of the Fourth Eurographics Symposium on Geometry Processing, Cagliari, Sardinia, Italy, 26–28 June 2006; Volume 7. [Google Scholar]
  34. Massot, M.; Corpus, I. ivancorpus/pointcloud_web_viewer: Basic Release, 2024. Available online: https://zenodo.org/records/11081499 (accessed on 17 July 2024).
Figure 1. Point cloud creation in TVS2.
Figure 2. Flowchart for 3D data processing and entropy reduction for reconstruction.
Figure 3. Laser positioner of TVS2.
Figure 4. Scanning aperture of TVS2.
Figure 5. From point cloud to 3D print.
Figure 6. Skull face scanned by TVS.
Table 1. Experimentation factors.
Factor                        Level/Range     Δ
Laboratory Light              OFF (<1 Lux)    -
Photosensor Voltage           3.3 V           -
Amplifier Resistance          830 kΩ          -
SA Rotating Speed             300 rpm         -
Depth Scanning Distance x     55–85 cm        35 cm
Table 2. Experiment settings for scanned foam objects.
Object     Rotating Table Angle Step   Obtained Faces   Position of Rotating Table in TVS (x, y, z) in cm   Scanning Window y × z
Letter A   90.0°                       4                POS1 (33, 30, 10)                                   13° × 15°
Number 6   90.0°                       4                POS1 (33, 30, 10)                                   13° × 15°
Skull      90.0°                       4                POS1 (33, 30, 10) and POS2 (21, 23, 10)             13° × 15°
Skull      45.0°                       8                POS1 (33, 30, 10) and POS2 (21, 23, 10)             13° × 15°
Skull      22.5°                       16               POS1 (33, 30, 10) and POS2 (21, 23, 10)             13° × 15°
Table 3. Standard deviation of face of foam letter A.
Method/Dataset            Quantity of 3D Points   Standard Deviation
Original                  18,462                  1.1100
Interquartile             9232                    0.2350
Thompson Tau α = 0.01     17,239                  0.5475
Thompson Tau α = 0.1      14,026                  0.3751
Chi2 97.5%                18,116                  0.6900
Chi2 90%                  17,916                  0.6420
Table 4. Approximate entropy calculated for each volume.
Object     Position   Rotating Table Angle Step   Obtained Faces   Approx. Entropy before Filtering   Approx. Entropy after Filtering
Letter A   Pos1       90.0°                       4                0.384                              0.244
Number 6   Pos1       90.0°                       4                0.449                              0.252
Skull      Pos1       45.0°                       8                0.434                              0.295
Skull      Pos1       90.0°                       4                0.357                              0.267
Skull      Pos1       22.5°                       16               1.155                              0.699
Skull      Pos2       45.0°                       8                0.591                              0.362
Skull      Pos2       90.0°                       4                0.688                              0.340
Skull      Pos2       22.5°                       16               1.391                              0.591
Table 5. Shannon entropy calculated for each volume.
Object     Position   Rotating Table Angle Step   Obtained Faces   Shannon Entropy before Filtering   Shannon Entropy after Filtering
Letter A   Pos1       90.0°                       4                3.515                              2.984
Number 6   Pos1       90.0°                       4                3.007                              2.878
Skull      Pos1       45.0°                       8                4.365                              3.899
Skull      Pos1       90.0°                       4                3.850                              3.162
Skull      Pos1       22.5°                       16               4.835                              4.023
Skull      Pos2       45.0°                       8                4.871                              3.846
Skull      Pos2       90.0°                       4                3.576                              3.104
Skull      Pos2       22.5°                       16               3.777                              3.255
Table 6. Measurement comparison of objects scanned in both locations from the TVS coordinate reference.
Object     Position   Measured Section              Target (cm)   TVS Scanned (cm)
Letter A   POS1       Bottom to top                 15.8          16.0
Letter A   POS1       Central aperture (vertical)   4.2           4.5
Number 6   POS1       Bottom to top                 9.6           9.7
Number 6   POS1       Central aperture (vertical)   2.2           2.2
Skull      POS1       Center of eye to nose         3.5           3.6
Skull      POS1       Jaw to top                    13.8          14.0
Skull      POS2       Center of eye to nose         3.5           3.7
Skull      POS2       Jaw to top                    13.8          14.3
Table 7. Fitness scores of volumes.
Object     Position   Rotation   Fitness Score
Letter A   Pos1       90.0°      0.961
Number 6   Pos1       90.0°      0.919
Skull      Pos1       90.0°      0.219
Skull      Pos1       45.0°      0.853
Skull      Pos1       22.5°      0.883
Skull      Pos2       90.0°      0.118
Skull      Pos2       45.0°      0.319
Skull      Pos2       22.5°      0.716
(Flat 3D images and external 3D view links for each volume are omitted here; see the live 3D viewer [34].)
Table 8. Fitness scores of faces normalized from a maximum of 50.6% and minimum of 0%. Object: foam skull.
Rotation     Position 1 Maximum Fitness Score   Position 2 Maximum Fitness Score
Rot1of16     0.543                              0.260
Rot2of16     0.943                              0.202
Rot3of16     1.000                              0.159
Rot4of16     0.624                              0.000
Rot5of16     0.604                              0.481
Rot6of16     0.339                              0.159
Rot7of16     0.000                              0.000
Rot8of16     0.000                              0.000
Rot9of16     0.000                              0.000
Rot10of16    0.513                              0.000
Rot11of16    0.549                              0.000
Rot12of16    0.381                              0.149
Rot13of16    0.876                              0.206
Rot14of16    0.567                              0.240
Rot15of16    0.000                              0.503
Rot16of16    0.583                              0.381
Rot1of8      0.523                              0.130
Rot2of8      0.765                              0.210
Rot3of8      0.824                              0.015
Rot4of8      0.098                              0.220
Rot5of8      0.448                              0.080
Rot6of8      0.757                              0.402
Rot7of8      0.947                              0.341
Rot8of8      0.662                              0.543
Rot1of4      0.543                              0.341
Rot2of4      0.684                              0.143
Rot3of4      NA                                 0.108
Rot4of4      0.583                              0.106
Table 9. Visual representation of objects’ faces used to calculate the fitness scores after spatial coordinate cloud processing. Red points correspond to the target, green points correspond to TVS measurements.
[Image grid omitted: one panel per scanned face, labeled LetterARot1of4–Rot4of4, Number6Rot1of4–Rot4of4, SkullPOS1Rot1of16–Rot16of16, SkullPOS1Rot1of8–Rot8of8, SkullPOS1Rot1of4–Rot4of4, SkullPOS2Rot1of16–Rot16of16, SkullPOS2Rot1of8–Rot8of8, and SkullPOS2Rot1of4–Rot4of4.]
Table 10. Fitness score of faces of foam Letter A and Number 6 at POS1.
Rotation   Letter A Maximum Fitness Score   Number 6 Maximum Fitness Score
Rot1of4    0.913                            0.750
Rot2of4    0.343                            0.263
Rot3of4    0.887                            0.765
Rot4of4    0.440                            0.360
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
