Article

Evaluating the Accuracy of Smartphone-Based Photogrammetry and Videogrammetry in Facial Asymmetry Measurement

by Luiz Carlos Teixeira Coelho 1,2, Matheus Ferreira Coelho Pinho 1, Flávia Martinez de Carvalho 3,4, Ana Luiza Meneguci Moreira Franco 3,4, Omar C. Quispe-Enriquez 2, Francisco Airasca Altónaga 1 and José Luis Lerma 2,*
1 Photogrammetry and Remote Sensing Laboratory (Laboratório de Fotogrametria e Sensoriamento Remoto—LFSR), School of Engineering, Rio de Janeiro State University, Rua São Francisco Xavier 524, PJLF Sala 4044F, Maracanã, Rio de Janeiro 20550-013, RJ, Brazil
2 Photogrammetry and Laser Scanner Research Group (GIFLE), Department of Cartographic Engineering, Geodesy and Photogrammetry, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
3 Laboratory of Epidemiology of Congenital Malformations (LEMC—Laboratório de Epidemiologia das Malformações Congênitas), Instituto Oswaldo Cruz, Avenida Brasil 4365 LEMC, Manguinhos, Rio de Janeiro 21040-360, RJ, Brazil
4 Post-Graduation Programme in Biological Sciences (Genetics), Federal University of Rio de Janeiro, Rua Professor Rodolpho Paulo Rocco, s/n, Prédio do CCS-Bloco A-2, Andar-Sala 099, Ilha do Fundão, Cidade Universitária, Rio de Janeiro 21941-617, RJ, Brazil
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(3), 376; https://doi.org/10.3390/sym17030376
Submission received: 6 February 2025 / Revised: 22 February 2025 / Accepted: 26 February 2025 / Published: 1 March 2025
(This article belongs to the Special Issue Symmetry and Asymmetry in Computer Vision and Graphics)

Abstract: Facial asymmetry presents a significant challenge for health practitioners, including physicians, dentists, and physical therapists. Manual measurements often lack the precision needed for accurate assessments, highlighting the appeal of imaging technologies like structured light scanners and photogrammetric systems. However, high-end commercial systems remain cost-prohibitive, especially for public health services in developing countries. This study aims to evaluate cell-phone-based photogrammetric methods for generating 3D facial models to detect facial asymmetries. For this purpose, 15 patients had their faces scanned with the ACADEMIA 50 3D scanner, as well as with cell phone images and videos using photogrammetry and videogrammetry, resulting in 3D facial models. Each 3D model (from the 3D scanner, photogrammetry, and videogrammetry) was split in half and mirrored to analyze dissimilarities between the two ideally symmetric face sides, using Hausdorff distances between the two half-meshes. These distances were statistically analyzed through various measures and hypothesis tests. The results indicate that, in most cases, both photogrammetric and videogrammetric approaches are as reliable as 3D scanning for detecting facial asymmetries. The benefits and limitations of using images, videos, and 3D scanning are also presented.

1. Introduction

Measuring facial asymmetries is vital in managing patients with craniofacial anomalies in clinical practice with applicability in public health [1,2]. A harmonious face, from the perspective of symmetry, not only influences the perception of beauty but also serves as an indicator of underlying health [3]. In dysmorphology, the accurate identification of craniofacial asymmetries is essential for diagnosing syndromes and deformities, allowing for early interventions and better therapeutic results [4]. In the context of public health, understanding the prevalence and impact of these asymmetries helps experts in the field develop policies and programs aimed at preventing and treating these conditions, which are often linked to genetic and environmental factors [1]. The early detection and correction of craniofacial asymmetries, when possible, can significantly improve self-esteem and quality of life, as it reduces co-morbidities in the medium and long term, highlighting the importance of these measures for a comprehensive approach to health [5,6].
The analysis of craniofacial asymmetries has various applications in medicine, dentistry, and public health, ranging from the diagnosis of anomalies such as plagiocephaly and cleft lip to surgical planning and aesthetic rehabilitation with customized prostheses [7]. It is also used in forensic facial reconstruction for human identification [8]. Facial asymmetry correlates positively with increasing age [9], and the same source shows that the middle and lower thirds of the face are the most affected, probably due to a combined effect of gravity, bone resorption, decreased tissue elasticity, and loss of subcutaneous fullness [10]. Facial asymmetries may also differ according to gender: in a sample of young adults (18–25 years) with self-reported European ancestry, greater variation was observed in male faces than in female faces for all measurements taken [11].
The state of the art in 3D craniofacial photogrammetric/videogrammetric reconstruction systems has advanced significantly in recent years, incorporating technologies such as machine learning and deep neural networks to improve accuracy and efficiency in creating detailed three-dimensional (3D) models [12,13,14,15]. These systems use techniques such as point clouds and photographic images to generate accurate representations of the skull and face and are widely used in areas such as medicine, dentistry, and forensic anthropology [16]. However, the costs involved in implementing these technologies can be high, including the purchase of specialized equipment and advanced software and the need for qualified personnel to operate and interpret the data generated [17]. Despite these challenges, the benefits in terms of diagnostic accuracy and treatment personalization justify investment in effective approaches for the identification of head dysmorphologies. Three-dimensional craniofacial photogrammetry can also work as a tool for better phenotyping; in the case of orofacial clefts, for example, it can help identify the subclinical phenotype and assist genetic studies [18].
Recently, smartphone-based photogrammetry has proven to be an effective low-cost solution for obtaining accurate measurements of a patient's head [19,20]. Photogrammetric solutions for head measurements may use stickers or marks drawn with a makeup pencil. They may also include a coded cap, which compresses the patient's hair, revealing the shape of the cranium more accurately. Assessing facial asymmetries through photogrammetric methods, however, presents some challenges concerning the quality of the 3D models produced. While it has been demonstrated that such solutions can generate 3D models with sufficient accuracy for cranial measurements and validations [19,20,21,22], faces are not entirely static due to involuntary movements. In fact, previous attempts that added machine-learning-based facial landmarks failed to be as accurate as coded markers, downgrading the quality of the resulting 3D models [23]. Commercial photogrammetric systems for capturing facial data typically utilize synchronized arrays of cameras, capturing multiple images simultaneously [24]. This setup is, however, impractical to replicate on cell phones shooting with a single camera.
Fully covering a person's head with coded fabric or stickers is impractical. As a result, algorithms for accurately measuring faces and generating 3D models from photogrammetry must leverage natural facial landmarks that provide high-contrast features, making them suitable for the automatic detection of homologous features. Nevertheless, certain regions of the face, such as the chin and forehead, may lack sufficient contrast, potentially leading to increased errors in photo alignment or orientation. It is also worth noting that every face exhibits a degree of asymmetry, with only particularly extreme cases being medically classified as pathological [25]. Thus, a certain level of measurement uncertainty can be acceptable when implementing a cell phone photogrammetric solution for facial dysmorphologies.
Videogrammetry is also considered a potential alternative to single-shot photographs for two main reasons. First, most cell phones stabilize the focal distance and maintain it consistently while recording videos [26], ensuring uniformity and guaranteeing a more robust camera geometry. Second, videogrammetry reduces the need for cumbersome movements when capturing individual images, which can, otherwise, make it difficult to produce high-quality overlapping images.
This study aims to explore the potential of using low-cost, cell phone-based photogrammetric techniques to capture, process, and align images of a person's head, generating a 3D model that is accurate enough for measuring facial asymmetries. Such 3D models might have a significant impact on various medical applications. By comparing the 3D models generated from images, videos, and 3D scanning and applying statistical measures to evaluate their similarity, this research seeks to determine whether these 3D models can serve as a viable, cost-effective alternative to expensive 3D imaging medical solutions. Moreover, this study proposes an innovative and accessible approach based on mobile phone photogrammetry that eliminates the need for specialized data acquisition hardware and optimizes the detection of natural facial features to enhance the accuracy of 3D reconstruction, offering a viable and cost-effective alternative to expensive technology such as 3D scanners or advanced photogrammetric systems.

2. Materials and Methods

The workflow comprised the following tasks:
  • Data collection, which comprises 3D scanning each individual's face and then taking a series of photos and a single short video of the same patient;
  • Data processing, which includes photogrammetric processing of images and video frames, mesh trimming and registration, separation of each mesh into two halves, mirroring of the left half across a reference plane, and subsequent calculation of Hausdorff distances between them;
  • Statistical analysis, which includes comparing Hausdorff distances for the two halves of the same model and comparing statistical measures; this analysis was undertaken by applying statistical coefficients and hypothesis tests.
Figure 1 provides a general scheme for this workflow.
Next, the materials used, the methods employed, and the profile of the volunteer patients participating in the project will be outlined.

2.1. Materials

2.1.1. Orientation Marks

To establish a set of reference points on the subject's face, small adhesive markers were placed on selected facial landmarks, similar to [22] for data collection. Each marker features a unique, non-repetitive pattern, creating a structured framework that allows the software to accurately recognize and align the sequence of images during data processing. In addition, circular retro-reflective marks used for the 3D scanner orientation were placed on the cap and face.

2.1.2. Cell Phone Data Acquisition (Static Images and Video)

The smartphone used for image and video capture is considered a reliable mobile device due to its advanced features, including a fast processor, a high-quality camera, built-in image stabilization, and a 120 Hz display refresh rate, which enable the capture of high-resolution still images (4000 × 3000 pixels) and video (1920 × 1080 pixels) at up to 60 frames per second. Both were captured with the rear wide-angle camera.
The device's performance, detailed in Table 1 [27], meets the specific requirements of the project. By replacing a traditional 3D scanner with a cell phone, which is available at a fraction of the cost, the project opts for a low-cost solution. It is also crucial to highlight the importance of software–hardware compatibility.

2.1.3. Three-Dimensional Scanner

Academia 50 is a professional-grade 3D digitization tool developed by Creaform Inc. (Lévis, QC, Canada) that allows the user to achieve precise and reliable results across a wide range of engineering sectors. For this study, this portable device, based on white-light technology, acted as the reference data collection approach. Its technical specifications state an accuracy of 0.250 mm and a resolution of 0.250 mm [28]. VXElements was used as the platform for running the ACADEMIA 50 scanner. It provides self-positioning with targets, incorporates geometry and texture (with texture mapping and target filling), and improves precision through contour optimization [29].
After calibration, the Academia 50 3D scanner is ready to create registered meshes. For each scanned patient, it produces a textured mesh using its integrated camera. A structured light source is located at the top, as seen in Figure 2; the camera is in the lower central section, and the other two structured white-light sources are in the lower left and right sections.

2.1.4. Photogrammetry/Videogrammetry

Agisoft Metashape Professional 1.7, developed by Agisoft LLC, is well-known software for creating 3D spatial data through the photogrammetric processing of digital images. It stands out among similar packages for its ability to process considerable amounts of information quickly and practically [30].
For the photogrammetric data collection, three image strips were set to cover each patient's face. A conventional workflow was used in Agisoft Metashape to obtain 3D models from both the camera images (photogrammetry) and the video recording (videogrammetry), comprising image alignment (medium quality) (Figure 3), dense point cloud generation (medium quality), and 3D modelling (high quality) without extrapolation, followed by texturing. For videogrammetry, the video recording was uploaded and one frame was extracted every 0.5 s.
Agisoft Metashape utilizes the robust Structure from Motion (SfM) algorithm [31]. Images and video frames were processed in the same manner: all were first imported for alignment/orientation. During the initial alignment/orientation in Agisoft Metashape, the “Generic Preselection” option was applied, resulting in a sparse point cloud. For the dense point cloud, medium accuracy settings were used, with a maximum of one million key points and tie points per photo. The point cloud was afterwards filtered for confidence (only points with confidence greater than 2 were kept). At the end of the process, a 3D mesh with texture, corresponding to the objects represented in the images, was generated.
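Since Metashape also exposes this workflow through its Python API, the pipeline above can be scripted. The following is a minimal sketch assuming the version 1.x API naming; the file names are hypothetical and the parameter values are illustrative, not the exact settings used in this study.

```python
# Sketch of the described Metashape pipeline via its Python API (v1.x naming).
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["IMG_0001.jpg", "IMG_0002.jpg"])  # hypothetical file list

# Alignment with "Generic Preselection" -> sparse point cloud.
chunk.matchPhotos(downscale=2,                # "medium" alignment quality
                  generic_preselection=True,
                  keypoint_limit=1000000,     # per-photo key point cap
                  tiepoint_limit=1000000)     # per-photo tie point cap
chunk.alignCameras()

# Dense point cloud at medium quality, keeping per-point confidence values.
chunk.buildDepthMaps(downscale=4)             # "medium" dense quality
chunk.buildDenseCloud(point_confidence=True)
# In the study, points with confidence <= 2 were then filtered out
# (done interactively; scripted confidence filtering is also possible).

# High-quality mesh without extrapolation, then texture.
chunk.buildModel(surface_type=Metashape.Arbitrary,
                 interpolation=Metashape.DisabledInterpolation,
                 face_count=Metashape.HighFaceCount)
chunk.buildUV()
chunk.buildTexture()
doc.save("face_model.psx")
```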

2.1.5. CloudCompare, Blender, and MeshLab

CloudCompare and MeshLab meet the need for triangular mesh processing software [32,33]. The former, developed through a collaboration between Telecom ParisTech and EDF, and the latter, developed by ISTI-CNR, are free, open-source tools employed interchangeably for operations requiring mesh processing and management. Blender is a powerful free and open-source 3D computer graphics suite used not only for mesh manipulation but also for animation and 3D modeling [34]. The three tools were used at several steps of the proposed methodology: CloudCompare for trimming and registering meshes from all three acquisition methods; Blender for cutting face meshes into two halves according to specific facial landmarks; and, finally, MeshLab for calculating Hausdorff distances, producing distance heatmaps, and providing statistical measures.

2.1.6. Patients

Fifteen volunteers acting as prospective patients were scanned to test the image-based facial assessment approaches: seven women and eight men, all of them cisgender. Five were born in South America, one in China, and nine in Europe. Two of the South Americans and all of the Europeans were Caucasian, the other three South Americans were of mixed ancestry, and the Chinese volunteer was Asian. Ages ranged from 22 to 43 years, with a median of 24. All signed the informed consent form that was pre-approved as part of this research, following the university's required protocol for experiments with humans.

2.2. Methods

This section provides a comprehensive breakdown of each step undertaken in alignment with the general framework illustrated in Figure 1. It elaborates on the systematic acquisition of 3D point clouds, images, and videos, followed by the detailed processing pipeline. This includes the segmentation of meshes into two halves, the computation of Hausdorff distances, and the subsequent statistical analysis of the results. Each phase is described to ensure the clarity and reproducibility of the methodology.

2.2.1. Data Collection

All fifteen patients in their twenties, thirties, and forties, after signing a consent form, were instructed to wear the coded cap with eight stickers attached to their faces, as shown in Figure 4.
A more thorough definition of the chosen landmarks, according to [35,36], would be as follows:
  • Nasion: the anatomical point located at the midline of the skull, where the frontal bone and the two nasal bones intersect;
  • Tragus: the cartilaginous projection located in front of the external ear canal;
  • Zygion: the most lateral point on the zygomatic arch, a prominent bony structure on the side of the skull;
  • Gonion: the most lateral, inferior point on the angle of the mandible;
  • Pogonion: the most forward-projecting point on the anterior surface of the mandible.
The placement of the five additional stickers served two primary purposes. First, it aimed to replicate the distribution of Control Points (CPs) in a photogrammetric block by covering the edges of the surface to be imaged, thereby ensuring a more robust bundle block adjustment. Additionally, stickers with unique markings assist algorithms in identifying homologous points in regions of the face where the skin is smooth and lacks contrast. Conversely, areas near the lips and eyes naturally provide sufficient contrast, offering plenty of identifiable homologous points in stereoscopic images, making additional stickers unnecessary in those regions. Therefore, the proposed arrangement was considered sufficient to provide a robust set of easily identifiable homologous points for photogrammetric processing.
Additional round stickers were applied primarily to the coded cap to enhance the alignment functionality of the 3D scanner. Their main purpose was to improve the quality of the mesh generated by the scanner, but they also served as additional high-contrast marks for photogrammetric processing.
Once fully equipped, each patient was invited to sit in a chair and remain still for data collection (Figure 5). Firstly, their face and neck were scanned using the Academia 50 3D scanner, which was calibrated at the beginning of each scanning session. Afterwards, cell phone pictures were taken from different angles around the face and neck. Finally, a single continuous video was recorded, following a pre-determined path around each patient, similar to that of the cell phone pictures.
For both images and videos, a standardized protocol was established to ensure consistency across patients. A minimum of forty-five images was required for each subject: fifteen following a semicircular path around the upper part of the head, fifteen along a semicircular path around the middle of the individual's head, and fifteen more along a semicircular path around the individual's jaw and neck (Figure 6). The video followed the same path as the images and was captured continuously, without interruption. All cell phone images and videos were taken at a resolution of 4000 × 3000 pixels for static photos and 1920 × 1080 pixels for video frames, avoiding the highest resolutions supported by the smartphone used for this experiment (4K and 8K).
Table 2 summarizes the key data collected for each patient. For video processing, the frames were extracted at a rate of three frames per second, resulting in a number of photos three times the length of each video in seconds. The procedure went smoothly for thirteen out of the fifteen patients, with scanning times ranging from four to seven minutes. However, Patient 11's voluminous beard posed challenges for the scanner (though not for the image and video processing), even with round markers placed in various areas of the beard. Additionally, Patient 8 exhibited significant involuntary facial movements, leading to some artifacts in the scanned model and a few blurred images and video frames (Figure 7).
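To illustrate the frame extraction step, the sketch below pulls frames from a patient video at approximately the 3 fps rate described above, using OpenCV; the file names and output directory are hypothetical.

```python
# Sketch: extract frames at ~3 fps from a patient video for videogrammetry.
import cv2

video = cv2.VideoCapture("patient_01.mp4")        # hypothetical input video
native_fps = video.get(cv2.CAP_PROP_FPS)          # e.g., 60 fps source
step = max(1, int(round(native_fps / 3)))         # keep every n-th frame

index = saved = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if index % step == 0:
        cv2.imwrite(f"frames/patient_01_{saved:04d}.jpg", frame)
        saved += 1
    index += 1
video.release()
print(f"Extracted {saved} frames")
```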

2.2.2. Data Processing

This process resulted in three 3D models per volunteer: a scanner-generated mesh, a mesh derived from photogrammetry, and another from videogrammetry. Each mesh was then trimmed in CloudCompare to retain only the region corresponding roughly to the person's face. Since the photogrammetric meshes were referenced to arbitrary coordinate systems, they were registered to the scanner mesh by applying a scale transformation, ensuring that all three meshes for each subject shared a consistent coordinate system (Figure 8).
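Model registration was carried out manually in CloudCompare. Purely to illustrate the underlying operation, the sketch below runs a scale-aware point-to-point ICP with Open3D, assuming a rough pre-alignment; the file names and correspondence threshold are hypothetical.

```python
# Sketch: register a photogrammetric face mesh (arbitrary units) to the
# scanner mesh (metric) with ICP that also estimates a scale factor.
import open3d as o3d

source = o3d.io.read_point_cloud("photogrammetry_face.ply")  # arbitrary scale
target = o3d.io.read_point_cloud("scanner_face.ply")         # metric reference

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=5.0,  # mm; assumes rough pre-alignment
    estimation_method=o3d.pipelines.registration.
        TransformationEstimationPointToPoint(with_scaling=True))

source.transform(result.transformation)   # apply similarity transformation
o3d.io.write_point_cloud("photogrammetry_face_registered.ply", source)
```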
Once all meshes were properly trimmed and registered, Hausdorff distances were calculated as a potential measure of asymmetry between the two halves of a face. The Hausdorff distance measures the greatest of the distances from each point in one set to the closest point in the other set; it is commonly used to assess shape similarity, for instance when measuring the error of a triangular mesh approximating a surface [37,38]. There are alternatives for calculating distances between objects, such as the Root Mean Squared Error (RMSE), the Chamfer distance [39], and Procrustes analysis [40]. However, the Hausdorff distance was chosen because it serves as a measure of dissimilarity between two sets of points, making it a useful computer graphics tool for determining degrees of asymmetry between the two halves of the same face. Ref. [41] provides the following definition:
Let $A, B \subset \mathbb{R}^n$. The unilateral Hausdorff distance between $A$ and $B$ is calculated as follows:
$$d_{AB} = \sup_{x \in A} \inf_{y \in B} \lVert x - y \rVert$$
And the Hausdorff distance is defined by the following:
$$d_H(A, B) = \max \{ d_{AB}, d_{BA} \}$$
For the calculation of Hausdorff distances, all meshes were cut into two halves with a vertical plane. The calculation was carried out using MeshLab 2023.12 software and its built-in Hausdorff distance algorithm [42]. According to [43], ideally, the top of the head and the center of the chin would be used as references. However, since all volunteers wore the cap, their hairline and part of their forehead were covered. Thus, the references used for splitting faces into two halves were the pogonion, pronasale, and glabella, as shown in Figure 9. This step was performed using Blender 4.2 software [34] with native algorithms.
The second stage involved reflecting the left half, mirroring it across the vertical plane, and then contrasting the two halves for dissimilarity comparisons. Finally, the Hausdorff distance was calculated for each pair of face halves, with a graphical representation of the shortest distances between the two meshes rendered as a heatmap (Figure 10). This step was also entirely carried out using MeshLab 2023.12 software [42].
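To make the mirroring and distance computation concrete, the sketch below mirrors the left half-mesh vertices across an assumed sagittal plane at x = 0 and computes the symmetric Hausdorff distance between the vertex sets with SciPy. This approximates the MeshLab filter used in the study, which samples the surface rather than only its vertices; the file names are hypothetical.

```python
# Sketch: mirror the left half across x = 0 and compute the Hausdorff distance.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

right = np.load("right_half_vertices.npy")   # (N, 3) array, hypothetical file
left = np.load("left_half_vertices.npy")     # (M, 3) array, hypothetical file

# Mirroring across the sagittal plane x = 0 is a sign flip of x.
left_mirrored = left * np.array([-1.0, 1.0, 1.0])

# Symmetric Hausdorff distance: max of the two directed distances.
d_ab = directed_hausdorff(right, left_mirrored)[0]
d_ba = directed_hausdorff(left_mirrored, right)[0]
print(f"Hausdorff distance: {max(d_ab, d_ba):.3f} mm")
```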
These procedures produce datasets composed of heatmaps describing which areas of the individual's face are more asymmetrical than their equivalents on the other side. The heatmaps were then converted into histograms containing the frequency of each distance, alongside heatmaps of the specific dissimilar areas. For each histogram, statistics such as the mean, median, and standard deviation were also calculated.
These statistics were then analyzed for linear association through the Pearson Product–Moment Correlation and through hypothesis tests on the mean, median, and variance of the histograms.

2.2.3. Statistical Analysis

The Pearson Product–Moment Correlation can be understood as a measure of the degree of linear relationship between two random variables; the correlation coefficient thus quantifies the degree of dependence between them [44,45]. Its estimator is defined by the following:
$$\mathrm{corr} = \frac{\mathrm{cov}_{xy}}{s_x s_y}$$
where
$\mathrm{cov}_{xy}$ is the sample covariance;
$s_x$ is the sample standard deviation of the independent variable (in other words, variable $x$);
$s_y$ is the sample standard deviation of the dependent variable (in other words, variable $y$ as a function of $x$: $y = f(x)$).
A strong positive correlation, reflecting a high degree of dependency between variables (and their similarity), results in values close to 1. Values near zero suggest little to no linear relationship between the variables. Negative values, down to −1, indicate an inverse relationship, meaning that as one variable increases, the other decreases. For this procedure, Pearson's Product–Moment Correlation coefficient was computed to evaluate pairwise correlations between the means of Hausdorff distances of meshes generated via photogrammetry, videogrammetry, and 3D scanning. High correlations were anticipated under the assumption that comparable measurement techniques would yield consistent results.
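As a minimal illustration, such a pairwise correlation could be computed with SciPy as follows; the values are placeholders, not the study's data.

```python
# Sketch: Pearson correlation between per-patient mean Hausdorff distances.
from scipy.stats import pearsonr

photo_means = [1.2, 0.9, 1.8, 1.5, 0.7]    # mm, hypothetical values
scanner_means = [1.1, 1.0, 1.7, 1.6, 0.8]  # mm, hypothetical values

r, p = pearsonr(photo_means, scanner_means)
print(f"r = {r:.3f}, p = {p:.3f}")         # r close to 1 -> strong agreement
```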
The Paired t-Test is employed to compare mean differences when observations from two populations of interest are collected in pairs [46,47,48]. This test examines the differences between two observations from the same subject taken under similar conditions. It is a specific instance of the two-sample t-Test, which applies to samples with unknown population means and standard deviations that are assumed to follow a normal distribution. It produces a T-Stat, which must be smaller than the T-Critical value for the null hypothesis to stand, and a p-value, which must be larger than the alpha value and reflects the probability of obtaining the observed results assuming that the null hypothesis is true. In this scenario, the analysis was conducted to determine whether statistically significant differences exist between the means of Hausdorff distances of meshes produced by photogrammetry, videogrammetry, and 3D scanning, evaluated pairwise.
When testing the means of two groups for similarity, its statistic can be simplified as follows:
$$T = \frac{\bar{D}}{S_D / \sqrt{n}}$$
The hypotheses are formulated as follows:
$H_0$: $\mu_1 = \mu_2$;
$H_1$: $\mu_1 \neq \mu_2$;
$H_1$: $\mu_1 > \mu_2$;
$H_1$: $\mu_1 < \mu_2$,
where
$\mu_1$ and $\mu_2$ are the hypothesized means of the two paired groups;
$\bar{D}$ is the mean of the differences between the two samples;
$S_D$ is the standard deviation of the differences between the two samples;
$n$ is the sample size.
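A minimal sketch of the corresponding computation with SciPy, again on placeholder values:

```python
# Sketch: two-tailed Paired t-Test between two methods' per-patient means.
from scipy.stats import ttest_rel

photo_means = [1.2, 0.9, 1.8, 1.5, 0.7]    # mm, hypothetical values
scanner_means = [1.1, 1.0, 1.7, 1.6, 0.8]  # mm, hypothetical values

t_stat, p = ttest_rel(photo_means, scanner_means)
print(f"T = {t_stat:.3f}, p = {p:.3f}")    # p > 0.05 -> fail to reject H0
```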
The Repeated Measures Analysis of Variance (rANOVA) is employed to determine whether there is a statistically significant difference between the means of three or more groups of measures taken from the same subjects [49,50,51]. It is, therefore, an extension of the Paired t-Test. Similarly, it yields an F-Stat, which must be smaller than the F-Critical value, and a p-value, which must be larger than the alpha value. It thus complements the Paired t-Test in helping determine whether statistically significant differences exist between the means of Hausdorff distances of meshes produced by photogrammetry, videogrammetry, and 3D scanning, evaluated together as a group. Its statistic is given below:
$$F = \frac{MS_{\mathrm{group}}}{MS_{\mathrm{error}}}$$
The hypotheses are formulated as follows:
$H_0$: $\mu_1 = \mu_2 = \mu_3 = \cdots = \mu_k$;
$H_1$: at least two hypothesized means are statistically different,
where
$MS_{\mathrm{group}}$ is the mean square of the between-group variance;
$MS_{\mathrm{error}}$ is the mean square of the within-group variance.
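A sketch of such a test on long-format placeholder data, using the AnovaRM class from statsmodels:

```python
# Sketch: repeated measures ANOVA across the three methods (placeholder data).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "method": ["photo", "video", "scanner"] * 3,
    "mean_dist": [1.2, 1.3, 1.1, 0.9, 1.0, 1.0, 1.8, 1.7, 1.7],  # mm
})

result = AnovaRM(df, depvar="mean_dist", subject="patient",
                 within=["method"]).fit()
print(result.anova_table)   # F-Stat and p-value for the "method" factor
```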
The Wilcoxon–Mann–Whitney (or simply Mann–Whitney) test may be used when two samples deviate from the normal distribution but have similar distribution shapes and variances. It is especially useful for assessing the medians of two groups. For this test, the U-Stat must be larger than the U-Critical value for the null hypothesis to stand. In this study, it was employed to determine whether statistically significant differences exist between the medians of Hausdorff distances of meshes produced by photogrammetry, videogrammetry, and 3D scanning, evaluated pairwise. The test uses the ranks of the measurements in the following manner [52]:
$$N = n_1 + n_2$$
$$U_1 = n_1 n_2 + \frac{n_1 (n_1 + 1)}{2} - R_1$$
$$U_2 = n_1 n_2 + \frac{n_2 (n_2 + 1)}{2} - R_2$$
$$U = \min(U_1, U_2)$$
The hypotheses are formulated as follows:
$H_0$: the hypothesized measures are not statistically different;
$H_1$: the hypothesized measures are statistically different,
where
$n_1$ and $n_2$ are the numbers of observations in samples 1 and 2;
$R_1$ and $R_2$ are the sums of the ranks of the observations in samples 1 and 2.
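A corresponding sketch with SciPy on placeholder medians:

```python
# Sketch: Mann-Whitney U test between two methods' per-patient medians.
from scipy.stats import mannwhitneyu

photo_medians = [1.0, 0.8, 1.6, 1.4, 0.6]    # mm, hypothetical values
scanner_medians = [1.0, 0.9, 1.5, 1.5, 0.7]  # mm, hypothetical values

u_stat, p = mannwhitneyu(photo_medians, scanner_medians,
                         alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p:.3f}")
```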
Finally, the Kruskal–Wallis test is, in a certain way, a non-parametric equivalent of ANOVA. According to [53], it determines whether independent groups have the same mean on the ranks, rather than on the data themselves. For that reason, it may be used to assess the medians of more than two samples. In parallel with the Mann–Whitney test, it was used to verify whether statistically significant differences exist between the medians of Hausdorff distances of meshes produced by photogrammetry, videogrammetry, and 3D scanning, this time evaluated all at once. It requires a sample size of 5 or more and provides an H-Stat and a p-value, which must be larger than the alpha value. Its statistic is given by the following:
$$H = \frac{12}{N(N+1)} \sum_{i=1}^{k} \frac{R_i^2}{n_i} - 3(N+1)$$
The hypotheses are formulated as follows:
$H_0$: the hypothesized measures are not statistically different;
$H_1$: at least two hypothesized measures are statistically different,
where
$n_i$ is the size of sample $i$;
$N$ is the total sample size;
$k$ is the number of groups being compared;
$R_i$ is the sum of the ranks of the observations in sample $i$.
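And the same idea for all three groups at once, again on placeholder values:

```python
# Sketch: Kruskal-Wallis test across the three sets of per-patient medians.
from scipy.stats import kruskal

photo = [1.0, 0.8, 1.6, 1.4, 0.6]      # mm, hypothetical values
video = [1.1, 0.9, 1.5, 1.4, 0.7]      # mm, hypothetical values
scanner = [1.0, 0.9, 1.5, 1.5, 0.7]    # mm, hypothetical values

h_stat, p = kruskal(photo, video, scanner)
print(f"H = {h_stat:.3f}, p = {p:.3f}")  # p > 0.05 -> fail to reject H0
```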
The four aforementioned tests thus compare meshes of the same subject extensively, both in pairs and as a group. They provide statistically grounded answers that help evaluate whether meshes obtained from photogrammetry and videogrammetry are just as effective as their 3D scanner counterparts, with special emphasis on determining facial asymmetries between the two halves of a face.

3. Results

3.1. Histograms and Measures of Central Tendency

For all subjects, the spatial distribution of Hausdorff distances was graphically rendered (as shown, for example, in Figure 11).
The initial visual analysis indicates that, in certain regions of each patient's face, the models show notable agreement, though some discrepancies appear, particularly in the models derived from videogrammetry. Since these differences are within the millimeter range, histogram analyses offer a clearer depiction of each model's performance in identifying asymmetries. These histograms are presented in Figure 12.
Overall, the histograms demonstrate similar shapes and widths across the models. For patients with more pronounced asymmetries, such as Patients 3, 4, 6, and 10, the histograms appear broader, reflecting greater variation. Conversely, for more symmetrical patients, such as Patients 2, 7, and 14, the histograms are narrower. While the general shapes align across sensors, some variations occur, with one sensor occasionally showing multiple peaks compared to others. This pattern is particularly evident for Patient 8, whose histogram displays multiple peaks at various frequencies, likely attributable to involuntary facial movements, as previously discussed.
In order to develop a more quantitative analysis and determine whether such small variations are statistically significant, the following summary statistics were calculated for each patient: the mean, the median, and the standard deviation. They are summarized in Table 3 and in Figure 13.
The data suggest that, for most patients, the means and medians are quite consistent across the three sensors, with a discrepancy of 1 mm or less between these measures. However, Patient 8 exhibits a more notable variation, particularly between the means and medians from the videogrammetry and 3D scanner histograms, which suggests potential data collection limitations that may affect this study's reliability for this patient.
Moreover, patients with more asymmetrical facial structures tend to have a higher standard deviation, as observed in the histograms, whilst more symmetrical patients exhibit a notably lower standard deviation.
To deepen the analysis, hypothesis tests were performed to assess whether the datasets from the three sensors differed significantly from each other. This approach aims to clarify whether any observed differences are statistically meaningful.

3.2. Hypothesis Tests

Three two-tailed paired t-tests were performed at a 95% confidence level to compare the mean values of the histograms obtained from each method: one test compared photogrammetry with 3D scanning, a second compared videogrammetry with 3D scanning, and a third compared photogrammetry with videogrammetry.
The hypotheses for these tests were as follows:
H 0 : The means are not statistically different.
H 1 : The means are statistically different.
The outcomes for these tests are presented in Table 4.
The results indicate that all three pairs of datasets exhibit a relatively high linear correlation, suggesting that the mean values of the histograms for the fifteen patients increase or decrease in a similar pattern. The Paired t-Test results do not provide strong evidence against the null hypothesis. The p-values were also considerably high, especially when comparing photogrammetry with the 3D scanner data. Therefore, at the 95% confidence level, we cannot conclude that there is a statistically significant difference between the means.
Additionally, a repeated measures ANOVA (rANOVA) at a confidence level of 95% was conducted to compare the mean histogram values obtained from each method simultaneously. The hypotheses for this test were as follows:
H 0 : The means for all three sets are not statistically different.
H 1 : At least two of the means are statistically different.
A summary of the results is presented in Table 5.
Consistent with the Paired t-Test results, the rANOVA does not indicate a statistically significant difference between the means across methods, even when analyzed simultaneously.
For the medians, two types of non-parametric tests were conducted. Firstly, a Wilcoxon–Mann–Whitney test at a 95% confidence level was used to compare the medians of the histograms obtained from each method, following the same pairs used for the Paired t-Tests.
The hypotheses for these tests were as follows:
H 0 : The medians are not statistically different.
H 1 : The medians are statistically different.
In parallel with these paired tests, the three sets of medians went through a Kruskal–Wallis test, also at a 95% confidence level, with the following hypotheses:
H 0 : The medians for all three sets are not statistically different.
H 1 : At least two of the medians are statistically different.
Table 6 and Table 7 summarize the main findings of both tests.
The results from both types of tests indicate no statistically significant difference between the medians across methods, regardless of whether they are analyzed in pairs or as a group.
Finally, another rANOVA test was conducted, this time on the variances of the three sets of histograms. It also fails to reject the null hypothesis and thus cannot establish any significant difference between the variances for each subject across the three datasets (Table 8).

3.3. Hypothesis Tests After Removing Patient 8

Due to Patient 8's tendency for involuntary facial movements, which contributed to greater variation in the means and medians, the analyses were repeated with Patient 8's data excluded. The updated statistics are presented in Table 9, Table 10, Table 11, Table 12 and Table 13, where Pearson's correlation coefficients and p-values that increased are shown in bold. Overall, these metrics generally increased, with a single exception (the p-value for the "photogrammetry versus 3D scanner" Paired t-Test for the means). This suggests that the results are even more robust without Patient 8's data.

4. Discussion

The present study used a white-light 3D scanner as reference data against which to compare the cell phone photogrammetry and videogrammetry approaches. The ACADEMIA 50 3D scanner is a qualified scanner, but not a high-end system of its class. In fact, in the literature, stationary scanners such as the Danae 100SP, 3dMDface, and Vectra M3 are considered to yield true ground-truth datasets [54]. Nevertheless, the portable 3D scanner yielded the best results in 8 out of 15 patients (cf. Table 3 and Figure 13); videogrammetry worked best in 5 out of 15 patients, and photogrammetry in 3 out of 15. Conversely, the portable 3D scanner also performed worst in 4 out of 15 patients, due to involuntary facial movements, as pointed out in [54]; photogrammetry in 3 out of 15; and videogrammetry in 2 out of 15.
Photogrammetry and videogrammetry performed well herein thanks to the reference frame set by the coded cap and stickers, as reported in [19]. The coded cap helps to minimize errors when automatically locating homologous points on the patient's head, which often has similar textures and patterns. Moreover, the success of videogrammetry in producing accurate models under certain circumstances can be attributed to unique features of specific patients, which may mirror real-world conditions. For instance, Patients 6 and 11 had nose rings, which posed challenges for the scanner. In the case of Patient 4, data collection appeared to proceed smoothly, yet flaws were observed in the nose region of the scanned model, suggesting possible involuntary movements during scanning; Patient 15 exhibited similar issues. Additionally, Patient 11 had a thick beard that the scanner failed to capture effectively, as previously noted.
In each of these cases, videogrammetry produced superior results, yielding smoother models with fewer vertices while preserving essential facial features. Photogrammetry outperformed the other methods in only two instances. For Patient 13, the results were comparable to videogrammetry, and Patient 10 displayed a clear advantage with photogrammetry. However, photogrammetry models were generally rougher, with minor bumps on the cheeks—likely due to image misalignment and residual errors in the point cloud, despite adherence to a strict image processing protocol. Another factor in videogrammetry's superior performance could be the stable image capture and consistent focal distance maintained throughout video acquisition, which supports robust photogrammetric processing.
Statistically, all three modeling techniques were comparable in detecting facial asymmetries. This suggests that even models derived from cell phone photogrammetry may offer sufficient accuracy for the objectives of this study.
Patient 8 was the only case where the results varied significantly across the three models, likely due to the patient's pronounced facial movements, noticeable even during data collection. Excluding this patient's data further improved statistical reliability. These findings highlight opportunities for refining photogrammetric processing of cell phone images in this specific context.
The initial methodology for this study focused on using the Hausdorff distance as a measure to evaluate whether state-of-the-art photogrammetric/videogrammetric approaches could accurately capture the level of dissimilarity between the two halves of a patient's face. While not intended as a definitive metric for assessing facial asymmetry, the Hausdorff distance provided an initial benchmark to determine if cell phone-based photogrammetric methods could approximate the accuracy of 3D scanning in detecting asymmetries. This metric allowed us to achieve a direct comparison across different sensors.
Herein, the cell phone was used to carry out stereophotogrammetry with great flexibility. It is substantially faster (5 to 13 times) than portable 3D scanning (Table 2). Videogrammetry, requiring the same capture time as photogrammetry or slightly less, seems to be the most promising approach for recording facial asymmetries in adults after portable 3D scanning, outperforming the latter for patients with involuntary facial movements. This finding will be explored further with higher-resolution video recording (4K or 8K).
Also, it is crucial to consider that the proper registration of models derived from state-of-the-art photogrammetric and videogrammetric approaches is a key factor. Commercial solutions may produce 3D models with varying coordinate references and scale factors. In this study, model registration was performed manually, potentially introducing additional errors, even though the final analyses were reasonably accurate. This issue is particularly evident in the Hausdorff distance analysis, where mismatches between face halves may have resulted in poorer results, especially around the edges where the meshes did not fully align. Therefore, developing a solution that ensures precise model registration and references to create a consistent metric system is a primary objective for this project.
The scale factor that might affect the photogrammetric and videogrammetric performance has been omitted in the research presented herein. Further research is needed to understand how this methodology applies across different cell phone models, which may produce meshes with varying orientations and scale factors [20].
What is clear, however, is the viability of using cell phone photogrammetric and videogrammetric 3D models for measuring facial asymmetries in adults (as opposed to newborns). The hypothesis tests applied strongly suggest that 3D models obtained from cell phone photogrammetry and videogrammetry are similar to those produced by portable 3D scanners, making them a cost-effective alternative for medical practitioners and public healthcare systems that cannot opt for high-end synchronized multi-camera or multi-scanner systems.
Also, to measure facial asymmetry, the choice between Euclidean distance, Hausdorff distance, and comparison of specific anatomical landmarks depends on the desired level of precision and the specific needs of the study. The Euclidean distance is simple and suitable for quick analyses, but it may not capture all the nuances of a complex facial surface [55]. The Hausdorff distance is more robust and detects small variations, making it ideal for complex facial shapes [56]. Comparing specific anatomical landmarks provides a detailed and specific analysis of facial asymmetry, but it is more complex and prone to errors. Each method has its advantages and limitations, and the choice should be influenced by the context of facial asymmetry study. What remains evident is the need to properly measure facial asymmetries, which are visual markers of a series of pathologies [57]. Combining Hausdorff distances with the comparison of anatomical landmarks can be a good alternative for measuring small facial asymmetries important to detect in dysmorphology studies.
Finally, it is important to point out the limitation of the sample size, which consisted of fifteen individuals. The decision to conduct a small-scale exploratory study was driven by the need to refine the methodology and establish preliminary parameters. Despite this limitation, the study involved forty-five distinct meshes and required extensive computational effort, particularly when comparing the spatial distributions of Hausdorff distances between points of each mesh after being clipped into two halves. The primary objective was to evaluate whether cell phone images (captured using both the photographic camera and the video camera) could generate 3D models of human faces with sufficient accuracy to assess facial asymmetries, employing photogrammetry for still images and videogrammetry for video footage, both facilitated by a coded cap. The findings demonstrate that 3D models derived from cell phone images exhibit a high linear correlation with those produced by precise portable 3D scanners. It is worth noting that this study focused on comparing methodologies rather than analyzing the clinical parameters of the participants. These promising results provide a strong foundation for the next step: validating the proposed method in a larger population. This is considered particularly significant given the approach's cost-effectiveness, which could reduce the financial burden on public health systems.

5. Conclusions

This investigation aimed to determine whether cell phone images from either the photographic camera or the video camera can be used to create 3D models of human faces that are accurate enough to assess facial asymmetries, using photogrammetry for the former and videogrammetry for the latter, in both cases together with a coded cap. The findings indicate that 3D models generated from cell phone images exhibit a high linear correlation with those obtained from accurate portable 3D scanners. However, the final results depend not only on the accuracy of the system; stability and reliability also depend on the behavior of the patient, i.e., a patient's involuntary movements should inform the choice between 3D scanning and image-based photogrammetry/videogrammetry. Moreover, the analysis supports the claim that, when comparing means, medians, and standard deviations through a series of distinct hypothesis tests at a 95% confidence level, there is no significant dissimilarity between these 3D cell phone models and those achieved with accurate and expensive portable 3D scanners. The flexibility of handheld cell phones is demonstrated, and videogrammetry sometimes outperforms (4/15 ≈ 26.7%) both 3D scanning and photogrammetry, with only a limited number of underperformances (2/15 ≈ 13.3%).
While the investigation results show a high correlation between 3D cell phone models and portable 3D scanners, it is important to consider the possible biases and limitations of this approach. The accuracy of the models heavily depends on the quality of the cell phone cameras and the control over patient movements during image capture [58]. Furthermore, comparing 3D models generated by different technologies can introduce variabilities not accounted for in the statistical tests [59]. Combining different methodologies, such as Hausdorff distance [60] and the comparison of anatomical landmarks, could improve the detection of subtle facial asymmetries, but this approach also requires greater precision in landmark marking and may be more susceptible to measurement errors. Therefore, future studies should focus on optimizing and validating these methodologies to ensure consistent and applicable results in clinical practice.
For future studies on facial asymmetry, alternative metrics will be explored, such as measuring distances or ratios between key facial landmarks to derive anthropometric metrics; alternatively, coded targets might also be used to increase accuracy [21]. Additionally, subdividing 3D models into smaller sections based on specific landmark sets [61] could provide more granular insights into asymmetry indices. Future research will expand the sample to encompass a broader age range, with a more stratified representation across gender groups [62], given that facial asymmetries are more prevalent among older individuals and cisgender males. Furthermore, focusing on specific facial regions that are particularly prone to asymmetry, together with biometric analyses, will allow the accuracy and reliability of the proposed methodology to be evaluated in these targeted patients.

Author Contributions

Conceptualization, L.C.T.C., O.C.Q.-E. and J.L.L.; methodology, L.C.T.C., O.C.Q.-E., J.L.L., M.F.C.P., F.M.d.C. and A.L.M.M.F.; software L.C.T.C., J.L.L., M.F.C.P. and F.A.A.; validation, L.C.T.C., M.F.C.P. and F.A.A.; formal analysis, L.C.T.C., O.C.Q.-E. and J.L.L.; investigation, L.C.T.C., O.C.Q.-E. and J.L.L.; resources, J.L.L.; data curation, L.C.T.C. and J.L.L.; writing—original draft preparation, L.C.T.C., M.F.C.P., F.M.d.C., A.L.M.M.F. and F.A.A.; writing—review and editing, O.C.Q.-E. and J.L.L.; visualisation, L.C.T.C. and J.L.L.; supervision, L.C.T.C. and J.L.L.; project administration, L.C.T.C. and J.L.L.; funding acquisition, L.C.T.C. and J.L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Asociación de Científicos Españoles en Brasil (ACEBRA), Fundación Ramón Areces, the Centre de Cooperació al Desenvolupament (CCD—Cooperació 2023) from the Universitat Politècnica de València, and last but not least, the Instituto de Salud Carlos III under project number PI22/01416 and joint financing by the European Union.

Institutional Review Board Statement

This study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Ethics Research Committee of the Universitat Politècnica de València (protocol code no. P03-29-04-2022 on 27 July 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available upon request. Please contact the corresponding author.

Acknowledgments

The authors acknowledge the contributions of the GIFLE team at the Universitat Politècnica de València (especially Juan José Valero Lanzuela and Miriam Cabrelles López), of the LFSR team at Universidade do Estado do Rio de Janeiro (especially Luiz Felipe de Almeida Furtado, Bruna da Costa Alves, Mateus Álvares Sousa and Ygor Demetrio Pereira Dias da Costa), of the LEMC team at Instituto Oswaldo Cruz (especially Fernando Regla Vargas and Bruna dos Reis) and of the Center for Technological Innovation at Fiocruz (Aline Morais and Ana Carolina Carvalho). Matheus Ferreira Coelho Pinho receives a PhD scholarship from Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, Brazil (Process: 88887.849129/2023-00), for the PhD Programme of Computational Sciences and Mathematical Modeling at the Rio de Janeiro State University. Ana Luiza Meneguci Moreira Franco receives a scholarship from Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, Brazil (Process: 88887.832293/2023-00), for the PhD Programme in Genetics at the Federal University of Rio de Janeiro. Francisco Airasca Altónaga is a Mechanical Engineering student at UPV, Spain, and received a Centre de Cooperació al Desenvolupament MERIDIES scholarship to study as an exchange student at the Rio de Janeiro State University.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
3D: Three-Dimensional
CP: Control Point
GIFLE: Grupo de Investigación en Fotogrametría y Laser Escáner
LEMC: Laboratório de Epidemiologia das Malformações Congênitas
LFSR: Laboratório de Fotogrametria e Sensoriamento Remoto
rANOVA: Repeated Measures Analysis of Variance
UPV: Universitat Politècnica de València

References

  1. Thiesen, G.; Gribel, B.F.; Freitas, M.P.M. Facial asymmetry: A current review. Dent. Press J. Orthod. 2015, 20, 110–125. [Google Scholar] [CrossRef] [PubMed]
  2. Yang, J.; Wang, S.; Lin, L. Exploring Progression and Differences in Facial Asymmetry for Hemifacial Microsomia and Isolated Microtia: Insights from Extensive 3D Analysis. Aesthetic Plast. Surg. 2024, 48, 4239–4251. [Google Scholar] [CrossRef]
  3. Leger, K.; Dong, J.; DeBruine, L.M.; Jones, B.C.; Shiramizu, V.K.M. Assessing the Roles of Symmetry, Prototypicality, and Sexual Dimorphism of Face Shape in Health Perceptions. Adapt. Hum. Behav. Physiol. 2024, 10, 18–30. [Google Scholar] [CrossRef]
  4. Calandrelli, R.; Pilato, F.; Massimi, L.; D’Apolito, G.; Tuzza, L.; Gaudino, S. Computed tomography quantitative analysis of cranial vault dysmorphology and severity of facial complex changes in posterior synostotic plagiocephaly patients. Child's Nerv. Syst. 2024, 40, 779–790. [Google Scholar] [CrossRef]
  5. Reddy, N.V.V.; Potturi, A.; Rajan, R.; Jhawar, D.; Bhushan, Y.W.B.; Pasupuleti, A. Facial Asymmetry—Demystifying the Entity. J. Maxillofac. Oral Surg. 2023, 22, 749–761. [Google Scholar] [CrossRef] [PubMed]
  6. Cheong, Y.-W.; Lo, L.-J. Facial Asymmetry: Etiology, Evaluation, and Management. Chang Gung Med. J. 2011, 34, 341–351. Available online: http://cgmj.cgu.edu.tw/ (accessed on 15 October 2024).
  7. Li, H.; Wang, J.; Song, T. 3D Printing Technique Assisted Autologous Costal Cartilage Augmentation Rhinoplasty for Patients with Radix Augmentation Needs and Nasal Deformity after Cleft Lip Repair. J. Clin. Med. 2022, 11, 7439. [Google Scholar] [CrossRef]
  8. Ramos, M.O.D.R.; Curi, J.P.; Baldasso, R.P.; Beaini, T.L. Reconhecimento Facial na Prática Forense: Uma Análise dos Documentos Disponibilizados pelo FISWG [Facial Recognition in Forensic Practice: An Analysis of Documents Made Available by FISWG]. Rev. Bras. Odontol. Legal 2022, 9, 98–113. [Google Scholar] [CrossRef]
  9. Linden, O.E.; He, J.K.; Morrison, C.S.; Sullivan, S.R.; Taylor, H.O.B. The Relationship between Age and Facial Asymmetry. Plast. Reconstr. Surg. 2018, 142, 1145–1152. [Google Scholar] [CrossRef]
  10. Coleman, S.R.; Grover, R. The Anatomy of the Aging Face: Volume Loss and Changes in 3-Dimensional Topography. Aesthet. Surg. J. 2006, 26, S4–S9. [Google Scholar] [CrossRef]
  11. Claes, P.; Walters, M.; Shriver, M.D.; Puts, D.; Gibson, G.; Clement, J.; Baynam, G.; Verbeke, G.; Vandermeulen, D.; Suetens, P. Sexual Dimorphism in Multiple Aspects of 3D Facial Symmetry and Asymmetry Defined by Spatially Dense Geometric Morphometrics. J. Anat. 2012, 217, 294–305. [Google Scholar] [CrossRef] [PubMed]
  12. Singh, P.; Bornstein, M.M.; Hsung, R.T.-C.; Ajmera, D.H.; Leung, Y.Y.; Gu, M. Frontiers in Three-Dimensional Surface Imaging Systems for 3D Face Acquisition in Craniofacial Research and Practice: An Updated Literature Review. Diagnostics 2024, 14, 423. [Google Scholar] [CrossRef] [PubMed]
  13. Baserga, C.; Cappella, A.; Gibelli, D.M.; Sacco, R.; Dolci, C.; Cullati, F.; Giannì, A.B.; Sforza, C. Efficacy of Autologous Fat Grafting in Restoring Facial Symmetry in Linear Morphea-Associated Lesions. Symmetry 2020, 12, 2098. [Google Scholar] [CrossRef]
  14. Codari, M.; Pucciarelli, V.; Stangoni, F.; Zago, M.; Tarabbia, F.; Biglioli, F.; Sforza, C. Facial Thirds–Based Evaluation of Facial Asymmetry Using Stereophotogrammetric Devices: Application to Facial Palsy Subjects. J. Cranio-Maxillofac. Surg. 2017, 45, 76–81. [Google Scholar] [CrossRef]
  15. Kwon, S.-H.; Choi, J.W.; Kim, H.J.; Lee, W.S.; Kim, M.; Shin, J.-W.; Na, J.-I.; Park, K.-C.; Huh, C.-H. Three-Dimensional Photogrammetric Study on Age-Related Facial Characteristics in Korean Females. Ann. Dermatol. 2021, 33, 52–60. [Google Scholar] [CrossRef]
  16. Harshit; Jain, K.; Zlatanova, S. Advancements in Open-Source Photogrammetry with a Point Cloud Standpoint. Appl. Geomat. 2023, 15, 781–794. [Google Scholar] [CrossRef]
  17. Claes, P.; Vandermeulen, D.; De Greef, S.; Willems, G.; Clement, J.G.; Suetens, P. Computerized Craniofacial Reconstruction: Conceptual Framework and Review. Forensic Sci. Int. 2010, 201, 138–145. [Google Scholar] [CrossRef]
  18. Indencleef, K.; Hoskens, H.; Lee, M.K.; White, J.D.; Liu, C.; Eller, R.J.; Naqvi, S.; Wehby, G.L.; Moreno Uribe, L.M.; Hecht, J.T.; et al. The Intersection of the Genetic Architectures of Orofacial Clefts and Normal Facial Variation. Front. Genet. 2021, 12, 626403. [Google Scholar] [CrossRef]
  19. Barbero-García, I.; Lerma, J.L.; Mora-Navarro, G. Fully automatic smartphone-based photogrammetric 3D modelling of infant's heads for cranial deformation analysis. ISPRS J. Photogramm. Remote Sens. 2020, 169, 197–206. [Google Scholar] [CrossRef]
  20. Quispe-Enriquez, O.C.; Valero-Lanzuela, J.J.; Lerma, J.L. Smartphone Photogrammetric Assessment for Head Measurements. Sensors 2023, 23, 9008. [Google Scholar] [CrossRef]
  21. Quispe-Enriquez, O.C.; Valero-Lanzuela, J.J.; Lerma, J.L. Craniofacial 3D Morphometric Analysis with Smartphone-Based Photogrammetry. Sensors 2024, 24, 230. [Google Scholar] [CrossRef]
  22. Baselga, S.; Mora-Navarro, G.; Lerma, J.L. Assessment of Cranial Deformation Indices by Automatic Smartphone-Based Photogrammetric Modelling. Appl. Sci. 2022, 12, 11499. [Google Scholar] [CrossRef]
  23. Barbero-García, I.; Pierdicca, R.; Paolanti, M.; Felicetti, A.; Lerma, J.L. Combining machine learning and close-range photogrammetry for infant's head 3D measurement: A smartphone-based solution. Measurement 2021, 182, 109686. [Google Scholar] [CrossRef]
  24. Gibelli, D.; Pucciarelli, V.; Cappella, A.; Dolci, C.; Sforza, C. Are Portable Stereophotogrammetric Devices Reliable in Facial Imaging? A Validation Study of VECTRA H1 Device. J. Oral Maxillofac. Surg. 2018, 76, 1772–1784. [Google Scholar] [CrossRef]
  25. Cappella, A.; Solazzo, R.; Yang, J.; Hassan, N.M.; Dolci, C.; Gibelli, D.; Tartaglia, G.; Sforza, C. Facial Asymmetry of Italian Children: A Cross-Sectional Analysis of Three-Dimensional Stereophotogrammetric Reference Values. Symmetry 2023, 15, 792. [Google Scholar] [CrossRef]
  26. Blahnik, V.; Schindelbeck, O. Smartphone Imaging Technology and Its Applications. Adv. Opt. Technol. 2021, 10, 145–232. [Google Scholar] [CrossRef]
  27. Camera Specs & Features—Galaxy S22 Ultra, S22+ & S22 5G. Available online: https://www.samsung.com/uk/support/mobile-devices/check-out-the-new-camera-functions-of-the-galaxy-s22-series (accessed on 18 October 2024).
  28. Creaform Incluye el Escáner 3D ACADEMIA 50 a su Paquete de Soluciones Educativas. Available online: https://www.creaform3d.com/es/acerca-de-creaform/sala-de-prensa/comunicados-de-prensa/creaform-incluye-el-escaner-3d-academia-50 (accessed on 18 October 2024).
  29. Paquete de Aplicaciones y Plataforma de Software de Medición 3D. Available online: https://www.creaform3d.com/es/soluciones-de-metrologia/plataformas-de-software-de-aplicaciones-3d (accessed on 18 October 2024).
  30. Agisoft Metashape Features. Available online: https://www.agisoft.com/features/professional-edition/ (accessed on 18 October 2024).
  31. Eltner, A.; Sofia, G. Chapter 1—Structure from Motion Photogrammetric Technique. In Remote Sensing of Geomorphology; Tarolli, P., Mudd, S.M., Eds.; Volume 23 of Developments in Earth Surface Processes; Elsevier: Amsterdam, The Netherlands, 2020; pp. 1–24. [Google Scholar] [CrossRef]
  32. CloudCompare—3D Point Cloud and Mesh Processing Software—Open Source Project. Available online: https://www.danielgm.net/cc/ (accessed on 18 October 2024).
  33. MeshLab Features. Available online: https://www.meshlab.net (accessed on 18 October 2024).
  34. Blender Creation Suite—About. Available online: https://www.blender.org/about/ (accessed on 18 October 2024).
  35. Gripp, K.W.; Slavotinek, A.M.; Hall, J.G.; Allanson, J.E. Handbook of Physical Measurements; Oxford University Press: New York, NY, USA, 2013. [Google Scholar] [CrossRef]
  36. Rupic, I.; Čuković-Bagić, I.; Ivković, V.; Lauc, T. Assessment of Facial Landmarks for Bone Asymmetry in Geometric Morphometric Studies: A Review. South Eur. J. Orthod. Dent. Res. 2023, 7, 29735. [Google Scholar] [CrossRef]
37. Nutanong, S.; Jacox, E.H.; Samet, H. An incremental Hausdorff distance calculation algorithm. Proc. VLDB Endow. 2011, 4, 506–517. [Google Scholar] [CrossRef]
  38. Andújar, C.; Brunet, P.; Ayala, D. Topology-reducing surface simplification using a discrete solid representation. ACM Trans. Graph. (TOG) 2002, 21, 88–105. [Google Scholar] [CrossRef]
  39. Butt, M.A.; Maragos, P. Optimum design of chamfer distance transforms. IEEE Trans. Image Process. 1998, 7, 1477–1484. [Google Scholar] [CrossRef]
40. Gower, J.C. Generalized Procrustes analysis. Psychometrika 1975, 40, 33–51. [Google Scholar] [CrossRef]
  41. Kraft, D. Computing the Hausdorff distance of two sets from their distance functions. Int. J. Comput. Geom. Appl. 2020, 30, 19–49. [Google Scholar] [CrossRef]
  42. Cignoni, P.; Callieri, M.; Corsini, M.; Dellepiane, M.; Ganovelli, F.; Ranzuglia, G. MeshLab: An Open-Source Mesh Processing Tool. In Proceedings of the Sixth Eurographics Italian Chapter Conference, Salerno, Italy, 2–4 July 2008; Volume 1, pp. 129–136. [Google Scholar]
  43. Arnett, G.W.; Bergman, R.T. Facial keys to orthodontic diagnosis and treatment planning—part II. Am. J. Orthod. Dentofac. Orthop. 1993, 103, 395–411. [Google Scholar] [CrossRef]
44. Coelho, L.C.; e Silva Brito, J.L.N. Fotogrametria Digital [Digital Photogrammetry], 1st ed.; EdUERJ: Rio de Janeiro, RJ, Brazil, 2007. [Google Scholar]
  45. Cohen, I.; Huang, Y.; Chen, J.; Benesty, J. Pearson correlation coefficient. In Noise Reduction in Speech Processing; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1–4. [Google Scholar]
  46. Montgomery, D.C.; Runger, G.C. Applied Statistics and Probability for Engineers; John Wiley and Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
  47. Hsu, H.; Lachenbruch, P.A. Paired t test. In Wiley StatsRef: Statistics Reference Online; John Wiley and Sons: Hoboken, NJ, USA, 2014. [Google Scholar]
  48. Kim, T.K. T test as a parametric statistic. Korean J. Anesthesiol. 2015, 68, 540–546. [Google Scholar] [CrossRef]
  49. Statistics Online Support. Available online: https://sites.utexas.edu/sos/ (accessed on 2 November 2024).
  50. Park, E.; Cho, M.; Ki, C.S. Correct use of repeated measures analysis of variance. Korean J. Lab. Med. 2009, 29, 1–9. [Google Scholar] [CrossRef] [PubMed]
  51. Huck, S.W.; McLean, R.A. Using a repeated measures ANOVA to analyze the data from a pretest-posttest design: A potentially confusing task. Psychol. Bull. 1975, 82, 511. [Google Scholar] [CrossRef]
  52. Zar, J.H. Biostatistical Analysis; Pearson Education International: Upper Saddle River, NJ, USA; London, UK, 2010. [Google Scholar]
  53. Getting Started with the Kruskal-Wallis Test. Available online: https://library.virginia.edu/data/articles/getting-started-with-the-kruskal-wallis-test (accessed on 2 November 2024).
  54. Quinzi, V.; Polizzi, A.; Ronsivalle, V.; Santonocito, S.; Conforte, C.; Manenti, R.J.; Isola, G.; Lo Giudice, A. Facial Scanning Accuracy with Stereophotogrammetry and Smartphone Technology in Children: A Systematic Review. Children 2022, 9, 1390. [Google Scholar] [CrossRef] [PubMed]
  55. Zhu, Y.; Zhao, Y.; Wang, Y. A Review of Three-Dimensional Facial Asymmetry Analysis Methods. Symmetry 2022, 14, 1414. [Google Scholar] [CrossRef]
  56. Bernini, J.M.; Kellenberger, C.J.; Eichenberger, M.; Eliades, T.; Papageorgiou, S.N.; Patcas, R. Quantitative analysis of facial asymmetry based on three-dimensional photography: A valuable indicator for asymmetrical temporomandibular joint affection in juvenile idiopathic arthritis patients? Pediatr. Rheumatol. 2020, 18, 10. [Google Scholar] [CrossRef]
  57. Paradowska-Stolarz, A.M.; Ziomek, M.; Sluzalec-Wieckiewicz, K.; Duś-Ilnicka, I. Most common congenital syndromes with facial asymmetry: A narrative review. Dent. Med. Probl. 2024; online ahead of print. [Google Scholar] [CrossRef]
  58. Luo, Y.; Zhao, M.; Lu, J. Accuracy of Smartphone-Based Three-Dimensional Facial Scanning System: A Systematic Review. Aesthetic Plast. Surg. 2024, 48, 4500–4512. [Google Scholar] [CrossRef] [PubMed]
  59. Van Lint, L.; Christiaens, L.; Stroo, V.; Bila, M.; Willaert, R.; Sun, Y.; Van Dessel, J. Accuracy Comparison of 3D Face Scans Obtained by Portable Stereophotogrammetry and Smartphone Applications. J. Med. Biol. Eng. 2023, 43, 550–560. [Google Scholar] [CrossRef]
  60. Jesorsky, O.; Kirchberg, K.J.; Frischholz, R.W. Robust Face Detection Using the Hausdorff Distance. In Proceedings of the Third International Conference on Audio- and Video-based Biometric Person Authentication, LNCS-2091, Halmstad, Sweden, 6–8 June 2001; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2001; pp. 90–95. [Google Scholar]
  61. Palmer, R.L.; Helmholz, P.; Baynam, G. CLINIFACE: Phenotypic Visualisation and Analysis Using Non-Rigid Registration of 3D Facial Images. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2020, XLIII-B2-2020, 301–308. [Google Scholar] [CrossRef]
  62. Little, A.C.; Jones, B.C.; Waitt, C.; Tiddeman, B.P.; Feinberg, D.R.; Perrett, D.I.; Apicella, C.L.; Marlowe, F.W. Symmetry Is Related to Sexual Dimorphism in Faces: Data Across Culture and Species. PLoS ONE 2008, 3, e2106. [Google Scholar] [CrossRef]
Figure 1. Workflow for the experiment, comprising data collection, data treatment, and statistical analysis.
Figure 2. ACADEMIA 50 3D scanner in use.
Figure 3. A screenshot of Agisoft Metashape showing image orientation for one of the patients who volunteered for this project.
Figure 4. Diagram showing the landmarks on which stickers were placed.
Figure 5. Patient being scanned with the 3D scanner (left) and the cell phone for images and video (right).
Figure 6. Diagram showing the cell phone path for capturing images and video.
Figure 7. Close-up of Patient 8's mouth. Involuntary movements lead to poor-quality meshes, especially for the ACADEMIA 50 3D scanner (c); the effect is less apparent in the photogrammetry (a) and videogrammetry (b) 3D models.
Figure 8. Three-dimensional models for Patient 4 side-by-side: (a) 3D scanner, (b) photogrammetry, (c) videogrammetry.
Figure 9. Facial landmarks defining the plane used to cut the face models into two halves (shown in the diagram in green and red). The model is cut along the plane defined by those landmarks; its left half is then mirrored, and the asymmetries between the two surfaces are calculated.
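The mirroring step in Figure 9 can be prototyped with a few lines of linear algebra. The sketch below is illustrative only and is not the authors' implementation; the landmark coordinates and mesh vertices are hypothetical stand-ins.

```python
# Minimal sketch (not the authors' code): reflect one half of a face mesh
# across the sagittal plane defined by three midline landmarks.
# All coordinates below are hypothetical placeholders, in mm.
import numpy as np

def reflect_across_plane(points, p0, normal):
    """Reflect 3D points across the plane through p0 with the given normal."""
    n = normal / np.linalg.norm(normal)
    d = (points - p0) @ n                  # signed distance to the plane
    return points - 2.0 * np.outer(d, n)

# Hypothetical midline landmarks (e.g., glabella, subnasale, pogonion):
glabella = np.array([0.2, 95.0, 60.0])
subnasale = np.array([0.1, 55.0, 72.0])
pogonion = np.array([0.3, 10.0, 65.0])
normal = np.cross(subnasale - glabella, pogonion - glabella)

# Stand-in for the mesh vertices; split by the plane, mirror the left half.
verts = np.random.default_rng(0).random((5000, 3)) * 100.0
signed = (verts - glabella) @ (normal / np.linalg.norm(normal))
left, right = verts[signed < 0], verts[signed >= 0]
left_mirrored = reflect_across_plane(left, glabella, normal)
```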
Figure 10. Hausdorff distances between the two halves (for the photogrammetry models of Patient 15) represented as a heatmap with higher distances in blue and lower distances in red (units in mm).
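The per-vertex values behind such a heatmap can be sketched with nearest-neighbour queries between the two half-meshes, with the symmetric Hausdorff distance as the maximum of the two directed distances. The arrays below are synthetic stand-ins, not the authors' data or pipeline.

```python
# Sketch: per-vertex nearest-neighbour distances between the mirrored half
# and the fixed half (the values colour-coded in the heatmap), plus the
# symmetric Hausdorff distance. Arrays are hypothetical stand-ins.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
half_a = rng.random((2000, 3)) * 100.0     # mirrored half (mm)
half_b = rng.random((2000, 3)) * 100.0     # fixed half (mm)

d_ab = cKDTree(half_b).query(half_a)[0]    # distance from each a-vertex to b
d_ba = cKDTree(half_a).query(half_b)[0]    # distance from each b-vertex to a
hausdorff = max(d_ab.max(), d_ba.max())    # symmetric Hausdorff distance
```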
Figure 11. Spatial distribution of Hausdorff distances (in mm) between the two halves of Patient 13's face, according to the models derived from (a) photogrammetry, (b) videogrammetry, and (c) the 3D scanner. Areas in blue are more asymmetrical, whereas areas in red are more symmetrical.
Figure 12. Frequency histograms of Hausdorff distances between the two face halves of each patient. In each histogram, the mean is represented as a dotted black line, the standard deviations as dotted red lines, and the median as a dotted green line.
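A histogram in this style can be reproduced, for instance, with matplotlib; the distance sample below is synthetic and for illustration only.

```python
# Sketch: frequency histogram of one patient's Hausdorff distances with
# dotted lines for the mean (black), mean +/- 1 SD (red) and median (green).
import numpy as np
import matplotlib.pyplot as plt

d = np.abs(np.random.default_rng(2).normal(2.0, 1.5, 5000))  # synthetic (mm)
mean, med, sd = d.mean(), np.median(d), d.std(ddof=1)

plt.hist(d, bins=40, color="lightgray", edgecolor="gray")
plt.axvline(mean, color="black", linestyle=":")
plt.axvline(mean - sd, color="red", linestyle=":")
plt.axvline(mean + sd, color="red", linestyle=":")
plt.axvline(med, color="green", linestyle=":")
plt.xlabel("Hausdorff distance (mm)")
plt.ylabel("Frequency")
plt.show()
```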
Figure 13. Data extracted from Table 3, as a box plot.
Table 1. Official specifications for the smartphone chosen for this experiment. For data collection (both photos and videos), the rear wide-angle camera (Rear-Wide Angle row) was used.
Camera | Specifications
Front-Wide | 10 MP, F2.2 [Dual Pixel AF], FOV 80°, 1/3.24″, 1.22 μm
Rear-Ultra-Wide | 12 MP, F2.2 [FF], FOV 120°, 1/2.55″, 1.4 μm
Rear-Wide Angle | 50 MP, F1.8 [Dual Pixel AF], OIS, FOV 85°, 1/1.56″, 1.0 μm with Adaptive Pixel
Rear-Telephoto 1 | 10 MP, F2.4 [3× PDAF], OIS, FOV 36°, 1/3.94″, 1.0 μm
Rear-Space Zoom | 3× Optical Zoom, Super Resolution Zoom up to 30×
Table 2. Data collection for patients with the ACADEMIA 50 3D scanner, and the smartphone in camera and video mode.
Patient | 3D Scanning Time | Photogrammetric Time | Number of Images | Videogrammetry Time | Number of Video Frames
1 | 5 min | 1 min | 52 | 48 s | 144
2 | 6 min | 1 min | 51 | 52 s | 156
3 | 5 min | 1 min | 50 | 49 s | 147
4 | 7 min | 1 min | 48 | 57 s | 171
5 | 5 min | 1 min | 47 | 47 s | 141
6 | 4 min | 1 min | 51 | 47 s | 141
7 | 6 min | 1 min | 47 | 49 s | 147
8 | 13 min | 1 min | 56 | 45 s | 135
9 | 7 min | 1 min | 65 | 59 s | 177
10 | 5 min | 1 min | 53 | 35 s | 105
11 | 11 min | 1 min | 50 | 63 s | 189
12 | 5 min | 1 min | 47 | 47 s | 141
13 | 4 min | 1 min | 48 | 47 s | 141
14 | 6 min | 1 min | 47 | 45 s | 135
15 | 5 min | 1 min | 53 | 61 s | 183
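In every row, the number of video frames is exactly three times the video length in seconds, consistent with frames being sampled at 3 fps. The paper does not name the extraction tool; as one hypothetical route, a tool such as ffmpeg could be invoked as below (the file name is invented).

```python
# Illustrative only: sample 3 frames per second from a patient video.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "patient01.mp4",  # hypothetical input video
    "-vf", "fps=3",                   # sample 3 frames per second
    "frame_%04d.jpg",                 # numbered output frames
], check=True)
```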
Table 3. Measures of central tendency in mm (rounded to one decimal place) based on the histograms of Hausdorff distances between face halves for each patient, for models generated from photogrammetry, videogrammetry, and 3D scanning.
Patient | Photogrammetry (X̄ / Med / s_X) | Videogrammetry (X̄ / Med / s_X) | 3D Scanner (X̄ / Med / s_X)
1 | 2.1 / 1.8 / 1.9 | 1.8 / 1.3 / 1.5 | 1.5 / 1.3 / 1.2
2 | 2.0 / 1.8 / 1.9 | 1.9 / 1.6 / 1.6 | 0.9 / 0.6 / 1.1
3 | 4.9 / 4.5 / 4.8 | 3.8 / 3.1 / 2.5 | 4.2 / 3.7 / 2.3
4 | 3.7 / 3.0 / 2.8 | 2.5 / 2.1 / 1.3 | 3.2 / 2.7 / 2.4
5 | 2.0 / 1.4 / 1.9 | 2.2 / 1.4 / 2.3 | 1.8 / 1.3 / 1.4
6 | 2.9 / 2.1 / 2.8 | 3.1 / 2.7 / 2.2 | 3.8 / 3.6 / 3.6
7 | 2.7 / 2.7 / 2.2 | 1.6 / 1.0 / 1.7 | 1.9 / 2.0 / 1.1
8 | 2.4 / 2.1 / 2.0 | 1.5 / 1.1 / 1.4 | 4.3 / 3.8 / 3.0
9 | 0.8 / 0.6 / 0.8 | 1.7 / 1.4 / 1.3 | 1.8 / 1.5 / 2.1
10 | 3.0 / 2.7 / 2.0 | 3.7 / 2.6 / 3.2 | 4.1 / 4.3 / 3.5
11 | 2.3 / 1.7 / 2.0 | 3.2 / 2.8 / 2.5 | 2.3 / 2.0 / 1.7
12 | 2.0 / 1.7 / 1.5 | 2.6 / 1.6 / 2.7 | 2.0 / 1.6 / 1.4
13 | 2.0 / 1.5 / 1.7 | 2.0 / 1.7 / 1.7 | 2.9 / 2.6 / 2.4
14 | 1.7 / 1.6 / 1.3 | 1.2 / 0.9 / 1.0 | 0.8 / 0.6 / 0.7
15 | 3.0 / 2.6 / 2.3 | 2.4 / 2.3 / 1.2 | 2.9 / 2.5 / 2.5
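Each row of Table 3 summarises one distance histogram. As a sketch only (synthetic sample, not the authors' pipeline), the three statistics per patient and method can be obtained as follows.

```python
# Sketch: mean, median and sample standard deviation of one patient's
# per-vertex distances, rounded to one decimal place as in Table 3.
import numpy as np

d = np.abs(np.random.default_rng(3).normal(2.0, 1.5, 5000))  # synthetic (mm)
x_bar = round(float(d.mean()), 1)          # X-bar (mean)
med = round(float(np.median(d)), 1)        # Med (median)
s_x = round(float(d.std(ddof=1)), 1)       # s_X (sample standard deviation)
print(x_bar, med, s_x)
```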
Table 4. Paired t-Tests for the mean values of the histograms obtained from each method.
Comparison | corr | T | T-Critical (Two-Tailed) | p-Value | Recommendation
Photogrammetry versus 3D Scanner | 0.663 | −0.276 | 2.145 | 0.787 | Do not reject H₀.
Videogrammetry versus 3D Scanner | 0.604 | −0.893 | 2.145 | 0.387 | Do not reject H₀.
Photogrammetry versus Videogrammetry | 0.652 | 0.794 | 2.145 | 0.440 | Do not reject H₀.
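With 15 patients, the two-tailed critical value of 2.145 corresponds to the t distribution with 14 degrees of freedom at α = 0.05. The sketch below reproduces one comparison from the per-patient means in Table 3; the scipy calls are one possible route, not necessarily the authors'.

```python
# Sketch: paired t-test between the photogrammetry and 3D scanner mean
# columns of Table 3 (per-patient mean distances, in mm).
import numpy as np
from scipy import stats

photo = np.array([2.1, 2.0, 4.9, 3.7, 2.0, 2.9, 2.7, 2.4, 0.8, 3.0,
                  2.3, 2.0, 2.0, 1.7, 3.0])
scanner = np.array([1.5, 0.9, 4.2, 3.2, 1.8, 3.8, 1.9, 4.3, 1.8, 4.1,
                    2.3, 2.0, 2.9, 0.8, 2.9])

corr, _ = stats.pearsonr(photo, scanner)         # "corr" column
t, p = stats.ttest_rel(photo, scanner)           # paired t statistic, p-value
t_crit = stats.t.ppf(0.975, df=len(photo) - 1)   # 2.145 for df = 14
```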
Table 5. rMANOVA test for the mean values of the histograms obtained from all three methods.
F | F-Critical | p-Value | Recommendation
0.497 | 3.340 | 0.614 | Do not reject H₀.
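One way to run a repeated-measures ANOVA with method as the within-subject factor is via statsmodels; this is a sketch under that assumption, not the authors' code, and the long-format layout and column names are invented.

```python
# Sketch: repeated-measures ANOVA over the three methods, using the
# per-patient mean distances from Table 3 in long format.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

photo = [2.1, 2.0, 4.9, 3.7, 2.0, 2.9, 2.7, 2.4, 0.8, 3.0,
         2.3, 2.0, 2.0, 1.7, 3.0]
video = [1.8, 1.9, 3.8, 2.5, 2.2, 3.1, 1.6, 1.5, 1.7, 3.7,
         3.2, 2.6, 2.0, 1.2, 2.4]
scanner = [1.5, 0.9, 4.2, 3.2, 1.8, 3.8, 1.9, 4.3, 1.8, 4.1,
           2.3, 2.0, 2.9, 0.8, 2.9]

long = pd.DataFrame({
    "patient": list(range(1, 16)) * 3,
    "method": ["photo"] * 15 + ["video"] * 15 + ["scanner"] * 15,
    "mean_d": photo + video + scanner,
})
res = AnovaRM(long, depvar="mean_d", subject="patient", within=["method"]).fit()
print(res.anova_table)   # F(2, 28) is compared against the 3.340 critical value
```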
Table 6. Wilcoxon–Mann–Whitney test for the median values of the histograms obtained from each method.
Comparison | U | U-Critical | Recommendation
Photogrammetry versus 3D Scanner | 98 | 64 | Do not reject H₀.
Videogrammetry versus 3D Scanner | 75 | 64 | Do not reject H₀.
Photogrammetry versus Videogrammetry | 75 | 64 | Do not reject H₀.
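For this test, H₀ is rejected only when U falls at or below the critical value, so U = 98 > 64 leads to "do not reject". A sketch of one pairwise comparison on the Table 3 medians (the scipy call is one possible route, not necessarily the authors'):

```python
# Sketch: Wilcoxon-Mann-Whitney test on the per-patient medians from
# Table 3 (photogrammetry versus 3D scanner).
from scipy import stats

photo_med = [1.8, 1.8, 4.5, 3.0, 1.4, 2.1, 2.7, 2.1, 0.6, 2.7,
             1.7, 1.7, 1.5, 1.6, 2.6]
scanner_med = [1.3, 0.6, 3.7, 2.7, 1.3, 3.6, 2.0, 3.8, 1.5, 4.3,
               2.0, 1.6, 2.6, 0.6, 2.5]

u1, p = stats.mannwhitneyu(photo_med, scanner_med, alternative="two-sided")
u = min(u1, len(photo_med) * len(scanner_med) - u1)  # classical U statistic
```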
Table 7. Kruskal–Wallis test for the median values of the histograms obtained from all three methods.
H | p-Value | Recommendation
4.475 | 0.107 | Do not reject H₀.
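Similarly, a sketch of the omnibus comparison across all three methods on the same Table 3 medians (again one possible route, not necessarily the authors' code):

```python
# Sketch: Kruskal-Wallis H test across the three methods' per-patient
# medians from Table 3.
from scipy import stats

photo_med = [1.8, 1.8, 4.5, 3.0, 1.4, 2.1, 2.7, 2.1, 0.6, 2.7,
             1.7, 1.7, 1.5, 1.6, 2.6]
video_med = [1.3, 1.6, 3.1, 2.1, 1.4, 2.7, 1.0, 1.1, 1.4, 2.6,
             2.8, 1.6, 1.7, 0.9, 2.3]
scanner_med = [1.3, 0.6, 3.7, 2.7, 1.3, 3.6, 2.0, 3.8, 1.5, 4.3,
               2.0, 1.6, 2.6, 0.6, 2.5]

h, p = stats.kruskal(photo_med, video_med, scanner_med)
```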
Table 8. rMANOVA test for the variances of the histograms obtained from all three methods.
F | F-Critical | p-Value | Recommendation
0.516 | 3.340 | 0.603 | Do not reject H₀.
Table 9. Paired t-Tests for the mean values of the histograms obtained from each method—without Patient 8.
Table 9. Paired t-Tests for the mean values of the histograms obtained from each method—without Patient 8.
Comparison | corr | T | T-Critical (Two-Tailed) | p-Value | Recommendation
Photogrammetry versus 3D Scanner | 0.747 | 0.357 | 2.160 | 0.727 | Do not reject H₀.
Videogrammetry versus 3D Scanner | 0.839 | −0.179 | 2.160 | 0.861 | Do not reject H₀.
Photogrammetry versus Videogrammetry | 0.672 | 0.510 | 2.160 | 0.618 | Do not reject H₀.
Table 10. rMANOVA test for the mean values of the histograms obtained from all three methods—without Patient 8.
F | F-Critical | p-Value | Recommendation
0.152 | 3.369 | 0.860 | Do not reject H₀.
Table 11. Wilcoxon–Mann–Whitney test for the median values of the histograms obtained from each method—without Patient 8.
Comparison | U | U-Critical | Recommendation
Photogrammetry versus 3D Scanner | 135 | 55 | Do not reject H₀.
Videogrammetry versus 3D Scanner | 124 | 55 | Do not reject H₀.
Photogrammetry versus Videogrammetry | 139 | 55 | Do not reject H₀.
Table 12. Kruskal–Wallis test for the median values of the histograms obtained from all three methods—without Patient 8.
H | p-Value | Recommendation
0.605 | 0.739 | Do not reject H₀.
Table 13. rMANOVA test for the variances of the histograms obtained from all three methods—without Patient 8.
F | F-Critical | p-Value | Recommendation
0.472 | 3.369 | 0.629 | Do not reject H₀.
