Article

A Novel Algorithm for Forensic Identification Using Geometric Cranial Patterns in Digital Lateral Cephalometric Radiographs in Forensic Dentistry

by Shahab Kavousinejad 1, Mohsen Yazdanian 1,2, Mohammad Mahboob Kanafi 3 and Elahe Tahmasebi 1,2,*

1 Research Center for Prevention of Oral and Dental Diseases, Baqiyatallah University of Medical Sciences, Tehran 1435916471, Iran
2 School of Dentistry, Baqiyatallah University of Medical Sciences, Tehran 1435916471, Iran
3 Human Genetics Research Centre, Baqiyatallah University of Medical Science, Tehran 1435916471, Iran
* Author to whom correspondence should be addressed.
Diagnostics 2024, 14(17), 1840; https://doi.org/10.3390/diagnostics14171840
Submission received: 3 August 2024 / Revised: 20 August 2024 / Accepted: 21 August 2024 / Published: 23 August 2024
(This article belongs to the Section Medical Imaging and Theranostics)

Abstract:
Lateral cephalometric radiographs are crucial in dentistry and orthodontics for diagnosis and treatment planning. However, their use in forensic identification, especially with burned bodies or in mass disasters, is challenging. AM (antemortem) and PM (postmortem) radiographs can be compared for identification. This study introduces and evaluates a novel algorithm for extracting cranial patterns from digital lateral cephalometric radiographs for identification purposes. Due to the unavailability of AM cephalograms from deceased individuals, the algorithm was tested using pre- and post-treatment cephalograms of living individuals from an orthodontic archive, considered as AM and PM data. The proposed algorithm encodes cranial patterns into a database for future identification. It matches PM cephalograms with AM records, accurately identifying individuals by comparing cranial features. The algorithm achieved an accuracy of 97.5%, a sensitivity of 97.7%, and a specificity of 95.2%, correctly identifying 350 out of 358 cases. The mean similarity score improved from 91.02% to 98.10% after applying the Automatic Error Reduction (AER) function. Intra-observer error analysis showed an average Euclidean distance of 3.07 pixels (SD = 0.73) for repeated landmark selections. The proposed algorithm shows promise for identity recognition based on cranial patterns and could be enhanced with artificial intelligence (AI) algorithms in future studies.

1. Introduction

Radiographic analysis has long been utilized in various medical disciplines to gain valuable insights into internal structures and aid in diagnosis. Cranial radiography includes posterior–anterior (frontal) and lateral cephalometric X-rays, as well as other views such as the submentovertex and occipitomental projections (Waters’ view). Cephalometric analysis is utilized in orthodontics and maxillofacial surgery to diagnose and address potential abnormalities in the jaws and teeth [1]. This analysis involves measuring both angles and distances. By identifying reference points in the craniofacial skeleton and creating reference lines, we can measure the linear distances and angles between these lines. Comparing these measurements to normal values helps detect skeletal and dental anomalies [2]. Other applications include predicting growth through cervical vertebra analysis [3], evaluating pituitary tumors [4] (such as sella turcica enlargement), and monitoring changes from orthodontic and orthognathic surgery treatments [5].
In addition to these benefits, it can also be employed in the identification of bodies, particularly in cases where soft tissue has been completely lost [6]. However, when it comes to forensic identification, accurately determining the identity of individuals based solely on radiographs presents a significant challenge. In such complex scenarios, where traditional identification methods, such as fingerprinting or DNA analysis, may simply not be feasible, the importance of using skeletal [7], hard tissue [8], and dental structures [9] for identity verification becomes evident [10]. This is performed by comparing antemortem (AM) and postmortem (PM) data such as radiographs [11,12].
Cranial hard tissue offers valuable information that can aid in identification purposes [13]. The cranial bones possess unique features and characteristics, including the shape and structure of the skull [14], which can provide clues about an individual’s identity [13]. These characteristics develop throughout an individual’s growth and development, shaped by a complex interplay of genetic and environmental factors [15,16] that produces variations in the shape and geometric dimensions of the cranium among different individuals [17,18]. The cranial base matures around 7–8 years of age, followed by the cranial floor at 11–12 years of age (up to 15 years of age [19]), the neurocranium at 9–10 years of age, and the maxillomandibular structures at 15–16 years of age [14]. It is reported that male and female skulls differ anatomically, which is useful for gender identification [20].
Extensive research has demonstrated the utility of the frontal sinus as a valuable feature for human identification [21,22] and differentiating gender [23]. The frontal sinus is one of four pairs of paranasal sinuses, comprising asymmetrical, air-filled cavities located in the anterior region of the frontal bone. It serves to reduce the weight of the skull and protect the brain from injury. The formation of the frontal sinus begins around the fourth or fifth month of fetal development, with active growth occurring by the age of two or three [24,25]. It becomes radiographically visible by four or five years of age and continues to develop and morphologically change throughout puberty [24,25]. Typically, its growth is complete by age 20 [26], after which it remains stable in adulthood, unless affected by trauma, chronic sinus disease, or tumors [25]. Yoshino et al. [27] developed a classification system to assess the anatomical uniqueness of frontal sinuses, considering factors such as size, shape, asymmetry, and the presence of additional structures. Their findings identified over 20,000 possible variations, demonstrating the significant potential of frontal sinuses for personal identification. Various studies have explored identity verification based on radiological imaging. Beaini et al. [28] reported that using computed tomography (CT) scans allowed for the accurate segmentation and 3D reconstruction of frontal sinus volumes. In their pilot study involving 20 cone-beam CT scans, they demonstrated that 3D models of the frontal sinuses could be reliably matched to the same individual. Successful identification based on frontal sinuses has been achieved by comparing AM radiographs with PM radiographs [29,30,31]. Gómez et al. [32] introduced an AI-based framework for automating forensic comparative radiography, focusing on frontal sinuses. 
Their system includes segmentation, superposition, and decision-making methods, validating high-quality segmentation and achieving the automatic shortlisting of 40% of candidates.
However, it is noteworthy that the accuracy of using only frontal sinuses for identification may be lower than that of considering the entire cranial structure in radiographs. Incorporating all cranial patterns into the identity verification process may therefore significantly enhance accuracy. Cranial structures have the potential to represent an individual’s unique identity, much as fingerprints or faces do [33], so achieving a higher accuracy rate in identity verification is of paramount importance. The mandible is the largest facial bone, and its shape and features, including the gonial angle and mandibular canal, vary by age and gender [34]. Albalawi et al. [35] found that the angle between the gonion and menton, along with ramus dimensions, effectively indicates sexual dimorphism and is useful for sex determination in dental and medicolegal contexts. Bozkurt and Karagol [36] developed a fully automated method for segmenting jaws and teeth in panoramic dental radiographs (OPGs). Their approach achieved high accuracy, with a jaw separation ratio of 0.99 and detection rates of 0.90 for mandibular and 0.92 for maxillary teeth, demonstrating strong potential for automatic human identification. Evaluating several features on lateral skull radiographs, including bigonial width, cranial height, bimaxillary breadth, and other facial measurements, has also been suggested; comparing these features in AM and PM records can provide valuable forensic information [37]. Frontal radiography (PA cephalometric analysis) [37] and cone-beam computed tomography (CBCT) [38] are both used in forensic identification. PA cephalometric analysis, using Caldwell’s or Waters’ view, assesses frontal sinus variations [39]. Frontal sinus variables include sinus area, height, and width measurements.
Cone-beam CT provides 3D images of teeth and surrounding structures, aiding in comparison between AM and PM data, though artifacts from dental restorations can complicate analysis [40]. Although CBCT provides greater accuracy for identification due to its three-dimensional nature, it may not be cost-effective as AM evidence or for PM comparisons.
Several identification methods have been introduced in forensic dentistry, including the analysis of the frontal sinus, age estimation using wrist radiography and OPGs, and identification based on dental characteristics in OPGs, such as restorations, caries, and other features [41]. In previous studies, the use of cranial features in lateral cephalometric radiographs has received less attention. Existing research primarily focuses on frontal sinus features or dental structures, whereas a more comprehensive analysis of cranial features could provide more useful information for identity verification. Moreover, previous studies have not introduced a method (algorithm or software) for identity recognition using cranial skeletal structures in lateral cephalometric radiographs. The algorithm presented in this study introduces a novel approach by automatically extracting and analyzing cranial patterns from lateral cephalometric images. This method aims to address the existing limitations in current identification techniques and has the potential to enhance the accuracy of identity verification. This study presents a novel algorithm for biometric identification, using cranial landmarks in 2D lateral cephalometric radiographs (AM and PM). The algorithm extracts cranial patterns from AM radiographs, encodes them into a database, and utilizes this information for victim identification using PM radiographs. We hypothesized the following: (I) cranial patterns can be extracted from AM lateral cephalometric radiographs and stored in a database; (II) cranial patterns extracted from PM radiographs can be compared with the stored AM patterns to identify the best match; and (III) this process of comparing AM and PM patterns will enable the accurate identification of individuals.

2. Materials and Methods

In this study, a new algorithm called the K-Victim Identification Network (K-VIN) was designed and developed for identity recognition using cranial patterns in digital lateral cephalograms. The K-VIN algorithm aims to store AM cranial structural patterns in a database and, in the event of mass casualties such as a fire incident, match PM cranial structural patterns with the available cases in the database to identify the closest matches.

2.1. The Proposed K-VIN Algorithm

Figure 1 illustrates the K-VIN algorithm, which comprises three parts: new, victim, and identification. For AM radiographs (new), the process involves selecting landmarks, extracting quantitative features, calculating ratios, encoding the data, and storing them in the database. For PM radiographs (victim), the steps include selecting landmarks, extracting quantitative features, calculating ratios, and encoding the data. The final part, identification, involves auto-comparison. Only the selection of landmarks is manual; all other processes are automated.

2.2. Radiographic Landmark Selection for Encoding Cranial AM Structural Patterns

During this stage, the craniofacial bone structure patterns of individuals in the AM phase are encoded using lateral cephalograms. These cephalograms provide valuable information for identification. In this algorithm, eight key landmarks were defined. These landmarks are routinely used in cephalometric analysis within orthodontics and maxillofacial surgery to diagnose skeletal abnormalities and create treatment plans. To use this algorithm, the practitioner first manually determines the landmarks, while the remaining processes are carried out automatically. The user (practitioner), typically an orthodontist, selects the key landmarks in the cranial region (Figure 2) in a specified order [42,43] (Table 1).
The midpoint between each pair of key landmarks, denoted as p1 and p2 with coordinates x1, y1, x2, and y2, was determined using the following formulas:
X_m = (x_1 + x_2) / 2
Y_m = (y_1 + y_2) / 2
Applying these formulas generates secondary landmarks at the midpoint between each pair of primary landmarks. Additionally, tertiary landmarks can be interpolated between the primary and secondary landmarks. This process creates a list of coordinates for these points, facilitating subsequent analysis stages. These points can define various distances, angles, and triangles within the cranial region, allowing for the extraction of individual identification characteristics. The following sections outline the methods for extracting and storing these features.
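The midpoint construction above can be sketched in a few lines. This is a minimal Python illustration of the described step (the authors implemented K-VIN in C#; this is not their code), generating a secondary landmark for every pair of primary landmarks:

```python
# Sketch (not the authors' C# implementation): secondary landmarks as
# midpoints between every pair of primary landmarks.
def midpoint(p1, p2):
    """Midpoint of two landmark coordinates (x, y)."""
    (x1, y1), (x2, y2) = p1, p2
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def secondary_landmarks(primary):
    """Midpoint for every unordered pair of primary landmarks."""
    mids = []
    for i in range(len(primary)):
        for j in range(i + 1, len(primary)):
            mids.append(midpoint(primary[i], primary[j]))
    return mids

print(midpoint((0, 0), (4, 2)))  # (2.0, 1.0)
```

Tertiary landmarks can be produced the same way by applying `midpoint` between primary and secondary points.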

2.3. Geometric Measurement of Craniofacial Bone Structures

To encode the structure of the cranium, it is necessary to calculate various features such as the length, angle, and area of different regions within the cranium, including measurements between all landmarks. Applying these methods resulted in the generation of three lists containing values (Table 2). The output includes three lists of values related to the distance (D), angle (Ag), and area (Ar). The order of items in the lists is predefined and not random.
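The three feature lists can be computed with standard plane geometry. The following Python sketch shows one plausible implementation of the distance (D), angle (Ag), and area (Ar) measurements; the exact formulas used by K-VIN are not given in the text, so this is an assumed, generic version:

```python
import math

# Assumed implementations of the three measurement types (not the paper's code):
# D  = Euclidean distance between two landmarks
# Ag = angle (degrees) at a vertex landmark formed by two other landmarks
# Ar = area of a triangle of three landmarks (shoelace formula)
def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle_at(vertex, a, b):
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def triangle_area(a, b, c):
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2

print(distance((0, 0), (3, 4)))  # 5.0
```

Iterating these functions over the fixed, predefined ordering of landmark pairs and triples yields the ordered lists D, Ag, and Ar.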

2.4. Ratio Calculation

To address the differences in magnification between AM and PM radiographs, ratios were used to rescale the values and eliminate magnification discrepancies. Additionally, the encoding process requires that the ratios fall within the range of 0 to 1. In this algorithm, the ratios for the lists D, Ag, and Ar are calculated by dividing each element by every other element within the same list, creating new lists RD, RAg, and RAr with values ranging from 0 to 1 (Figure 3; Table 3). The order of items in these lists is fixed and follows a predefined sequence.
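A simple way to realize this ratio step is shown below. Since the text requires all ratios to fall in [0, 1], this sketch assumes each pair is divided smaller-over-larger in a fixed pair order (an assumption; the paper does not spell out the convention):

```python
# Sketch (assumed convention): pairwise ratios within one list, kept in
# [0, 1] by always dividing the smaller value by the larger one.
def pairwise_ratios(values):
    ratios = []
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            a, b = values[i], values[j]
            ratios.append(min(a, b) / max(a, b))
    return ratios

print(pairwise_ratios([2.0, 4.0, 8.0]))  # [0.5, 0.25, 0.5]
```

Applying this to D, Ag, and Ar produces the magnification-invariant lists RD, RAg, and RAr.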

2.5. Encoding

In this stage, we used a specific encoding method to represent values between 0 and 1 with intervals of 0.1, 0.05, or 0.02. Table 4 shows the encoding at 0.1 intervals.
The encoding process was conducted as follows:
  • Encoder_1 = Code (RD) + Code (RD) + Code (RD × RAg) + Code (RD × RAr).
  • Encoder_2 = Code (RAg) + Code (RAg) + Code (RAg × RD) + Code (RAg × RAr).
  • Encoder_3 = Code (RAr) + Code (RAr) + Code (RAr × RD) + Code (RAr × RAg).
  • Combination (string) = Encoder_1 + Encoder_2 + Encoder_3.
To illustrate Encoder_1, consider an element in the RD list with a value of 0.72, encoded as (H) and duplicated as (HH). Next, we compute the product of each RD element with the corresponding RAg element (RD × RAg), encode the product, and append it to the existing code. For example, if the product is 0.16, it is encoded as (B) and appended to form (HHB). This process is repeated for the products of RD × RAr, with these values also encoded and appended. Encoder_1 processes each value sequentially from the first to the last element in the RD, RAg, and RAr lists. Encoder_2 and Encoder_3 follow a similar approach, resulting in three lists of character strings.
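The interval encoding and Encoder_1 can be sketched as follows. The letter scheme (successive bins mapping to 'A', 'B', 'C', …) is an assumption consistent with the worked example in the text (0.72 → H at 0.1 intervals); the paper's own code table is in Table 4:

```python
# Sketch of the interval encoding (assumed letter scheme: successive bins of
# the chosen interval map to 'A', 'B', 'C', ...).
def code(value, interval=0.05):
    max_bin = int(round(1 / interval)) - 1          # clamp value == 1.0
    return chr(ord('A') + min(int(value / interval), max_bin))

def encoder_1(rd, rag, rar, interval=0.1):
    """Encoder_1 = Code(RD) + Code(RD) + Code(RD x RAg) + Code(RD x RAr)."""
    out = []
    for d, g, r in zip(rd, rag, rar):
        out.append(code(d, interval) * 2
                   + code(d * g, interval)
                   + code(d * r, interval))
    return "".join(out)

print(code(0.72, 0.1))   # 'H', as in the worked example
print(code(0.16, 0.1))   # 'B'
```

Encoder_2 and Encoder_3 follow the same pattern with the lists permuted, and the final combination string is their concatenation.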
Two regularizers are introduced in this stage:
  • Regularizer 1: This regularizer addresses the algorithm’s sensitivity to less stable cranial structures, such as the mandibular position (lower jaw). The position of the mandible can change over time due to factors like tooth loss, decay, and minor growth during adulthood [46], which can affect its reliability for identity recognition and reduce accuracy. To mitigate this issue, the regularizer assigns greater weight to measurements of more stable structures, such as the maxilla and cranial base, and less weight to the mandible (Figure 2).
  • Regularizer 2: This regularizer defines interval values for encoding, with options for 2%, 5%, or 10% intervals. For example, values can be encoded in 2%, 5%, or 10% intervals, where values from 0 to 0.02 or 0 to 0.05 are assigned a specific code (e.g., code A). The same applies to the 10% interval. By default, a 5% interval is used, balancing between hypersensitivity (2%) and hyposensitivity (10%) in the encoding process.

2.6. Storage of Encoded Cephalogram Data

In this algorithm, each individual’s cranial patterns are encoded into character strings and stored in the database along with their corresponding identities. Figure 3 visually illustrates how the K-VIN algorithm converts skull patterns into lists of character strings.

2.7. Victim Identification Stage

At this stage of the algorithm, the PM cephalogram is imported, and the user identifies the main landmarks. Secondary landmarks, distances, areas, angles, and their ratios are then automatically generated, encoded, and stored temporarily as victim_encoded. In the next stage, victim_encoded is compared to the stored cases in the database. This comparison is performed in a loop, matching the victim’s encoded values against each database case to identify the case with the highest similarity.
The Levenshtein distance method was utilized to measure the difference between the victim string and the stored strings in the database [47]. This method calculates the minimum number of edits required to transform one string into another. For instance, the Levenshtein distance between “ABCD” and “ABCE” is 1, indicating a similarity score of 75%. A smaller difference indicates a higher similarity between the two strings. The search process continues until the case with the highest level of similarity to the victim code is found in the database.
Similarity = 1 − (Levenshtein distance / max(length of source, length of target))
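The similarity score can be reproduced with a standard dynamic-programming Levenshtein distance (the paper cites [47]; this Python version is a generic textbook implementation, not the authors' code):

```python
# Generic two-row dynamic-programming Levenshtein distance.
def levenshtein(s, t):
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (cs != ct)))    # substitution
        prev = cur
    return prev[-1]

def similarity(source, target):
    return 1 - levenshtein(source, target) / max(len(source), len(target))

print(similarity("ABCD", "ABCE"))  # 0.75, matching the example in the text
```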
Once a similar case is identified in the database, the search halts, and the name (and information) of the matching case is output. This case represents the individual most similar to the victim. If the similarity of any case is below the default threshold of 90%, the search continues for a better match. If the similarity exceeds 90%, the Auto Error Reduction (AER) function is activated. The operator can adjust the sensitivity level of this threshold. Additionally, the system can generate a list of similar cases, arranged in descending order of similarity.
It is reported that acceptable accuracy levels for cephalometric landmark selection are 0.59 mm (x coordinate) and 0.56 mm (y coordinate) in total error for diagnostic purposes [48]. Additionally, the reported radius area for the selection landmarks in previous studies was about 2 mm [49]. The Auto Error Reduction (AER) function is designed to minimize user-induced errors in the algorithm. It addresses potential discrepancies between landmarks identified in the PM cephalogram and those in the AM cephalogram of the same individual. Initially, the user selects landmark points from the AM cephalogram, and the algorithm processes these points to encode the individual’s identity in the database. If the same individual later becomes a victim, their PM cephalogram is imported into the software and analyzed by either the same or a different orthodontist. The AER function helps minimize the impact of user errors during the victim analysis phase. If a case in the database shows high similarity but less than 100%, it indicates that the algorithm likely identified the victim correctly. However, the similarity score may fall short of 100% due to potential user errors and variations in landmark selection between the AM and PM radiographs.
In the AER function, a region of interest with a radius of 4 pixels is considered around each key landmark point, rather than focusing on a single point (Figure 4). This approach accounts for the potential variation in landmark placement.
In the AER function, random points are generated within the landmark area, with a default radius of 4 pixels. The algorithm is executed for each set of points to compute the similarity. If the similarity improves, the updated landmark points are stored, and the process continues until the similarity reaches its maximum value. This iterative process is repeated between 100 and 1000 times by default, allowing for potential improvements in the similarity percentage. The radius of the landmark area can be adjusted by the user.
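The AER loop described above can be sketched as a simple randomized local search. The structure below is an assumption drawn from the description (jitter each landmark within the radius, re-run the pipeline, keep improvements); `encode_fn` and `similarity_fn` are hypothetical stand-ins for the full feature-extraction/encoding pipeline and the Levenshtein similarity:

```python
import math, random

# Sketch of the AER iteration (assumed structure, not the authors' C# code).
def auto_error_reduction(landmarks, am_code, encode_fn, similarity_fn,
                         radius=4, iterations=200, seed=0):
    rng = random.Random(seed)
    best_pts = list(landmarks)
    best_sim = similarity_fn(encode_fn(best_pts), am_code)
    for _ in range(iterations):
        # Draw a uniform random point inside a disc of `radius` pixels
        # around each current landmark.
        trial = []
        for (x, y) in best_pts:
            r = radius * math.sqrt(rng.random())
            t = rng.uniform(0, 2 * math.pi)
            trial.append((x + r * math.cos(t), y + r * math.sin(t)))
        sim = similarity_fn(encode_fn(trial), am_code)
        if sim > best_sim:          # keep the jittered points only if they help
            best_pts, best_sim = trial, sim
    return best_pts, best_sim
```

Because updates are only accepted when similarity improves, the reported score is monotonically non-decreasing over the 100–1000 default iterations.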

2.8. Algorithm Testing (Experimental Setup)

A software application was developed using C# in Microsoft Visual Studio 2019 to implement the K-VIN algorithm (Figure 5). Initial testing of the algorithm was conducted using a dataset of 400 pre- and post-treatment digital cephalograms of orthodontic patients (living individuals). Due to the lack of an AM cephalogram archive for deceased individuals, we used cephalograms from an orthodontic archive at a dental clinic. Pre-treatment cephalograms were designated as AM, and post-treatment cephalograms were designated as PM.
Cochran’s formula was used to calculate the sample size, yielding a minimum of 385 samples for 95% confidence and a 5% margin of error. To enhance the precision of the results, 400 samples were considered, each with an AM and a PM image. The samples consisted of orthodontic patients aged 18 and older, each with a minimum treatment duration of 2 years, collected from a dental clinic in Tehran, Iran. Inclusion criteria included the absence of skeletal changes such as maxillofacial surgery and the availability of high-quality pre- and post-treatment cephalometric images. Exclusion criteria comprised a history of jaw trauma during orthodontics, low-quality images, and treatments that caused significant mandibular changes, such as open-bite correction with molar intrusion or maxillary total arch distalization. Only records of patients who underwent orthodontic treatment (extraction or non-extraction) without significant skeletal modifications were included. All cephalometric images were resized to a width of 1000 pixels while maintaining the aspect ratio. In the software, the user first selects the nasion and sella points; the image is then automatically rotated so that the nasion–sella line (cranial base) lies at a 6-degree angle to the true horizontal. After this initial selection, the user selects the remaining landmarks, ensuring consistent image alignment across all cases. The 6-degree adjustment corresponds to the angle between the true horizontal plane and the cranial base in the natural head position [2].
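The rotation step can be expressed as a small calculation: measure the current angle of the nasion–sella line and rotate the image by the difference from the 6-degree target. The sign convention below is illustrative only (image coordinates usually have y increasing downward, which flips angle signs):

```python
import math

# Sketch (assumed geometry, not the authors' code): rotation needed so the
# nasion-sella line sits at `target_deg` to the true horizontal.
def rotation_to_align(nasion, sella, target_deg=6.0):
    dx = sella[0] - nasion[0]
    dy = sella[1] - nasion[1]
    current_deg = math.degrees(math.atan2(dy, dx))
    return target_deg - current_deg   # angle to rotate the image by

print(rotation_to_align((0, 0), (100, 0)))  # 6.0
```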
The pre-treatment cephalogram for each patient was labeled as the AM cephalogram, while the post-treatment cephalogram of the same patient was designated as the PM cephalogram. This allowed us to evaluate the software’s performance in identifying individual identities. Figure 6 illustrates the process of testing the algorithm used in this study.
We imported 358 AM cephalogram images into the software, selected landmarks, and processed them with the algorithm. Each individual’s cephalogram patterns were converted into string representations and stored in the database under their names (Figure 7).
After a two-week interval, PM cephalograms were imported and processed by the algorithm, recording identity recognition results and similarity percentages. The two-week gap helps prevent potential bias by ensuring that users do not remember specific landmark points from the initial analysis. The algorithm’s output was compared to the actual identities to calculate the accuracy index. To evaluate sensitivity and specificity, 42 cephalograms that were not included in the database were processed. Incorrect matches were recorded as false positives, while correct identifications of no match were recorded as true negatives. Figure 8 shows a sample test demonstrating the alignment of cranial patterns between the AM and PM cephalograms of the same individual. Figure 9 depicts the encoded representations of both the current individual and the matching individual from the database, as identified by the algorithm. Figure 10 illustrates the cranial patterns of two different individuals that do not align.
To assess intra-observer errors in selecting the key landmarks, 20 cephalometric images were randomly selected, and the landmarks were re-identified. The coordinates of each landmark before and after re-identification were compared using Euclidean distance.
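The intra-observer comparison reduces to averaging Euclidean distances between first and repeated selections of each landmark, as in this brief Python sketch:

```python
import math

# Sketch: intra-observer error as the mean Euclidean distance between the
# first and repeated selection of each landmark.
def mean_landmark_error(first, repeat):
    dists = [math.hypot(x1 - x2, y1 - y2)
             for (x1, y1), (x2, y2) in zip(first, repeat)]
    return sum(dists) / len(dists)

print(mean_landmark_error([(0, 0), (10, 10)], [(3, 4), (10, 10)]))  # 2.5
```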

3. Results

Among the 400 samples analyzed, 48% were female and 52% were male. The average duration between the two cephalograms was 2.58 ± 0.52 years. Table 5 shows the mean and standard deviation for age, initial similarity, and similarity after applying the AER function. A significant correlation was found between raw similarity values for correctly identified cases and those after applying the AER function (p < 0.001). The confusion matrix (Figure 11) shows that out of the 358 cases in the database, 350 were accurately identified (true positives), while 8 were not (false negatives). Out of the 42 cases not included in the database, 40 were correctly identified as not present (true negatives), while 2 were incorrectly identified as present (false positives). Statistical analysis revealed that the similarity values before and after applying AER did not follow a normal distribution (Kolmogorov–Smirnov test, p < 0.05). The Mann–Whitney U test revealed a significant difference between the two distributions (p < 0.001).
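The reported performance figures follow directly from the confusion matrix (TP = 350, FN = 8, TN = 40, FP = 2), as this short check shows:

```python
# Recomputing the reported metrics from the confusion matrix counts.
def metrics(tp, fn, tn, fp):
    accuracy = (tp + tn) / (tp + fn + tn + fp)   # 390/400
    sensitivity = tp / (tp + fn)                 # 350/358
    specificity = tn / (tn + fp)                 # 40/42
    return accuracy, sensitivity, specificity

acc, sen, spe = metrics(350, 8, 40, 2)
# acc ~ 0.975, sen ~ 0.977, spe ~ 0.952, matching the reported
# 97.5% accuracy, 97.7% sensitivity, and 95.2% specificity.
```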
The analysis of intra-observer error revealed the following mean Euclidean distances (with standard deviations) between key landmarks in the initial and repeated measurements (Table 6 and Figure 12): Na: 3.07 ± 0.89, S: 4.07 ± 0.61, Or: 2.93 ± 0.65, Ar: 2.23 ± 0.90, ANS: 3.55 ± 0.67, PNS: 1.37 ± 0.75, Go1: 4.09 ± 0.73, Go2: 1.99 ± 0.69, and Me: 4.39 ± 0.69. The Me landmark had the highest mean distance, while the PNS landmark had the lowest. Figure 13 shows the coordinates of points in the initial and repeated measurements for a sample. On average, there was a difference of 1.96 ± 0.99 pixels between points in the two stages, with a 1.4% difference in encodings.

4. Discussion

The test results show that the algorithm is highly accurate in identifying individuals based on cranial patterns. Previous studies have explored the use of the frontal sinus, observed through lateral cephalograms [50,51,52], for identity recognition [22,50,53]. However, it should be noted that the accuracy of using the frontal sinus alone in two-dimensional radiographs may be lower than that of considering the entire cranium. Moreover, the dimensions of the frontal sinus change during growth. A study by Marsya et al. [54] demonstrated that age and gender could be estimated from the frontal sinus, and its growth could be observed until the age of 20. Thus, considering all cranial patterns—not solely the frontal sinus—could enhance accuracy in identity recognition. Much like fingerprints or facial features, cranial structures may represent a unique identifier for individuals. A review article [55] delves into the possibility of human identification using both two-dimensional and three-dimensional images of the frontal sinus, highlighting the potential role of orthodontists in this field. However, the K-VIN algorithm employs a 2D cephalogram for identity recognition due to its simplicity and cost-effectiveness.
The K-VIN algorithm extracts geometric patterns from the cranium and converts them into character strings to perform identity recognition. It requires an available AM cephalogram for comparison. In situations involving mass casualties where soft tissue is completely lost, PM cephalograms can be obtained to facilitate identity recognition. The application of this algorithm, alongside other identification methods such as odontology [56] and genetics [57], is particularly relevant for high-risk occupations such as firefighting. Organizations, such as firefighting departments, can acquire cephalograms for their personnel. The K-VIN algorithm stores the cranial patterns of individuals in a database, and in the event of a mass casualty incident, a PM cephalogram can be obtained from a body to perform identity recognition based on the algorithm. Identity recognition is performed by identifying the “first matching case found in the database.” In this study, eight cases were incorrectly identified as the “first matching case found in the database”; however, they were the second match in the sorted list. Employing this algorithm allows for identity recognition in mass casualty incidents (given an initial database of individuals), and a sorted list of similarities from the highest to lowest can be generated within the software. The advantage of this method is the speed and cost-effectiveness of the identity verification process. Increasing the computational power of the algorithm, such as by expanding the number and size of ratios and enhancing the generated string, could bring its accuracy closer to 100%. The proposed method may also be useful for shortlisting possible identities to which traditional identification methods can then be applied.
In this study, the AER function was used to reduce landmark selection errors during the PM stage. This function automatically adjusts the landmarks to new positions (within a 4-pixel range) to increase the similarity with the AM sample. This adjustment aims to make the similarity measure less dependent on operator error in the PM stage. Although the exact coordinates of points in the AM and PM samples are not identical (due to potential differences in aspect ratio), AER modifies the PM coordinates to enhance similarity, within a 4-pixel range for each landmark. In this study, intra-observer error revealed that, on average, the Euclidean distance (in pixels) between repeated measurements for an observer was approximately 3.07 with a standard deviation of 0.73 pixels. This finding is consistent with the study by Hägg et al. [49]. However, future studies could fully automate the landmark selection process using artificial intelligence algorithms, such as key point detection models like Mediapipe, which is used for facial landmark detection in identity recognition [58].
Since the proposed method is metric-based, it is possible for multiple individuals to have relatively similar measurements. However, based on the results of this study, it appears that combining various geometric features extensively can enhance individual differences and increase the similarity of the same individual across different times. Additionally, the algorithm can generate a list of similar cases found in the database for identity screening. It has also been reported that the geometric pattern of cranial components varies among individuals [59,60,61]. This algorithm anticipates that the position of the mandible may change over time, thereby assigning less weight to the mandible’s features in the calculations. A study by Patil et al., which focused on personal identification in 100 living individuals with an average age of 25, showed that the frontal sinuses are unique to each person [52]. In this study, with a sample of 400 patients, we found that the overall cranial structure appears to be nearly unique for each individual. However, further research on cadavers is needed. One limitation of this algorithm is its reduced accuracy in cases where the patient’s jaw has been affected by skeletal fractures and displacements. Future studies should aim to enhance the algorithm’s ability to handle such cases.
The algorithm has several limitations. It struggles to identify cases involving severe maxillofacial trauma; generating a virtual reconstructed view of the lateral cephalogram using advanced AI techniques might improve its performance in such cases. It is important to emphasize, however, that this is a foundational algorithm with substantial potential for further development and refinement. Another limitation is the lack of testing on deceased individuals, owing to the unavailability of antemortem cephalograms; pre- and post-treatment images from an orthodontic archive were therefore used for testing. Additionally, the accuracy of landmark selection depends on user input, which could be improved by incorporating AI methods such as key point detection [62]. In dental and maxillofacial diagnostics, AI has demonstrated substantial progress in automating the analysis of cephalometric data [63]: algorithms are now used to automatically detect and measure cephalometric landmarks, which are crucial for diagnosing orthodontic conditions and planning orthognathic surgery. Employing such techniques would make it possible to consider more key landmarks, leading to more detailed feature calculations and improving the overall accuracy of the algorithm. A proposed design for future work could involve advanced machine learning models, such as convolutional neural networks (CNNs), trained on large datasets of cephalometric images to identify and classify key landmarks with high precision. While advanced AI systems exist in medical imaging and diagnostics, few directly address the forensic application of cranial pattern recognition from lateral cephalometric images. More recently, CNNs have shown promise in classifying sagittal skeletal patterns, with DenseNet161 achieving the highest accuracy [64].
Clustering techniques have also been effective in identifying craniofacial morphological patterns using multivariate cephalometric data [65]. However, despite these advancements, the application of AI in forensic identification using cephalometric radiographs remains largely unexplored. This gap presents an intriguing research opportunity to integrate these emerging AI techniques with our algorithm, which could advance identity verification methods through cranial pattern analysis.
Despite these limitations, the algorithm shows considerable potential for further development and integration with AI technologies. Future studies should validate it on deceased individuals for whom both antemortem and postmortem cephalograms are available. Incorporating AI for automatic landmark identification would strengthen the framework, and comparing the K-VIN algorithm against other identification methods would give a clearer picture of its relative performance and strengths. The algorithm may find applications in forensic identification, orthodontics, and anthropological research, aiding in identifying missing persons, evaluating treatment outcomes, and analyzing craniofacial variation. It could also serve as a foundation for advancements in imaging and analysis technologies, particularly when integrated with AI techniques.

5. Conclusions

In conclusion, the K-VIN algorithm demonstrates exceptional accuracy, sensitivity, and specificity, making it a highly promising tool for forensic medicine. By analyzing cranial patterns from lateral cephalometric radiographs, the algorithm improves upon existing methods that often rely solely on frontal sinus features. It encodes and compares geometric patterns from AM and PM images for identification. Despite its promising results, challenges such as handling severe trauma cases and variability in manual landmark selection remain. Future research should focus on integrating automated landmark selection through advanced AI techniques, such as key point detection, to further refine the algorithm’s accuracy and efficiency. Testing on deceased individuals and expanding automated processes will be crucial for realizing the algorithm’s full potential.

6. Patents

The author S.K. is the inventor of the K-VIN algorithm, which is covered by Iranian Patent No. 109874. This patent is registered with the Iranian Patent Office. For more details, see the official patent record at the Iranian Patent Office (https://ipm.ssaa.ir/Search-Result?page=1&DecNo=140250140003000674&RN=109874, accessed on 15 August 2024).

Author Contributions

Conceptualization, S.K.; funding acquisition, S.K.; investigation, E.T.; methodology, S.K., M.M.K. and E.T.; project administration, M.Y. and E.T.; resources, E.T.; software, S.K.; supervision, M.Y. and E.T.; validation, M.M.K.; writing—original draft, S.K.; writing—review and editing, S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the latest version of the Declaration of Helsinki and with the approval of the Research Ethics Committee of Baqiyatallah University of Medical Sciences (IR.BMSU.BAQ.REC.1402.014).

Informed Consent Statement

Not applicable.

Data Availability Statement

The source code for this project (and algorithm) is available on GitHub and can be shared upon request.

Conflicts of Interest

The author S.K. is the inventor of the K-VIN algorithm, which is covered by Iranian Patent No. 109874. This patent is registered with the Iranian Patent Office. For more details, see the official patent record at the Iranian Patent Office. The other authors declare no conflicts of interest.

References

  1. Devereux, L.; Moles, D.; Cunningham, S.J.; McKnight, M. How important are lateral cephalometric radiographs in orthodontic treatment planning? Am. J. Orthod. Dentofac. Orthop. 2011, 139, e175–e181. [Google Scholar] [CrossRef]
  2. Proffit, W.R.; Fields, H.; Larson, B.; Sarver, D.M. Contemporary Orthodontics-E-Book; Elsevier Health Sciences: Amsterdam, The Netherlands, 2018. [Google Scholar]
  3. Cericato, G.; Bittencourt, M.; Paranhos, L. Validity of the assessment method of skeletal maturation by cervical vertebrae: A systematic review and meta-analysis. Dentomaxillofacial Radiol. 2015, 44, 20140270. [Google Scholar] [CrossRef]
  4. Alkofide, E. Pituitary adenoma: A cephalometric finding. Am. J. Orthod. Dentofac. Orthop. 2001, 120, 559–562. [Google Scholar] [CrossRef] [PubMed]
  5. Kolokitha, O.-E.; Topouzelis, N. Cephalometric methods of prediction in orthognathic surgery. J. Maxillofac. Oral Surg. 2011, 10, 236–245. [Google Scholar] [CrossRef] [PubMed]
  6. Ciaffi, R.; Gibelli, D.; Cattaneo, C. Forensic radiology and personal identification of unidentified bodies: A review. La Radiol. Medica 2011, 116, 960–968. [Google Scholar] [CrossRef] [PubMed]
  7. Stephan, C.N.; Winburn, A.P.; Christensen, A.F.; Tyrrell, A.J. Skeletal identification by radiographic comparison: Blind tests of a morphoscopic method using antemortem chest radiographs. J. Forensic Sci. 2011, 56, 320–332. [Google Scholar] [CrossRef]
  8. Niespodziewanski, E.; Stephan, C.N.; Guyomarc’h, P.; Fenton, T.W. Human identification via lateral patella radiographs: A validation study. J. Forensic Sci. 2016, 61, 134–140. [Google Scholar] [CrossRef]
  9. Reesu, G.V.; Woodsend, B.; Mânica, S.; Revie, G.F.; Brown, N.L.; Mossey, P.A. Automated Identification from Dental Data (AutoIDD): A new development in digital forensics. Forensic Sci. Int. 2020, 309, 110218. [Google Scholar] [CrossRef]
  10. Joshi, S.V.; Kanphade, R.D. Forensic approach of human identification using dual cross pattern of hand radiographs. In Proceedings of the Intelligent Systems Design and Applications: 18th International Conference on Intelligent Systems Design and Applications (ISDA 2018), Vellore, India, 6–8 December 2018; Volume 2, pp. 1075–1084. [Google Scholar]
  11. Bikker, J. Identification of missing persons and unidentified remains in disaster victim identification. Adv. Forensic Hum. Identif. 2014, 24, 37–58. [Google Scholar]
  12. Leo, C.; O’Connor, J.A.; McNulty, J. Combined radiographic and anthropological approaches to victim identification of partially decomposed or skeletal remains. Radiography 2013, 19, 353–362. [Google Scholar] [CrossRef]
  13. Jayakrishnan, J.M.; Reddy, J.; Kumar, R.V. Role of forensic odontology and anthropology in the identification of human remains. J. Oral Maxillofac. Pathol. JOMFP 2021, 25, 543. [Google Scholar] [CrossRef] [PubMed]
  14. Bastir, M.; Rosas, A.; O’Higgins, P. Craniofacial levels and the morphological maturation of the human skull. J. Anat. 2006, 209, 637–654. [Google Scholar] [CrossRef] [PubMed]
  15. Huang, G.J.; Graber, L.W.; Vanarsdall, R.L.; Vig, K.W. Orthodontics-E-Book: Current Principles and Techniques; Elsevier Health Sciences: St. Louis, MO, USA, 2016. [Google Scholar]
  16. Nie, X. Cranial base in craniofacial development: Developmental features, influence on facial growth, anomaly, and molecular basis. Acta Odontol. Scand. 2005, 63, 127–135. [Google Scholar] [CrossRef]
  17. Hallgrímsson, B.; Lieberman, D.E.; Liu, W.; Ford-Hutchinson, A.; Jirik, F. Epigenetic interactions and the structure of phenotypic variation in the cranium. Evol. Dev. 2007, 9, 76–91. [Google Scholar] [CrossRef]
  18. Jayaprakash, P.T.; Srinivasan, G. Skull sutures: Changing morphology during preadolescent growth and its implications in forensic identification. Forensic Sci. Int. 2013, 229, 166.e1–166.e13. [Google Scholar] [CrossRef]
  19. von Dorsche, S.H.; Fanghänel, J.; Kubein-Meesenburg, D.; Nägerl, H.; Hanschke, M. Interpretation of the vertical and longitudinal growth of the human skull. Ann. Anat. Anat. Anz. 1999, 181, 99–103. [Google Scholar] [CrossRef] [PubMed]
  20. Avelar, L.E.T.; Cardoso, M.A.; Bordoni, L.S.; de Miranda Avelar, L.; de Miranda Avelar, J.V. Aging and sexual differences of the human skull. Plast. Reconstr. Surg.–Glob. Open 2017, 5, e1297. [Google Scholar]
  21. Pereira, J.G.D.; Santos, J.B.S.; Sousa, S.P.d.; Franco, A.; Silva, R.H.A. Frontal sinuses as tools for human identification: A systematic review of imaging methods. Dentomaxillofacial Radiol. 2021, 50, 20200599. [Google Scholar] [CrossRef] [PubMed]
  22. Reichs, K.J. Quantified comparison of frontal sinus patterns by means of computed tomography. Forensic Sci. Int. 1993, 61, 141–168. [Google Scholar] [CrossRef]
  23. Uthman, A.T.; Al-Rawi, N.H.; Al-Naaimi, A.S.; Tawfeeq, A.S.; Suhail, E.H. Evaluation of frontal sinus and skull measurements using spiral CT scanning: An aid in unknown person identification. Forensic Sci. Int. 2010, 197, 124.e1–124.e7. [Google Scholar] [CrossRef]
  24. Moore, K.; Ross, A. Frontal sinus development and juvenile age estimation. Anat. Rec. 2017, 300, 1609–1617. [Google Scholar] [CrossRef] [PubMed]
  25. Sardi, M.L.; Joosten, G.G.; Pandiani, C.D.; Gould, M.M.; Anzelmo, M.; Ventrice, F. Frontal sinus ontogeny and covariation with bone structures in a modern human population. J. Morphol. 2018, 279, 871–882. [Google Scholar] [CrossRef]
  26. Butaric, L.N.; Campbell, J.L.; Fischer, K.M.; Garvin, H.M. Ontogenetic patterns in human frontal sinus shape: A longitudinal study using elliptical Fourier analysis. J. Anat. 2022, 241, 195–210. [Google Scholar] [CrossRef]
  27. Yoshino, M.; Miyasaka, S.; Sato, H.; Seta, S. Classification system of frontal sinus patterns by radiography. Its application to identification of unknown skeletal remains. Forensic Sci. Int. 1987, 34, 289–299. [Google Scholar] [CrossRef]
  28. Beaini, T.L.; Duailibi-Neto, E.F.; Chilvarquer, I.; Melani, R.F. Human identification through frontal sinus 3D superimposition: Pilot study with Cone Beam Computer Tomography. J. Forensic Leg. Med. 2015, 36, 63–69. [Google Scholar] [CrossRef]
  29. Carvalho, Y.; Jacometti, V.; Franco, A.; Da Silva, R.; Silva, R. Postmortem Computed Tomography of the Skull for Human Identification Based on the Morphology of Frontal Sinuses. Рoссийский Электрoнный Журнал Лучевoй Диагнoстики 2019, 9, 170–176. [Google Scholar] [CrossRef]
  30. Mohan, G.; Dharman, S. Sex determination and personal identification using frontal sinus and nasal septum–A forensic radiographic study. Prof. RK Sharma 2019, 13, 125. [Google Scholar] [CrossRef]
  31. Silva, R.F.; Rodrigues, L.G.; Manica, S.; do Rosario Junior, A.F. Human identification established by the analysis of frontal sinus seen in anteroposterior skull radiographs using the mento-naso technique: A forensic case report. RBOL-Rev. Bras. Odontol. Leg. 2019, 6, 1. [Google Scholar]
  32. Gómez, Ó.; Mesejo, P.; Ibáñez, Ó.; Valsecchi, A.; Bermejo, E.; Cerezo, A.; Pérez, J.; Alemán, I.; Kahana, T.; Damas, S. Evaluating artificial intelligence for comparative radiography. Int. J. Leg. Med. 2024, 138, 307–327. [Google Scholar] [CrossRef]
  33. Palamenghi, A.; Borlando, A.; De Angelis, D.; Sforza, C.; Cattaneo, C.; Gibelli, D. Exploring the potential of cranial non-metric traits as a tool for personal identification: The never-ending dilemma. Int. J. Leg. Med. 2021, 135, 2509–2518. [Google Scholar] [CrossRef]
  34. Dosi, T.; Vahanwala, S.; Gupta, D. Assessment of the effect of dimensions of the mandibular ramus and mental foramen on age and gender using digital panoramic radiographs: A retrospective study. Contemp. Clin. Dent. 2018, 9, 343–348. [Google Scholar]
  35. Albalawi, A.S.; Alam, M.K.; Vundavalli, S.; Ganji, K.K.; Patil, S. Mandible: An indicator for sex determination–A three-dimensional cone-beam computed tomography study. Contemp. Clin. Dent. 2019, 10, 69–73. [Google Scholar] [PubMed]
  36. Bozkurt, M.H.; Karagol, S. Statistical elimination based approach to jaw and tooth separation on panoramic radiographs for dental human identification. Multimed. Tools Appl. 2023, 82, 32117–32150. [Google Scholar]
  37. Manigandan, T.; Sumathy, C.; Elumalai, M.; Sathasivasubramanian, S.; Kannan, A. Forensic radiology in dentistry. J. Pharm. Bioallied Sci. 2015, 7, S260–S264. [Google Scholar]
  38. Ferreira Silva, R.; Fortes Picoli, F.; de Lucena Botelho, T.; Gomes Resende, R.; Franco, A. Forensic identification of decomposed human body through comparison between ante-mortem and post-mortem CT images of frontal sinuses: Case report. Acta Stomatol. Croat. Int. J. Oral. Sci. Dent. Med. 2017, 51, 227–231. [Google Scholar]
  39. Nikam, S.S.; Gadgil, R.M.; Bhoosreddy, A.R.; Shah, K.R.; Shirsekar, V.U. Personal identification in forensic science using uniqueness of radiographic image of frontal sinus. J. Forensic Odonto-Stomatol. 2015, 33, 1. [Google Scholar]
  40. Dedouit, F.; Savall, F.; Mokrane, F.; Rousseau, H.; Crubézy, E.; Rougé, D.; Telmon, N. Virtual anthropology and forensic identification using multidetector CT. Br. J. Radiol. 2014, 87, 20130468. [Google Scholar]
  41. Re, G.; Argo, A.; Midiri, M.; Cattaneo, C. Radiology in Forensic Medicine from Identification to Post-Mortem Imaging; Springer: Cham, Switzerland, 2020. [Google Scholar]
  42. Proffit, W.R.; Fields, H.W.; Sarver, D.M. Contemporary Orthodontics; Elsevier: São Paulo, Brasil, 2007. [Google Scholar]
  43. Phulari, B. An Atlas on Cephalometric Landmarks; JP Medical Ltd.: Hong Kong, China, 2013. [Google Scholar]
  44. Rakosi, T.; Jonas, I.; Graber, T.M. Color atlas of dental medicine, Orthodontic-Diagnosis. Am. J. Orthod. Dentofac. Orthop. 1994, 105, 613. [Google Scholar]
  45. Weisstein, E.W. Heron’s Formula. 2003. Available online: https://mathworld.wolfram.com/HeronsFormula.html (accessed on 15 August 2024).
  46. Ozturk, C.N.; Ozturk, C.; Bozkurt, M.; Uygur, H.S.; Papay, F.A.; Zins, J.E. Dentition, bone loss, and the aging of the mandible. Aesthet. Surg. J. 2013, 33, 967–974. [Google Scholar] [CrossRef]
  47. Zhang, S.; Hu, Y.; Bian, G. Research on string similarity algorithm based on Levenshtein Distance. In Proceedings of the 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 25–26 March 2017; pp. 2247–2251. [Google Scholar]
  48. Trpkova, B.; Major, P.; Prasad, N.; Nebbe, B. Cephalometric landmarks identification and reproducibility: A meta analysis. Am. J. Orthod. Dentofac. Orthop. 1997, 112, 165–170. [Google Scholar] [CrossRef]
  49. Hägg, U.; Cooke, M.S.; Chan, T.C.; Tng, T.T.; Lau, P.Y. The reproducibility of cephalometric landmarks: An experimental study on skulls. Australas. Orthod. J. 1998, 15, 177–185. [Google Scholar] [CrossRef]
  50. Christensen, A.M. Testing the reliability of frontal sinuses in positive identification. J. Forensic Sci. 2005, 50, JFS2004145. [Google Scholar] [CrossRef]
  51. Da Silva, R.F.; Prado, F.B.; Caputo, I.G.C.; Devito, K.L.; de Luscena Botelho, T.; Júnior, E.D. The forensic importance of frontal sinus radiographs. J. Forensic Leg. Med. 2009, 16, 18–23. [Google Scholar] [CrossRef]
  52. Patil, N.; Karjodkar, F.R.; Sontakke, S.; Sansare, K.; Salvi, R. Uniqueness of radiographic patterns of the frontal sinus for personal identification. Imaging Sci. Dent. 2012, 42, 213–217. [Google Scholar] [CrossRef]
  53. Kirk, N.J.; Wood, R.E.; Goldstein, M. Skeletal identification using the frontal sinus region: A retrospective study of 39 cases. J. Forensic Sci. 2002, 47, 318–323. [Google Scholar] [CrossRef]
  54. Marsya, G.; Sasmita, I.S.; Oscandar, F. Overview of the frontal sinus anteroposterior size based on against lateral cephalometric radiographs chronological age as forensic identification. Padjadjaran J. Dent. 2017, 29, 2. [Google Scholar] [CrossRef]
  55. de Barros, F.; da Costa Serra, M.; Kuhnen, B.; Matos, R.A.; da Silva Fernandes, C.M. Orthodontic 2D and 3D frontal sinus imaging records: An important role in human identification. Res. Soc. Dev. 2021, 10, e49110313608. [Google Scholar] [CrossRef]
  56. Chiam, S.-L.; Page, M.; Higgins, D.; Taylor, J. Validity of forensic odontology identification by comparison of conventional dental radiographs: A scoping review. Sci. Justice 2019, 59, 93–101. [Google Scholar] [CrossRef]
  57. de Boer, H.H.; Blau, S.; Delabarde, T.; Hackman, L. The role of forensic anthropology in disaster victim identification (DVI): Recent developments and future prospects. Forensic Sci. Res. 2019, 4, 303–315. [Google Scholar] [CrossRef]
  58. Ghanbari, S.; Ashtyani, Z.P.; Masouleh, M.T. User identification based on hand geometrical biometrics using media-pipe. In Proceedings of the 2022 30th International Conference on Electrical Engineering (ICEE), Tehran, Iran, 17–19 May 2022; pp. 373–378. [Google Scholar]
  59. von Cramon-Taubadel, N. Evolutionary insights into global patterns of human cranial diversity: Population history, climatic and dietary effects. J. Anthropol. Sci. 2014, 92, 43–77. [Google Scholar]
  60. Silva, R.; Botelho, T.; Prado, F.; Kawagushi, J.; Daruge Júnior, E.; Bérzin, F. Human identification based on cranial computed tomography scan—A case report. Dentomaxillofacial Radiol. 2011, 40, 257–261. [Google Scholar] [CrossRef]
  61. Wang, J.-J.; Wang, J.-L.; Chen, Y.-L.; Li, W.-S. A post-processing technique for cranial CT image identification. Forensic Sci. Int. 2012, 221, 23–28. [Google Scholar] [CrossRef]
  62. Šavc, M.; Sedej, G.; Potočnik, B. Cephalometric landmark detection in lateral skull X-ray images by using improved SpatialConfiguration-Net. Appl. Sci. 2022, 12, 4644. [Google Scholar] [CrossRef]
  63. Polizzi, A.; Leonardi, R. Automatic cephalometric landmark identification with artificial intelligence: An umbrella review of systematic reviews. J. Dent. 2024, 146, 105056. [Google Scholar] [CrossRef]
  64. Li, H.; Xu, Y.; Lei, Y.; Wang, Q.; Gao, X. Automatic classification for sagittal craniofacial patterns based on different convolutional neural networks. Diagnostics 2022, 12, 1359. [Google Scholar] [CrossRef] [PubMed]
  65. Araya-Díaz, P.; Ruz, G.A.; Palomino, H.M. Discovering Craniofacial Patterns Using Multivariate Cephalometric Data for Treatment Decision Making in Orthodontics. Int. J. Morphol. 2013, 31, 1109–1115. [Google Scholar] [CrossRef]
Figure 1. The proposed algorithm.
Figure 2. Multiple lines are drawn between the main landmarks (Na, S, Or, Ar, ANS, PNS, Go, and Me) and the secondary landmarks. This set of points generates numerous distances, angles, and triangles within the cranial region. By defining these features, it is possible to extract individual identification characteristics. More geometric features are calculated in the upper regions than in the lower region (mandible), and the ratio of features between the upper and lower regions can be customized by the user.
Figure 3. Visual representation of the process of skull pattern-to-string conversion by the algorithm.
Figure 4. The AER function can generate random points within the landmark area (red circle). This process is applied to the key landmarks to account for potential variations.
Figure 5. User interface of the software developed in this study. The application automates all stages and processes (excluding key point selection). It provides outputs such as the best match, similarity scores (before and after AER), and a sorted list of the closest matches, and it displays anatomical landmarks (e.g., the frontal sinus) for both the current and matched individuals to verify identity recognition.
Figure 6. The process of testing the algorithm in this study.
Figure 7. Display of the system database. The first column lists the sample names (AM), and the second column shows the encodings generated by the algorithm.
Figure 8. The K-VIN algorithm considers a greater number of ratios in the superior structures than in the inferior structures to minimize the impact of mandibular changes on identity recognition. The superimposition and pattern matching of the current case (green) onto a similar pattern found in the database (red) indicate that the algorithm has successfully performed identity recognition. Despite the similarity in overall structure, there is a slight difference in mandibular positioning, apparently caused by orthodontic treatment.
Figure 9. Comparison of the generated strings for the current individual as PM (green) and the similar individual found in the database as AM (red).
Figure 10. Comparative analysis of cranial patterns for two distinct individuals (non-overlapping).
Figure 11. The confusion matrix.
Figure 12. Box plot showing the distribution of Euclidean distances for each landmark in the first and repeated measurements across 20 randomly selected cases.
Figure 13. Display of the coordinates of landmarks selected in the first (blue dots) and repeated measurements (red dots) for one of the twenty samples.
Table 1. The key landmarks in the lateral cephalometry used for this study.

Landmark         Description
Nasion (Na)      The anterior point where the nasal and frontal bones intersect [42,43].
Sella (S)        The midpoint of the pituitary fossa, also known as the sella turcica [42,43].
Orbitale (Or)    The lowest point on the inferior margin of the orbit [42,43].
ANS              The tip of the anterior nasal spine (sometimes modified as the point on the upper or lower contour of the spine where it is 3 mm thick) [42,43].
PNS              The posterior nasal spine, defined as the tip of the palatine bone's posterior spine at the junction between the hard and soft palates [42,43].
Articulare (Ar)  The point where the contour of the posterior surface of the mandibular condyle intersects the temporal bone [42,43].
Gonion (Go)      • The midpoint of the contour connecting the most inferior point of the ramus (Go1) and the most posterior point of the mandibular body (Go2) [44].
                 • The midpoint in the mediolateral dimension on the most posterior border of the mandible [42,43].
Menton (Me)      The most inferior point on the chin [42,43].
Table 2. Geometric calculations of different regions within the cranium, between landmarks.

Distance (D): the Euclidean distance between each pair of landmarks p1 and p2.
    D = √((p2.x − p1.x)² + (p2.y − p1.y)²)
    Output (list of values): D = {D1, D2, …, Dn}

Angle (θ): the angle formed at p2 by every set of three landmarks p1, p2, and p3 with coordinates (x1, y1), (x2, y2), and (x3, y3). First, vectors A and B are defined; then their lengths and dot product are computed, and finally the angle is obtained:
    A = (x1 − x2, y1 − y2)
    B = (x3 − x2, y3 − y2)
    |A| = √((x1 − x2)² + (y1 − y2)²)
    |B| = √((x3 − x2)² + (y3 − y2)²)
    A·B = (x1 − x2)(x3 − x2) + (y1 − y2)(y3 − y2)
    θ = (180/π) · cos⁻¹(A·B / (|A| |B|))
    Output (list of values): Ag = {θ1, θ2, …, θn}

Area (Ar): the area of the triangle formed by each set of three landmarks, determined using Heron's formula [45]. Given the side lengths a, b, and c, the semi-perimeter s is computed first:
    s = (a + b + c) / 2
    Ar = √(s(s − a)(s − b)(s − c))
    Output (list of values): Ar = {Ar1, Ar2, …, Arn}
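The three quantities in Table 2 can be sketched as runnable code. These are the standard textbook formulas, not the authors' implementation:

```python
import math

def distance(p1, p2):
    """Euclidean distance between two landmarks (x, y)."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def angle(p1, p2, p3):
    """Angle at p2 (in degrees) formed by landmarks p1-p2-p3."""
    ax, ay = p1[0] - p2[0], p1[1] - p2[1]       # vector A
    bx, by = p3[0] - p2[0], p3[1] - p2[1]       # vector B
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    # clamp against floating-point drift before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def triangle_area(p1, p2, p3):
    """Area of the landmark triangle via Heron's formula."""
    a, b, c = distance(p2, p3), distance(p1, p3), distance(p1, p2)
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))
```

For the 3-4-5 right triangle with vertices (0, 0), (3, 0), (3, 4), these return a hypotenuse of 5, a right angle at (3, 0), and an area of 6.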
Table 3. Ratio calculations.

RD: the ratio of each line segment's length to another, constrained to (0, 1] by always dividing the smaller length by the larger one.
    RD = min(a, b) / max(a, b);  output: RD = {a1, a2, …, an}, 0 < a ≤ 1
RAg: the ratio of each angle to another, likewise constrained to (0, 1].
    RAg = min(a, b) / max(a, b);  output: RAg = {b1, b2, …, bn}, 0 < b ≤ 1
RAr: the ratio of each triangle's area to another triangle's area, likewise constrained to (0, 1].
    RAr = min(a, b) / max(a, b);  output: RAr = {c1, c2, …, cn}, 0 < c ≤ 1
Table 4. The encoding scheme for values ranging from 0 to 1 (0.1 interval).

Value     Code
0–0.1     A
0.1–0.2   B
0.2–0.3   C
0.3–0.4   D
0.4–0.5   E
0.5–0.6   F
0.6–0.7   G
0.7–0.8   H
0.8–0.9   I
0.9–1     J
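The ratio normalization of Table 3 and the letter encoding of Table 4 combine into a short sketch (an illustrative reading of the tables, not the published implementation):

```python
def ratio(a, b):
    """Normalize any pair of positive measurements into (0, 1]."""
    return min(a, b) / max(a, b)

def encode(value):
    """Map a value in (0, 1] to a letter A-J using 0.1-wide bins;
    a value of exactly 1 falls into the top bin J."""
    return chr(ord("A") + min(int(value * 10), 9))

def pattern_string(values):
    """Encode a list of normalized features into a code string."""
    return "".join(encode(v) for v in values)
```

For example, a length ratio of 3/4 = 0.75 lands in the 0.7–0.8 bin and is encoded as "H".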
Table 5. Algorithm performance metrics on the test data, including similarity, accuracy, sensitivity, and specificity.

Parameter        Definition                                             Result
Age (years)      Mean age ± standard deviation (SD) of samples          22.21 ± 4.5
Similarity       Mean similarity ± SD                                   91.02 ± 2.6%
Similarity_AER   Mean similarity ± SD after applying the AER function   98.10 ± 3.37%
Accuracy         (TP + TN)/(TP + TN + FP + FN)                          0.975
Sensitivity      TP/(TP + FN)                                           0.977
Specificity      TN/(TN + FP)                                           0.952
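The three performance metrics in Table 5 follow the standard confusion-matrix definitions, sketched below (generic formulas; the counts in the usage check are made up, not the study's actual TP/TN/FP/FN):

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all cases classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(tp, fn):
    """True-positive rate: correctly matched identities."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: correctly rejected non-matches."""
    return tn / (tn + fp)
```

With illustrative counts TP = 50, TN = 40, FP = 5, FN = 5, accuracy is 0.9, sensitivity ≈ 0.909, and specificity ≈ 0.889.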
Table 6. Euclidean distances between the landmarks from the first and repeated measurements in the intra-observer assessment for twenty randomly selected cephalometric samples.

Case  Na        S         Or        Ar        ANS       PNS       Go1       Go2       Me
1     4.233824  4.241515  3.035855  1.607974  2.953840  1.104472  4.896409  0.885800  4.854231
2     3.041754  4.631722  2.736020  3.530173  3.316864  1.459843  3.550859  0.966853  4.613667
3     3.279826  4.121985  2.573237  1.756341  4.302755  0.085400  4.447150  2.540002  4.421004
4     3.685795  4.362348  2.158848  1.821404  4.142899  2.227660  3.914375  1.915114  3.360346
5     3.045192  3.952148  2.215952  2.259347  3.713335  2.177729  4.085907  1.808773  3.816824
6     2.013631  4.297177  3.405217  0.985085  3.595169  1.449267  4.303165  1.838328  5.654552
7     4.738587  4.166425  3.456717  1.496716  4.764213  0.534224  4.581135  1.980466  4.912953
8     2.893020  4.155614  2.137153  3.506202  4.763274  0.514616  3.138656  2.695381  4.352564
9     1.551962  3.596292  2.295773  2.629039  2.685305  2.105840  4.133792  2.322259  3.718152
10    3.614392  3.609049  4.107028  3.739961  3.343799  0.929520  5.922959  2.687414  3.890276
11    2.105400  3.300165  3.393195  1.387695  3.034395  1.733976  4.372208  1.746393  3.356147
12    1.913625  4.078769  2.612472  2.188649  3.986002  1.062368  2.900914  1.117798  4.692608
13    2.268908  3.204426  4.185395  2.028360  3.406368  1.144040  4.631237  1.294914  3.921766
14    3.177770  3.490724  3.138276  1.924587  3.488155  2.142705  3.980733  2.246511  4.373506
15    3.074464  4.855460  3.241534  3.691650  3.600555  1.556844  3.824773  1.272621  4.184124
16    3.478464  2.838455  1.829814  2.500610  3.110236  0.745751  4.058575  2.336319  5.099332
17    2.835942  4.613818  3.104121  2.795506  2.589390  0.645531  3.949099  1.552377  3.230014
18    4.883149  5.029736  3.564132  0.380446  3.330890  3.066221  2.801093  3.548100  4.848982
19    2.494161  5.070583  2.616539  2.177864  2.492837  1.143635  4.838740  2.328205  5.042260
20    2.979627  3.883471  2.772158  2.207089  4.396735  1.729498  3.392602  2.778084  5.392706
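The intra-observer summary statistics reported in the discussion (mean ≈ 3.07 px, SD ≈ 0.73) are obtained by pooling a table of per-landmark distances like Table 6. A minimal sketch of that summary, shown with small illustrative numbers rather than the study's data:

```python
from statistics import mean, pstdev

def intra_observer_summary(distances):
    """Grand mean and population SD of repeated-measurement distances,
    flattened across all cases (rows) and landmarks (columns)."""
    flat = [d for row in distances for d in row]
    return mean(flat), pstdev(flat)
```

Whether a sample or population SD was used in the original analysis is not stated, so `pstdev` here is an assumption; for a 20 × 9 table the difference is small.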
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
