Article

The Potential of AI-Powered Face Enhancement Technologies in Face-Driven Orthodontic Treatment Planning

by Juraj Tomášik 1,*, Márton Zsoldos 2,*, Kristína Majdáková 3, Alexander Fleischmann 3, Ľubica Oravcová 1, Dominika Sónak Ballová 4 and Andrej Thurzo 1,*

1 Department of Orthodontics, Regenerative and Forensic Dentistry, Faculty of Medicine, Comenius University in Bratislava, 81102 Bratislava, Slovakia
2 Department of Orthodontics and Paediatric Dentistry, Faculty of Dentistry, University of Szeged, H-6720 Szeged, Hungary
3 Faculty of Medicine, Masaryk University in Brno, 62500 Brno, Czech Republic
4 Department of Mathematics and Descriptive Geometry, Faculty of Civil Engineering, Slovak University of Technology in Bratislava, 81005 Bratislava, Slovakia
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(17), 7837; https://doi.org/10.3390/app14177837
Submission received: 19 July 2024 / Revised: 26 August 2024 / Accepted: 2 September 2024 / Published: 4 September 2024
(This article belongs to the Special Issue Artificial Intelligence in Medicine and Healthcare)

Featured Application

AI-powered face enhancement technologies have found their role in orthodontic treatment planning. Our study has shown that AI is able to modify face pictures in a way that makes them more attractive, and most such changes can be achieved through orthodontic treatment. Such modifications could serve as a guide for orthodontists to achieve improved facial harmony in their patients.

Abstract

Improving one’s appearance is one of the main reasons to undergo orthodontic therapy. While occlusion is important, not least for long-term stability, aesthetics is often considered a key factor in patient satisfaction. Following recent advances in artificial intelligence (AI), this study set out to investigate whether AI can help guide orthodontists in diagnosis and treatment planning. In this study, 25 male and 25 female faces were generated and subsequently enhanced using FaceApp (ver. 11.10, FaceApp Technology Limited, Limassol, Cyprus), one of the many picture-transforming applications on the market. Both the original and FaceApp-modified pictures were then assessed by 441 respondents regarding their attractiveness, and the pictures were further compared using image-analysis software. Statistical analysis was performed with the Chi-square goodness-of-fit test in R Studio (ver. 4.1.1, R Core Team, Vienna, Austria), and the level of statistical significance was set to 0.05. The interrater reliability was tested using Fleiss’ Kappa for m Raters. The results showed that in 49 out of 50 cases, the FaceApp-enhanced pictures were considered more attractive. Selected pictures were further analyzed using the graphical software GIMP. The most prominent changes were observed in lip fullness, eye size, and lower face height. The results suggest that AI-powered face enhancement could become part of the diagnosis and treatment planning stages in orthodontics. Such enhanced pictures could steer clinicians towards soft-tissue-oriented and personalized treatment planning, respecting patients’ wishes for an improved facial appearance.

1. Introduction

Beauty has been important to humans for tens of thousands of years, as evidenced by the first figurative artwork from 45,500 years ago [1]. What exactly constitutes beauty has always been a subjective and ever-changing concept [2]. Different national and ethnic groups perceive facial beauty differently [3,4,5]. However, beauty as a concept seems to have deep biological roots [6]. In clinical practice, clinicians who want to meet patient expectations must provide up-to-date services when it comes to, e.g., dental and facial aesthetics and oral function [7,8,9].
Facial beauty is one of the most significant elements in dentistry (and perhaps of overall beauty): after all, beauty is power and a smile is its sword [10,11,12]. Nowadays, patients’ focus regarding their smile has shifted from function to aesthetics [4,13]. Orthodontics plays a special role when it comes to smile aesthetics, because it can help change the smile in a way that prosthetic dentistry alone cannot, often without compromising the result. Improving facial aesthetics is the most frequent reason for undergoing orthodontic therapy [3,4,14]. Notwithstanding, an orthodontist needs to remember that even though aesthetic perception varies throughout history and among cultures, races, and individuals, a properly functioning occlusion, as characterized by strict morphological features, is a rather stable concept [5,15,16,17,18,19,20]. To enhance the overall facial aesthetics of a patient, orthodontists traditionally need to master, as much as possible, the fundamentals of an aesthetic face and smile.
Despite the subjective essence of beauty, there are some facial features that are generally considered aesthetic, and they are related to symmetry and proportionality. Throughout human history, scientists and philosophers have studied beauty, and in turn elucidated aesthetic principles that seem to be universally accepted. The Golden ratio (with an approximate value of 1.618) and the Vitruvian man are two well-known examples [3,17]. In aesthetic analysis, the craniofacial height to overall height ratio is important, with the ratios of 1:7.5 and 1:8 being considered most attractive [21]. The face height-to-width ratio is also a value closely linked to aesthetics. Values of 88.5% (±5.1) and 86.2% (±4.6) for young males and females, respectively, are considered most attractive [3]. Ricketts’ aesthetic line—a tangent to nose and chin—has a specific relation to the lips. Ideally, the upper and lower lip should lie behind this line by 4.3 ± 2 mm and 2 ± 2 mm, respectively [22]. Leonardo da Vinci identified several human aesthetic concepts, including the following [21,23]:
  • A human ear and nose should be equally long;
  • The mouth width should be equal to the chin-to-lips distance;
  • The distance between the chin and the hairline should be 1/10 of the body height, the head should be 1/8 of the body height;
  • Chin, nostrils, eyebrows, and hairline should enclose three equal facial thirds.
The proportions and alignment of individual teeth are also an important aspect of facial aesthetics, and the Golden ratio can be observed repeatedly. In the en face view, the central incisors’ height and width form a Golden rectangle. The width ratios of central and lateral incisors, lateral incisor and canine, and canine and first premolar are all Golden [15]. On top of that, new golden ratios have been studied: the eye-to-eye distance should be 46% of the face width and the eyes-to-mouth vertical distance should be 36% of the face height, so the overall concept of facial beauty is becoming increasingly complex [24]. All in all, keeping track of all these figures and relationships is a strenuous mental exercise.
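As an arithmetic illustration of the golden width proportions mentioned above, the following minimal R sketch derives the apparent (en face) widths of the successive teeth from a central incisor width of 8.5 mm; the starting value is hypothetical and serves only to show the relationship.

    # Golden width ratios in the en face view: each successive (perceived)
    # tooth width is the previous one divided by the Golden ratio (~1.618).
    golden  <- (1 + sqrt(5)) / 2          # ~1.618
    central <- 8.5                        # hypothetical central incisor width (mm)
    lateral <- central / golden           # central : lateral  = Golden
    canine  <- lateral / golden           # lateral : canine   = Golden
    first_premolar <- canine / golden     # canine  : premolar = Golden
    round(c(central = central, lateral = lateral,
            canine = canine, first_premolar = first_premolar), 2)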
Orthodontists conduct clinical examinations of each patient who seeks treatment—whether to correct a malocclusion and/or to enhance their appearance. Diagnosis is the single most important part of the entire treatment. Unless the diagnosis is correct, the treatment will never address all of the patient’s needs. It is not surprising that facial attractiveness assessments carried out by orthodontists and laypersons vary significantly [25]. Still, patients readily assess their smile and teeth with reasonable accuracy when it comes to symmetry (interestingly, men being more accurate than women), so they should also be consulted and become a partner, rather than a mere subordinate, in the whole diagnosis and treatment process [26,27]. Proffit’s soft tissue paradigm has made diagnosis even more complex, since occlusion, although still important, is no longer the sole driving factor in current treatment concepts [3,28]. Respect for soft tissues and overall appearance has received growing attention, in line with patients’ expectations regarding their looks.
Grasping the difficult concept of human beauty poses a major challenge, not only for artists but also for orthodontists. It appears, though, that richer and more complex information may allow beauty to be decoded to some degree. However, beauty by definition remains a very human concept, since it primarily concerns being beautiful to other humans. With the advances in digital technologies, traditional examination methods, such as stone casts and analogue portrait photographs, have become obsolete. Intraoral and extraoral scanning, CBCT (cone-beam computed tomography), and digital photography allow all relevant information about a patient to be easily stored in a computer, providing trained professionals with a vast number of ways to process the information further. Digital measurements, automated cephalometric analysis, and virtual reality are just a few of the features that enable an orthodontist to be more precise as well as more efficient in diagnosis.
Recent research concluded that artificial intelligence (AI) has been the most studied digital technology in orthodontics in the past 5 years [29]. AI can help orthodontists in a plethora of ways: automated landmark identification and cephalometric analysis, orthognathic surgery planning, soft-tissue changes prediction and assessment, digital picture classification, treatment duration prediction, medico-dental diagnostics, facial biotype classification, supernumerary teeth diagnosis and management, decision making on tooth extractions, diagnosis of temporomandibular joint disorders, facial growth prediction, and airway obstruction diagnosis [30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52]. Using AI to propose an aesthetic result of the orthodontic treatment, however, still remains to be implemented.
The foundations of all these new possibilities within dentistry and orthodontics are the spectacular innovations in the field of computer science, data handling, and AI. The computer science behind these advances in AI is rooted in big data analysis, and there have been very significant developments in this field in modern times [53]. These technological advances led to many new, exciting applications in the fields of medicine and more broadly biology and technology—such as image classification based on scene-level semantic content [54,55,56,57,58,59]. In addition, AI can be used to communicate with patients and provide them with needed information, which saves human resources [60].
Studies have shown that there are parameters which are generally considered attractive in both women and men. The most salient features of facial attractiveness were symmetry, eyebrow thickness, jawbone prominence, and face height [61,62,63]. Contrary to widespread opinion, a study conducted by Przylipiak et al. [64] demonstrated with statistical significance that a smaller mouth was considered more attractive than full, thick lips. Most studies pertaining to face attractiveness use surveys, whereas some others use functional magnetic resonance imaging of specific regions of the brain, such as the caudate nucleus, orbitofrontal cortex, or amygdala. Based on such studies, it appears that facial proportions affect attractiveness assessment in a gender-specific manner [65].
With the advance of AI into the realm of picture editing, new possibilities present themselves in treatment planning. There are plenty of such applications on the market. Even decades ago, users could add various filters, frames, drawings, texts, or props to make their pictures more appealing. Moreover, picture enhancement technologies, such as directional wavelet transform, histogram equalization, algebraic reconstruction mode, cellular neural networks, partial differential equation, adaptive interpolation method, contrast stretching, alpha rooting, range compression, multi-frame super resolution, and adaptive iterative filtering, have been closely studied for over a decade [66]. Nowadays, AI-powered applications go even further: they combine image processing and machine learning to improve the overall appearance of facial features. The whole process comprises several steps [67] (a minimal code sketch of the conventional image-processing step follows the list):
  • Face detection—faces are detected and located using specific face-detecting algorithms that seek and identify facial landmarks and boundaries.
  • Facial feature analysis—facial features (eyes, mouth, skin, nose, ears, etc.) are analyzed, for which the facial geometry needs to be understood.
  • Image processing—certain aspects of the face are enhanced, using various techniques (brightness, contrast, color hue changes, etc.).
  • Deep learning models—models trained on huge datasets of facial images are employed (namely convolutional neural networks) to grasp representations and patterns of facial features.
  • Feature enhancement—based on learned patterns, specific facial features are enhanced (e.g., nose size, wrinkle reduction, and lip enlargement).
  • Generative adversarial networks—in some cases, a generator and a discriminator interact in such a way that a generated image is evaluated regarding its authenticity, which yields even more realistic results.
  • Customization and user preferences—based on the application, the type as well as the extent of desired enhancement is adjusted.
  • Real-time processing—in case of video processing, multiple processes are employed at the same time.
  • Quality assessment—in the end, the application itself evaluates the outcomes so that the modifications lead to a visually pleasing and natural-looking outcome.
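The sketch referred to above is a minimal R illustration of the conventional image-processing step only (brightness, saturation, and contrast adjustments) using the magick package; it is not FaceApp’s actual deep-learning pipeline, and the file names and adjustment values are hypothetical.

    # Conventional tonal adjustments on a portrait (illustrative values only).
    library(magick)
    face <- image_read("face_original.png")             # load the portrait
    face_adj <- image_modulate(face,
                               brightness = 105,        # +5% brightness
                               saturation = 102,        # +2% saturation
                               hue = 100)               # hue unchanged
    face_adj <- image_contrast(face_adj, sharpen = 1)   # mild contrast boost
    image_write(face_adj, "face_adjusted.png")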
Based on an Internet search (carried out on 6 March 2024), some of the most popular AI-powered face-enhancing applications are: Remini (AI Creativity S.r.l., Milan, Italy), Clipdrop (INIT ML, Montreuil, France), Lensa (Prisma Labs, Inc., Sunnyvale, CA, USA), PhotoApp (ScaleUp, Urla, Turkey), VanceAI (VanceAI Technology, Limited, Hong Kong, China), FaceApp (FaceApp Technology Limited, Limassol, Cyprus), Let’s Enhance (Let’s Enhance, Inc., San Francisco, CA, USA), Voila AI Artist (Wemagine.AI LLP, Richmond, BC, Canada), Reflect (BrainFeverMedia LLC, West Chester, OH, USA), Fotor (Chengdu Hengtu Technology Co., Ltd., Chengdu, China), Topaz Photo AI (Topaz Labs LLC, Dallas, TX, USA), Adobe Photoshop (Adobe Inc., San Jose, CA, USA), PhotoWorks (AMS Software, Wake Forest, NC, USA), BeFunky (Befunky Inc., Portland, OR, USA), PicMonkey Retouching Tools (Shutterstock Inc., New York, NY, USA), and Peachy—Face & Body Editor (Shantanu Pte. Ltd., Singapore) [68,69]. Based on our own research of the listed applications, FaceApp was chosen as the most user-friendly, popular, and potent application that yields enhanced yet realistic pictures. The application, released in 2017, uses deep learning—convolutional neural networks specifically—to process images [70]. FaceApp uses various algorithms that use complex data to extract statistical distributions of patterns, with the aim of predicting an outcome for an input without being explicitly programmed to do so [71,72]. Thus, FaceApp is much more than a compilation of various filters. Each picture is translated into a multidimensional vector, which may be further adapted and redistributed throughout the neural network [73].
Even though the basic concept of machine learning is understandable, the hidden layers of deep learning algorithms make neural networks more complex and difficult to understand [74]. However, it seems that FaceApp succeeds in making most images more pleasant to the eye and more attractive. It is not fully necessary to understand all the underlying processes to apply AI in daily practice (including clinical practice)—as long as the work is completed and ethical concerns are addressed. Because of the scarcity of studies evaluating AI-enhanced pictures of faces on an attractiveness scale, this paper investigates whether face pictures are manipulated by the FaceApp algorithms in such a way that human assessors find them more attractive than their original counterparts. After all, it has been clear for some time now that machine learning algorithms are neither objective nor neutral technologies, and thus, they can lead to biases and errors with multiple implications [72].

2. Materials and Methods

For this preliminary study, 25 male and 25 female faces generated by the AI were downloaded from the website Generated Photos (Newark, Delaware, USA, https://generated.photos, accessed on 5 December 2023 for male pictures and on 14 December 2023 for female pictures) based on the following criteria [75]:
  • Age: young adult;
  • Pose: front facing;
  • Ethnicity: Caucasian;
  • Pose: natural.
The AI-generated faces were used to eliminate GDPR (general data protection regulation)-related issues and to ensure the same projection of all faces (ideal light conditions, constant face angulation, and distance from the camera). The face pictures were downloaded and then further enhanced using AI. AI-powered face enhancement technologies, such as FaceApp, utilize deep learning to improve facial aesthetics by adjusting features such as symmetry and skin texture [70,71,72,73]. For this study, FaceApp (ver. 11.10, FaceApp Technology Limited, Limassol, Cyprus) with a premium account was used [76]. The “natural” filter for females (level 1) and the “star” filter for males (level 1) were used because the team of three authors concluded that these filters yielded the most realistic outcomes. The selection of female and male faces took place on 5 December 2023 and 14 December 2023, respectively, and based on the chosen statistical methods, the total number of pictures was set to 100.
The goal of the study was to monitor the frequency of individual answers when respondents chose which picture they found more attractive, as well as the frequency of point differences in the evaluation of the two versions of a given face. For this purpose, the Chi-square goodness-of-fit test was used, calculated with the function chisq.test from the stats package [77]. This test verifies whether the empirical probability distribution matches a given theoretical probability distribution; in other words, it measures how well a statistical model fits a set of observations.
In this study, for the question of which of the two versions of a picture the respondent found more attractive, agreement with the theoretical distribution in which both options are chosen equally often was tested. The idea is similar for the point differences between the AI-edited picture and the original picture. The differences were examined in this order because, in almost every case, the picture edited by artificial intelligence was evaluated as more attractive; therefore, a higher score for these pictures was expected, meaning that the difference should be positive or equal to 0. In this case, however, the differences do not have only two possible values, as was the case when choosing the more attractive picture, but several. Theoretically, these could be discrete values from −10 (where the edited picture has a score of 0 and the original a score of 10) to 10 (where the edited picture has 10 points and the original 0). In both cases, the correspondence of the observed distribution with a discrete uniform distribution was tested. The ggplot function from the package ggplot2 was used to display the plots [78].
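For illustration, a minimal R sketch of this test (with hypothetical counts for one face pair, not the study’s data) could look as follows:

    # Hypothetical counts: how many respondents preferred each version.
    observed <- c(enhanced = 120, original = 39)

    # Chi-square goodness-of-fit test against a discrete uniform distribution
    # (both options expected equally often), using chisq.test from stats.
    chisq.test(observed, p = c(0.5, 0.5))

    # Bar chart of the observed preferences, using ggplot from ggplot2.
    library(ggplot2)
    df <- data.frame(version = names(observed), count = as.numeric(observed))
    ggplot(df, aes(x = version, y = count)) + geom_col()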
Based on a discussion with a psychologist, who considered assessing 100 faces in one go overly tiring and a potential threat to valid results, it was decided to create two separate online forms to be sent to 1800 respondents in total. The first form contained the 50 pictures of male faces, and the second form, distributed a week later, contained the 50 pictures of female faces. The forms were created so that each page showed one altered and one unaltered face picture. The respondents were asked to state their age by selecting one of four age groups. They were then asked to evaluate which face looked more attractive and to score each picture on a scale of 1–10 (1—least attractive, 10—most attractive). To reduce assessor fatigue, the second form was sent to the respondents after one week. The scores were recorded, and the numerical data were analyzed using R Studio (ver. 4.1.1, R Core Team, Vienna, Austria); the level of statistical significance was set to α = 0.05 [79]. To assess the interrater reliability, Fleiss’ Kappa for m Raters was used [80], computed with the kappam.fleiss function from the irr package [81].
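A minimal sketch of this reliability check is shown below; the ratings matrix is simulated (one row per face pair, one column per rater) and does not represent the study’s data.

    library(irr)
    set.seed(1)
    # Simulated ratings: 50 face pairs (rows) rated by 159 raters (columns),
    # each cell recording which version the rater preferred.
    ratings <- matrix(sample(c("AI-enhanced", "original"), 50 * 159,
                             replace = TRUE, prob = c(0.9, 0.1)),
                      nrow = 50, ncol = 159)

    # Fleiss' Kappa for m raters (kappam.fleiss from the irr package).
    kappam.fleiss(ratings)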
For the sample size calculation, a trial questionnaire was administered to the research team and their co-workers (20 probands in total) in order to determine the presumed population proportion that would consider the AI-enhanced pictures more attractive. Afterwards, Equation (1) was used [82]:
$$\text{Sample size} = \frac{\dfrac{z^2 \times P \times (1 - P)}{E^2}}{1 + \dfrac{z^2 \times P \times (1 - P)}{E^2 \times N}} \quad (1)$$
where z is the z-score for the given confidence level (1.96 for 0.95 confidence level), P is the population proportion (0.9), E is the margin of error (0.05), and N is the population size (1800). After substituting into Equation (1), a sample size of 129 respondents was obtained, as demonstrated by Equation (2).
$$\text{Sample size} = \frac{\dfrac{1.96^2 \times 0.9 \times (1 - 0.9)}{0.05^2}}{1 + \dfrac{1.96^2 \times 0.9 \times (1 - 0.9)}{0.05^2 \times 1800}} = 128.43 \quad (2)$$
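A minimal R check of this calculation, using the stated inputs, reproduces the reported value:

    # z-score, presumed proportion, margin of error, and population size.
    z <- 1.96; P <- 0.9; E <- 0.05; N <- 1800

    unadjusted  <- z^2 * P * (1 - P) / E^2            # numerator of Equation (1)
    sample_size <- unadjusted / (1 + unadjusted / N)  # finite-population correction
    round(sample_size, 2)                             # 128.43, rounded up to 129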
The five pairs of male and the five pairs of female faces with the highest attractiveness score differences between the original and AI-modified versions were imported into the graphical software GIMP (ver. 2.10.36, Free Software Foundation, Inc., Boston, MA, USA) [83]. In total, 20% of the faces (those with the highest attractiveness differences) were deliberately chosen in order to identify the underlying changes and to correlate the measurements with facial attractiveness. The two layers (before and after AI enhancement) were superimposed, and by manipulating translucency, the distances between the selected facial anthropometric points (Table 1) were measured in both pictures and then compared. The anthropometric points to be studied were selected by an anthropologist. The percentage changes of the selected parameters and their mean values were calculated and analyzed.
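As a minimal sketch of this comparison (the pixel values below are hypothetical measurements, not the study’s data), the percentage change of a single parameter could be computed as follows:

    # Hypothetical GIMP measurements (in pixels) of one distance, Li-Cph,
    # in the original and in the AI-enhanced picture.
    li_cph_original <- 142
    li_cph_enhanced <- 158

    # Percentage change of the parameter relative to the original picture.
    pct_change <- (li_cph_enhanced - li_cph_original) / li_cph_original * 100
    round(pct_change, 1)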
In addition, to pit the power of AI against AI itself, ChatGPT 4 (OpenAI, San Francisco, CA, USA) was asked to compare each pair of pictures [84]. The results were then analyzed in several ways:
  • Which of the two faces (AI-enhanced or original) was considered more appealing?
  • What was the difference in the numeric attractiveness scores they achieved?
  • What facial modifications were responsible for the biggest score differences?
  • Were the observed changes located in the lower facial third?
  • Could an orthodontic treatment influence the studied changes?

3. Results

The first round of the questionnaire (male faces) was submitted by 159 respondents. The gender and age compositions of these respondents are depicted in Figure 1 and Figure 2, respectively. The second round of the questionnaire (female faces) was submitted by 282 respondents. The gender and age compositions of the second-round respondents are depicted in Figure 3 and Figure 4, respectively.
Table 2 shows the percentage of respondents who judged the original or the AI-enhanced picture as the more attractive one—for 25 pairs of male and female faces each. Regarding the male faces assessment, all observations showed that the FaceApp-enhanced pictures were more attractive, and the observations were statistically significant (p < 0.05). For male faces, the interrater reliability rate was 0.01, which stands for a slight agreement between raters. In all female faces, except for one, the results were similar: the respondents judged the FaceApp-enhanced pictures as more attractive, and the statistical analysis showed statistical significance (p < 0.05). For female faces, the interrater reliability rate was 0.02. This number also indicated a slight agreement between raters. Neither age nor gender of the respondents was shown to have a significant impact on the attractiveness score.
The pair of male pictures (original and after the AI enhancement) and the attractiveness score difference for the pair are depicted in Figure 5 and Figure 6, respectively. The pair of female pictures (original and after the AI enhancement) and the attractiveness score difference for the pair are depicted in Figure 7 and Figure 8, respectively.
The five pairs of male and female faces with the highest difference between the attractiveness score of the original and AI-enhanced pictures were further analyzed to find out which anthropometric distances (Table 1) changed the most. The results for male faces and female faces are shown in Figure 9 and Figure 10, respectively.
The results show that in 49 out of 50 cases, the AI-enhanced picture was rated as more attractive compared to the original one. The results are statistically significant, with age and gender being of no statistical importance. On average, the AI-enhanced pictures of male and female faces were rated 1 point higher on the attractiveness scale from 1 to 10 (10 being the most attractive) than the original faces.
The facial changes that were correlated with the biggest changes in attractiveness score were most related to lips, eyes, nose, and chin (Figure 11). The most prominent changes were related to lip fullness (Li-Cph distance), followed by the eye size (Ps-Pi as well as Ect-Ect) and lower face height (Sl-Me as well as Prn-Me). Other important changes were related to nose width (Al-Al), lower jaw width (Go-Go), and mouth width (Ch-Ch). Less prominent changes were observed in the dimensions Ft-Ft, Zy-Zy, Sn-Me, Fz-Fz, N-Me, and Tr-Me. The order of most prominent changes was different for male and female faces. In male faces, the lip fullness, vertical eye dimension, distance between the nose tip and chin, and nose and mouth width changed the most. On the other hand, in female faces, the most prominent changes were observed in the lip fullness, vertical eye dimension, chin height, and facial horizontal dimensions at the eyes and eyebrows levels (Ect-Ect and Ft-Ft).
As noted by the research team, the pictures differed in skin texture, with the enhanced versions being somewhat brighter and less wrinkled. The enhanced female eyes were bigger, with darker contours. In contrast, the enhanced male eyes decreased in height yet became more prominent.
Upon being asked about differences between an enhanced (first) and an original (second) face picture, ChatGPT 4 identified the following differences:
  • Hair: the first picture depicts more abundant and slightly wavy hair surrounding the face, as opposed to smoother hair reaching behind the ears in the second picture.
  • Chin: the chin in the first picture is wider and more rounded, as opposed to a more prominent and narrower chin in the second picture.
  • Smile: the person in the first picture has a wider smile, showing more teeth.
  • Nose: the nose has a wider ala nasi area in the first picture.
  • Eyes: in the first picture, the eyes are more open and seem to be bigger than in the second picture.
  • Face: the face in the first picture has a more rounded and smoother shape, whereas the face in the second picture has rougher and sharper edges.
  • Cheeks: the first face has fuller cheeks compared to the second face.

4. Discussion

In today’s world, our virtual representation matters more and more; how we present ourselves is increasingly important not just in professional but also in private life [85]. More and more often, the first impression we make comes from an online photo. This has led to the widespread availability of photo-enhancing tools [86].
AI has already handled, in practical terms, the complex cultural-psychological and philosophical question of human beauty [87]. Our research underlines these findings, since the ability to enhance photos is anchored in the capacity to estimate which features are considered beautiful. AI has been shown to be able to extract measurements and evaluate their expected effect on facial beauty [88]. These are necessary steps for choosing the features to be enhanced, and they support our finding that AI can handle the question of beauty with reasonable success. In addition, previous research found that the increase in attractiveness of digitally altered photos is to a considerable extent a result of improved skin texture [89].
Our study found that AI-enhanced images were consistently rated as more attractive than the original pictures, suggesting that these technologies align well with general human aesthetic preferences. The AI often increased the lower face height and lip fullness, and these values can be influenced by orthodontic tooth movements [90,91]. Previous research has shown that lower face height, when outside of a specific range for males and females, is a good predictor of orthodontic treatment need [90].
The reason for these differences could be two-fold. The AI-generated male and female faces may differ from what is considered the average male and female face, respectively (based on the datasets used to train the AI). The other reason might be that a different enhancement filter was used for each of the two groups—because different enhancement filters were available for each gender (as guessed by the application). While researching various face enhancement applications prior to this pilot study, we discovered that FaceApp offered the option to change the detected gender for some female and male faces. In future research, it would be very interesting to choose an enhancement application that offers the same enhancement filter for all patients, so that changes in male and female faces can be compared without a different enhancement filter possibly affecting the results.
It appears that human judgement of beauty is a very complex mechanism, and thus, precisely analyzing it still poses a challenge [92]. The integration of AI-powered face enhancement technologies in orthodontic treatment planning represents a significant advancement in the field, and as a result, the number of studies on various orthodontic applications of AI and machine learning has increased exponentially [93]. These technologies, leveraging sophisticated algorithms and deep learning models, provide orthodontists with powerful tools to enhance diagnostic precision and treatment outcomes [29].
Importantly, many of the AI’s enhancements are located in the lower facial third, which is the area that can be changed the most by orthodontics [90,91,94]. This synergy suggests that AI can help orthodontists visualize potential treatment outcomes, improving communication and supporting patients’ acceptance. It may also support patients’ decision making when it comes to virtual treatment plan objectives. However, this paper suggests that mere visualization of treatment outcomes is not the most crucial benefit of deep learning algorithms in orthodontics. Orthodontists could even go so far as to set their treatment goals based on the AI-enhanced pictures of patients. A patient comes for a consultation, their pictures are taken and processed using AI, and within moments, the clinician has a roadmap or even ready-made solutions to the patient’s aesthetics-related challenges. As utopian as this may sound, if huge amounts of data are fed into the system over time, including pictures and rigorous assessments of all inputs (intraoral scans, X-rays or CBCT scans, dental and medical history, and even body height), together with treatment plans designed for those patients by state-of-the-art orthodontists, deep learning could make sense of these data and, in the future, possibly provide the best possible treatment plans by itself. Many more studies need to be well designed and executed, and their results analyzed with an emphasis on comparison with current treatment standards and best practices, in order to move forward on the AI highway.
However, there are many underlying ethical, legal, and social implications of the use of AI in healthcare, and these need to be discussed further [95,96,97,98,99]. In fact, more than forty different ethical issues regarding the use of AI in dentistry have been identified [100]. For example, the reinforcement of societal beauty biases must be addressed, and transparency in AI algorithms must be ensured. Privacy, anonymity, security, and informed consent are but a few of the most common challenges regarding novel digital technologies in dentistry [101]. Privacy and confidentiality have special importance, since they are two cornerstones of healthcare ethics, and data safety is of utmost importance in the field of healthcare. Most of today’s AI systems use the uploaded data and interactions to learn and improve themselves, thus possibly incorporating parts of those data and interactions into their databases and later replies. This must be considered when uploading healthcare information to an AI. The most unacceptable situation, in which an AI learns about a person’s healthcare issues and later uses those data to answer third-party queries, must be stringently avoided. Thus, any software company using AI to handle patient data needs to implement the strictest comprehensive privacy and data control protocols and consistently prioritize privacy when building and maintaining its AI. The company running the AI is responsible for the actions of that AI and for the fate of any sensitive data uploaded to the system [74]. Another significant concern is the variability in AI algorithm performance across different populations and clinical settings, necessitating adaptation for diverse patient groups [102].
Another important aspect is that AI is, as of now, unable to consider how realistic the facial changes shown by the software are. The limitations of orthodontic treatment should be observed and respected, and the patient needs to be informed about possible treatment options and limitations from the very beginning, to prevent disappointment and unrealistic expectations based on AI-powered face enhancement. This is especially true for the facial effects of non-surgical orthodontics, but it also applies to any orthodontic therapy [3]. AI-enhanced pictures, like all predictions and plans, are inherently fickle and capricious. Implementing them is subject to many factors outside of the clinician’s control, whether removable appliances, fixed appliances, or aligners are used. This is something patients should be aware of [74].
It may be a challenge to compare the new AI-assisted diagnosis and treatment planning with their traditional counterparts. Standard measurements of facial beauty and proportions have so far often been based on older anthropometric data, which were limited by the number of inputs human researchers could handle—the number of photos, the number of patients, etc. [3]. AI can acquire and handle much larger and more complex modern datasets. Furthermore, it can continuously improve and adapt to modern trends and concepts of beauty, since more and more data can be added constantly [93]. However, the constant nature of traditional standard measurements may continue to offer a fixed reference point and thus remain a valuable source for consideration. The traditional datasets may also be more transparent than the decision making of AI [103]. Using AI-driven diagnosis and aesthetic planning helps to add a more distanced and more objective input; however, the final decisions should be made by the treating doctor, who should be cautious regarding the limitations of the treatment plan.
Our study has some limitations that need to be addressed as well. The participants were only asked to identify themselves with an age group. This was suggested by a psychologist, because asking about one’s exact age might make some respondents feel uncomfortable and affect their scoring. However, having no exact data about the age or profession of the respondents makes more complex analyses impossible. It is well known that one’s profession and experience influence one’s judgement. For example, orthodontists and laypersons give different importance to various facial features when it comes to facial attractiveness assessment [25].
Another limitation was the total number of respondents. Out of 1800, only 159 and 282 respondents in the first and second rounds, respectively, submitted completed forms. Given the limited number of respondents, it would have been very difficult to perform more complex analyses properly, as the more categories of assessors there are, the more respondents are needed overall. Despite splitting the original form into two forms, it is possible that many respondents considered the form dull and repetitive and gave up on it halfway through. A possible solution would be to fully explain the aim of the study to the participants to make them feel more involved in and responsible for this research; however, this could also impact their judgement due to biases of various types [104,105].
Our research studied changes of AI-generated faces, not actual human faces. Naturally, the changes of human faces could be quite different from the changes of AI-generated faces, based on the mere fact that they do not look the same. However realistic the generated faces are, they are different from real-world patients. As described in the Materials and Methods section, we did not ask for any faces with orthodontic anomalies to be generated. In our experience, however, patients with perfect teeth rarely come for diagnosis and treatment. Using AI-generated faces thus poses a possible limitation to our study, as the faces did not show any obvious orthodontic anomalies. On the contrary, all the original pictures were quite average faces [106]. Our choice was based on our aim of having a preferably neutral starting point and on increased protection of personal data. When we upload photos to most current AI systems, we are also letting the AI learn from the uploaded data and our interactions with the AI [74]. Before our research, the benefits of AI facial enhancement of photos were not clearly established, so using actual photos and asking for the patients’ consent in this regard was not yet scientifically underpinned to a high enough level to support such actions. Therefore, we took inspiration from the pharmaceutical industry, in which computer-aided drug design is streamlined and successfully integrated, using computers and programs first and only later proceeding to other stages of drug discovery [107,108,109]. We modified AI-generated faces with AI algorithms to obtain results that could potentially be used as a basis for our future study with actual patients. This research was a pilot study, so a rather simple study design was chosen. In more complex future research on this topic, all the mentioned limitations should be prevented. Judging by the increase in the number of articles on orthodontics and AI, the role of AI is sure to grow with time [29]. Our research suggests using artificial intelligence as a tool for analyzing facial aesthetics and highlighting areas which could be improved, and it outlines possible improvements and directions which merit further scientific research.
AI’s reliability and transparency are areas that we feel would benefit from further research, and further focus should also be placed on the protection of private data, which is of key importance in healthcare settings. A possible area for further research is AI systems that use a local database without uploading, “learning from”, and memorizing data of healthcare importance—so-called federated learning. In such a system, the model (the AI software) could be trained with data without actually uploading them from a local computer.
AI has great potential to help humanity with boring or risky tasks, but it also has potentially harmful effects [74,110]. AI can significantly affect an individual’s life when used to discover illness or evaluate risks. Further research should focus on AI’s effect on both individuals and society. Since AI may have negative effects that are difficult to predict and control, researchers should focus on accountability, transparency, good software, and quality sources of information for AI to help it reach its maximal potential, safety, and benefits [74,111]. Additionally, it would be interesting to combine artificial intelligence with smartphone applications or commonly used computer software in order to test the reliability of AI-based programs in daily clinical practice more easily [112,113].
Future research could also focus on potential biases in the AI-driven improvements and on the psychological effects of AI-enhanced images. Bilateral symmetry, averageness, facial normality, youthfulness, clean skin, sexual dimorphism, and the Golden ratio—these features and parameters greatly influence attractiveness [3,21,114,115,116]. Other research teams add facial profile and proportions to the attractiveness matrix [117,118]. Our results show that AI changes facial proportions and features in a favorable way, and thus, we suppose that orthodontists could, after further technological development, modify treatment plans according to the AI’s suggestions. After all, one could consider AI and its algorithms a form of crowdsourcing, taking models from various databases into account. In the past, crowdsourcing has proven efficient, leading to novel knowledge [119]. Data collection continues to play a pivotal role in further research, and it may be wise to streamline modern methods of information retrieval.
However, AI has been proven to be biased before. AI uses huge datasets to train itself, and these datasets typically come from the Internet. As a result, models encode the inequalities, stereotypes, and power asymmetries present throughout society, as they are commonly trained on white faces [120]. Moreover, a phenomenon termed AI hyperrealism has emerged, which refers to the fact that AI-generated faces are perceived as more “human” than actual human faces; this points towards the unnaturally high credibility of AI, which is a disadvantage [106]. If the programming or other aspects of deep learning go awry, or the AI is exposed to some challenges for the first time, automated treatment planning might produce undesired results. Following such incorrect directions by untrained providers could potentially harm patients and hinder the achievement of desired treatment outcomes, not to mention creating patients’ unrealistic expectations from the very beginning, which could also have a detrimental effect on their compliance and willingness to continue the treatment. Insufficient training is one of the reasons why some authors stress the importance of reviewing the results given by AI to prevent mistakes and damage [121]. For that, however, one needs to be both an educated and skilled clinician and analyst. That is why it seems that, for the time being, orthodontists are indispensable and cannot be fully replaced by machines.
Without any doubt, AI has a long way to go to fully develop its potential for the good of all. Guidelines on AI development and use that fit human-rights-based frameworks need to be further studied and developed [122,123]. With careful integration, continued human oversight and input, and ethical management, AI technologies can revolutionize and greatly support orthodontic treatment planning, leading to more personalized and effective patient care.

Author Contributions

Conceptualization, J.T. and Ľ.O.; methodology, J.T., K.M. and A.F.; software, K.M., A.F. and D.S.B.; validation, J.T., M.Z. and A.T.; formal analysis, M.Z. and A.T.; investigation, J.T., Ľ.O., K.M. and A.F.; resources, J.T. and M.Z.; data curation, D.S.B.; writing—original draft preparation, J.T. and M.Z.; writing—review and editing, J.T., M.Z. and A.T.; visualization, J.T. and D.S.B.; supervision, A.T.; project administration, J.T., K.M. and A.F. Authors J.T. and M.Z. contributed to the article equally. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Kultúrna a Edukacná Grantová Agentúra MŠVVaŠ SR (KEGA)—grant No. 054UK-4/2023, Vedecká Grantová Agentúra MŠVVaŠ SR a SAV (VEGA)—grant No. 1/0036/23, and Agentúra na Podporu Výskumu a Vývoja (APVV)—grant No. APVV-21-0173.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. McDermott, A. What Was the First “Art”? How Would We Know? Proc. Natl. Acad. Sci. USA 2021, 118, e2117561118.
  2. Manjula, W.S.; Sukumar, M.R.; Kishorekumar, S.; Gnanashanmugam, K.; Mahalakshmi, K. Smile: A Review. J. Pharm. Bioallied Sci. 2015, 7, S271–S275.
  3. Proffit, W.R.; Fields, H.W.; Larson, B.; Sarver, D.M. Contemporary Orthodontics, 6th ed.; Mosby: St. Louis, MO, USA, 2018.
  4. Kiyak, H.A. Cultural and Psychologic Influences on Treatment Demand. Semin. Orthod. 2000, 6, 242–248.
  5. Arian, H.; Alroudan, D.; Alkandari, Q.; Shuaib, A. Cosmetic Surgery and the Diversity of Cultural and Ethnic Perceptions of Facial, Breast, and Gluteal Aesthetics in Women: A Comprehensive Review. Clin. Cosmet. Investig. Dermatol. 2023, 16, 1443–1456.
  6. Edler, R.J. Background Considerations to Facial Aesthetics. J. Orthod. 2001, 28, 159–168.
  7. Bos, A.; Hoogstraten, J.; Prahl-Andersen, B. Expectations of Treatment and Satisfaction with Dentofacial Appearance in Orthodontic Patients. Am. J. Orthod. Dentofac. Orthop. 2003, 123, 127–132.
  8. Yao, J.; Li, D.-D.; Yang, Y.-Q.; McGrath, C.P.J.; Mattheos, N. What Are Patients’ Expectations of Orthodontic Treatment: A Systematic Review. BMC Oral Health 2016, 16, 19.
  9. Hiemstra, R.; Bos, A.; Hoogstraten, J. Patients’ and Parents’ Expectations of Orthodontic Treatment. J. Orthod. 2009, 36, 219–228.
  10. Damle, S.G. Creativity Is Intelligence. Contemp. Clin. Dent. 2015, 6, 441–442.
  11. John Ray Quotes. Available online: https://www.brainyquote.com/quotes/john_ray_119945 (accessed on 3 March 2024).
  12. Patusco, V.; Carvalho, C.K.; Lenza, M.A.; Faber, J. Smile Prevails over Other Facial Components of Male Facial Esthetics. J. Am. Dent. Assoc. 2018, 149, 680–687.
  13. Sarver, D.M. Dentofacial Esthetics: From Macro to Micro, 1st ed.; Quintessence Publishing Co, Inc.: Batavia, IL, USA, 2020; ISBN 978-1-64724-025-7.
  14. Trulsson, U.; Strandmark, M.; Mohlin, B.; Berggren, U. A Qualitative Study of Teenagers’ Decisions to Undergo Orthodontic Treatment with Fixed Appliance. J. Orthod. 2002, 29, 197–204; discussion 195.
  15. Berneburg, M.; Dietz, K.; Niederle, C.; Göz, G. Changes in Esthetic Standards since 1940. Am. J. Orthod. Dentofacial Orthop. 2010, 137, e1–e9; discussion 450–451.
  16. Sadrhaghighi, A.H.; Zarghami, A.; Sadrhaghighi, S.; Mohammadi, A.; Eskandarinezhad, M. Esthetic Preferences of Laypersons of Different Cultures and Races with Regard to Smile Attractiveness. Indian J. Dent. Res. 2017, 28, 156–161.
  17. Uribe, F.A.; Chandhoke, T.K.; Nanda, R. Chapter 1-Individualized Orthodontic Diagnosis. In Esthetics and Biomechanics in Orthodontics, 2nd ed.; Nanda, R., Ed.; W.B. Saunders: St. Louis, MO, USA, 2015; pp. 1–32. ISBN 978-1-4557-5085-6.
  18. Svedström-Oristo, A.-L.; Pietilä, T.; Pietilä, I.; Alanen, P.; Varrela, J. Morphological, Functional and Aesthetic Criteria of Acceptable Mature Occlusion. Eur. J. Orthod. 2001, 23, 373–381.
  19. Kasrovi, P.M.; Meyer, M.; Nelson, G.D. Occlusion: An Orthodontic Perspective. J. Calif. Dent. Assoc. 2000, 28, 780–788.
  20. Andrews, L.F. The Six Keys to Normal Occlusion. Am. J. Orthod. 1972, 62, 296–309.
  21. Naini, F.B. Facial Aesthetics: Concepts & Clinical Diagnosis; Wiley-Blackwell: Hoboken, NJ, USA, 2013; p. 434. ISBN 978-1-4051-8192-1.
  22. Chhibber, A.; Upadhyay, M.; Nanda, R. Chapter 13-Class II Correction with an Intermaxillary Fixed Noncompliance Device: Twin Force Bite Corrector. In Esthetics and Biomechanics in Orthodontics, 2nd ed.; Nanda, R., Ed.; W.B. Saunders: St. Louis, MO, USA, 2015; pp. 217–245. ISBN 978-1-4557-5085-6.
  23. Naini, F.B.; Moss, J.P.; Gill, D.S. The Enigma of Facial Beauty: Esthetics, Proportions, Deformity, and Controversy. Am. J. Orthod. Dentofac. Orthop. 2006, 130, 277–282.
  24. Pallett, P.M.; Link, S.; Lee, K. New “Golden” Ratios for Facial Beauty. Vis. Res. 2010, 50, 149–154.
  25. Ren, H.; Chen, X.; Zhang, Y. Correlation between Facial Attractiveness and Facial Components Assessed by Laypersons and Orthodontists. J. Dent. Sci. 2021, 16, 431–436.
  26. Chrapla, P.; Paradowska-Stolarz, A.; Skoskiewicz-Malinowska, K. Subjective and Objective Evaluation of the Symmetry of Maxillary Incisors among Residents of Southwest Poland. Symmetry 2022, 14, 1257.
  27. Ursano, A.M.; Sonnenberg, S.M.; Lapid, M.I.; Ursano, R.J. The Physician–Patient Relationship. In Tasman’s Psychiatry; Tasman, A., Riba, M.B., Alarcón, R.D., Alfonso, C.A., Kanba, S., Ndetei, D.M., Ng, C.H., Schulze, T.G., Lecic-Tosevski, D., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 1–28. ISBN 978-3-030-42825-9.
  28. Proffit, W.R. The Soft Tissue Paradigm in Orthodontic Diagnosis and Treatment Planning: A New View for a New Century. J. Esthet. Dent. 2000, 12, 46–49.
  29. Tomášik, J.; Zsoldos, M.; Oravcová, Ľ.; Lifková, M.; Pavleová, G.; Strunga, M.; Thurzo, A. AI and Face-Driven Orthodontics: A Scoping Review of Digital Advances in Diagnosis and Treatment Planning. AI 2024, 5, 158–176.
  30. Subramanian, A.K.; Chen, Y.; Almalki, A.; Sivamurthy, G.; Kafle, D. Cephalometric Analysis in Orthodontics Using Artificial Intelligence—A Comprehensive Review. BioMed Res. Int. 2022, 2022, 1880113.
  31. Rauniyar, S.; Jena, S.; Sahoo, N.; Mohanty, P.; Dash, B.P. Artificial Intelligence and Machine Learning for Automated Cephalometric Landmark Identification: A Meta-Analysis Previewed by a Systematic Review. Cureus 2023, 15, e40934.
  32. Serafin, M.; Baldini, B.; Cabitza, F.; Carrafiello, G.; Baselli, G.; Del Fabbro, M.; Sforza, C.; Caprioglio, A.; Tartaglia, G.M. Accuracy of Automated 3D Cephalometric Landmarks by Deep Learning Algorithms: Systematic Review and Meta-Analysis. Radiol. Medica 2023, 128, 544–555.
  33. Thurzo, A.; Urbanová, W.; Novák, B.; Czako, L.; Siebert, T.; Stano, P.; Mareková, S.; Fountoulaki, G.; Kosnáčová, H.; Varga, I. Where Is the Artificial Intelligence Applied in Dentistry? Systematic Review and Literature Analysis. Healthcare 2022, 10, 1269.
  34. Mohaideen, K.; Negi, A.; Verma, D.K.; Kumar, N.; Sennimalai, K.; Negi, A. Applications of Artificial Intelligence and Machine Learning in Orthognathic Surgery: A Scoping Review. J. Stomatol. Oral Maxillofac. Surg. 2022, 123, e962–e972.
  35. Zhu, J.; Yang, Y.; Wong, H.M. Development and Accuracy of Artificial Intelligence-Generated Prediction of Facial Changes in Orthodontic Treatment: A Scoping Review. J. Zhejiang Univ. Sci. B 2023, 24, 974–984.
  36. Alqerban, A.; Alaskar, A.; Alnatheer, M.; Samran, A.; Alqhtani, N.; Koppolu, P. Differences in Hard and Soft Tissue Profile after Orthodontic Treatment with and without Extraction. Niger. J. Clin. Pract. 2022, 25, 325–335.
  37. Zhou, Q.; Gao, J.; Guo, D.; Zhang, H.; Zhang, X.; Qin, W.; Jin, Z. Three Dimensional Quantitative Study of Soft Tissue Changes in Nasolabial Folds after Orthodontic Treatment in Female Adults. BMC Oral Health 2023, 23, 31.
  38. Park, Y.S.; Choi, J.H.; Kim, Y.; Choi, S.H.; Lee, J.H.; Kim, K.H.; Chung, C.J. Deep Learning-Based Prediction of the 3D Postorthodontic Facial Changes. J. Dent. Res. 2022, 101, 1372–1379.
  39. Ryu, J.; Lee, Y.-S.; Mo, S.-P.; Lim, K.; Jung, S.-K.; Kim, T.-W. Application of Deep Learning Artificial Intelligence Technique to the Classification of Clinical Orthodontic Photos. BMC Oral Health 2022, 22, 454.
  40. Li, S.; Guo, Z.; Lin, J.; Ying, S. Artificial Intelligence for Classifying and Archiving Orthodontic Images. BioMed Res. Int. 2022, 2022, 1473977.
  41. Volovic, J.; Badirli, S.; Ahmad, S.; Leavitt, L.; Mason, T.; Bhamidipalli, S.S.; Eckert, G.; Albright, D.; Turkkahraman, H. A Novel Machine Learning Model for Predicting Orthodontic Treatment Duration. Diagnostics 2023, 13, 2740.
  42. Patcas, R.; Bornstein, M.M.; Schätzle, M.A.; Timofte, R. Artificial Intelligence in Medico-Dental Diagnostics of the Face: A Narrative Review of Opportunities and Challenges. Clin. Oral Investig. 2022, 26, 6871–6879.
  43. Ruz, G.A.; Araya-Díaz, P.; Henríquez, P.A. Facial Biotype Classification for Orthodontic Treatment Planning Using an Alternative Learning Algorithm for Tree Augmented Naive Bayes. BMC Med. Inform. Decis. Mak. 2022, 22, 316.
  44. Mladenovic, R.; Kalevski, K.; Davidovic, B.; Jankovic, S.; Todorovic, V.S.; Vasovic, M. The Role of Artificial Intelligence in the Accurate Diagnosis and Treatment Planning of Non-Syndromic Supernumerary Teeth: A Case Report in a Six-Year-Old Boy. Children 2023, 10, 839.
  45. Ryu, J.; Kim, Y.-H.; Kim, T.-W.; Jung, S.-K. Evaluation of Artificial Intelligence Model for Crowding Categorization and Extraction Diagnosis Using Intraoral Photographs. Sci. Rep. 2023, 13, 5177.
  46. Jha, N.; Lee, K.-S.; Kim, Y.-J. Diagnosis of Temporomandibular Disorders Using Artificial Intelligence Technologies: A Systematic Review and Meta-Analysis. PLoS ONE 2022, 17, e0272715.
  47. Almășan, O.; Leucuța, D.-C.; Hedeșiu, M.; Mureșanu, S.; Popa, Ș.L. Temporomandibular Joint Osteoarthritis Diagnosis Employing Artificial Intelligence: Systematic Review and Meta-Analysis. J. Clin. Med. 2023, 12, 942.
  48. Xu, L.; Chen, J.; Qiu, K.; Yang, F.; Wu, W. Artificial Intelligence for Detecting Temporomandibular Joint Osteoarthritis Using Radiographic Image Data: A Systematic Review and Meta-Analysis of Diagnostic Test Accuracy. PLoS ONE 2023, 18, e0288631.
  49. Moon, J.-H.; Shin, H.-K.; Lee, J.-M.; Cho, S.J.; Park, J.-A.; Donatelli, R.E.; Lee, S.-J. Comparison of Individualized Facial Growth Prediction Models Based on the Partial Least Squares and Artificial Intelligence. Angle Orthod. 2024, 94, 207–215.
  50. Jeong, Y.; Nang, Y.; Zhao, Z. Automated Evaluation of Upper Airway Obstruction Based on Deep Learning. BioMed Res. Int. 2023, 2023, 8231425.
  51. Tsolakis, I.A.; Kolokitha, O.-E.; Papadopoulou, E.; Tsolakis, A.I.; Kilipiris, E.G.; Palomo, J.M. Artificial Intelligence as an Aid in CBCT Airway Analysis: A Systematic Review. Life 2022, 12, 1894.
  52. Fountoulaki, G.; Thurzo, A. Change in the Constricted Airway in Patients after Clear Aligner Treatment: A Retrospective Study. Diagnostics 2022, 12, 2201.
  53. Amen, B. Sketch of Big Data Real-Time Analytics Model. In Proceedings of the Fifth International Conference on Advances in Information Mining and Management, Brussels, Belgium, 21–26 June 2015.
  54. Gulum, M.A.; Trombley, C.M.; Ozen, M.; Esen, E.; Aksamoglu, M.; Kantardzic, M. Why Are Explainable AI Methods for Prostate Lesion Detection Rated Poorly by Radiologists? Appl. Sci. 2024, 14, 4654.
  55. Thakur, G.S.; Sahu, S.K.; Swamy, N.K.; Gupta, M.; Jan, T.; Prasad, M. Review of Soft Computing Techniques in Monitoring Cardiovascular Disease in the Context of South Asian Countries. Appl. Sci. 2023, 13, 9555.
  56. Xiao, Q.; Lee, K.; Mokhtar, S.A.; Ismail, I.; Pauzi, A.L.b.M.; Zhang, Q.; Lim, P.Y. Deep Learning-Based ECG Arrhythmia Classification: A Systematic Review. Appl. Sci. 2023, 13, 4964.
  57. Mou, L.; Qi, H.; Liu, Y.; Zheng, Y.; Matthew, P.; Su, P.; Liu, J.; Zhang, J.; Zhao, Y. DeepGrading: Deep Learning Grading of Corneal Nerve Tortuosity. IEEE Trans. Med. Imaging 2022, 41, 2079–2091.
  58. Chen, T.; Tachmazidis, I.; Batsakis, S.; Adamou, M.; Papadakis, E.; Antoniou, G. Diagnosing Attention-Deficit Hyperactivity Disorder (ADHD) Using Artificial Intelligence: A Clinical Study in the UK. Front. Psychiatry 2023, 14, 1164433.
  59. Cheng, G.; Li, Z.; Yao, X.; Li, K.; Wei, Z. Remote Sensing Image Scene Classification Using Bag of Convolutional Features. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1735–1739.
  60. Surovková, J.; Haluzová, S.; Strunga, M.; Urban, R.; Lifková, M.; Thurzo, A. The New Role of the Dental Assistant and Nurse in the Age of Advanced Artificial Intelligence in Telehealth Orthodontic Care with Dental Monitoring: Preliminary Report. Appl. Sci. 2023, 13, 5212.
  61. Cellerino, A. Psychobiology of Facial Attractiveness. J. Endocrinol. Investig. 2003, 26, 45–48.
  62. Mogilski, J.K.; Welling, L.L.M. The Relative Contribution of Jawbone and Cheekbone Prominence, Eyebrow Thickness, Eye Size, and Face Length to Evaluations of Facial Masculinity and Attractiveness: A Conjoint Data-Driven Approach. Front. Psychol. 2018, 9, 2428.
  63. Thornhill, R.; Gangestad, S.W. Facial Attractiveness. Trends Cogn. Sci. 1999, 3, 452–460.
  64. Przylipiak, M.; Przylipiak, J.; Terlikowski, R.; Lubowicka, E.; Chrostek, L.; Przylipiak, A. Impact of Face Proportions on Face Attractiveness. J. Cosmet. Dermatol. 2018, 17, 954–959.
  65. Shen, H.; Chau, D.K.P.; Su, J.; Zeng, L.-L.; Jiang, W.; He, J.; Fan, J.; Hu, D. Brain Responses to Facial Attractiveness Induced by Facial Proportions: Evidence from an fMRI Study. Sci. Rep. 2016, 6, 35905.
  66. Iwasokun, G. Image Enhancement Methods: A Review. Br. J. Math. Comput. Sci. 2014, 4, 2251–2277.
  67. Gemini–Chat to Supercharge Your Ideas. Available online: https://gemini.google.com (accessed on 5 March 2024).
  68. Top 10 Best AI Face Apps Review 2024. Available online: https://topten.ai/face-apps-review/ (accessed on 6 March 2024).
  69. Sha, A. 8 Best AI Photo Enhancers in 2024 (Free and Paid). Available online: https://beebom.com/best-ai-photo-enhancers/ (accessed on 6 March 2024).
  70. Wirth, S. InterFace Experiments: FaceApp as Everyday AI. Interface Crit. 2023, 4, 159–169. [Google Scholar] [CrossRef]
  71. Sudmann, A. (Ed.) The Democratization of Artificial Intelligence: Net Politics in the Era of Learning Algorithms; transcript: Bielefeld, Germany, 2019; pp. 9–32. ISBN 978-3-8394-4719-2. [Google Scholar]
  72. Pasquinelli, M.; Joler, V. The Nooscope Manifested: AI as Instrument of Knowledge Extractivism. AI Soc. 2021, 36, 1263–1280. [Google Scholar] [CrossRef]
  73. Offert, F.; Bell, P. Perceptual Bias and Technical Metapictures: Critical Machine Vision as a Humanities Challenge. AI Soc. 2021, 36, 1133–1144. [Google Scholar] [CrossRef]
  74. Orhan, K.; Jagtap, R. (Eds.) Artificial Intelligence in Dentistry; Springer International Publishing: Cham, Switzerland, 2023; ISBN 978-3-031-43826-4. [Google Scholar]
  75. Generated Photos|Unique, Worry-Free Model Photos. Available online: https://generated.photos (accessed on 24 August 2024).
  76. FaceApp: Face Editor. Available online: https://www.faceapp.com/ (accessed on 24 August 2024).
  77. Cox, D.R. Karl Pearson and the Chisquared Test. In Goodness of Fit Tests and Model Validity; Birkhäuser: Boston, MA, USA, 2000; pp. 3–8. [Google Scholar]
  78. Wickham, W. Ggplot2: Elegant Graphics for Data Analysis; Springer: New York, NY, USA, 2016. [Google Scholar]
  79. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021. [Google Scholar]
  80. Fleiss, J.L. Measuring Nominal Scale Agreement among Many Raters. Psychol. Bull. 1971, 76, 378–382. [Google Scholar] [CrossRef]
  81. Gamer, M.; Lemon, J.; Fellows, I.; Singh, P. Irr: Various Coefficients of Interrater Reliability and Agreement; Version 0.84.1. Available online: https://CRAN.R-project.org/package=irr (accessed on 1 July 2024).
  82. Freiman, J.A.; Chalmers, T.C.; Smith, H.; Kuebler, R.R. The Importance of Beta, the Type II Error and Sample Size in the Design and Interpretation of the Randomized Control Trial. N. Engl. J. Med. 1978, 299, 690–694. [Google Scholar] [CrossRef]
  83. GIMP. Available online: https://www.gimp.org/ (accessed on 24 August 2024).
  84. GPT-4. Available online: https://openai.com/index/gpt-4/ (accessed on 24 August 2024).
  85. Goffman, E. The Presentation of Self in Everyday Life; Doubleday: Oxford, UK, 1959. [Google Scholar]
  86. Ozimek, P.; Lainas, S.; Bierhoff, H.-W.; Rohmann, E. How Photo Editing in Social Media Shapes Self-Perceived Attractiveness and Self-Esteem via Self-Objectification and Physical Appearance Comparisons. BMC Psychol. 2023, 11, 99. [Google Scholar] [CrossRef]
  87. Eisenthal, Y.; Dror, G.; Ruppin, E. Facial Attractiveness: Beauty and the Machine. Neural Comput. 2006, 18, 119–142. [Google Scholar] [CrossRef] [PubMed]
  88. Kagian, A.; Dror, G.; Leyvand, T.; Meilijson, I.; Cohen-Or, D.; Ruppin, E. A Machine Learning Predictor of Facial Attractiveness Revealing Human-like Psychophysical Biases. Vis. Res. 2008, 48, 235–243. [Google Scholar] [CrossRef]
  89. Gründl, M. Determinanten Physischer Attraktivität–Der Einfluss von Durchschnittlichkeit, Symmetrie Und Sexuellem Dimorphismus Auf Die Attraktivität von Gesichtern. Ph.D. Thesis, Universität Regensburg, Regensburg, Germany, 2013. [Google Scholar]
  90. Varlik, S.K.; Demirbaş, E.; Orhan, M. Influence of Lower Facial Height Changes on Frontal Facial Attractiveness and Perception of Treatment Need by Lay People. Angle Orthod. 2010, 80, 1159–1164. [Google Scholar] [CrossRef] [PubMed]
  91. Krishna Veni, S.; Elsayed, M.; Singh, I.S.; Nayan, K.; Varma, P.K.; Naik, M.K. Changes in Soft Tissue Variable of Lips Following Retraction of Anterioir Teeth-A Cephalometric Study. J. Pharm. Bioallied Sci. 2023, 15, S248–S251. [Google Scholar] [CrossRef] [PubMed]
  92. Dibot, N.M.; Tieo, S.; Mendelson, T.C.; Puech, W.; Renoult, J.P. Sparsity in an Artificial Neural Network Predicts Beauty: Towards a Model of Processing-Based Aesthetics. PLoS Comput. Biol. 2023, 19, e1011703. [Google Scholar] [CrossRef]
  93. Bichu, Y.M.; Hansa, I.; Bichu, A.Y.; Premjani, P.; Flores-Mir, C.; Vaid, N.R. Applications of Artificial Intelligence and Machine Learning in Orthodontics: A Scoping Review. Prog. Orthod. 2021, 22, 18. [Google Scholar] [CrossRef]
  94. Gao, J.; Wang, X.; Qin, Z.; Zhang, H.; Guo, D.; Xu, Y.; Jin, Z. Profiles of Facial Soft Tissue Changes during and after Orthodontic Treatment in Female Adults. BMC Oral Health 2022, 22, 257. [Google Scholar] [CrossRef] [PubMed]
  95. Čartolovni, A.; Tomičić, A.; Lazić Mosler, E. Ethical, Legal, and Social Considerations of AI-Based Medical Decision-Support Tools: A Scoping Review. Int. J. Med. Inform. 2022, 161, 104738. [Google Scholar] [CrossRef]
  96. Murphy, K.; Di Ruggiero, E.; Upshur, R.; Willison, D.J.; Malhotra, N.; Cai, J.C.; Malhotra, N.; Lui, V.; Gibson, J. Artificial Intelligence for Good Health: A Scoping Review of the Ethics Literature. BMC Med. Ethics 2021, 22, 14. [Google Scholar] [CrossRef] [PubMed]
  97. Morley, J.; Machado, C.C.V.; Burr, C.; Cowls, J.; Joshi, I.; Taddeo, M.; Floridi, L. The Ethics of AI in Health Care: A Mapping Review. Soc. Sci. Med. 2020, 260, 113172. [Google Scholar] [CrossRef]
  98. Keskinbora, K.H. Medical Ethics Considerations on Artificial Intelligence. J. Clin. Neurosci. 2019, 64, 277–282. [Google Scholar] [CrossRef]
  99. Hagendorff, T. The Ethics of AI Ethics—An Evaluation of Guidelines. Minds Mach. 2020, 30, 99–120. [Google Scholar] [CrossRef]
  100. Mörch, C.M.; Atsu, S.; Cai, W.; Li, X.; Madathil, S.A.; Liu, X.; Mai, V.; Tamimi, F.; Dilhac, M.A.; Ducret, M. Artificial Intelligence and Ethics in Dentistry: A Scoping Review. J. Dent. Res. 2021, 100, 1452–1460. [Google Scholar] [CrossRef]
  101. Favaretto, M.; Shaw, D.; De Clercq, E.; Joda, T.; Elger, B.S. Big Data and Digitalization in Dentistry: A Systematic Review of the Ethical Issues. Int. J. Environ. Res. Public Health 2020, 17, 2495. [Google Scholar] [CrossRef] [PubMed]
  102. Kazimierczak, N.; Kazimierczak, W.; Serafin, Z.; Nowicki, P.; Nożewski, J.; Janiszewska-Olszowska, J. AI in Orthodontics: Revolutionizing Diagnostics and Treatment Planning—A Comprehensive Review. J. Clin. Med. 2024, 13, 344. [Google Scholar] [CrossRef]
  103. Hulsen, T. Explainable Artificial Intelligence (XAI): Concepts and Challenges in Healthcare. AI 2023, 4, 652–666. [Google Scholar] [CrossRef]
  104. Bispo, J.P. Social Desirability Bias in Qualitative Health Research. Rev. Saude Publica 2022, 56, 101. [Google Scholar] [CrossRef]
  105. Mazor, K.M.; Clauser, B.E.; Field, T.; Yood, R.A.; Gurwitz, J.H. A Demonstration of the Impact of Response Bias on the Results of Patient Satisfaction Surveys. Health Serv. Res. 2002, 37, 1403–1417. [Google Scholar] [CrossRef] [PubMed]
  106. Miller, E.J.; Steward, B.A.; Witkower, Z.; Sutherland, C.A.M.; Krumhuber, E.G.; Dawel, A. AI Hyperrealism: Why AI Faces Are Perceived as More Real Than Human Ones. Psychol. Sci. 2023, 34, 1390–1403. [Google Scholar] [CrossRef]
  107. Niazi, S.K.; Mariam, Z. Computer-Aided Drug Design and Drug Discovery: A Prospective Analysis. Pharmaceuticals 2024, 17, 22. [Google Scholar] [CrossRef]
  108. Sabe, V.T.; Ntombela, T.; Jhamba, L.A.; Maguire, G.E.M.; Govender, T.; Naicker, T.; Kruger, H.G. Current Trends in Computer Aided Drug Design and a Highlight of Drugs Discovered via Computational Techniques: A Review. Eur. J. Med. Chem. 2021, 224, 113705. [Google Scholar] [CrossRef]
  109. Sadybekov, A.V.; Katritch, V. Computational Approaches Streamlining Drug Discovery. Nature 2023, 616, 673–685. [Google Scholar] [CrossRef] [PubMed]
  110. Urbina, F.; Lentzos, F.; Invernizzi, C.; Ekins, S. Dual Use of Artificial-Intelligence-Powered Drug Discovery. Nat. Mach. Intell. 2022, 4, 189–191. [Google Scholar] [CrossRef]
  111. Khalid, N.; Qayyum, A.; Bilal, M.; Al-Fuqaha, A.; Qadir, J. Privacy-Preserving Artificial Intelligence in Healthcare: Techniques and Applications. Comput. Biol. Med. 2023, 158, 106848. [Google Scholar] [CrossRef]
  112. Pascadopoli, M.; Zampetti, P.; Nardi, M.G.; Pellegrini, M.; Scribante, A. Smartphone Applications in Dentistry: A Scoping Review. Dent. J. 2023, 11, 243. [Google Scholar] [CrossRef]
  113. Lee, J.; Bae, S.-R.; Noh, H.-K. Commercial Artificial Intelligence Lateral Cephalometric Analysis: Part 1-the Possibility of Replacing Manual Landmarking with Artificial Intelligence Service. J. Clin. Pediatr. Dent. 2023, 47, 106–118. [Google Scholar] [CrossRef]
  114. Little, A.C.; Jones, B.C.; DeBruine, L.M. Facial Attractiveness: Evolutionary Based Research. Philos. Trans. R. Soc. Lond. B Biol. Sci. 2011, 366, 1638–1659. [Google Scholar] [CrossRef] [PubMed]
  115. Muñoz-Reyes, J.A.; Iglesias-Julios, M.; Pita, M.; Turiegano, E. Facial Features: What Women Perceive as Attractive and What Men Consider Attractive. PLoS ONE 2015, 10, e0132979. [Google Scholar] [CrossRef] [PubMed]
  116. Zheng, R.; Ren, D.; Xie, C.; Pan, J.; Zhou, G. Normality Mediates the Effect of Symmetry on Facial Attractiveness. Acta Psychol. 2021, 217, 103311. [Google Scholar] [CrossRef] [PubMed]
  117. He, D.; Gu, Y.; Sun, Y. Correlations between Objective Measurements and Subjective Evaluations of Facial Profile after Orthodontic Treatment. J. Int. Med. Res. 2020, 48, 0300060520936854. [Google Scholar] [CrossRef]
  118. Putrino, A.; Abed, M.R.; Barbato, E.; Galluccio, G. A Current Tool in Facial Aesthetics Perception of Orthodontic Patient: The Digital Warping. Dental. Cadmos. 2021, 89, 46–52. [Google Scholar] [CrossRef]
  119. Thurzo, A.; Stanko, P.; Urbanova, W.; Lysy, J.; Suchancova, B.; Makovnik, M.; Javorka, V. The WEB 2.0 Induced Paradigm Shift in the e-Learning and the Role of Crowdsourcing in Dental Education. Bratisl. Med. J. 2010, 111, 168–175. [Google Scholar]
  120. Birhane, A. The Unseen Black Faces of AI Algorithms. Nature 2022, 610, 451–452. [Google Scholar] [CrossRef]
  121. Duran, G.S.; Gökmen, Ş.; Topsakal, K.G.; Görgülü, S. Evaluation of the Accuracy of Fully Automatic Cephalometric Analysis Software with Artificial Intelligence Algorithm. Orthod. Craniofac. Res. 2023, 26, 481–490. [Google Scholar] [CrossRef]
  122. Fjeld, J.; Achten, N.; Hilligoss, H.; Nagy, A.; Srikumar, M. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI; Berkman Klein Center Research Publication: Cambridge, MA, USA, 2020. [Google Scholar]
  123. Floridi, L. Establishing the Rules for Building Trustworthy AI. Nat. Mach. Intell. 2019, 1, 261–262. [Google Scholar] [CrossRef]
Figure 1. Respondents to the “Male faces” questionnaire by gender.
Figure 2. Respondents to the “Male faces” questionnaire by age.
Figure 3. Respondents to the “Female faces” questionnaire by gender.
Figure 4. Respondents to the “Female faces” questionnaire by age.
Figure 5. Left—original, right—FaceApp-enhanced. This AI-enhanced picture (Male face 19) was voted the more attractive one of the pair by the highest percentage of respondents (90.57%).
Figure 6. The difference in attractiveness scores (rated from 1 to 10, with 10 being the most attractive) between the AI-enhanced and the original version of the male picture whose enhanced version was most frequently judged the more attractive of the pair (Male face 19).
Figure 7. Left—original, right—FaceApp-enhanced. This AI-enhanced picture (Female face 25) was selected as the more attractive one of the pair by the highest percentage of respondents (92.91%).
Figure 8. The difference in attractiveness scores (rated from 1 to 10, with 10 being the most attractive) between the AI-enhanced and the original version of the female picture whose enhanced version was most frequently judged the more attractive of the pair (Female face 25).
Figure 9. The changes in selected anthropometric distances on the male face resulting from AI enhancement of the pictures.
Figure 10. The changes in selected anthropometric distances on the female face resulting from AI enhancement of the pictures.
Figure 11. Percentage changes in the measured distances (ordered from the highest to the lowest overall value).
Table 1. The facial anthropometric points selected to compare the pairs of pictures.
Abbreviation | Craniometric Points | Significance
Ch-Ch | Ch—cheilion | Mouth width
Go-Go | Go—gonion | Mandible width
Zy-Zy | Zy—zygion | Middle face width
Tri-Me | Tri—trichion, Me—menton | Face height
Sl-Me | Sl—sublabiale, Me—menton | Chin height
Sn-Me | Sn—subnasale, Me—menton | Distance from the nose to the chin
N-Me | N—nasion, Me—menton | Anterior face height
Al-Al | Al—alare | Nose width
Ft-Ft | Ft—frontotemporale | Distance between the outermost points of the superciliary arches
Fz-Fz | Fz—frontozygomaticus | Distance between the outermost points on the frontozygomatic suture
Ps-Pi | Ps—palpebrale superius, Pi—palpebrale inferius | Eye height
Ect-Ect | Ect—exocanthion | Eye width
Li-Cph | Li—labrale inferius, Cph—crista philtri | Distance between the upper and lower lip lines
Prn-Me | Prn—pronasale, Me—menton | Distance between the nose tip and the chin
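Each distance in Table 1 is a straight-line measurement between a pair of facial landmarks identified on the photograph, and the effect of the AI enhancement can be expressed as a percentage change relative to the original value. The following R sketch illustrates the calculation for a single distance; the landmark coordinates are hypothetical values invented for the example rather than measurements from the study, and it is assumed that the original and the enhanced picture share the same scale and framing.

```r
# Hypothetical pixel coordinates of the cheilion (mouth-corner) landmarks
# in one original and one AI-enhanced picture; values are illustrative only.
original <- list(Ch_right = c(x = 412, y = 655), Ch_left = c(x = 538, y = 652))
enhanced <- list(Ch_right = c(x = 405, y = 653), Ch_left = c(x = 546, y = 650))

# Euclidean distance between two landmarks
landmark_distance <- function(p, q) sqrt(sum((p - q)^2))

ch_ch_original <- landmark_distance(original$Ch_right, original$Ch_left)
ch_ch_enhanced <- landmark_distance(enhanced$Ch_right, enhanced$Ch_left)

# Change in mouth width (Ch-Ch) expressed as a percentage of the original value
percent_change <- 100 * (ch_ch_enhanced - ch_ch_original) / ch_ch_original
round(percent_change, 2)
```

Repeating this calculation for each landmark pair in Table 1 yields percentage changes of the kind summarized in Figure 11.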
Table 2. The percentage of respondents selecting the given picture (male and female face, original and AI-enhanced) as the more attractive of the pair.
Picture pair | Original (%) | FaceApp (%) | p-Value | Picture pair | Original (%) | FaceApp (%) | p-Value
Male face 1 | 30.82 | 69.18 | 0.000013 | Female face 1 | 13.48 | 86.52 | <0.000001
Male face 2 | 36.48 | 63.52 | 0.000649 | Female face 2 | 51.06 | 48.94 | 0.7209
Male face 3 | 24.53 | 75.47 | <0.000001 | Female face 3 | 26.60 | 73.40 | <0.000001
Male face 4 | 15.72 | 84.28 | <0.000001 | Female face 4 | 12.06 | 87.94 | <0.000001
Male face 5 | 28.30 | 71.70 | <0.000001 | Female face 5 | 36.17 | 63.83 | 0.000003
Male face 6 | 15.72 | 84.28 | <0.000001 | Female face 6 | 14.89 | 85.11 | <0.000001
Male face 7 | 13.21 | 86.79 | <0.000001 | Female face 7 | 26.95 | 73.05 | <0.000001
Male face 8 | 15.72 | 84.28 | <0.000001 | Female face 8 | 34.04 | 65.96 | <0.000001
Male face 9 | 19.50 | 80.50 | <0.000001 | Female face 9 | 9.93 | 90.07 | <0.000001
Male face 10 | 15.72 | 84.28 | <0.000001 | Female face 10 | 17.38 | 82.62 | <0.000001
Male face 11 | 22.64 | 77.36 | <0.000001 | Female face 11 | 24.11 | 75.89 | <0.000001
Male face 12 | 23.27 | 76.73 | <0.000001 | Female face 12 | 26.60 | 73.40 | <0.000001
Male face 13 | 39.62 | 60.38 | 0.008869 | Female face 13 | 20.21 | 79.79 | <0.000001
Male face 14 | 18.87 | 81.13 | <0.000001 | Female face 14 | 20.21 | 79.79 | <0.000001
Male face 15 | 22.64 | 77.36 | <0.000001 | Female face 15 | 19.50 | 80.50 | <0.000001
Male face 16 | 17.61 | 82.39 | <0.000001 | Female face 16 | 22.70 | 77.30 | <0.000001
Male face 17 | 29.56 | 70.44 | <0.000001 | Female face 17 | 43.26 | 56.74 | 0.02364
Male face 18 | 35.85 | 64.15 | 0.000359 | Female face 18 | 12.77 | 87.23 | <0.000001
Male face 19 | 9.43 | 90.57 | <0.000001 | Female face 19 | 23.76 | 76.24 | <0.000001
Male face 20 | 11.95 | 88.05 | <0.000001 | Female face 20 | 30.85 | 69.15 | <0.000001
Male face 21 | 11.95 | 88.05 | <0.000001 | Female face 21 | 19.86 | 80.14 | <0.000001
Male face 22 | 16.35 | 83.65 | <0.000001 | Female face 22 | 39.72 | 60.28 | 0.000553
Male face 23 | 26.42 | 73.58 | <0.000001 | Female face 23 | 22.34 | 77.66 | <0.000001
Male face 24 | 10.69 | 89.31 | <0.000001 | Female face 24 | 9.93 | 90.07 | <0.000001
Male face 25 | 33.96 | 66.04 | 0.000052 | Female face 25 | 7.09 | 92.91 | <0.000001
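Each p-value in Table 2 can be reproduced from the raw vote counts with a chi-square goodness-of-fit test of the observed split against an even 50:50 distribution. The R sketch below is a minimal illustration for a single picture pair; the vote counts are hypothetical values chosen only to approximate the reported percentages and are not the study's raw data.

```r
# Hypothetical vote counts for one picture pair (original vs. FaceApp-enhanced);
# the numbers are illustrative and are not the study's raw data.
votes <- c(original = 15, faceapp = 144)

# Chi-square goodness-of-fit test against an even 50:50 split
gof <- chisq.test(votes, p = c(0.5, 0.5))

round(100 * votes / sum(votes), 2)  # vote shares in percent
gof$p.value                         # p-value of the kind reported in Table 2
```

With only two response categories, this test is asymptotically equivalent to a two-sided test of whether the proportion of respondents choosing the enhanced picture differs from 0.5.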