Article

Quantifying Perceived Facial Asymmetry to Enhance Physician–Patient Communications

Shu-Yen Wan, Pei-Ying Tsai and Lun-Jou Lo

1 Craniofacial Research Center, Chang Gung Memorial Hospital, Taoyuan 333008, Taiwan
2 Department of Plastic and Reconstructive Surgery, Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan
3 Department of Information Management, Chang Gung University, Taoyuan 333323, Taiwan
4 Department of Business and Management, Ming Chi University of Technology, New Taipei City 243303, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(18), 8398; https://doi.org/10.3390/app11188398
Submission received: 17 August 2021 / Revised: 7 September 2021 / Accepted: 7 September 2021 / Published: 10 September 2021
(This article belongs to the Special Issue Artificial Intelligence in Industrial Engineering)

Abstract

In cosmetic surgery, bridging the anticipation gap between patients and physicians can be challenging if objective and transparent information exchange is lacking during the decision-making and surgical process. Among all factors, facial symmetry is the most important for assessing facial attractiveness. The aim of this work is to promote communication between the two parties by providing a quadruple of quantitative measurements: overall asymmetry index (oAI), asymmetry vector, classification, and confidence vector, using an artificial neural network classifier to model people's perception of facial asymmetry as acquired from visual questionnaires. The questionnaire results exhibit a Cronbach's Alpha value of 0.94 and categorize the respondents' perception of each stimulus face into perceived normal (PN), perceived asymmetrically normal (PAN), and perceived abnormal (PA) categories. The trained classifier yields an overall root mean squared error < 0.01, and its results show that the oAI is, in general, proportional to the degree of perceived asymmetry. However, there exist faces that are difficult to classify as either PN or PAN, or as either PAN or PA, with competing confidence values. In such cases, oAI alone is not sufficient to articulate facial asymmetry. Assisting surgeon–patient conversations with the proposed asymmetry quadruple is advised to avoid or mitigate potential medical disputes.

1. Introduction

Appearance is the most prominent stimulus in forming an impression of others [1]. Rubenstein et al. showed that humans develop the capability of perceiving attractiveness as early as infancy [2]. Cross-cultural and cross-age similarities exist in judging facial attractiveness [3]. Among all factors, facial symmetry is the one most strongly correlated with facial attractiveness [4,5,6,7,8]. Perfect facial symmetry is considered only a theoretical ideal, as most attractive individuals have naturally asymmetric faces [9]. However, significant facial asymmetry can introduce aesthetic or even functional problems [10]. Cheong and Lo described the etiology of facial asymmetry as congenital, developmental, or acquired; the clinical implications, evaluation, treatment planning, and management may vary accordingly [5,11].
People have grown more accepting of seeking craniofacial orthognathic, orthodontic, or plastic surgery for appearance improvements in recent years [12,13]. According to ISAPS Global Statistics, more than 11 million surgical procedures were performed in 2019, an increase of 20.6% from 2015; face and head procedures accounted for 4,058,143 (35.7%) of them [14].
Although plastic surgery is expected to enhance one's appearance and self-esteem, it entails consequential uncertainties as well as psychological and physical discomfort for the patients. Harmonious physician–patient relationships are crucial to soothe this discomfort and to promote healthcare delivery [15]. However, anticipation gaps and insufficient decision-making communications have caused medical disputes [16,17,18,19]. According to the Department of Health, Taipei City, among Taipei's 415 cosmetic surgery clinics in 2016, 116 of 374 medical disputes came from aesthetic medicine [20].
To bridge the anticipation gaps, quantitative measurements are needed to model the perception of facial asymmetry. Related work and strategies in the literature reviewed below include: (1) determining how stimulus faces are collected (two-dimensional (2D) or three-dimensional (3D), real or simulated, with or without skin texture); (2) identifying facial landmarks automatically or semi-automatically, or dispensing with landmarks altogether; (3) defining a coordinate system and making measurements; (4) conducting questionnaire surveys to quantify perceptions of facial asymmetry; (5) computing the degree of asymmetry and the classification.
Chu et al. employed a series of 2D images to present progressive asymmetry [21]. The results suggested that at least 3 mm of facial asymmetry at the oral commissure, the brow, or both was required for participants to notice the asymmetry. Naini et al. measured the nasolabial angle on 2D facial contours to determine whether rhinoplasty was needed [22]. The results showed that 2D stimulus facial images have inherent constraints. Meyer-Marcotty et al. performed a similar study on facial asymmetry but utilized 3D images [1]. Instead of employing images from different persons, they transformed the same image by incremental soft tissue alterations. They found that the nasal structure played a crucial role in the perception of symmetry and that chin asymmetry greatly affected facial appearance.
In clinical practice, asymmetry is usually measured by comparing bilateral facial features against the mid-sagittal plane. Quantitative studies of the face rely on the accurate positioning of facial landmarks or surface-based patches [23]. Masuoka et al. explored the correlation between cephalometric measurements of facial asymmetry and orthodontists' evaluations of frontal photographs; the results indicated a discrepancy between the two [24]. Ferrario et al. proposed quantitative metrics to assess pre- and post-operative differences [25]. Hajeer et al. compared the performance of bi-maxillary and maxillary osteotomy [26]. Pre- and post-orthognathic surgeries of 44 patients (20 Class III cases of bi-maxillary osteotomy; 12 Class III cases of maxillary osteotomy; 12 Class II cases of bi-maxillary osteotomy) were assessed. Meyer-Marcotty et al., Djordjevic et al., and Cevidanes et al. mirrored faces and aligned feature points [1,27,28]. Huang et al. proposed an asymmetry index (AI) for each individual facial landmark of interest [7]. Hsu et al. computed facial contour asymmetry instead of landmark asymmetry [29]. Lo et al. constructed a transfer learning model to score the asymmetry of facial contour maps and to assess the efficacy of orthognathic surgery [30]. The above-mentioned studies employed the Frankfurt horizontal plane or the natural head position to derive 3D head coordinate systems.
Questionnaires have been used to elicit and quantify people's perception of facial asymmetry. Lee et al. claimed that non-experts' assessments of facial asymmetry should be considered first because the ultimate demand and perception rest with the patients [31]. Padwa et al. made similar observations [32]. Jackson et al. conducted visual questionnaires among professional orthodontists, general dentists, and laypersons [8]. Their study showed that the ability to assess facial symmetry differed across professional backgrounds, and that orthodontists had the most profound capability of differentiating symmetrical from asymmetrical faces. Chu et al. employed playback of photographs to trigger conscious perception in the observers [21]. In the study of Naini et al., three groups of respondents, i.e., orthognathic patients, clinicians, and laypeople, were invited to assess how mandible and chin landmarks related to perceived asymmetry [22]. The results demonstrated that the perceptions of clinicians and patients were more critical than those of laypeople. McAvinchey et al. classified the participants into five groups: laypeople, dental students, dental care professionals, dental practitioners, and orthodontists [33].
To grade the degree of asymmetry, Yamamoto et al. invited oral surgeons and orthodontists to subjectively evaluate facial asymmetry, defining grade #0 as a good symmetrical frontal view, grade #1 as little asymmetry, grade #2 as localized asymmetry, and grade #3 as marked asymmetry [34]. Meyer-Marcotty et al. adopted a six-point scale to rank facial symmetry (1: very symmetrical; 6: very asymmetrical) [1]. Masuoka et al. categorized frontal facial images into two groups [24]: patients in Group A exhibited symmetrical or slightly asymmetrical frontal views and did not require surgical treatment, whereas patients in Group B exhibited marked asymmetry and required surgical treatment. McAvinchey et al. asked observers to classify the facial images displayed on screen as normal, slightly abnormal but socially acceptable, or abnormal [33].
The building blocks to achieve the aim of this work are threefold. (1) Define a 3D facial coordinate system independent of head orientation, and compute the asymmetry index vector (AI) of the individual facial features and the overall asymmetry index (oAI). (2) Build an artificial neural network classifier that models the perceived asymmetry acquired from the questionnaires and classifies a face as perceived normal (PN), perceived asymmetrically normal (PAN), or perceived abnormal (PA), with an associated confidence vector (C). (3) Propose the quadruple ⟨oAI, AI, PN|PAN|PA, C⟩ as a tool to promote transparent communication between the surgeon and the patient. The degree of overall facial asymmetry (oAI), together with the laypeople-deemed classification (PN|PAN|PA), articulates the severity of asymmetry. The asymmetry index vector (AI) depicts the individual degrees of asymmetry of the facial features of concern. The confidence vector (C) reveals laypeople's voting distribution over the three classes for a face.

2. Materials and Methods

Ethical approval for this study was obtained from the Institutional Review Board of Chang Gung Memorial Hospital, Taiwan, R.O.C. (102-1359B and 103-3130B).
Figure 1 illustrates the procedure used to model perceived facial asymmetry. In the top-left corner is a 3D normal face whose skin texture has been removed and whose facial landmarks have been identified. The head coordinate system is then computed. Walking along the upper row and then downwards, the normal face is morphed to generate stimulus faces with varied degrees of asymmetry, with which the proposed visual questionnaire surveys are conducted. Each stimulus face receives votes reflecting the respondents' asymmetry perception and is categorized as PN, PAN, or PA, with an associated confidence vector (C). Walking downwards on the left, the asymmetry index vector (AI) is computed and fed as the input to train the artificial neural network classifier to learn the categorization on the right. The overall asymmetry index (oAI) is the weighted sum of the individual asymmetry indices.
The remainder of this section describes the proposed methods in detail.

2.1. Acquisition of 3D Facial Images and Pre-Process

A 3dMD scanner, an ultra-fast non-invasive 3D cranial imaging system, is used to capture high-precision facial surfaces at a speed of 1.5 ms per image. To prevent the respondents from being distracted by subjects' individual appearances, facial hues, or skin quality, we use only one subject's 3D facial image, which is later morphed into a series of faces with different, controlled degrees of asymmetry. Furthermore, we remove the facial skin texture and render the face monochrome. The inclusion criteria for the subject are Angle Class I dental occlusion, no craniofacial deformity, no facial trauma history, no prior orthognathic surgery, and a face generally regarded as symmetric by three orthodontists.

2.2. Facial Landmarks and the Corresponding 3D Coordinate System

Twenty facial landmarks, comprising eight medial landmarks and six bilateral pairs, as shown in Table 1, are identified from the acquired facial image [6]. Each landmark, denoted as L(i), is associated with an ID (i). IDs with a star superscript indicate bilateral landmarks.
To construct the 3D coordinate system mathematically, the mid-sagittal plane of the face is defined as the plane orthogonal to the vector $\overrightarrow{Ex_r Ex_l}$ that connects the right and left Exocanthi (ID = 9*; $Ex_l$ and $Ex_r$) and passes through Nasion (ID = 2; N). The intersection of the mid-sagittal plane and the line through $Ex_r$ and $Ex_l$ is denoted as N′, which can be considered the projection of N onto that line. The proposed head coordinate system is thus defined as $(N, \hat{x}, \hat{y}, \hat{z})$, where N is the origin; $\hat{x} = \overrightarrow{Ex_r Ex_l} / \|\overrightarrow{Ex_r Ex_l}\|$ (pointing to the right); $\hat{y} = \overrightarrow{NN'} / \|\overrightarrow{NN'}\|$ (pointing into the head); $\hat{z} = \hat{x} \times \hat{y} / \|\hat{x} \times \hat{y}\|$ (pointing upwards). The coordinates of all 20 landmarks can then be determined with respect to $(N, \hat{x}, \hat{y}, \hat{z})$. Note that these 20 facial landmarks are later consolidated into 14 facial features (ID = 1, 2, 3, …, 14).
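This construction is straightforward to implement. The following is a minimal numpy sketch (our own illustration, not the authors' code); the function name and the assumption that landmarks arrive as 3-vectors in arbitrary scanner coordinates are ours.

```python
import numpy as np

def head_coordinate_system(ex_r, ex_l, n):
    """Derive the head coordinate system (N, x, y, z) of Section 2.2.

    ex_r, ex_l, n: positions of the right/left Exocanthion and Nasion,
    each a np.array of shape (3,) in arbitrary scanner coordinates.
    """
    x = (ex_l - ex_r) / np.linalg.norm(ex_l - ex_r)  # along the inter-canthal line
    # N' = foot of the perpendicular from Nasion onto the Ex_r-Ex_l line,
    # i.e., where the mid-sagittal plane intersects that line.
    n_prime = ex_r + np.dot(n - ex_r, x) * x
    y = (n_prime - n) / np.linalg.norm(n_prime - n)  # pointing into the head
    z = np.cross(x, y)
    z /= np.linalg.norm(z)                           # pointing upwards
    return n, x, y, z
```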

2.3. Visual Questionnaire Surveys

To generate transformed faces with varied degrees of asymmetry, we perform combined counter-clockwise rolls (rotations about the y-axis) of the nose and the chin [7]. In total, 8 nose rolls combined with 8 chin rolls yield 64 stimulus faces with off-the-mid-sagittal distances ranging from normal to severe asymmetry [33], as shown in Table 2. In our study, the off-the-mid-sagittal distances range from 0.35 to 5.31 mm for the nose and from 1.14 to 17.04 mm for the chin. For example, a stimulus denoted as n09c03 indicates a transformation of 4.5° of nose roll and 1.68° of chin roll; the corresponding Pronasale (Prn) and Menton (Me) displacements off the mid-sagittal plane are δx(Prn) = 3.19 mm and δx(Me) = 3.42 mm, respectively. Figure 2a,b illustrate the facial coordinates and the counter-clockwise roll, and Figure 2c illustrates a series of deformed faces.
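To make the roll transformation concrete, the sketch below rotates a hypothetical Pronasale position about the head y-axis; the landmark coordinates and the sign convention for "counter-clockwise" are our assumptions, chosen so that the n09 roll reproduces a displacement close to the 3.19 mm of Table 2.

```python
import numpy as np

def roll_about_y(point, theta_deg):
    """Rotate a 3D point about the y-axis (head coordinates of
    Section 2.2, y pointing into the head) by theta degrees."""
    t = np.radians(theta_deg)
    rot = np.array([[ np.cos(t), 0.0, np.sin(t)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(t), 0.0, np.cos(t)]])
    return rot @ point

# Hypothetical Pronasale roughly 40.6 mm below Nasion (the origin):
prn = np.array([0.0, -20.0, -40.6])
delta_x = roll_about_y(prn, 4.5)[0]   # the n09 nose roll
print(abs(delta_x))                   # ~3.19 mm, in line with Table 2
```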
The 64 stimulus faces are randomly shuffled and compiled into a visual questionnaire. When conducting the questionnaire, each face is displayed for five seconds, long enough for a respondent to reach a decision [35,36]. A blank screen, lasting two seconds, is shown between consecutive faces to remove residual stimuli. The study recruits 128 laypeople to serve as questionnaire respondents. Respondents' informed consent is obtained before taking the survey. A questionnaire survey takes less than 10 min, including time spent on opening, closing, and other miscellaneous preparations. Two simple questions are asked for each face: (Q1) Do you think this face is symmetrical? (Q2) Is it abnormally asymmetrical such that you would consider surgery? A face is rated as perceived normal (PN) if the answer to Q1 is YES, as perceived asymmetrically normal (PAN) if the answers to Q1 and Q2 are both NO, and as perceived abnormal (PA) if the answer to Q1 is NO and the answer to Q2 is YES. All respondents' answers are gathered, and each face is categorized as PN, PAN, or PA according to which class receives the most votes. Furthermore, the percentages of votes each face receives for PN, PAN, and PA, respectively, constitute the corresponding confidence vector (C). For example, if a face receives 20% of the votes for PN, 30% for PAN, and 50% for PA, it is categorized as PA with a confidence vector $\vec{C} = [0.2, 0.3, 0.5]^T$.
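The vote-counting rule is easy to make concrete. The sketch below (our illustration) converts raw vote counts for one stimulus face into its category and confidence vector, reproducing the worked example above.

```python
import numpy as np

CATEGORIES = ("PN", "PAN", "PA")

def categorize(votes_pn, votes_pan, votes_pa):
    """Category (most votes) and confidence vector (vote shares) of a face."""
    counts = np.array([votes_pn, votes_pan, votes_pa], dtype=float)
    confidence = counts / counts.sum()              # C: voting distribution
    return CATEGORIES[int(np.argmax(counts))], confidence

category, c = categorize(20, 30, 50)
print(category, c)   # PA [0.2 0.3 0.5]
```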

2.4. Asymmetry Classifier

The asymmetry classifier is an artificial neural network trained with back-propagation to adjust its inter-layer weights. The model has 14 input nodes, 10 hidden nodes, and 2 target nodes. The 14 values fed into the input layer are the 14 asymmetry indices (AIs) computed from the 20 landmarks as defined in Equation (1). The choice of 10 hidden nodes is determined by cross-validation, guided by two rules of thumb: (a) the number of hidden neurons lies between the sizes of the input and output layers; (b) the number of hidden neurons is approximately two-thirds of the input layer size plus the output layer size. The two target nodes binary-code the three possible asymmetry classes: PN, PAN, and PA.
The asymmetry index of a medial landmark is its distance off the mid-sagittal plane. The asymmetry index of a pair of bilateral landmarks, on the other hand, is defined as the root sum squared (RSS) of the disparities of the two landmarks in the x-, y-, and z-directions. Equation (1) is as follows:
$$AI_i = \begin{cases} \delta_x(L(i)), & \text{if } L(i) \in \{G, N, Prn, Sn, Ls, Li, Sto, Me\} \\[6pt] \left[\left(\dfrac{L_x^l + L_x^r - 2M_x}{2}\right)^2 + \left(\dfrac{L_y^l - L_y^r}{2}\right)^2 + \left(\dfrac{L_z^l - L_z^r}{2}\right)^2\right]^{1/2}, & \text{if } L(i) \in \{Ex, En, Al, Ch, Zy, Go\}, \end{cases} \tag{1}$$
where $L = L(i)$ represents the landmark with ID $i$, as shown in Table 1; $AI_i$ denotes the asymmetry index of landmark $L(i)$; $\delta_x(L)$ is the distance of landmark $L$ off the mid-sagittal plane; $L_x^l$, $L_y^l$, and $L_z^l$ denote the x-, y-, and z-coordinates of the left landmark of a bilateral pair; $L_x^r$, $L_y^r$, and $L_z^r$ denote those of the right landmark; and $M_x$ denotes the x-coordinate of the mid-sagittal plane.
Specifically, in this study, we define the mid-sagittal plane as the yz-plane, i.e., $M_x = 0$; Equation (1) therefore simplifies to Equation (2):
$$AI_i = \begin{cases} \delta_x(L(i)), & \text{if } L(i) \in \{G, N, Prn, Sn, Ls, Li, Sto, Me\} \\[6pt] \left[\left(\dfrac{L_x^l + L_x^r}{2}\right)^2 + \left(\dfrac{L_y^l - L_y^r}{2}\right)^2 + \left(\dfrac{L_z^l - L_z^r}{2}\right)^2\right]^{1/2}, & \text{if } L(i) \in \{Ex, En, Al, Ch, Zy, Go\}. \end{cases} \tag{2}$$
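As a concrete reading of Equation (2), the following Python sketch computes the asymmetry index of one feature; the dictionary structure for landmark coordinates is our assumption, not the paper's.

```python
import numpy as np

MEDIAL = {"G", "N", "Prn", "Sn", "Ls", "Li", "Sto", "Me"}

def asymmetry_index(name, landmarks):
    """Asymmetry index per Equation (2); mid-sagittal plane = yz-plane.

    landmarks maps a medial name to a (3,) array in head coordinates and
    a bilateral name to a (left, right) pair of (3,) arrays.
    """
    if name in MEDIAL:
        return abs(landmarks[name][0])                  # |delta_x| off the yz-plane
    left, right = landmarks[name]
    return np.sqrt(((left[0] + right[0]) / 2) ** 2      # midpoint x-offset
                   + ((left[1] - right[1]) / 2) ** 2    # half y-disparity
                   + ((left[2] - right[2]) / 2) ** 2)   # half z-disparity
```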
During the training process, each face's 14 AIs are fed as input to the classifier under construction, which adjusts the inter-layer weight matrices between the input and hidden layers and between the hidden and output layers to generate the desired target encoded from the corresponding questionnaire results. Together, the 14 AIs of a face form an asymmetry index vector, $\overrightarrow{AI} = [AI_1, AI_2, \ldots, AI_{14}]^T$. The AIs of the 64 faces are applied iteratively until the classifier converges and its mean squared error against the desired target falls below a pre-defined threshold.
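The MATLAB-style notation of Tables 3–5 (IW, LW, bias vectors) suggests the network was trained in MATLAB. As a rough, non-authoritative equivalent, the scikit-learn sketch below trains a 14-10-output network on asymmetry index vectors; note that scikit-learn uses one softmax output per class rather than the paper's 2-node binary coding, and random placeholders stand in for the real questionnaire-derived data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholders for X (64 x 14 asymmetry index vectors, Section 2.2) and
# y (64 questionnaire-derived labels, Section 2.3).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=(64, 14))
y = rng.choice(["PN", "PAN", "PA"], size=64)

clf = MLPClassifier(hidden_layer_sizes=(10,),  # the paper's 14-10-output topology
                    solver="sgd",              # plain back-propagation
                    max_iter=5000, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:1]), clf.predict_proba(X[:1]))
```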

2.5. Overall Asymmetry Index (oAI) and Asymmetry Quadruple

The overall asymmetry index (oAI) of a face is defined as the weighted sum of the asymmetry indices, or equivalently as the inner product of the asymmetry index vector and the relative importance vector. The weight associated with a facial feature reflects its relative importance ($\overrightarrow{RI} = [RI_1, RI_2, \ldots, RI_{14}]^T$), which is extracted from the trained asymmetry classifier and computed as Equation (3) [37]:
$$RI_i = \frac{\sum_{j=1}^{n}\left(\dfrac{|w_{ij}|}{\sum_{i=1}^{m}|w_{ij}|}\,|w_{jk}|\right)}{\sum_{i=1}^{m}\sum_{j=1}^{n}\left(\dfrac{|w_{ij}|}{\sum_{i=1}^{m}|w_{ij}|}\,|w_{jk}|\right)}, \qquad \sum_{i=1}^{m} RI_i = 1, \tag{3}$$
where $RI_i$ denotes the relative importance of the landmark with ID $i$, as defined in Table 1; $w_{ij}$ represents the weight of the connection between node $i$ on the input layer and node $j$ on the hidden layer; $w_{jk}$ denotes the weight of the connection between node $j$ on the hidden layer and node $k$ on the output layer; $i = 1, 2, \ldots, m$ ($m = 14$); $j = 1, 2, \ldots, n$ ($n = 10$); and individual weights enter as unsigned magnitudes.
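Equation (3) (Garson's algorithm [37]) can be applied directly to the weight matrices of Tables 3 and 4. The sketch below is our implementation; since Equation (3) is written for a single output node k, we aggregate weight magnitudes over both output nodes, which is one common reading.

```python
import numpy as np

def garson_relative_importance(w_ih, w_ho):
    """Relative importance of each input feature via Equation (3).

    w_ih: input-to-hidden weights, shape (10, 14) as in Table 3.
    w_ho: hidden-to-output weights, shape (2, 10) as in Table 4.
    """
    a = np.abs(w_ih)                           # |w_ij|, hidden x input
    w_out = np.abs(w_ho).sum(axis=0)           # aggregated |w_jk| per hidden node
    # Input i's share of hidden node j's incoming weight, scaled by that
    # node's connection strength to the output layer.
    contrib = (a / a.sum(axis=1, keepdims=True)) * w_out[:, None]
    ri = contrib.sum(axis=0)                   # sum over hidden nodes j
    return ri / ri.sum()                       # normalize so the RIs sum to 1
```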
The overall asymmetry index of a face is then defined as Equation (4):
$$oAI = \overrightarrow{AI} \cdot \overrightarrow{RI} = \sum_{i=1}^{14} RI_i \, AI_i. \tag{4}$$
To put it all together, for a given face, the overall asymmetry index, the individual asymmetry indices (asymmetry index vector), the perceived asymmetry classification, and the voting percentages of the three categories (confidence vector) constitute an asymmetry quadruple ⟨oAI, AI, PN|PAN|PA, C⟩ that quantitatively articulates the characteristics of the facial asymmetry.
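Assembling the quadruple from the pieces defined above is then a one-liner; the sketch below is illustrative only.

```python
import numpy as np

def asymmetry_quadruple(ai_vec, ri_vec, category, confidence):
    """Assemble <oAI, AI, PN|PAN|PA, C> for one face.

    ai_vec: 14 asymmetry indices (Equation (2)); ri_vec: relative
    importances (Equation (3)); category/confidence: questionnaire-derived
    classification and voting distribution (Section 2.3).
    """
    oai = float(np.dot(ai_vec, ri_vec))   # Equation (4): weighted sum
    return oai, ai_vec, category, confidence
```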

3. Results

The study recruited 128 laypeople as respondents, and 113 (88.3%) of them returned valid questionnaires; 43 (38%) of these respondents were male and 70 (62%) were female. The invalid questionnaires included incomplete or apparently inconsistent answers, e.g., a respondent considering a face symmetrical yet paradoxically calling for surgery. The questionnaire results present a Cronbach's Alpha value of 0.944 [37]. The category of a face, PN, PAN, or PA, is determined by which category receives the most votes from the respondents. Among the 64 stimulus faces, 9 are classified as PN (14%), 15 as PAN (23%), and 40 as PA (63%). The distribution of face categories is aligned with the process of generating stimulus faces, which are deformed from a normal face and present varied degrees of facial asymmetry.
The asymmetry classifier models the respondents' perceptions to predict the perceived asymmetry of a face. The input layer takes the asymmetry index vector (14 asymmetry indices) derived from the corresponding 20 facial landmarks. The 14 asymmetry indices ($AI_i$, $i = 1 \ldots 14$) of a stimulus face are computed using Equation (2). The classifier is trained with 70%, validated with 15%, and tested with 15% of the 3D stimulus face images and yields an overall accuracy >99% (root mean squared error 0.000059225). The resulting inter-layer weight matrix between the input and hidden layers (denoted as $IW^{1,1}$) is detailed in Table 3. The inter-layer weight matrix between the hidden and output layers (denoted as $LW^{2,1}$) is detailed in Table 4. The bias vectors added to the hidden and output layers are listed in Table 5.
The relative importances ($RI_i$, $i = 1 \ldots 14$) are calculated using Equation (3) and employed as the weights of the respective facial features. As shown in Table 6, the RIs are rank-ordered; alar curvature (Al) and subnasale (Sn) are relatively important in classifying perceived asymmetry, both having RIs greater than 0.10, significantly greater than those of the remaining facial features. This is also consistent with the proposed deformation of the normal face, which emphasizes the nose and chin regions, as suggested in [1,7,33].
The oAI of a stimulus face is a linear combination (sum of products) of the corresponding $AI_i$ and $RI_i$, as shown in Equation (4). Table 7 presents the overall asymmetry index (oAI), the classification of perceived asymmetry (PN|PAN|PA), and the confidence value of the classification (Ĉ: the percentage of votes for the winning class) for the 64 stimulus faces. Note that, for brevity, only part of the asymmetry quadruple ⟨oAI, AI, PN|PAN|PA, C⟩ is shown: the detailed asymmetry index vector (AI) and the detailed confidence vector (C) are excluded. For example, n03c15 is a stimulus face with a 1.5° nose roll and an 8.42° chin roll (as depicted in Table 2) applied to the normal face. Its entry in Table 7 reads (6.5208, PA, 0.92), i.e., its oAI is 6.5208 and it is perceived as abnormal (PA) with a confidence value of 0.92.
The oAI ranges from 0.8182 (n01c01) to 8.6792 (n15c15). The greater the oAI of a stimulus face, the more likely it is to be classified as PAN or PA.

4. Discussion

We transformed a normal face rather than using a series of real asymmetric faces in order to maintain the perceptual stability of the questionnaire respondents, similar to the approach of Meyer-Marcotty et al. [1]. The degree of deformation can therefore be computed so as to correspond to the respondents' ratings across different faces. The facial texture was removed before deformation to mitigate perceptual distractions from skin color, rashes, moles, etc.
Table 8 illustrates the classification results of facial asymmetry for the 64 stimulus faces. The 8 × 8 matrix is arranged so that varied degrees of nose roll appear on the left and different degrees of chin roll appear on the top. As shown in Table 2, each increment of i denotes 0.5° of nasal roll, whereas each increment of j denotes 0.56° of chin roll. Presumably, the stimulus faces towards the upper left are likely to be categorized as PN, whereas those towards the lower right are likely to be classified as PAN or PA. The perceived asymmetry is, in general, aligned with this assumption, with the exception of the four red-shaded entries (n09c01, n11c01, n13c01, and n11c07; marked † in Table 8), which exhibit inconsistencies in their asymmetry classifications.
When exploring the confidence vectors (C) of the four seemingly erroneous classifications, as depicted in Table 9, we find "ambiguous zones" around the PN–PAN and PAN–PA borders, where the confidence vectors contain competing values. In Table 9, the highest confidence value in a confidence vector is starred (*), and the second-highest is underlined. For example, the stimulus face n11c07 is associated with the confidence vector $[0.15, 0.43, 0.42]^T$; its highest confidence value is starred (0.43*) and the second-highest is underlined (0.42). The four stimulus faces demonstrate too-close-to-call situations: the respondents' perceptions are divergent, and any classification decision may easily draw disagreement from the other camp. By reassigning the classifications of the four faces to the categories with the second-highest votes, we obtain a revised matrix, shown in Table 10, in which the red alerts are gone. Meticulously re-examining the competing confidence values of the other stimulus faces, even those not red-alerted here, yields similar results.
The ambiguous zone phenomenon of asymmetry perception suggests: (1) that misclassifications can happen and there are no univocally correct answers; and (2), most importantly, that objective and transparent articulations of the asymmetry characteristics between the physician and the patient during assessment, decision making, surgical planning, and post-operative care are needed to avoid or mitigate potential medical disputes.
Figure 3 visualizes the relationship between oAI and asymmetry classification. The stimulus faces are arranged along the horizontal axis so that stimulus faces of the same classified category are clustered together: PN on the left ( oAI : 1.88 ± 0.69), PAN in the middle ( oAI : 3.43 ± 0.71), PA on the right ( oAI : 5.89 ± 1.39). Within a category, the faces are sorted in order of oAI . Each graphic bar is associated with the confidence vector of a stimulus face. It illustrates the voting distribution (referring to the vertical percentage axis on the left) from the questionnaire surveys. Green segments relate to votes for PN , yellow segments to PAN , and red segments to PA . The solid black curve across the plot depicts the oAI s (referring to the oAI vertical axis on the right) of the corresponding stimulus faces.
The ambiguous zone phenomenon manifests again in Figure 3. There are oAI drops around the inter-category areas (PN–PAN and PAN–PA), coinciding with where the red alerts occur in Table 8. For example, the first oAI drop happens at n09c01, one of the four stimulus faces red-alerted in Table 8. It has an oAI of 2.258 but is perceptually classified as PAN instead of PN. Its votes for PN and PAN are close (0.35 versus 0.46), and the corresponding green and yellow segments are visually of similar lengths. Such inconsistent asymmetry classifications arise when the respondents' subjective opinions are divided.
Limitations and Strengths. Although quantified facial asymmetry can be a helpful ancillary tool, it should not be the sole basis for surgical decisions. Comprehensive clinician grading, laypeople evaluations, patient-reported outcomes, and healthy patient–physician communication are just as important. This study models facial asymmetry for 3D images; however, 2D facial images are still widely used by facial plastic and reconstructive surgeons. The proposed model can be adapted to 2D images by (1) establishing a 2D facial coordinate system; (2) identifying the mid-face line instead of the mid-sagittal plane; (3) computing point-to-line distances instead of point-to-plane distances; and (4) removing the z-axis components in Equations (1) and (2), as sketched below. Furthermore, when conducting the questionnaire survey, the respondents rated asymmetry on faces with the skin texture removed. Facial asymmetry across races or genders is not studied. Pre- and post-surgical asymmetry comparisons, and how prominent facial features such as skin color, moles, mustaches, attractiveness, or pimples affect ratings of facial asymmetry, are beyond the scope of this study.
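Following those four steps, a 2D variant of Equation (2) might look like the sketch below (ours; the landmark data structure is the same assumption as before).

```python
import numpy as np

MEDIAL = {"G", "N", "Prn", "Sn", "Ls", "Li", "Sto", "Me"}

def asymmetry_index_2d(name, landmarks):
    """2D adaptation of Equation (2): the mid-face line is the y-axis
    (M_x = 0) and all z-components are dropped. landmarks maps a medial
    name to a (2,) array and a bilateral name to a (left, right) pair."""
    if name in MEDIAL:
        return abs(landmarks[name][0])    # point-to-line distance
    left, right = landmarks[name]
    return np.hypot((left[0] + right[0]) / 2, (left[1] - right[1]) / 2)
```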

5. Conclusions

In this study, we construct an artificial neural network model to capture the perception of facial asymmetry. The resulting asymmetry quadruple ⟨oAI, AI, PN|PAN|PA, C⟩ can serve as a tool to establish more transparent communication between the physician and the patient and to alleviate the anticipation gaps between the two parties. The oAI is an overall score of facial asymmetry, computed as a weighted sum of the asymmetry indices of individual facial features. Before making a surgical decision, the patient can weigh their own asymmetry characteristics in terms of the asymmetry quadruple ⟨oAI, AI, PN|PAN|PA, C⟩. The ambiguous zone phenomenon should be taken into account as well. With such practice, patients are more involved in the process and analysis of surgical decision making and undertake risks more knowledgeably. On the other hand, it is the physician's responsibility to properly address the patient's asymmetry characteristics and to perform an oAI-improving (i.e., post-op oAI-lowering) surgery. Quantified facial asymmetry can serve as an advisory tool during the surgical decision process, alongside comprehensive clinician grading, laypeople evaluations, and patient-reported outcomes.
Finally, the visualization of the ambiguous zones of asymmetry perception, as depicted in Table 8 and Figure 3, helps explain why certain medical disputes are difficult to avoid. A thorough articulation of the proposed asymmetry quadruple ⟨oAI, AI, PN|PAN|PA, C⟩ is expected to improve the physician–patient relationship.

6. Patents

The concept and preliminary study of this work have been awarded an invention patent (no. I595430) by the Intellectual Property Office, Ministry of Economic Affairs, Taiwan, ROC.

Author Contributions

Data curation, S.-Y.W., P.-Y.T., and L.-J.L.; formal analysis, S.-Y.W.; funding acquisition, S.-Y.W.; investigation, S.-Y.W. and L.-J.L.; methodology, S.-Y.W. and P.-Y.T.; project administration, S.-Y.W.; resources, S.-Y.W. and L.-J.L.; software, S.-Y.W. and P.-Y.T.; supervision, S.-Y.W.; validation, S.-Y.W. and L.-J.L.; visualization, S.-Y.W. and P.-Y.T.; writing—original draft, S.-Y.W. and P.-Y.T.; writing—review and editing, S.-Y.W. and L.-J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Chang Gung Memorial Hospital, Taiwan, under Grants CRRPD5C0251-3 and CRRPD3G0011, and by the Ministry of Science and Technology, Taiwan, under Grants 105-2221-E-182-013 and 106-2221-E-182-026.

Institutional Review Board Statement

Ethical approval for this study was obtained from the Institutional Review Board of Chang Gung Memorial Hospital, Taiwan, R.O.C. (102-1359B and 103-3130B).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

Some of the supporting materials can be found at https://github.com/sywan/perceived-asymmetry (accessed on 9 September 2021).

Acknowledgments

The authors are grateful to the physicians and staff at the Craniofacial Research Center of Chang Gung Memorial Hospital for their administrative and technical support.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Meyer-Marcotty, P.; Stellzig-Eisenhauer, A.; Bareis, U.; Hartmann, J.; Kochel, J. Three-dimensional perception of facial asymmetry. Eur. J. Orthod. 2011, 33, 647–653.
  2. Rubenstein, A.J.; Kalakanis, L.; Langlois, J.H. Infant preferences for attractive faces: A cognitive explanation. Dev. Psychol. 1999, 35, 848–855.
  3. Bronstad, P.M.; Russell, R. Beauty is in the 'we' of the beholder: Greater agreement on facial attractiveness among close relations. Perception 2007, 36, 1674–1681.
  4. Bengtsson, M.; Wall, G.; Miranda-Burgos, P.; Rasmusson, L. Treatment outcome in orthognathic surgery—A prospective comparison of accuracy in computer assisted two and three-dimensional prediction techniques. J. Craniomaxillofac. Surg. 2017.
  5. Cheong, Y.W.; Lo, L.J. Facial asymmetry: Etiology, evaluation, and management. Chang Gung Med. J. 2011, 34, 341–351.
  6. Farkas, L.G. Anthropometry of the Head and Face; Raven Press: New York, NY, USA, 1994.
  7. Huang, C.S.; Liu, X.Q.; Chen, Y.R. Facial asymmetry index in normal young adults. Orthod. Craniofacial Res. 2013, 16, 97–104.
  8. Jackson, T.H.; Mitroff, S.R.; Clark, K.; Proffit, W.R.; Lee, J.Y.; Nguyen, T.T. Face symmetry assessment abilities: Clinical implications for diagnosing asymmetry. Am. J. Orthod. Dentofac. Orthop. 2013, 144, 663–671.
  9. Alqattan, M.; Djordjevic, J.; Zhurov, A.I.; Richmond, S. Comparison between landmark and surface-based three-dimensional analyses of facial asymmetry in adults. Eur. J. Orthod. 2015, 37, 1–12.
  10. Thiesen, G.; Gribel, B.F.; Freitas, M.P. Facial asymmetry: A current review. Dent. Press J. Orthod. 2015, 20, 110–125.
  11. Chin, Y.P.; Leno, M.B.; Dumrongwongsiri, S.; Chung, K.H.; Lin, H.H.; Lo, L.J. The pterygomaxillary junction: An imaging study for surgical information of LeFort I osteotomy. Sci. Rep. 2017, 7, 9953.
  12. Peacock, Z.S.; Lee, C.C.; Klein, K.P.; Kaban, L.B. Orthognathic surgery in patients over 40 years of age: Indications and special considerations. J. Oral Maxillofac. Surg. 2014, 72, 1995–2004.
  13. Chiang, W.-C.; Lin, H.-H.; Huang, C.-S.; Lo, L.-J.; Wan, S.-Y. The cluster assessment of facial attractiveness using fuzzy neural network classifier based on 3D Moiré features. Pattern Recognit. 2014, 47, 1249–1260.
  14. ISAPS. ISAPS Global Statistics. Available online: http://www.isaps.org/news/isaps-global-statistics (accessed on 13 August 2021).
  15. Xu, Z.P.; Zhang, J.J.; Yan, N.; Yingying, H. Treatment Equality May Lead to Harmonious Patient-Doctor Relationship during COVID-19 in Mobile Cabin Hospitals. Front. Public Health 2021, 9, 557646.
  16. Chen, C.; Lin, C.F.; Chen, C.C.; Chiu, S.F.; Shih, F.Y.; Lyu, S.Y.; Lee, M.B. Potential media influence on the high incidence of medical disputes from the perspective of plastic surgeons. J. Formos. Med. Assoc. 2017, 116, 634–641.
  17. Amirthalingam, K. Medical dispute resolution, patient safety and the doctor-patient relationship. Singap. Med. J. 2017, 58, 681–684.
  18. Zeng, Y.; Zhang, L.; Yao, G.; Fang, Y. Analysis of current situation and influencing factor of medical disputes among different levels of medical institutions based on the game theory in Xiamen of China: A cross-sectional survey. Medicine 2018, 97, e12501.
  19. Aoki, N.; Uda, K.; Ohta, S.; Kiuchi, T.; Fukui, T. Impact of miscommunication in medical dispute cases in Japan. Int. J. Qual. Health Care 2008, 20, 358–362.
  20. Department of Health, Taipei City Government. Public Health of Taipei City Annual Report. Available online: https://english.doh.gov.taipei/News_Content.aspx?n=63300A6F51400770&sms=15A8D5A7A6A5F2DC&s=BC2E2C9A4BDDB232 (accessed on 13 August 2021).
  21. Chu, E.A.; Farrag, T.Y.; Ishii, L.E.; Byrne, P.J. Threshold of visual perception of facial asymmetry in a facial paralysis model. Arch. Facial Plast. Surg. 2011, 13, 14–19.
  22. Naini, F.B.; Donaldson, A.N.; McDonald, F.; Cobourne, M.T. Assessing the influence of asymmetry affecting the mandible and chin point on perceived attractiveness in the orthognathic patient, clinician, and layperson. J. Oral Maxillofac. Surg. 2012, 70, 192–206.
  23. Zhang, L.; Zhang, D.; Sun, M.-M.; Chen, F.-M. Facial beauty analysis based on geometric feature: Toward attractiveness assessment application. Expert Syst. Appl. 2017, 82, 252–265.
  24. Masuoka, N.; Muramatsu, A.; Ariji, Y.; Nawa, H.; Goto, S.; Ariji, E. Discriminative thresholds of cephalometric indexes in the subjective evaluation of facial asymmetry. Am. J. Orthod. Dentofac. Orthop. 2007, 131, 609–613.
  25. Ferrario, V.F.; Sforza, C.; Schmitz, J.H.; Santoro, F. Three-dimensional facial morphometric assessment of soft tissue changes after orthognathic surgery. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endodontol. 1999, 88, 549–556.
  26. Hajeer, M.Y.; Ayoub, A.F.; Millett, D.T. Three-dimensional assessment of facial soft-tissue asymmetry before and after orthognathic surgery. Br. J. Oral Maxillofac. Surg. 2004, 42, 396–404.
  27. Djordjevic, J.; Pirttiniemi, P.; Harila, V.; Heikkinen, T.; Toma, A.M.; Zhurov, A.I.; Richmond, S. Three-dimensional longitudinal assessment of facial symmetry in adolescents. Eur. J. Orthod. 2013, 35, 143–151.
  28. Cevidanes, L.H.; Bailey, L.J.; Tucker, S.F.; Styner, M.A.; Mol, A.; Phillips, C.L.; Proffit, W.R.; Turvey, T. Three-dimensional cone-beam computed tomography for assessment of mandibular changes after orthognathic surgery. Am. J. Orthod. Dentofac. Orthop. 2007, 131, 44–50.
  29. Hsu, P.J.; Denadai, R.; Pai, B.C.J.; Lin, H.H.; Lo, L.J. Outcome of facial contour asymmetry after conventional two-dimensional versus computer-assisted three-dimensional planning in cleft orthognathic surgery. Sci. Rep. 2020, 10, 2346.
  30. Lo, L.J.; Yang, C.T.; Ho, C.T.; Liao, C.H.; Lin, H.H. Automatic Assessment of 3-Dimensional Facial Soft Tissue Symmetry Before and After Orthognathic Surgery Using a Machine Learning Model: A Preliminary Experience. Ann. Plast. Surg. 2021, 86, S224–S228.
  31. Lee, M.S.; Chung, D.H.; Lee, J.W.; Cha, K.S. Assessing soft-tissue characteristics of facial asymmetry with photographs. Am. J. Orthod. Dentofac. Orthop. 2010, 138, 23–31.
  32. Padwa, B.L.; Kaiser, M.O.; Kaban, L.B. Occlusal cant in the frontal plane as a reflection of facial asymmetry. J. Oral Maxillofac. Surg. 1997, 55, 811–816.
  33. McAvinchey, G.; Maxim, F.; Nix, B.; Djordjevic, J.; Linklater, R.; Landini, G. The perception of facial asymmetry using 3-dimensional simulated images. Angle Orthod. 2014, 84, 957–965.
  34. Yamamoto, M.; Takaki, T.; Shibahara, T. Assessment of facial asymmetry based on subjective evaluation and cephalometric measurement. J. Oral Maxillofac. Surg. Med. Pathol. 2012, 24, 11–17.
  35. Chamorro-Premuzic, T.; Reichenbacher, L. Effects of personality and threat of evaluation on divergent and convergent thinking. J. Res. Personal. 2008, 42, 1095–1101.
  36. Cropley, A. In Praise of Convergent Thinking. Creat. Res. J. 2006, 18, 391–404.
  37. Garson, G.D. Interpreting neural-network connection weights. AI Expert 1991, 6, 46–51.
Figure 1. Procedure of composing the facial asymmetry quadruple.
Figure 2. (a) Facial coordinates; (b) counter-clockwise roll about the y-axis with Nasion (N) as the origin; (c) deformed faces.
Figure 3. oAI vs. asymmetry classification.
Table 1. Twenty facial landmarks, abbreviations, and definitions.

Landmark                          ID (i)   L(i)   Definition
Glabella                          1        G      Most prominent midline point between eyebrows
Nasion                            2        N      Deepest point of nasal bridge
Pronasale                         3        Prn    Most protruded point of the apex nasi
Subnasale                         4        Sn     Midpoint of angle at columella base
Labial (superius)                 5        Ls     Midpoint of the upper vermilion line
Labial (inferius)                 6        Li     Midpoint of the lower vermilion line
Stomion                           7        Sto    Midpoint of the mouth orifice
Menton                            8        Me     Most inferior point on chin
Exocanthion (left and right)      9 *      Ex     Outer commissure of the eye fissure
Endocanthion (left and right)     10 *     En     Inner commissure of the eye fissure
Alar curvature (left and right)   11 *     Al     Most lateral point on alar contour
Cheilion (left and right)         12 *     Ch     Point located at lateral labial commissure
Zygion (left and right)           13 *     Zy     The most lateral extent of the zygomatic arches
Gonion (left and right)           14 *     Go     The inferior aspect of the mandible at its most acute point
* Bilateral landmarks.
Table 2. Roll rotations of the nose and the chin.

                  Nose (n)                  Chin (c)
n_i or c_j   θ (°)   δx(Prn) (mm)   θ (°)   δx(Me) (mm)
01           0.5     0.35           0.56    1.14
03           1.5     1.06           1.68    3.42
05           2.5     1.77           2.81    5.70
07           3.5     2.48           3.93    7.97
09           4.5     3.19           5.05    10.25
11           5.5     3.90           6.17    12.52
13           6.5     4.60           7.29    14.78
15           7.5     5.31           8.42    17.04
Table 3. Weight matrix between the input and hidden layers ($[IW^{1,1}]_{10 \times 14}$).
0.1572 −0.9305 0.1164 0.1997 −0.4651 −0.6731 0.1136 0.2616 −0.6093 −1.0658 −0.6998 −0.3851 −1.0148 −0.4178
1.0199 1.0967 −0.8636 −0.8685 0.3984 −1.3503 −1.8537 −1.8547 0.9310 0.0100 0.0306 −2.0512 1.0864 −1.0609
−1.1120 0.1507 −1.4890 −2.3591 −0.1408 0.9426 0.7155 1.1847 −0.4150 −0.4978 −2.3130 0.2910 −1.2981 0.2095
0.6825 0.0520 −0.1678 −1.4755 0.5084 −0.6806 0.1095 −0.5615 0.1468 0.4848 −0.7662 0.4254 0.4785 −0.5332
0.0814 0.2938 −1.8753 −2.6104 −0.4818 −2.1416 −1.1659 −1.1171 0.4416 0.4628 −2.5384 −2.0989 0.3153 −2.1255
−0.4233 −0.3582 0.3842 0.4808 0.1527 −1.2400 −0.5632 −1.4579 −0.1468 −0.3758 0.1567 −0.7575 0.7093 −1.7727
0.0191 0.2685 3.0053 2.9216 0.3100 0.3137 −0.4040 −1.0149 0.8525 0.5245 2.1812 −0.4173 −0.3188 0.1759
0.2811 0.9569 0.2252 0.0875 0.4633 −0.2850 −0.6547 −0.4486 0.6601 0.1391 0.7050 0.9174 0.0115 0.1564
1.1533 0.9079 0.4449 0.8827 0.7849 −0.9138 −1.5615 −0.8682 0.2965 −0.2506 0.5101 −1.6634 0.1175 −0.9619
0.1950 0.1871 0.8557 0.6553 −0.1134 0.0447 0.0944 0.4108 −0.3876 −0.7262 1.6998 0.3650 −0.0469 −0.0631
Table 4. Weight matrix between the hidden and output layers ($[LW^{2,1}]_{2 \times 10}$).
1.0615 0.4904 2.1091 −1.0133 −1.5437 −0.6059 2.7268 −1.1142 −3.4308 1.6700
1.1757 4.4862 −3.4968 −1.2510 −4.3407 −4.1067 −2.1687 −1.2679 −2.1432 0.6080
Table 5. Biases to the hidden and output layers.

Hidden   −1.2863 −1.8825 1.2718 −0.9064 −0.3507 −0.4427 −0.3438 0.7215 0.8118 2.0438
Output   0.2226 2.4557
Table 6. Relative importance of the 14 facial features.

i          1     2     3     4      5     6     7     8     9     10    11     12    13    14
L(i)       G     N     Prn   Sn     Ls    Li    Sto   Me    Ex    En    Al     Ch    Zy    Go
RI_i (%)   5.1   5.7   8.2   11.1   4.1   7.8   6.4   8.6   5.1   5.4   11.7   8.8   5.3   6.8
Ranking    13    9     5     2      14    6     8     4     12    10    1      3     11    7
Table 7. Sixty-four stimulus faces and associated minimal asymmetry characteristics.

n_i c_j   (oAI, PN|PAN|PA, Ĉ)        n_i c_j   (oAI, PN|PAN|PA, Ĉ)
n01c01    (0.8182, PN, 0.88)         n09c01    (2.2582, PAN, 0.46)
n01c03    (1.5815, PN, 0.74)         n09c03    (3.0214, PAN, 0.45)
n01c05    (2.3462, PAN, 0.53)        n09c05    (3.7861, PAN, 0.52)
n01c07    (3.1108, PAN, 0.51)        n09c07    (4.5507, PA, 0.65)
n01c09    (3.8749, PA, 0.73)         n09c09    (5.3148, PA, 0.77)
n01c11    (4.6382, PA, 0.86)         n09c11    (6.0782, PA, 0.70)
n01c13    (5.4006, PA, 0.83)         n09c13    (6.8405, PA, 0.76)
n01c15    (6.1618, PA, 0.93)         n09c15    (7.6018, PA, 0.93)
n03c01    (1.1772, PN, 0.80)         n11c01    (2.6179, PA, 0.35)
n03c03    (1.9404, PN, 0.79)         n11c03    (3.3811, PAN, 0.42)
n03c05    (2.7051, PAN, 0.51)        n11c05    (4.1459, PAN, 0.37)
n03c07    (3.4697, PAN, 0.53)        n11c07    (4.9105, PAN, 0.43)
n03c09    (4.2338, PA, 0.60)         n11c09    (5.6746, PA, 0.74)
n03c11    (4.9972, PA, 0.73)         n11c11    (6.4379, PA, 0.80)
n03c13    (5.7595, PA, 0.91)         n11c13    (7.2003, PA, 0.87)
n03c15    (6.5208, PA, 0.92)         n11c15    (7.9615, PA, 0.88)
n05c01    (1.5375, PN, 0.66)         n13c01    (2.9771, PN, 0.42)
n05c03    (2.3007, PN, 0.46)         n13c03    (3.7403, PAN, 0.41)
n05c05    (3.0654, PAN, 0.50)        n13c05    (4.5050, PA, 0.48)
n05c07    (3.8300, PAN, 0.53)        n13c07    (5.2697, PA, 0.67)
n05c09    (4.5941, PA, 0.53)         n13c09    (6.0338, PA, 0.61)
n05c11    (5.3575, PA, 0.87)         n13c11    (6.7971, PA, 0.69)
n05c13    (6.1199, PA, 0.91)         n13c13    (7.5595, PA, 0.90)
n05c15    (6.8811, PA, 0.91)         n13c15    (8.3207, PA, 0.90)
n07c01    (1.8979, PN, 0.54)         n15c01    (3.3356, PA, 0.61)
n07c03    (2.6612, PN, 0.56)         n15c03    (4.0988, PA, 0.58)
n07c05    (3.4259, PAN, 0.43)        n15c05    (4.8635, PA, 0.58)
n07c07    (4.1905, PAN, 0.51)        n15c07    (5.6281, PA, 0.58)
n07c09    (4.9546, PA, 0.52)         n15c09    (6.3922, PA, 0.81)
n07c11    (5.7179, PA, 0.82)         n15c11    (7.1556, PA, 0.78)
n07c13    (6.4803, PA, 0.86)         n15c13    (7.9180, PA, 0.86)
n07c15    (7.2415, PA, 0.89)         n15c15    (8.6792, PA, 0.91)
Table 8. Classifications of respondents' perceived asymmetry (pre-revision).

n_i\c_j   j=01    j=03   j=05   j=07    j=09   j=11   j=13   j=15
i=01      PN      PN     PAN    PAN     PA     PA     PA     PA
i=03      PN      PN     PAN    PAN     PA     PA     PA     PA
i=05      PN      PN     PAN    PAN     PA     PA     PA     PA
i=07      PN      PN     PAN    PAN     PA     PA     PA     PA
i=09      PAN †   PAN    PAN    PA      PA     PA     PA     PA
i=11      PA †    PAN    PAN    PAN †   PA     PA     PA     PA
i=13      PN †    PAN    PA     PA      PA     PA     PA     PA
i=15      PA      PA     PA     PA      PA     PA     PA     PA
† The four inconsistently classified entries (red-shaded in the original); see Section 4.
Table 9. Confidence vectors of the four likely misclassified stimulus faces.

C        PN        PAN      PA
n09c01   _0.35_    0.46 *   0.19
n11c01   _0.33_    0.32     0.35 *
n13c01   0.42 *    0.21     _0.36_
n11c07   0.15      0.43 *   _0.42_
Stars (*) denote the highest confidence; underscores (_) denote the second-highest confidence for a face.
Table 10. Classifications of respondents' perceived asymmetry (post-revision).

n_i\c_j   j=01   j=03   j=05   j=07   j=09   j=11   j=13   j=15
i=01      PN     PN     PAN    PAN    PA     PA     PA     PA
i=03      PN     PN     PAN    PAN    PA     PA     PA     PA
i=05      PN     PN     PAN    PAN    PA     PA     PA     PA
i=07      PN     PN     PAN    PAN    PA     PA     PA     PA
i=09      PN     PAN    PAN    PA     PA     PA     PA     PA
i=11      PN     PAN    PAN    PA     PA     PA     PA     PA
i=13      PA     PAN    PA     PA     PA     PA     PA     PA
i=15      PA     PA     PA     PA     PA     PA     PA     PA
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
