Article

Deep Learning-Based Analysis of Face Images as a Screening Tool for Genetic Syndromes

by Maciej Geremek 1 and Krzysztof Szklanny 2,*
1 Department of Medical Genetics, Institute of Mother and Child, 01-211 Warsaw, Poland
2 Multimedia Department, Polish-Japanese Academy of Information Technology, 02-008 Warsaw, Poland
* Author to whom correspondence should be addressed.
Sensors 2021, 21(19), 6595; https://doi.org/10.3390/s21196595
Submission received: 19 August 2021 / Revised: 27 September 2021 / Accepted: 28 September 2021 / Published: 2 October 2021
(This article belongs to the Special Issue Analytics and Applications of Audio and Image Sensing Techniques)

Abstract

Approximately 4% of the world’s population suffers from rare diseases. A vast majority of these disorders have a genetic background. The number of genes that have been linked to human diseases is constantly growing, but there are still genetic syndromes that remain to be discovered. The diagnostic yield of genetic testing is continuously improving, and the need for testing is becoming more significant. Due to limited resources, including trained clinical geneticists, patients referred to clinical genetics units must be accurately selected. Around 30–40% of genetic disorders are associated with specific facial characteristics called dysmorphic features. As part of our research, we analyzed the performance of classifiers based on deep learning face recognition models in detecting dysmorphic features. We tested two classification problems: a multiclass problem (15 genetic disorders vs. controls) and a two-class problem (disease vs. controls). In the multiclass task, the best result reached an accuracy level of 84%. The best accuracy result in the two-class problem reached 96%. More importantly, the binary classifier detected disease features in patients with diseases that were not present in the training dataset. The classifier was able to generalize differences between patients and controls, and to detect abnormalities without information about the specific disorder. This indicates that a screening tool based on deep learning and facial recognition could detect not only known diseases, but also patients with diseases that were not previously known. In the future, this tool could help in screening patients before they are referred to the genetic unit.

1. Introduction

Rare diseases occur with a population frequency of less than 1:2000 [1], and since they encompass more than 8000 forms, more than 30 million patients suffer from rare diseases in Europe alone. More than 72% of these disorders have a genetic background. The human genome encodes more than 22,000 genes, and almost 7500 of them have been linked to human diseases [2]. The diagnostic yield of genetic testing has substantially evolved from targeted single gene sequencing to next generation sequencing (NGS) that enables the analysis of thousands of genes in one experiment. However, the analytical abilities are still limited due to extensive genetic variability and large amounts of data generated by these technologies. A correct interpretation of the sequencing data still has to be based on the clinical evaluation of the patient.
Genetic diseases can affect any system or organ in the human body. Around 30–40% of these disorders are associated with specific facial characteristics called dysmorphic features [3]. In many cases, the dysmorphic features result from abnormal embryological development; e.g., in craniosynostosis, gene mutations cause a premature closure of cranial sutures that leads to changes in the skull shape and the appearance of the facial skeleton. Dysmorphic features are not the only symptom of genetic diseases. Other symptoms associated with genetic disorders, such as intellectual impairment, autism, and organ anomalies, have a crucial role in defining the severity of the disease, the diagnostic scheme, and disease management. Dysmorphic assessment is an essential part of a genetic consultation; however, it requires considerable experience and is often subjective.
Recently, high-throughput molecular biology techniques, such as NGS, have been introduced into clinical testing. This has extended the number of indications for genetic consultations and genetic testing. However, in many countries, the number of trained clinical geneticists is insufficient; e.g., in Poland, a country with a population of 38 million, there are 146 clinical geneticists. In some national genetic centers, the waiting time for an appointment is 2.5–3 years. Moreover, the diagnostic yield of genetic testing is still limited for many conditions. For example, congenital heart disease is the most common birth defect, affecting nearly 10 to 12 per 1000 live-born infants (1–1.2%). The diagnostic yield of high-throughput genetic testing in nonsyndromic congenital heart disease is only ~10%. In syndromic cases, where heart disease is associated with other abnormalities, the success rate of genetic testing increases to ~30% [4,5]. Therefore, it is crucial that patients referred to clinical genetic units are selected correctly.
We performed a series of experiments to evaluate whether computer-aided facial analysis techniques could be used to detect genetic syndromes with dysmorphic features. Many studies performed in the field have focused on the challenging task of establishing a diagnosis based on facial pictures alone. As there are still new genetic diseases to be discovered, our approach focused on the ability of the system to classify as abnormal syndromes that were not present in the training dataset. The multiclass classification into 15 diseases and a control class reached the best accuracy of 84% in our setup. A binary classification (disease vs. control) accuracy of ~96% was achieved on our dataset, composed of patients suffering from 15 genetic diseases and control individuals. The system also performed well in classifying patients suffering from diseases that had been removed from the training dataset, indicating that a sensitive screening tool is feasible.
The system presented can easily be moved to the Raspberry Pi computer platform. Various components, such as a high-quality camera, can be attached to the Raspberry Pi. The camera would take a photo of the patient, and a neural network would then analyze the features of the picture to screen for genetic disease. Such a solution would allow the implementation of a system that operates on a mobile device, which we are currently working on.

2. State of the Art

The diagnostic analysis of facial pictures started with manual annotation of landmark points and the use of neural networks for binary classification of images into Down syndrome and healthy controls [6]. A geometric analysis of 2D landmark points with local texture information enabled classification of patients with six disorders with a relatively high accuracy of 75% [7]. However, subsequent analysis performed by the same authors revealed that classification success depends on the conditions in which the picture was taken [8]. Balliu et al. analyzed 205 images of patients with 14 diseases [9]. Manually localized landmark points were processed by Delaunay triangulation, and the distances and angles of the resulting net were used to perform classification with multinomial regression and an elastic net penalty, with an accuracy of 62%. Automatic landmark detection methods such as the Constrained Local Model and the Active Appearance Model were also successfully applied in the genetic field [10]. For example, Zhao et al. achieved 96% binary classification accuracy for Down syndrome, and 97% for 24 syndromic subjects and 80 healthy subjects [11]. Support vector machines (SVM) [12] trained with 44 automatically detected landmark points recognized Noonan syndrome with 89% accuracy [13]. A system trained with 130 points, and texture information encoded using local binary patterns, outperformed a trained clinician in classifying patients with Cornelia de Lange syndrome (87% vs. 77%) [14]. In a seminal paper, Ferry et al. applied an Active Appearance Model for average syndrome face feature extraction with clustering, to classify patients with eight syndromes with very high accuracy [15]. Moreover, the authors showed that patients with diseases unknown to the system might be closer to the affected than to the controls in the so-called Clinical Face Phenotype Space. DeepGestalt is a convolutional neural network-based algorithm trained on 26,190 pictures of patients [16]. DeepGestalt achieves 91% top-10 accuracy in identifying over 215 genetic syndromes and is implemented in the Face2Gene software, currently the most widely used application in clinical settings.

3. Materials and Methods

3.1. Picture Database

Facial images of patients suffering from 15 genetic disorders were collected by searching PubMed and the social media sites of foundations and organizations devoted to specific diseases. The resulting dataset was composed of 101 pictures of patients with 22q11 microdeletion syndrome, 90 with Angelman syndrome, 25 with Coffin–Lowry syndrome, 35 with Cornelia de Lange syndrome, 33 with Crouzon syndrome, 86 with Down syndrome, 26 with Fragile X syndrome, 118 with KBG syndrome, 31 with Kabuki syndrome, 63 with Mowat–Wilson syndrome, 86 with Noonan syndrome, 65 with Pitt–Hopkins syndrome, 112 with Smith–Lemli–Opitz syndrome, 38 with Wideman–Steinert syndrome, and 32 with Williams syndrome. Face images of individuals aged between 5 and 12 years from the UTKFace dataset were used as the control dataset [17]. In total, 2101 pictures were available for binary classification into patients and healthy controls: 941 of individuals suffering from one of the genetic diseases and 1160 of healthy individuals.
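As a sanity check, the per-syndrome counts listed above can be tallied programmatically; a minimal sketch (the counts are transcribed directly from the text):

```python
# Per-syndrome image counts as reported in the text.
counts = {
    "22q11 microdeletion": 101, "Angelman": 90, "Coffin-Lowry": 25,
    "Cornelia de Lange": 35, "Crouzon": 33, "Down": 86, "Fragile X": 26,
    "KBG": 118, "Kabuki": 31, "Mowat-Wilson": 63, "Noonan": 86,
    "Pitt-Hopkins": 65, "Smith-Lemli-Opitz": 112, "Wideman-Steinert": 38,
    "Williams": 32,
}
patients = sum(counts.values())   # images of affected individuals
controls = 1160                   # UTKFace control images
total = patients + controls
print(patients, total)            # 941 2101
```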

3.2. Face Classification

In the first stage of face classification (Figure 1), a detector was used to localize the coordinates of a box containing the face and align it. Face detection and alignment is a computer vision problem that involves finding faces in photos and obtaining a canonical alignment of the face based on geometric transformation. We evaluated two types of detectors: the Multi-task Cascaded Convolutional Neural Network (MTCNN) [18] and a detector based on histograms of oriented gradients (HOG) [19] implemented in the dlib library [20]. Briefly, the HOG-based method divides an image into small connected cells, computes a histogram for each cell, and combines the histograms into a unique feature vector. In the last stage, a support vector machine learning algorithm is trained with the feature vectors to perform face detection in test images. The HOG-based detector is known to perform well on frontal images. However, since our dataset also contained images with some degree of head rotation, we evaluated the performance of one of the deep learning methods introduced more recently, which have achieved very good results on benchmark face detection datasets. We used MTCNN, a state-of-the-art algorithm that has been shown to perform well on images taken in an unconstrained environment, with various poses, illuminations, and occlusions. MTCNN uses three convolutional neural networks: P-Net, R-Net, and O-Net. In the first step, an image is resized to create multiple copies of different sizes. The copies are fed into the P-Net, which generates candidate bounding boxes containing faces. Low-confidence boxes are rejected, and the number of boxes is further reduced by non-maximum suppression (NMS), based on the confidence scores and positions in the original image coordinate system. In the second stage, the R-Net refines the remaining candidates through bounding box regression, and NMS is applied again to merge overlapping boxes.
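The NMS step mentioned above can be sketched in pure NumPy. This is an illustrative reimplementation of greedy NMS (the IoU threshold of 0.5 is an assumption), not the MTCNN code itself:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop boxes whose IoU with it exceeds iou_thresh, and repeat."""
    order = np.argsort(scores)[::-1]   # indices, best score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of box i with the remaining boxes (x1, y1, x2, y2).
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]   # survivors only
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2] -- the two heavily overlapping boxes merge
```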
In the last step, involving the O-Net, high-confidence face bounding boxes and 5 landmark points are generated. Next, several types of neural networks trained for face recognition were used to create face embeddings. Deep learning methods, especially convolutional neural networks (CNNs), have been shown to be very successful in many computer vision problems, such as image recognition and classification. Multilayer neural networks are able to learn spatial hierarchies of features automatically through the backpropagation algorithm. CNNs outperform other face recognition methods on many benchmark datasets. They generate a face embedding, a feature vector that describes the analyzed face and can be directly used for comparison and identification. We utilized the DeepFace framework and evaluated the performance of 7 state-of-the-art deep learning models: VGG-Face, Facenet, Facenet512, OpenFace, DeepFace, DeepID, and ArcFace [21]. A face embedding is a vector representing the features extracted from the face; it can be used for face recognition, face identification, and clustering to group similar faces. The generated face embeddings were used to train classifiers in two scenarios: a binary problem, classifying images into disease and control groups, and a multiclass problem, detecting a specific genetic syndrome. A support vector machine (SVM) classifier was used for both tasks. Support vector machines are a family of supervised learning algorithms that can be used for classification, regression, and outlier detection. SVMs are based on the construction of a hyperplane that best divides a dataset into classes. One of their main advantages is that they remain effective when the number of dimensions is greater than the number of samples. The dataset was split into training (70%) and test (30%) sets and fed into an SVM with a linear kernel.
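A minimal scikit-learn sketch of the embedding-plus-SVM stage; synthetic 128-dimensional vectors stand in for the real face embeddings (the embedding dimension and class means are assumptions), while the 70/30 split and linear kernel follow the text:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Synthetic "embeddings": two classes with shifted means, 100 samples each.
X = np.vstack([rng.normal(0.0, 1.0, (100, 128)),
               rng.normal(1.0, 1.0, (100, 128))])
y = np.array([0] * 100 + [1] * 100)   # 0 = control, 1 = disease

# 70% training / 30% test, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"accuracy: {acc:.2f}")
```

With real embeddings, `X` would instead hold one vector per face image produced by the chosen recognition model.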
Standard metrics such as accuracy, precision, and recall were calculated to evaluate the performance of the models. Due to the study’s exploratory nature, the classification was repeated five times, and average values were reported. In the case of the binary problem, to assess whether the system was able to classify “unknown diseases” correctly, each of the syndromes was in turn removed from the training set. Classification testing was then performed only on the patients with that specific syndrome, which had never been presented to the classifier.
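The “unknown disease” evaluation protocol reduces to a simple split routine. The sketch below shows only the bookkeeping (sample names and labels are illustrative, not the authors’ code): every image of the held-out syndrome goes to the test set and none to training.

```python
def leave_disease_out(samples, held_out):
    """samples: list of (embedding, label) pairs, where label is either
    "control" or a syndrome name. Returns (train, test) lists of
    (embedding, is_disease) pairs with the held-out syndrome excluded
    from training and used exclusively for testing."""
    train = [(x, lbl != "control") for x, lbl in samples if lbl != held_out]
    test = [(x, True) for x, lbl in samples if lbl == held_out]
    return train, test

samples = [("e1", "control"), ("e2", "Down"), ("e3", "Noonan"), ("e4", "Down")]
train, test = leave_disease_out(samples, "Down")
print(len(train), len(test))  # 2 2
```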

3.3. Geometric Analyses

The 3D Face Alignment Network detected 68 3D landmark points from 2D images [22]. The algorithm is based on a neural network that was trained on a large dataset of 2D face images annotated with the 3D coordinates of landmark points. For each analyzed 2D image, the network output consisted of 68 coordinates in three dimensions, aligned in a unified coordinate system. A total of 16 landmark points and 11 distances between them were selected to represent the main face areas analyzed during dysmorphology assessment (Figure 2). For each selected point pair in 3D space, the length of the line segment between the two points was calculated for every image in the dataset. The distances are expressed in the units of the coordinate system. In the case of symmetric distances, an average was calculated. Finally, the distances were used to train a support vector machine-based classifier in the two scenarios described in the face classification section.
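The distance computation reduces to Euclidean norms over selected landmark pairs. A sketch assuming a (68, 3) landmark array as produced by the Face Alignment Network; the point indices and values here are placeholders, not the pairs in Figure 2:

```python
import numpy as np

def pair_distances(landmarks, pairs):
    """landmarks: (68, 3) array of 3D points; pairs: list of (i, j)
    index tuples. Returns the Euclidean length of each line segment,
    in the units of the landmark coordinate system."""
    lm = np.asarray(landmarks, float)
    return [float(np.linalg.norm(lm[i] - lm[j])) for i, j in pairs]

def symmetric_average(d_left, d_right):
    # Symmetric measurements (e.g. left/right counterparts) are averaged.
    return (d_left + d_right) / 2

# Toy example: a 3-4-5 triangle in the xy-plane gives a segment of length 5.
lm = np.zeros((68, 3))
lm[0] = (0, 0, 0)
lm[1] = (3, 4, 0)
d, = pair_distances(lm, [(0, 1)])
print(d)  # 5.0
```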

4. Results

The results of automatic anthropometric measurements based on localization of 3D landmark points on the face (Figure 2) can be found in Table 1. The points were selected based on low standard deviation, and they reflect the main areas of the face analyzed during physical examination. Automated anthropometric measurements detected several characteristic features of genetic syndromes. For example, hypertelorism, an abnormally large distance between the eyes, was reflected by the increased distance between the corners of the eyes in Crouzon syndrome, Mowat–Wilson syndrome, Noonan syndrome, and Wideman–Steinert syndrome. In contrast, in Fragile X syndrome, this distance was smaller than in the controls, indicating hypotelorism, in agreement with clinical genetics textbooks. Similarly, a short nose was present in Cornelia de Lange syndrome and Down syndrome. However, the performance of a support vector machine trained with these measurements was unsatisfactory (multiclass task accuracy 56%, binary task accuracy 67%).
The results of the multiclass classification system based on deep learning face recognition models are shown in Table 2. From the seven models tested on the dataset, four achieved an accuracy of more than 70%. The best result of 84% accuracy was obtained with a classification system based on the MTCNN detector and SVM trained with embeddings generated by the ArcFace model.
In the binary classification into disease and control groups, an SVM based on the MTCNN detector and DeepFace embeddings was the most successful, with an accuracy of 96% (Table 3). In addition, the classifier performed well in classifying as abnormal those patients with diseases that were not presented during training (Table 4, Figure 3).

5. Discussion

We have evaluated several approaches to construct a classifying system for detecting genetic disease from images of the face. There were significant differences between models generating face embeddings in our experiments. We achieved the best results in multiclass problems with ArcFace. DeepFace gave the best results for the two-class classification. The best accuracy of a multiclass classifier detecting one of 15 genetic syndromes was 84%. The two-class classifier, intended to detect the presence or absence of genetic disease, had an accuracy of 96% in our experiments. More importantly, the binary classifier detected disease features in patients with diseases that were not present in the training dataset. Thus, the classifier was able to generalize differences between patients and controls, and detect abnormalities without the need for information about the specific disorder. This indicates that the system does not have to be trained with all genetic diseases to detect ‘genetic’ features of a face and, therefore, may be used as a first-line screening tool.
In our experiments, machine learning face recognition systems outperformed the simple approach based on geometric characteristics of face landmark points. The clinical examination performed by a trained dysmorphologist is based on individual experience [2]. A trained eye detects single distances between points on the face and the relations between them. Moreover, automated measurements from pictures taken in uncontrolled conditions require normalization by one of the measurements, affecting the natural geometry of the face. The images used in the study were collected from many sources, with variable degrees of head rotation and various facial expressions, which could affect landmark detection and measurements [8]. Machine learning-based face recognition models were trained using millions of unstandardized images. They are applied in face identification, which requires the extraction of many interrelated features and thereby resembles dysmorphic evaluation. Therefore, these models are more suitable as computer-aided clinical face analysis tools.
Face2Gene is a state-of-the-art tool used by clinicians in many countries. The solution was built on a database of 26,000 pictures, including patients with more than 200 syndromes. The application was designed to point out the most probable diseases, with a score labeled as high, medium, or low [16]. However, the evaluation requires manual submission of pictures; therefore, it is difficult to use in a comparative way. In addition, Face2Gene attempts to solve a more challenging task due to the number of syndromes covered. We submitted to Face2Gene a random sample of 200 control pictures from our dataset, and 69% received no high or medium diagnostic suggestions. However, 31% of control subjects would be erroneously assigned to disease groups. This indicates that Face2Gene is designed for a different task and, without modification, might not perform well as a screening tool.
A single dysmorphic feature is usually not unique for a specific disorder. For example, hypertelorism is not only present in Coffin–Lowry syndrome but also, among others, in Crouzon syndrome. This overlap in dysmorphic features most probably enables the correct classification of patients with a disease unknown to the system in the binary task. The system analyzes only face image information without knowledge about symptoms, other test results, or family history. Seventy-two percent of rare diseases have a genetic background, while others result from infections, allergies, and environmental causes. One of the limitations of our study is the lack of training on images of patients with non-genetic etiologies. There are usually no dysmorphic features in these cases, and therefore, facial images are not published and available. Fetal alcohol syndrome (FAS), caused by an environmental factor, is associated with specific dysmorphic changes, and represents an example of a disease that could be misclassified as a genetic syndrome. Therefore, cases of such diseases should be included in the final training dataset before clinical application. However, it is essential to note that in the case of a screening system, the number of patients with genetic diseases classified as control is more concerning than a false positive misclassification.
Ethnic background and age are essential factors in face recognition and genetic syndrome classification. Ethnicity has been shown to have a high impact on the performance of Face2Gene in the classification of Down syndrome patients of Caucasian and African origin [23]. There is a bias in the ethnicity of patients described in the literature and, therefore, in the availability of face images. The screening system described in this paper should be constructed using images reflecting the local population and its ethnic structure. In many clinical genetic units, face pictures are routinely taken and could be used as a data source. Information concerning the exact age at which the picture was taken could improve the performance of dysmorphic feature analysis. However, for many pictures from the literature, information about age was not available. We therefore limited the analyses to pictures arbitrarily labeled as children or young adolescents.
In many countries, the time needed for diagnosis is counted in years, particularly for genetic diseases. This is due to the complex nature of genetic disorders, the high cost of laboratory testing, and limited human resources, i.e., trained clinical geneticists. Moreover, most of the patients that have genetic tests remain undiagnosed despite significant progress in the field. For example, a majority of patients with isolated congenital heart disease will have normal results in genetic tests. However, if the heart disease is not isolated and there are additional symptoms or dysmorphic features, the probability of a positive test increases significantly, to ~30% [4,5,24]. The so-called ‘diagnostic odyssey’ of a patient with suspicion of a genetic disease usually starts with an appointment with a general practitioner or pediatrician. The doctor then has to decide whether to refer the patient to a clinical genetic unit. In many countries, the resources in the field of clinical genetics are minimal. A sensitive screening system would enable the selection of patients for genetic referral and reduce the patient’s wait for a diagnosis. The number of patients with a genetic syndrome classified as controls was ~4% in the analysis based on the DeepFace model. For a misclassified patient, this would mean a potential delay in diagnosis. The interpretation of this percentage depends on the capacity of clinical genetic units in a given country.
Our analyses were based on a relatively small dataset of 15 disorders out of thousands of genetic syndromes. However, the same approach with an extended dataset could have clinical application. Furthermore, once the training of the deep learning algorithm is performed, the testing of individual images is fast, and can be performed on a desktop computer.
It is important to remember that there are more than 7500 genetic diseases, and probably still hundreds to be discovered. Therefore, the screening system has to detect abnormalities associated with diseases that were unknown at the time of its construction. Our results suggest that such a system is possible. However, a more extensive study, including a larger dataset with more diseases, is needed.

Author Contributions

Conceptualization, M.G. and K.S.; methodology, M.G. and K.S.; software, M.G.; validation, M.G.; formal analysis, M.G.; investigation, M.G.; resources, M.G.; data curation, M.G.; writing—original draft preparation, M.G.; writing—review and editing, M.G. and K.S.; visualization, M.G.; supervision, K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rodwell, C.; Aymé, S. (Eds.) 2014 Report on the State of the Art of Rare Disease Activities in Europe. July 2014. [Google Scholar]
  2. Nguengang Wakap, S.; Lambert, D.M.; Olry, A.; Rodwell, C.; Gueydan, C.; Lanneau, V.; Murphy, D.; Le Cam, Y.; Rath, A. Estimating cumulative point prevalence of rare diseases: Analysis of the Orphanet database. Eur. J. Hum. Genet. EJHG 2020, 28, 165–173. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Reardon, W.; Donnai, D. Dysmorphology demystified. Arch. Dis. Child. Fetal Neonatal Ed. 2007, 92, F225–F229. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Jin, S.C.; Homsy, J.; Zaidi, S.; Lu, Q.; Morton, S.; DePalma, S.R.; Zeng, X.; Qi, H.; Chang, W.; Sierant, M.C.; et al. Contribution of rare inherited and de novo variants in 2,871 congenital heart disease probands. Nat. Genet. 2017, 49, 1593–1601. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Blue, G.M.; Kirk, E.P.; Giannoulatou, E.; Sholler, G.F.; Dunwoodie, S.L.; Harvey, R.P.; Winlaw, D.S. Advances in the genetics of congenital heart disease: A clinician’s guide. J. Am. Coll. Cardiol. 2017, 69, 859–870. [Google Scholar] [CrossRef] [PubMed]
  6. Herpers, R.; Rodax, H.; Sommer, G. Neural network identifies faces with morphological syndromes. Artif. Intell. Med. 1993, 10, 481–485. [Google Scholar]
  7. Loos, H.S.; Wieczorek, D.; Würtz, R.P.; von der Malsburg, C.; Horsthemke, B. Computer-based recognition of dysmorphic faces. Eur. J. Hum. Genet. 2003, 11, 555–560. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Boehringer, S.; Guenther, M.; Sinigerova, S.; Wurtz, R.P.; Horsthemke, B.; Wieczorek, D. Automated syndrome detection in a set of clinical facial photographs. Am. J. Med. Genet. 2011, 155, 2161–2169. [Google Scholar] [CrossRef] [PubMed]
  9. Balliu, B.; Würtz, R.P.; Horsthemke, B.; Wieczorek, D.; Böhringer, S. Classification and Visualization Based on Derived Image Features: Application to Genetic Syndromes. PLoS ONE 2014, 9, e109033. [Google Scholar] [CrossRef] [PubMed]
  10. Cristinacce, D.; Cootes, T.F. Feature detection and tracking with constrained local models. BMVC 2006, 1, 3. [Google Scholar] [CrossRef] [Green Version]
  11. Zhao, Q.; Okada, K.; Rosenbaum, K.; Kehoe, L.; Zand, D.J.; Sze, R.; Linguraru, M.G. Digital facial dysmorphology for genetic screening: Hierarchical constrained local model using ICA. Med Image Anal. 2014, 18, 699–710. [Google Scholar] [CrossRef] [PubMed]
  12. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  13. Kruszka, P.; Porras, A.R.; Addissie, Y.A.; Moresco, A.; Medrano, S.; Mok, G.T.; Leung, G.K.C.; Tekendo-Ngongang, C.; Uwineza, A.; Thong, M.-K.; et al. Noonan syndrome in diverse populations. Am. J. Med Genet. Part A 2017, 173, 2323–2334. [Google Scholar] [CrossRef] [PubMed]
  14. Basel-Vanagaite, L.; Wolf, L.; Orin, M.; Larizza, L.; Gervasini, C.; Krantz, I.; Deardoff, M. Recognition of the Cornelia de Lange syndrome phenotype with facial dysmorphology novel analysis. Clin. Genet. 2016, 89, 557–563. [Google Scholar] [CrossRef] [PubMed]
  15. Ferry, Q.; Steinberg, J.; Webber, C.; FitzPatrick, D.R.; Ponting, C.P.; Zisserman, A.; Nellåker, C. Diagnostically relevant facial gestalt information from ordinary photos. eLife 2014, 3, e02020. [Google Scholar] [CrossRef] [PubMed]
  16. Gurovich, Y.; Hanani, Y.; Bar, O.; Nadav, G.; Fleischer, N.; Gelbman, D.; Basel-Salmon, L.; Krawitz, P.M.; Kamphausen, S.B.; Zenker, M.; et al. Identifying facial phenotypes of genetic disorders using deep learning. Nat. Med. 2019, 25, 60–64. [Google Scholar] [CrossRef] [PubMed]
  17. Zhang, Z.; Song, Y.; Qi, H. Age Progression/Regression by Conditional Adversarial Autoencoder. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4352–4360. [Google Scholar] [CrossRef] [Green Version]
  18. Zhang, K.; Zhang, Z.; Li, Z.; Qiao, Y. Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks. IEEE Signal. Process. Lett. 2016, 23, 1499–1503. [Google Scholar] [CrossRef] [Green Version]
  19. Shu, C.; Ding, X.; Fang, C. Histogram of the oriented gradient for face recognition. Tsinghua Sci. Technol. 2011, 16, 216–224. [Google Scholar] [CrossRef]
  20. King, D.E. Dlib-ml: A machine learning toolkit. J. Mach. Learn. Res. 2009, 10, 1755–1758. [Google Scholar]
  21. Serengil, S.I.; Ozpinar, A. LightFace: A Hybrid Deep Face Recognition Framework. In Proceedings of the Innovations in Intelligent Systems and Applications Conference (ASYU), İstanbul, Turkey, 15–17 October 2020; pp. 1–5. [Google Scholar] [CrossRef]
  22. Bulat, A.; Tzimiropoulos, G. How Far Are We from Solving the 2D & 3D Face Alignment Problem? (And a Dataset of 230,000 3D Facial Landmarks). In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1021–1030. [Google Scholar] [CrossRef] [Green Version]
  23. Lumaka, A.; Cosemans, N.; Lulebo Mampasi, A.; Mubungu, G.; Mvuama, N.; Lubala, T.; Mbuyi-Musanzayi, S.; Breckpot, J.; Holvoet, M.; de Ravel, T.; et al. Facial dysmorphism is influenced by ethnic background of the patient and of the evaluator. Clin. Genet. 2017, 92, 166–171. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Sifrim, A.; Hitz, M.P.; Wilsdon, A.; Breckpot, J.; Turki, S.H.; Thienpont, B.; McRae, J.; Fitzgerald, T.W.; Singh, T.; Swaminathan, G.J.; et al. Distinct genetic architectures for syndromic and nonsyndromic congenital heart defects identified by exome sequencing. Nat. Genet. 2016, 48, 1060–1065. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Diagram of the system architecture.
Figure 2. Localization of landmark points that were used for automated anthropometric measurements.
Figure 3. Receiver operating characteristic curve for the two-class classifier based on the DeepFace model.
Table 1. Automated 3D anthropometric measurements taken from 2D images in a control group and 15 genetic syndromes. The average distances between the points labeled in Figure 2 are given.
| Patient Group | 1–2 | 3–4 | 4–5 | 5–6 | 7–8 | 9–10 | 11–12 | 13–14 | 12–15 | 15–16 | 7–16 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Control | 32.31 | 27.66 | 17.57 | 17.57 | 26.14 | 17.46 | 8.94 | 39.2 | 14.27 | 23.99 | 75.71 |
| 22q11 microdeletion syndrome | 32.4 | 27.97 | 17.47 | 17.47 | 26.33 | 16.91 | 9 | 36.12 | 12.2 | 25.34 | 74.71 |
| Angelman syndrome | 31.8 | 26.76 | 16.88 | 16.88 | 25.2 | 17.72 | 8.59 | 39.27 | 18 | 23.62 | 76.69 |
| Coffin–Lowry syndrome | 33.17 | 30.37 | 17.61 | 17.61 | 26.01 | 17.89 | 9.78 | 38.93 | 24.01 | 22.41 | 83.27 |
| Cornelia de Lange syndrome | 32.21 | 28.52 | 17.25 | 17.25 | 23.49 | 16.6 | 10.73 | 36.3 | 11.73 | 22.93 | 71.67 |
| Crouzon syndrome | 33.52 | 30.33 | 18.4 | 18.4 | 25.45 | 16.92 | 8.26 | 35.9 | 15.96 | 24.29 | 75.28 |
| Down syndrome | 31.93 | 27.64 | 17.11 | 17.11 | 24.04 | 17.19 | 8.78 | 37.84 | 14.74 | 22.35 | 71.78 |
| Fragile X syndrome | 32.64 | 27.45 | 17.8 | 17.8 | 27.44 | 17.69 | 9.55 | 39.94 | 19.26 | 23.67 | 82.06 |
| KBG syndrome | 32.3 | 28.31 | 17.46 | 17.46 | 25.14 | 16.69 | 10.31 | 36.23 | 13.98 | 24.68 | 76.23 |
| Kabuki syndrome | 32.83 | 28.76 | 17.82 | 17.82 | 25.39 | 16.77 | 10.09 | 34.46 | 13.67 | 24.03 | 74.45 |
| Mowat–Wilson syndrome | 32.74 | 28.53 | 17.72 | 17.72 | 27.16 | 17.76 | 8.82 | 39.74 | 16.18 | 24.91 | 79.38 |
| Noonan syndrome | 32.87 | 28.71 | 17.65 | 17.65 | 25.71 | 16.9 | 10.25 | 35.34 | 14.46 | 24.47 | 76.41 |
| Pitt–Hopkins syndrome | 32.34 | 27.67 | 17.35 | 17.35 | 25.93 | 17.76 | 8.34 | 38.77 | 17.47 | 23.44 | 76.67 |
| Smith–Lemli–Opitz syndrome | 32.24 | 27.89 | 17.16 | 17.16 | 24.38 | 17.38 | 9.43 | 38.09 | 20.56 | 22.47 | 78.14 |
| Wiedemann–Steiner syndrome | 32.72 | 29.07 | 17.15 | 17.15 | 26.45 | 16.96 | 9.4 | 36.89 | 15.95 | 23.69 | 77.52 |
| Williams syndrome | 31.98 | 27.77 | 17.52 | 17.52 | 24.52 | 17.28 | 10.72 | 39.3 | 16.69 | 22.12 | 76.07 |
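Each measurement in Table 1 is the Euclidean distance between a pair of landmark points numbered as in Figure 2. A minimal sketch, assuming the landmarks arrive as a mapping from point number to (x, y, z) coordinates (the coordinates below are toy values, not real measurements):

```python
import math

# Landmark pairs measured in Table 1, following the point numbering of Figure 2.
PAIRS = [(1, 2), (3, 4), (4, 5), (5, 6), (7, 8), (9, 10),
         (11, 12), (13, 14), (12, 15), (15, 16), (7, 16)]

def measure(landmarks):
    """Euclidean distance for each landmark pair.

    landmarks: dict mapping point number -> (x, y, z) coordinates,
    e.g. as produced by a 3D face-alignment model.
    """
    return {pair: math.dist(landmarks[pair[0]], landmarks[pair[1]])
            for pair in PAIRS}

# Toy coordinates to illustrate the call: a classic 3-4-5 triangle.
toy = {1: (0.0, 0.0, 0.0), 2: (3.0, 4.0, 0.0)}
print(math.dist(toy[1], toy[2]))  # → 5.0
```

Averaging these per-pair distances over all images of a patient group yields one row of Table 1.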
Table 2. Results of multiclass classification for the 4 most accurate face recognition models.
| Patient Group | ArcFace (P / R) | FaceNet (P / R) | DeepFace (P / R) | FaceNet512 (P / R) |
|---|---|---|---|---|
| 22q11 microdeletion syndrome | 0.690 / 0.755 | 0.545 / 0.588 | 0.484 / 0.580 | 0.493 / 0.517 |
| Angelman syndrome | 0.657 / 0.680 | 0.396 / 0.576 | 0.467 / 0.669 | 0.419 / 0.525 |
| Coffin–Lowry syndrome | 0.811 / 0.634 | 0.742 / 0.663 | 0.417 / 0.343 | 0.537 / 0.761 |
| Control | 0.910 / 0.956 | 0.890 / 0.898 | 0.944 / 0.979 | 0.859 / 0.883 |
| Cornelia de Lange syndrome | 0.949 / 0.811 | 0.593 / 0.649 | 0.701 / 0.394 | 0.856 / 0.750 |
| Crouzon syndrome | 0.907 / 0.838 | 0.783 / 0.915 | 0.748 / 0.459 | 0.871 / 0.866 |
| Down syndrome | 0.911 / 0.819 | 0.755 / 0.756 | 0.537 / 0.632 | 0.770 / 0.699 |
| Fragile X syndrome | 0.534 / 0.313 | 0.397 / 0.513 | 0.436 / 0.369 | 0.203 / 0.237 |
| KBG syndrome | 0.786 / 0.714 | 0.669 / 0.610 | 0.599 / 0.708 | 0.580 / 0.560 |
| Kabuki syndrome | 0.787 / 0.631 | 0.564 / 0.474 | 0.495 / 0.391 | 0.789 / 0.576 |
| Mowat–Wilson syndrome | 0.902 / 0.839 | 0.777 / 0.665 | 0.501 / 0.326 | 0.777 / 0.793 |
| Noonan syndrome | 0.562 / 0.543 | 0.529 / 0.453 | 0.362 / 0.353 | 0.558 / 0.494 |
| Pitt–Hopkins syndrome | 0.783 / 0.610 | 0.568 / 0.373 | 0.284 / 0.225 | 0.544 / 0.383 |
| Smith–Lemli–Opitz syndrome | 0.683 / 0.711 | 0.562 / 0.572 | 0.599 / 0.517 | 0.650 / 0.619 |
| Wiedemann–Steiner syndrome | 0.750 / 0.813 | 0.774 / 0.661 | 0.601 / 0.483 | 0.788 / 0.586 |
| Williams syndrome | 0.889 / 0.728 | 0.694 / 0.542 | 0.533 / 0.218 | 0.692 / 0.591 |

P = precision, R = recall. Overall accuracy: ArcFace 0.846, FaceNet 0.762, DeepFace 0.757, FaceNet512 0.746.
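The per-class precision and recall values in Table 2 follow the standard definitions: precision of a class is the fraction of images predicted as that class that truly belong to it, and recall is the fraction of that class's images that were found. A minimal pure-Python sketch, using toy labels rather than the study's data:

```python
from collections import Counter

def per_class_metrics(y_true, y_pred):
    """Per-class (precision, recall) and overall accuracy.

    precision(c) = TP_c / times c was predicted
    recall(c)    = TP_c / times c actually occurs
    """
    tp = Counter()            # correct predictions per class
    pred = Counter(y_pred)    # how often each class was predicted
    true = Counter(y_true)    # how often each class actually occurs
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
    metrics = {c: (tp[c] / pred[c] if pred[c] else 0.0,
                   tp[c] / true[c])
               for c in true}
    accuracy = sum(tp.values()) / len(y_true)
    return metrics, accuracy

# Toy example with two of the classes from Table 2.
y_true = ["Control", "Control", "Down", "Down"]
y_pred = ["Control", "Down", "Down", "Down"]
metrics, acc = per_class_metrics(y_true, y_pred)
print(metrics["Down"], acc)  # Down: precision 2/3, recall 1.0; accuracy 0.75
```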
Table 3. Result of two-class classification for the 4 most accurate models.
| Metric | DeepFace | ArcFace | DeepID | FaceNet |
|---|---|---|---|---|
| Control precision | 0.961 | 0.922 | 0.940 | 0.894 |
| Control recall | 0.971 | 0.922 | 0.921 | 0.915 |
| Disease precision | 0.962 | 0.907 | 0.900 | 0.894 |
| Disease recall | 0.949 | 0.908 | 0.922 | 0.867 |
| Accuracy | 0.961 | 0.915 | 0.922 | 0.894 |
| False positive rate | 0.04 | 0.05 | 0.08 | 0.07 |
| False negative rate | 0.04 | 0.09 | 0.09 | 0.1 |
| Negative predictive value | 0.957 | 0.915 | 0.914 | 0.906 |
Table 4. Two-class classification results for test sets of patients with diseases that were removed from the training data.
| Disease Removed from Training | DeepFace Using 70% of the Dataset without Given Disease for Training | DeepFace Using 100% of the Dataset without Given Disease for Training |
|---|---|---|
| 22q11 microdeletion syndrome | 0.70 | 0.88 |
| Angelman syndrome | 0.85 | 0.94 |
| KBG syndrome | 0.84 | 0.98 |
| Down syndrome | 0.92 | 0.97 |
| Crouzon syndrome | 0.91 | 0.88 |
| Cornelia de Lange syndrome | 0.85 | 0.91 |
| Noonan syndrome | 0.83 | 0.99 |
| Williams syndrome | 0.91 | 1.00 |
| Fragile X syndrome | 0.65 | 1.00 |
| Kabuki syndrome | 0.90 | 0.94 |
| Mowat–Wilson syndrome | 0.92 | 0.94 |
| Coffin–Lowry syndrome | 0.96 | 1.00 |
| Smith–Lemli–Opitz syndrome | 0.79 | 0.95 |
| Pitt–Hopkins syndrome | 0.79 | 0.91 |
| Wiedemann–Steiner syndrome | 0.84 | 0.97 |
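The protocol behind Table 4 is a leave-one-disease-out evaluation: all patients with one syndrome are excluded from training, a binary disease-vs-control classifier is trained on the rest, and the excluded patients are then tested to see whether they are still flagged as abnormal. The sketch below illustrates the loop with synthetic 2D "embeddings" and a nearest-centroid stand-in for the paper's classifier; every name and number here is illustrative, not the study's method:

```python
import math
import random

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def classify(emb, control_c, disease_c):
    """Nearest-centroid stand-in for the paper's binary classifier."""
    return "disease" if math.dist(emb, disease_c) < math.dist(emb, control_c) else "control"

def leave_disease_out(data, held_out):
    """Train without one disease, then test on that disease's patients.

    data: list of (embedding, label) with label 'control' or a disease name.
    Returns the fraction of held-out patients still flagged as 'disease'.
    """
    controls = [e for e, lab in data if lab == "control"]
    diseases = [e for e, lab in data if lab not in ("control", held_out)]
    test = [e for e, lab in data if lab == held_out]
    cc, dc = centroid(controls), centroid(diseases)
    flagged = sum(classify(e, cc, dc) == "disease" for e in test)
    return flagged / len(test)

# Synthetic embeddings: controls near (0, 0), all diseases near (3, 3).
random.seed(0)
data = [([random.gauss(0, 1), random.gauss(0, 1)], "control") for _ in range(50)]
for disease in ["Down", "Noonan", "Williams"]:
    data += [([random.gauss(3, 1), random.gauss(3, 1)], disease) for _ in range(20)]
print(leave_disease_out(data, "Down"))  # close to 1.0 for well-separated toy data
```

This mirrors the "100% of the dataset without the given disease" column; the 70% variant additionally holds out a random 30% of the remaining data.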

Geremek, M.; Szklanny, K. Deep Learning-Based Analysis of Face Images as a Screening Tool for Genetic Syndromes. Sensors 2021, 21, 6595. https://doi.org/10.3390/s21196595
