Article

Automatic Facial Palsy, Age and Gender Detection Using a Raspberry Pi

1 Electrical Engineering Technical College, Middle Technical University, Baghdad 10022, Iraq
2 School of Engineering, University of South Australia, Adelaide, SA 5000, Australia
3 Technical Institute for Administration, Middle Technical University, Baghdad 10074, Iraq
* Authors to whom correspondence should be addressed.
BioMedInformatics 2023, 3(2), 455-466; https://doi.org/10.3390/biomedinformatics3020031
Submission received: 13 April 2023 / Revised: 29 May 2023 / Accepted: 8 June 2023 / Published: 13 June 2023

Abstract

Facial palsy (FP) is a neurological disorder of the facial nerve, specifically the seventh cranial nerve, that causes the patient to lose control of the facial muscles on one side of the face. It is a distressing condition that can occur in both children and adults, regardless of gender. Diagnosis by visual examination, based on differences between the two sides of the face, is prone to errors and inaccuracies. Detecting FP with artificial intelligence through computer vision systems has therefore become increasingly important, and deep learning offers a practical way to detect FP in real time with high accuracy, saving patients time, effort, and cost. This work proposes a real-time system for detecting FP and for determining the patient’s gender and age, using a Raspberry Pi device with a digital camera and a deep learning algorithm. The solution simplifies the diagnostic process for both the doctor and the patient, and it could form part of a medical assessment workflow. This study used a dataset of 20,600 images, containing 19,000 normal images and 1600 FP images, and achieved an accuracy of 98%. The proposed system is thus a highly accurate medical diagnostic tool for detecting FP.

1. Introduction

Facial palsy is a facial nerve disease that occurs on one side of the face and leads to loss of control of voluntary muscle movement [1]. A range of symptoms is associated with FP, including taste and hearing problems, pain around the face and in the ears, sagging eyelids, and dry eyes. FP affects one out of every 60 people [2,3]. The people most at risk of developing FP are pregnant women, diabetics, and those with a family history, with an incidence of around 10% [4]. The left side of the face is affected more often than the right side, and fewer males are afflicted than females [5]. Statistically, 6.1 out of every 100,000 people with FP are children, aged 1 to 15 years [2]. FP is most common between the ages of 30 and 45 years, and it annually affects 37.7 out of every 100,000 people in the UK [6]. The traditional method still prevails in the medical diagnosis of FP; it depends on the doctor’s vision and judgment and requires the patient to spend time, effort, and money. FP is distressing and uncomfortable because of the facial deformation and the accompanying pain and other symptoms. Therefore, an automatic system should be developed to detect FP quickly and accurately.
Machine learning is a branch of artificial intelligence that makes predictions about data through model learning, allowing computers to gather insights and learn from data [7]. A convolutional neural network (CNN) is a form of artificial intelligence that performs complex pattern and image recognition tasks [8]. CNNs differ from mainstream feature extraction methods in that they learn features through convolutional structures. Recognizing objects through computer vision is the task CNNs were created to perform [9,10,11]. Deep learning is one of the most advanced solutions to the problem of face detection and recognition; it addresses problems in digital face image processing applications, such as image colorization, detection, and classification [3,12,13]. Computer vision is a method of collecting and interpreting visual information by a computer, essentially teaching a computer to see [14].
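The feature extraction performed by convolutional structures can be illustrated with a minimal valid-mode 2D convolution in pure Python; the edge-detecting kernel below is an illustrative example, not a filter from the study:

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a single-channel image with a kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            # Element-wise product of the kernel with the window at (r, c), summed.
            s = sum(image[r + i][c + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge kernel responds strongly where intensity changes left to right.
img = [[0, 0, 10, 10],
       [0, 0, 10, 10],
       [0, 0, 10, 10]]
edge_kernel = [[-1, 1],
               [-1, 1]]
feature_map = conv2d(img, edge_kernel)  # large values mark the vertical edge
```

Sliding a small kernel over the image this way produces a feature map whose strong responses mark the pattern the kernel encodes; a CNN learns many such kernels automatically during training.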
In recent years, after the shift in FP detection technology from traditional methods to automatic detection by means of machine learning and computer vision, several FP detection methods appeared. For example, a study by Dong et al. [15] measured the degree of FP by identifying the difference between the two sides of the face and proposed a quantitative estimation method that detects the main points by K-means clustering and uses the SUSAN edge algorithm to identify the salient edge points and detect facial features. Another study by Azoulay et al. [16] detected vital signs and diagnosed FP using a mobile application with a user interface; the data comprised 14 people with FP and 31 healthy people, and the diagnostic accuracy was 95.5%. In another study by Haase et al. [17], an analytical system based on the facial action coding system (FACS) was used to analyze the two sides of the face in detail, using a dataset of 299 people with FP and 28 healthy people. However, the training process used data from healthy subjects spoofed with facial paralysis. The analysis and scan duration was 108 ms per image. A study by Ngo et al. [18] proposed limited-orientation modified circular Gabor filters (LO-MCGFs) to perform quantitative analysis of FP; a database of 85 subjects (75 patients and 10 healthy volunteers) was used to achieve 81.2% accuracy by utilizing a frequency-based method to preprocess images prior to extracting features. Wang et al. [19] proposed a technique to assess the level of FP, taking into account both the fixed facial unevenness and the variable transformative factors. On a database of 62 patients (33 females and 29 males), the suggested approach, which integrates both static and dynamic quantification, achieved a recognition rate of 97.56%. Another study by Codari et al. [20] proposed a facial thirds-based evaluation method to measure the degree of facial asymmetry using a stereophotogrammetric device.
The data used were from 40 healthy people and 30 people with FP. A hybrid method proposed by Storey et al. [21] automatically created a three-dimensional image from a two-dimensional image using a computer vision system based on a 3D CNN to detect facial features, which achieved high accuracy in collecting facial features. However, an error caused by asynchronous movements affected the accuracy of the system in collecting mouth features. The study used two sets of data, and the F1 score was 88% for FP, falling to 82% for mouth movement. Another study by Storey et al. [22] detected facial features and diagnosed FP by training a deep neural network on mixed training data of healthy and facial palsy subjects based on a binary sequential process proposed in [23], where the classification error ranged between 8.72% and 18.88%, and the evaluation accuracy ranged from 82% to 95.60%. Another study by Barbosa et al. [24] used training data consisting of 440 images of FP patients to extract FP features with an ensemble of regression trees for regularized logistic regression detection and iris detection, with acceptable accuracy. Jiang et al. [25] used machine learning methods (K-NN, SVM, and NN) to classify the degree of FP injury by computational image analysis using a dataset of 80 participants, with an average of 100 images per person. The accuracy of the system ranged between 87.22% and 95.69%. Dell’Olio et al. [26] used the FaraPy system to detect FP in real time with different facial expressions for six healthy people, obtaining acceptable accuracy with a low loss percentage. To detect 68 facial features and diagnose FP, a complete CNN system by Liu et al. [27] was trained using a dataset of healthy subjects with different facial expressions and subjects with FP, and it extracted the facial features for classification. However, a major limitation was the long execution time. A study by Nguyen et al.
[28] detected different facial expressions based on three-dimensional point cloud data and geometric deep learning technology, obtaining a detection accuracy from 69.01% to 85.85%. Recently, Dominguez et al. [29] used 480 images, facial features, and a binary classifier to detect FP, with classification accuracy ranging between 94.06% and 97.22%. Another study by Estomba et al. [7] predicted facial nerve palsy with the K-nearest neighbor algorithm using data from 356 patients, and the accuracy of the system exceeded 0.9. A study by Amsalam et al. [3] introduced a technique for detecting FP using a computer vision system. This method utilized deep learning through a CNN and a Python program and involved the analysis of 570 images, including 200 images of individuals with facial palsy. The study included 10 participants, comprising three males and seven females with varying degrees of FP and injury on different sides, aged between 15 and 70 years. The method demonstrated short processing and detection time with 98% accuracy. However, it was not considered user-friendly due to its lack of real-time functionality. In another study by Zhang et al. [30], a system that automatically evaluates faces, called AFES (Automatic Facial Evaluation System), was proposed with customization options. During the study, 92 individuals with facial palsy were enrolled and underwent evaluations, which included both subjective manual assessments using scales such as mHBGS and mSFGS and automatic objective evaluations using AFES. The AFES evaluations included aHBGS, aSFGS, and assessments of specific facial features. These evaluations were conducted at the beginning of the study and repeated after two weeks. AFES’s algorithm was developed by training and testing it on video frames from over 100 patients; 80% of the frames were used for training, while the remaining 20% were used for testing.
AFES may be viewed as a feasible approach for conducting a precise and dependable assessment of patients who have facial palsy. However, previous methods for diagnosing FP or detecting facial features had problems, including reliance on older techniques [15,16], the frequent use of training data from spoofed FP [23], and small datasets of patients with FP [26], in addition to insufficient or non-existent quantitative results and low accuracy in many cases [17,31]. Therefore, this study proposes a high-accuracy diagnostic system to detect FP and the patient’s gender and age in real time using deep learning algorithms based on a CNN.
The remainder of this paper is organized as follows: Section 2 describes the materials and methods of the study, including research ethics and participants, the experimental setup, system design, and feature extraction. Section 3 presents the performance metrics of the proposed system. Section 4 presents and discusses the results of applying the proposed system to human participants. Finally, Section 5 concludes the paper.

2. Materials and Methods

2.1. Research Ethics and Participants

In this study, the guidelines and research ethics set out in the Declaration of Helsinki (Finland, 1964) were followed. Approval to conduct the research was obtained from the specialized research committee in the Department of Research and Knowledge in Dhi Qar Health of the Iraqi Ministry of Health, according to protocol No. 363/2022. Prior to conducting the research, the consent of all participants was obtained to collect samples, and they were informed of the protection of their data.
The participants were 20 male and female individuals with varying degrees and sides of injury, aged from 10 to 65 years. Samples were collected from the Department of Physiotherapy at Al-Rifai General Hospital. The remaining data were obtained from the Kaggle website [32], including the UTKFace dataset [33] for normal faces and the FER-2013 dataset [34] for facial palsy. The collected dataset comprised 20,600 images: 19,000 normal images and 1600 palsy images.

2.2. Experimental Setup

The experimental setup is capable of extracting features from the face and diagnosing FP, as shown in Figure 1. The patient sits in front of the camera at a distance of 0.5 to 2 m while the microcomputer (a Raspberry Pi) detects facial palsy and diagnoses the condition in real time, indicating whether the person is healthy or has FP, determining the affected side of the face, and revealing the person’s gender and age. Facial feature detection and FP diagnosis are performed after training the system with three types of data: right palsy data, left palsy data, and data from healthy individuals. The data in each case were divided into two parts, 80% training data and 20% test data, comprising 19,000 normal images and 1600 images of FP. The diagnostic process was carried out using the Python programming language (version 3.9), and the code was executed on Anaconda (version 2.3.2) after installing image processing and object detection libraries (OpenCV, dlib, TensorFlow, Keras, and NumPy).
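The 80/20 division of the data into training and test sets can be sketched as follows; the seeded shuffle is an assumption added for reproducibility and is not described in the paper:

```python
import random

def train_test_split(samples, train_fraction=0.8, seed=42):
    """Shuffle the samples and split them into training and test subsets."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    cut = int(len(items) * train_fraction)
    return items[:cut], items[cut:]

# 20,600 labels total, matching the study: 19,000 normal + 1,600 palsy images.
labels = ["normal"] * 19000 + ["palsy"] * 1600
train, test = train_test_split(labels)
```

In practice the same split would be applied per class (right palsy, left palsy, normal) so each category is represented in both subsets.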

2.3. Hardware

The proposed system’s practical part is shown in Figure 2, which comprises a Raspberry Pi, a digital camera, and a display screen mounted on a tripod.
Figure 2 shows the whole proposed system where the patient sits in front of the camera, the captured image is displayed on the screen, and the detection window appears around the face to complete the diagnosis process.
The Raspberry Pi 4 Model B 2G is a small single-board computer developed by the Raspberry Pi Foundation. Its dimensions are 88 × 58 × 19.5 mm (3.46 × 2.28 × 0.77 inches), making it small, portable, and easy to use in projects and applications that require a small footprint, such as home automation, media centers, and portable gaming devices. These dimensions are for the board itself and do not include any additional components or peripherals that a specific application may require. The Raspberry Pi 4 Model B 2G is not a standalone device; it needs a power supply, SD card, keyboard, mouse, display, and other accessories to function effectively.
The display was a Waveshare 5-inch HDMI capacitive touch screen with a resolution of 800 × 480 pixels. The physical size of the screen is measured diagonally from corner to corner, and the aspect ratio is 800:480, or approximately 1.67:1.
The camera used was a Kisonli (No. U-227) with digital zoom (f = 3.85 mm) and 10 megapixels. After the code was installed on the Raspberry Pi, the device was connected to a digital camera to photograph the patient and to a screen that displays the patient’s image and is used to control the Raspberry Pi. Some additional considerations were attended to, such as directing the patient towards the camera and ensuring good lighting at the imaging location, to facilitate the detection and diagnosis of FP by the proposed system.

2.4. System Design

The block diagram of the proposed diagnostic system based on CNN is shown in Figure 3.

2.5. Features Extraction

After the collected data are saved to a computer file, the FP classification process begins. Classifying facial features goes through three successive stages: face detection, extraction of facial features, and classification of expressions. The image was processed before detecting the face and extracting features, in order to detect the main features of the face, such as the eyebrows, eyes, and mouth [35]. The main goal of feature extraction is to obtain the most relevant information from the original data and represent it in a reduced dimensional space [36]. The proposed diagnostic system extracted 68 facial features using Haar cascades, a popular technique for quickly detecting objects, including human faces [37]. Facial features were extracted in the proposed diagnostic system using rectangles covering each facial feature. Each rectangle was divided into two halves, one white and one black, reflecting the contrast difference between the two areas. The pixel values in the two regions were summed, and the feature value was obtained by subtraction: the closer the result is to the maximum value (255, or 1 after normalization), the stronger the indication of a particular feature. To reduce processing time, the image was converted into an integral image before the detection process was applied. The capacity and size of the detection window were adjusted depending on the size of the feature, and the likelihood of a person’s presence increases with the number of matched Haar features. Facial features such as the eyes, eyebrows, facial edges, mouth, and nose were detected using a set of packages in a Python program. The Haar cascade searches for similarities between images to match them and detect differences, which increases detection accuracy. Classification is based on the presence or absence of these features and is reported as true or false.
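The integral image and two-rectangle Haar feature described above can be sketched in pure Python; the sample image and window coordinates are illustrative, not taken from the study. With the integral image, any rectangle sum takes four lookups, so each Haar feature is evaluated in constant time regardless of window size:

```python
def integral_image(img):
    """ii[r][c] = sum of img over all pixels above and to the left (zero border)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r + 1][c + 1] = ii[r][c + 1] + row_sum
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum over a rectangle using four integral-image lookups."""
    b, r = top + height, left + width
    return ii[b][r] - ii[top][r] - ii[b][left] + ii[top][left]

def haar_two_rect(ii, top, left, height, width):
    """Left half minus right half: responds to vertical contrast edges."""
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, half))

# Bright-left / dark-right patch: the feature responds strongly.
img = [[255, 255, 0, 0],
       [255, 255, 0, 0]]
ii = integral_image(img)
feature = haar_two_rect(ii, 0, 0, 2, 4)
```

A full cascade evaluates thousands of such features at many window positions and scales, rejecting non-face windows early so that only promising regions receive the more expensive checks.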

3. Evaluation Metrics

Diagnostic results can be divided into four cases depending on the combination of actual and predicted categories: true negative (TN), true positive (TP), false negative (FN), and false positive (FP). Based on the confusion matrix, the diagnostic ability of the proposed system can be evaluated through six metrics: sensitivity, specificity, precision, accuracy, Matthews’ correlation coefficient (MCC), and error rate.
These variables can be defined as follows [38,39,40,41]:
Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
Precision = TP / (TP + FP)
Accuracy = (TP + TN) / (TP + TN + FP + FN)
MCC = (TP × TN − FP × FN) / √((TP + FN)(TN + FP)(TP + FP)(TN + FN))
Error rate = (FP + FN) / (TP + FP + FN + TN)
The confusion matrix is a graph that provides a complete visualization of the performance of a supervised deep-learning algorithm. Each row in the matrix represents the actual class, and each column represents the predicted class. Errors can be easily calculated and visualized from the values that appear off the main diagonal of the matrix, while correct predictions can be computed from the values on the main diagonal [42]. The confusion matrix has been used in machine learning to explain and evaluate model behavior in supervised classification [43].
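The six metrics defined above can be computed directly from the four confusion-matrix counts; the counts below are illustrative examples, not the study’s results:

```python
import math

def metrics(tp, tn, fp, fn):
    """Evaluation metrics computed from confusion-matrix counts."""
    total = tp + tn + fp + fn
    mcc_den = math.sqrt((tp + fn) * (tn + fp) * (tp + fp) * (tn + fn))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / total,
        "mcc": (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0,
        "error_rate": (fp + fn) / total,
    }

m = metrics(tp=90, tn=95, fp=5, fn=10)  # illustrative counts only
```

Note that accuracy and error rate always sum to 1, while MCC remains informative even when the classes are imbalanced, as in this study’s 19,000 normal versus 1600 palsy images.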

4. Experimental Results and Discussion

To verify that the proposed diagnostic system achieved the desired goal and that its results were valid, we compared it with the data of 20 patients diagnosed by a physician. After training on the training data, the system’s diagnostic accuracy was 98%, and 99% for the case study data. The training set accounted for 80% of the data, while 20% was reserved for testing. The training time depends on the size of the data: the more data, the longer the training time. It also depends on the power of the computer processor used. For the proposed diagnostic system’s data, training took only five hours, performed once, and the diagnostic time was only a few seconds. The accuracy of the proposed diagnostic system can be clearly seen in Figure 4 and Figure 5, which show the results of the diagnosis. The system classified each person’s condition, detecting whether the person had FP and, if so, the affected side of the face.
When the proposed system was applied for real-time diagnosis using the Raspberry Pi, it displayed the person’s condition, the afflicted side of the patient, their gender, and age, as shown in Figure 6.
The proposed system accurately diagnoses FP, identifies the affected side of the face (which was the right side), and can determine the patient’s gender (in this case, male, approximately 24 years old).
The proposed system can also diagnose the condition of a healthy person and predict their gender and age, as shown in Figure 7.
The proposed system can also diagnose multiple people simultaneously and detect their gender and age, as shown in Figure 8.
All the raw information about the model’s prediction results on the dataset is contained in the confusion matrix. Figure 9 represents a square matrix where the rows represent the actual values and the columns represent the values predicted by the model. It shows the true positive, false positive, true negative, and false negative values and their agreement with the actual values in the stored data.
The confusion matrix is based on the classification of the test data, where the number of rows and columns in the matrix equals the number of classes. Figure 10 shows the confusion matrix in detail, obtained after training the data for 100 epochs. The test data were divided into three categories: right palsy, left palsy, and normal; the confusion matrix therefore has three rows and three columns. The off-diagonal entries represent the number of instances not recognized correctly by the program, while the diagonal entries indicate the instances recognized correctly. The true and predicted values were identified by the confusion matrix. The accuracy and effectiveness of the system increased as the prediction accuracy increased. When the system’s prediction results matched the stored data and their classification, the model could be adopted. The results indicated the acceptability and efficiency of the system in diagnosis, achieving an accuracy rate of 0.98 with a 0.02 error rate. The sensitivity, specificity, and precision of the system were 1, 0.8, and 0.97, respectively.
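The three-class confusion matrix described above (right palsy, left palsy, normal) can be built from paired actual and predicted labels; the example labels are illustrative only, not the study’s test data:

```python
CLASSES = ["right_palsy", "left_palsy", "normal"]

def confusion_matrix(y_true, y_pred, classes=CLASSES):
    """Rows are actual classes, columns are predicted classes."""
    index = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for t, p in zip(y_true, y_pred):
        m[index[t]][index[p]] += 1
    return m

# Five illustrative test samples: one normal face misclassified as right palsy.
y_true = ["normal", "normal", "right_palsy", "left_palsy", "normal"]
y_pred = ["normal", "right_palsy", "right_palsy", "left_palsy", "normal"]
cm = confusion_matrix(y_true, y_pred)
```

The diagonal entries count correct predictions per class, so overall accuracy is the diagonal sum divided by the number of samples.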
To increase accuracy and reduce system losses, it is necessary to standardize image sizes, augment the images, and increase the amount of data used. The high accuracy of the proposed system, 98%, indicates that it is an acceptable diagnostic system with few errors. Typical results of the proposed system are shown in Figure 11.
System losses decrease with increasing accuracy because the relationship between them is inverse. The percentage of losses in the proposed system was 2% of the total data used in the training process. Figure 12 shows the percentage losses for training and validation in CNN training.
The distinction between the prior techniques and the proposed system’s approach can be clarified by examining the technology employed, the quantity of data utilized, and the system’s precision, as illustrated in Table 1 below.
The proposed system outperforms previous studies in many features, including large numbers of training data, real-time diagnosis, and high accuracy, in addition to real-time gender and age detection and use of a practical Raspberry Pi device.
Despite the desirable features and accuracy of the proposed system in diagnosing FP and detecting gender and age, it has some limitations. These include variation in the appearance of people of the same age when determining age, difficulty in diagnosing when the patient moves, and difficulty in distinguishing cases where the person has a facial deviation resulting from an accident or a deviation of the nose. Furthermore, collecting data for disease cases is challenging, since patients are often reluctant to be photographed because of the embarrassment caused by FP.

5. Conclusions

In this paper, we proposed a modern, highly accurate diagnostic system to automatically detect facial paralysis affecting the seventh cranial nerve. The proposed system is based on a CNN and can diagnose FP with high accuracy, along with detecting the patient’s gender and age. The diagnostic accuracy of the proposed system reached 98%. It is suggested as an auxiliary medical diagnostic tool for doctors, nursing staff, and patients. Using this system at home for diagnosis reduces the patient’s embarrassment, effort, time, and cost. Further work is ongoing to develop the system to diagnose more conditions.

Author Contributions

Conceptualization, A.A.-N.; methodology, A.S.A., A.A.-N. and J.C.; software, A.S.A., A.A.-N. and A.Y.D.; validation, A.S.A., A.A.-N. and A.Y.D.; formal analysis, A.S.A. and A.Y.D.; investigation, A.S.A.; resources, A.S.A.; data curation, A.S.A.; writing—original draft preparation, A.S.A.; writing—review and editing, A.Y.D., A.A.-N. and J.C.; visualization, A.Y.D. and J.C.; supervision, A.A.-N. and A.Y.D.; project administration, A.A.-N. and J.C.; funding acquisition, A.A.-N. and J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Department of Research and Knowledge in Dhi Qar Health of the Iraqi Ministry of Health (protocol code 363/2022 and 1-12-2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Barbosa, J.; Lee, K.; Lee, S.; Lodhi, B.; Cho, J.-G.; Seo, W.-K.; Kang, J. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier. BMC Med. Imaging 2016, 16, 23. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Baugh, R.F.; Basura, G.J.; Ishii, L.E.; Schwartz, S.R.; Drumheller, C.M.; Burkholder, R.; Deckard, N.A.; Dawson, C.; Driscoll, C.; Gillespie, M.B. Clinical practice guideline: Bell’s palsy. Otolaryngol. Head Neck Surg. 2013, 149, S1–S27. [Google Scholar] [PubMed]
  3. Amsalam, A.S.; Al-Naji, A.; Daeef, A.Y.; Chahl, J. Computer Vision System for Facial Palsy Detection. J. Tech. 2023, 5, 44–51. [Google Scholar] [CrossRef]
  4. Ahmed, A. When is facial paralysis Bell palsy? Current diagnosis and treatment. Cleve Clin. J. Med. 2005, 72, 398–401. [Google Scholar] [CrossRef] [PubMed]
  5. Movahedian, B.; Ghafoornia, M.; Saadatnia, M.; Falahzadeh, A.; Fateh, A. Epidemiology of Bell’s palsy in Isfahan, Iran. Neurosci. J. 2009, 14, 186–187. [Google Scholar]
  6. Szczepura, A.; Holliday, N.; Neville, C.; Johnson, K.; Khan, A.J.K.; Oxford, S.W.; Nduka, C. Raising the digital profile of facial palsy: National surveys of patients’ and clinicians’ experiences of changing UK treatment pathways and views on the future role of digital technology. J. Med. Internet Res. 2020, 22, e20406. [Google Scholar] [CrossRef]
  7. Chiesa-Estomba, C.M.; Echaniz, O.; Suarez, J.A.S.; González-García, J.A.; Larruscain, E.; Altuna, X.; Medela, A.; Graña, M. Machine learning models for predicting facial nerve palsy in parotid gland surgery for benign tumors. J. Surg. Res. 2021, 262, 57–64. [Google Scholar] [CrossRef]
  8. O’Shea, K.; Nash, R. An introduction to convolutional neural networks. arXiv 2015, arXiv:1511.08458. [Google Scholar]
  9. Lindeberg, T. Scale invariant feature transform. DiVA 2012, 7, 10491. [Google Scholar] [CrossRef]
  10. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; pp. 886–893. [Google Scholar]
  11. Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6999–7019. [Google Scholar] [CrossRef]
  12. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef] [Green Version]
  13. O’Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Hernandez, G.V.; Krpalkova, L.; Riordan, D.; Walsh, J. Deep learning vs. traditional computer vision. In Proceedings of the Science and Information Conference, Las Vegas, NV, USA, 25–26 April 2019; pp. 128–144. [Google Scholar]
  14. Soo, S. Object detection using Haar-cascade Classifier. Inst. Comput. Sci. Univ. Tartu 2014, 2, 1–12. [Google Scholar]
  15. Dong, J.; Ma, L.; Li, Q.; Wang, S.; Liu, L.-a.; Lin, Y.; Jian, M. An approach for quantitative evaluation of the degree of facial paralysis based on salient point detection. In Proceedings of the 2008 International Symposium on Intelligent Information Technology Application Workshops, Shanghai, China, 21–22 December 2008; pp. 483–486. [Google Scholar]
  16. Azoulay, O.; Ater, Y.; Gersi, L.; Glassner, Y.; Bryt, O.; Halperin, D. Mobile application for diagnosis of facial palsy. In Proceedings of the 2nd International Conference on Mobile and Information Technologies in Medicine, Prague, Czech Republic, 20 November 2014. [Google Scholar]
  17. Haase, D.; Minnigerode, L.; Volk, G.F.; Denzler, J.; Guntinas-Lichius, O. Automated and objective action coding of facial expressions in patients with acute facial palsy. Eur. Arch. Oto-Rhino-Laryngol. 2015, 272, 1259–1267. [Google Scholar] [CrossRef] [PubMed]
  18. Ngo, T.H.; Seo, M.; Matsushiro, N.; Xiong, W.; Chen, Y.-W. Quantitative analysis of facial paralysis based on limited-orientation modified circular Gabor filters. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 349–354. [Google Scholar]
  19. Wang, T.; Zhang, S.; Dong, J.; Liu, L.a.; Yu, H. Automatic evaluation of the degree of facial nerve paralysis. Multimed. Tools Appl. 2016, 75, 11893–11908. [Google Scholar] [CrossRef] [Green Version]
  20. Codari, M.; Pucciarelli, V.; Stangoni, F.; Zago, M.; Tarabbia, F.; Biglioli, F.; Sforza, C. Facial thirds–based evaluation of facial asymmetry using stereophotogrammetric devices: Application to facial palsy subjects. J. Cranio-Maxillofac. Surg. 2017, 45, 76–81. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Storey, G.; Jiang, R.; Bouridane, A. Role for 2D image generated 3D face models in the rehabilitation of facial palsy. Healthc. Technol. Lett. 2017, 4, 145–148. [Google Scholar] [CrossRef]
  22. Storey, G.; Jiang, R.; Keogh, S.; Bouridane, A.; Li, C.-T. 3DPalsyNet: A facial palsy grading and motion recognition framework using fully 3D convolutional neural networks. IEEE Access 2019, 7, 121655–121664. [Google Scholar] [CrossRef]
  23. Wang, T.; Zhang, S.; Liu, L.A.; Wu, G.; Dong, J. Automatic facial paralysis evaluation augmented by a cascaded encoder network structure. IEEE Access 2019, 7, 135621–135631. [Google Scholar] [CrossRef]
  24. Barbosa, J.; Seo, W.-K.; Kang, J. paraFaceTest: An ensemble of regression tree-based facial features extraction for efficient facial paralysis classification. BMC Med. Imaging 2019, 19, 30. [Google Scholar] [CrossRef] [Green Version]
  25. Jiang, C.; Wu, J.; Zhong, W.; Wei, M.; Tong, J.; Yu, H.; Wang, L. Automatic facial paralysis assessment via computational image analysis. J. Healthc. Eng. 2020, 2020, 2398542. [Google Scholar] [CrossRef]
  26. Barrios Dell’Olio, G.; Sra, M. FaraPy: An Augmented Reality Feedback System for Facial Paralysis using Action Unit Intensity Estimation. In Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology, Online, 10–14 October 2021; pp. 1027–1038. [Google Scholar]
  27. Liu, X.; Wang, Y.; Luan, J. Facial paralysis detection in infrared thermal images using asymmetry analysis of temperature and texture features. Diagnostics 2021, 11, 2309. [Google Scholar] [CrossRef] [PubMed]
  28. Nguyen, D.-P.; Ho Ba Tho, M.-C.; Dao, T.-T. Enhanced facial expression recognition using 3D point sets and geometric deep learning. Med. Biol. Eng. Comput. 2021, 59, 1235–1244. [Google Scholar] [CrossRef] [PubMed]
  29. Parra-Dominguez, G.S.; Sanchez-Yanez, R.E.; Garcia-Capulin, C.H. Facial paralysis detection on images using key point analysis. Appl. Sci. 2021, 11, 2435. [Google Scholar] [CrossRef]
  30. Zhang, Y.; Ding, L.; Xu, Z.; Zha, H.; Tang, X.; Li, C.; Xu, S.; Yan, Z.; Jia, J. The Feasibility of An Automatical Facial Evaluation System Providing Objective and Reliable Results for Facial Palsy. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 1680–1686. [Google Scholar] [CrossRef]
  31. Vletter, C.; Burger, H.; Alers, H.; Sourlos, N.; Al-Ars, Z. Towards an Automatic Diagnosis of Peripheral and Central Palsy Using Machine Learning on Facial Features. arXiv 2022, arXiv:2201.11852. [Google Scholar]
  32. Kaggle. FER-2013. Available online: https://www.kaggle.com/msambare/fer2013 (accessed on 5 August 2022).
  33. Chandaliya, P.K.; Kumar, V.; Harjani, M.; Nain, N. Scdae: Ethnicity and gender alteration on CLF and UTKface dataset. In Proceedings of the International Conference on Computer Vision and Image Processing, Jaipur, India, 27–29 September 2019; pp. 294–306. [Google Scholar]
  34. Zahara, L.; Musa, P.; Wibowo, E.P.; Karim, I.; Musa, S.B. The facial emotion recognition (FER-2013) dataset for prediction system of micro-expressions face using the convolutional neural network (CNN) algorithm based Raspberry Pi. In Proceedings of the 2020 Fifth International Conference on Informatics and Computing (ICIC), Gorontalo, Indonesia, 3–4 November 2020; pp. 1–9. [Google Scholar]
  35. Liu, Y.; Xu, Z.; Ding, L.; Jia, J.; Wu, X. Automatic Assessment of Facial Paralysis Based on Facial Landmarks. In Proceedings of the 2021 IEEE 2nd International Conference on Pattern Recognition and Machine Learning (PRML), Chengdu, China, 16–18 July 2021; pp. 162–167. [Google Scholar]
  36. Kumar, G.; Bhatia, P.K. A detailed review of feature extraction in image processing systems. In Proceedings of the 2014 Fourth International Conference on Advanced Computing & Communication Technologies, Rohtak, India, 8–9 February 2014; pp. 5–12. [Google Scholar]
  37. Yustiawati, R.; Husni, N.L.; Evelina, E.; Rasyad, S.; Lutfi, I.; Silvia, A.; Alfarizal, N.; Rialita, A. Analyzing of different features using Haar cascade classifier. In Proceedings of the 2018 International Conference on Electrical Engineering and Computer Science (ICECOS), Pangkal, Indonesia, 2–4 October 2018; pp. 129–134. [Google Scholar]
  38. Codeluppi, L.; Venturelli, F.; Rossi, J.; Fasano, A.; Toschi, G.; Pacillo, F.; Cavallieri, F.; Giorgi Rossi, P.; Valzania, F. Facial palsy during the COVID-19 pandemic. Brain Behav. 2021, 11, e01939. [Google Scholar] [CrossRef]
  39. Ansari, S.A.; Jerripothula, K.R.; Nagpal, P.; Mittal, A. Eye-focused Detection of Bell’s Palsy in Videos. arXiv 2022, arXiv:2201.11479. [Google Scholar] [CrossRef]
  40. Saxena, K.; Khan, Z.; Singh, S. Diagnosis of diabetes mellitus using k nearest neighbor algorithm. Int. J. Comput. Sci. Trends Technol. (IJCST) 2014, 2, 36–43. [Google Scholar]
  41. Yao, J.; Shepperd, M. Assessing software defection prediction performance: Why using the Matthews correlation coefficient matters. In Proceedings of the Evaluation and Assessment in Software Engineering, Trondheim, Norway, 15–17 April 2020; pp. 120–129. [Google Scholar]
  42. Visa, S.; Ramsay, B.; Ralescu, A.L.; Van Der Knaap, E. Confusion matrix-based feature selection. Maics 2011, 710, 120–127. [Google Scholar]
  43. Caelen, O. A Bayesian interpretation of the confusion matrix. Ann. Math. Artif. Intell. 2017, 81, 429–450. [Google Scholar] [CrossRef]
Figure 1. The proposed real-time system.
Figure 2. The practical part of the proposed system.
Figure 3. The block diagram of the proposed diagnostic system design.
Figure 4. Result of the proposed diagnostic system for different male cases.
Figure 5. Result of the proposed diagnostic system for different female cases.
Figure 6. Real-time right FP patient diagnosis by the proposed system with age and gender detection.
Figure 7. Real-time normal person diagnosis by the proposed system with age and gender detection.
Figure 8. Diagnosis process for two people.
Figure 9. Classifications of the confusion matrix.
Figure 10. The confusion matrix.
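The reported performance figures follow directly from the confusion-matrix counts in Figure 10. As a minimal sketch of that standard derivation, the function below computes accuracy, precision, recall, and F1-score from true/false positive and negative counts; the counts in the usage example are hypothetical placeholders for illustration, not the values reported in this study:

```python
def confusion_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Derive standard classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # fraction of all predictions that are correct
    precision = tp / (tp + fp)                   # how many predicted positives are real
    recall = tp / (tp + fn)                      # sensitivity: how many real positives are found
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Hypothetical counts for illustration only:
print(confusion_metrics(tp=310, tn=3700, fp=40, fn=40))
```

Note that with a class imbalance like the one in this dataset (far more normal images than FP images), accuracy alone can look high even when FP recall is modest, which is why the confusion matrix is reported alongside the overall accuracy.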
Figure 11. Training and validation accuracy.
Figure 12. Training and validation loss.
Table 1. Comparison between previous studies and the proposed system.

| Work | Method | Technique | Training Images | Time | Accuracy |
|---|---|---|---|---|---|
| Ngo et al., 2016 [18] | Facial palsy | LO-MCGFs | 85 subjects | Not real-time | 81.2% |
| Jiang et al., 2020 [25] | Facial palsy | K-NN, SVM, and NN | 80 participants | Not real-time | 87.22% and 95.69% |
| Parra-Dominguez et al., 2021 [29] | Facial paralysis | Multi-layer perceptron | 480 images | Not real-time | 94.06% to 97.22% |
| Vletter et al., 2022 [31] | Facial paralysis | KNN | 203 pictures | Not real-time | 85.1% |
| Amsalam et al., 2023 [3] | Facial palsy | CNN | 570 images | Not real-time | 98% |
| Proposed system | Facial palsy | CNN | 20,600 images | Real-time | 98% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Amsalam, A.S.; Al-Naji, A.; Daeef, A.Y.; Chahl, J. Automatic Facial Palsy, Age and Gender Detection Using a Raspberry Pi. BioMedInformatics 2023, 3, 455-466. https://doi.org/10.3390/biomedinformatics3020031
