Feature Selection on 2D and 3D Geometric Features to Improve Facial Expression Recognition
Abstract
1. Introduction
- A set of geometric features is proposed, evaluated, and compared. These features are derived from the 2D and 3D geometry of the human face using angles and normalized distances between facial landmark points.
- To obtain relevant features, two selection techniques are implemented and compared: PCA, as a common approach, and a GA, as a new proposal.
- The performance of four classifiers (k-NN; E-KNN, an ensemble subspace k-nearest neighbor classifier; SVM3, an SVM with a cubic kernel; and SVM2, an SVM with a quadratic kernel) is compared using our features.
- A comparative study is presented. Our proposal compares favorably in accuracy with other works in the literature that use static images and the Bosphorus database, while greatly lowering the number of features used and thereby reducing the computational cost of classification.
2. Materials & Methods
2.1. Data Acquisition
2.1.1. Bosphorus Facial Database
2.1.2. Virtual Facial Expression Dataset UIBVFED
2.2. Feature Extraction
- IRISDO: the distance between the upper and lower eyelids, i.e., the approximate iris diameter.
- ESO: the distance between the eye pupils, i.e., eye separation.
- ENSO: the distance between the center of ESO and the point below the nostrils, i.e., eye–nose separation.
- MNSO: the distance between the upper lip and the nostrils, i.e., mouth–nose separation.
- MWO: the distance between the left and right corners of the lips, i.e., mouth width.
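These five measures can be computed directly from landmark coordinates. A minimal sketch, assuming landmarks stored as NumPy arrays under illustrative names (`compute_fapus` and the landmark labels are not the paper's exact identifiers):

```python
import numpy as np

def compute_fapus(lm):
    """Compute the five distance measures of Section 2.2 from landmarks.

    `lm` maps landmark names to coordinate arrays (2D or 3D); the names
    used below are illustrative assumptions, not the paper's labels.
    """
    dist = lambda a, b: float(np.linalg.norm(lm[a] - lm[b]))
    return {
        "IRISDO": dist("upper_eyelid", "lower_eyelid"),  # approx. iris diameter
        "ESO": dist("left_pupil", "right_pupil"),        # eye separation
        "ENSO": dist("eye_midpoint", "nose_base"),       # eye-nose separation
        "MNSO": dist("upper_lip", "nose_base"),          # mouth-nose separation
        "MWO": dist("mouth_left", "mouth_right"),        # mouth width
    }
```

Dividing raw landmark distances by units such as these (the MPEG-4 FAPU convention, cf. the abbreviations list) is one way normalized distance features become comparable across faces.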
2.3. Feature Selection
2.3.1. Principal Component Analysis (PCA) for Feature Selection
Algorithm 1: PCA algorithm
Input: the set of original features
Output: the new set of features
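The reduction step can be sketched with a NumPy-only PCA that keeps the fewest leading components reaching a target explained-variance ratio (the paper evaluates 97%, 98%, and 99% in Section 3.2); `pca_reduce` and its interface are illustrative, not the paper's implementation:

```python
import numpy as np

def pca_reduce(X, variance=0.99):
    """PCA reduction: keep the fewest leading principal components whose
    cumulative explained-variance ratio reaches `variance`."""
    Xc = X - X.mean(axis=0)                       # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = (S ** 2) / np.sum(S ** 2)             # explained variance per PC
    k = int(np.searchsorted(np.cumsum(ratio), variance) + 1)
    return Xc @ Vt[:k].T                          # projected features (n x k)
```

With two perfectly correlated columns, a single component explains all the variance, so the output has one feature per sample.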
2.3.2. Genetic Algorithm (GA)
- Population size: 20
- Number of generations: 250
- Parent selection: the best two out of five randomly chosen individuals.
- Recombination: one-point crossover
- Mutation: simple
- Elitism: the best individual is preserved across generations
Algorithm 2: Genetic algorithm
Input: population size, MAX_GENERATION
Output: the best individual over all generations
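The configuration above can be sketched as a GA over binary feature masks. The `fitness` callback here is an assumption standing in for the paper's objective (classifier accuracy on the selected subset), and `ga_select` is an illustrative name:

```python
import random

def ga_select(n_features, fitness, pop_size=20, generations=250,
              mutation_rate=0.02, seed=0):
    """GA feature selection with the settings of Section 2.3.2:
    parent selection takes the best two out of five randomly chosen
    individuals, recombination is one-point crossover, mutation is a
    simple bit flip, and elitism preserves the best individual.
    `fitness` maps a 0/1 mask to a score to maximize."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        children = [best[:]]                        # elitism: carry the best over
        while len(children) < pop_size:
            # parent selection: best two out of five random individuals
            p1, p2 = sorted(rng.sample(pop, 5), key=fitness, reverse=True)[:2]
            cut = rng.randrange(1, n_features)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_features):             # simple mutation: bit flip
                if rng.random() < mutation_rate:
                    child[i] ^= 1
            children.append(child)
        pop = children
        best = max(pop + [best], key=fitness)
    return best
```

A mask bit of 1 keeps the corresponding feature; in the paper the selected subset is then scored by classification accuracy.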
2.4. Classification
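The four classifiers compared are standard. As an illustration of the simplest one, a minimal k-NN over geometric feature vectors (a sketch, not the paper's implementation):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify sample `x` by majority vote among its k nearest training
    samples, using Euclidean distance over the feature vectors."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every sample
    nearest = np.argsort(dists)[:k]               # indices of k closest
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]
```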
3. Experiments and Results
- Assessment of the classification accuracy of the original feature set (see Section 2.2) using the Bosphorus database.
- Selection of a reduced feature set using PCA and assessment of classification accuracy using the Bosphorus database.
- Selection of a reduced feature set using our GA and assessment of classification accuracy using the Bosphorus database.
- Assessment of the classification accuracy of the reduced feature set using GA on the UIBVFED database.
3.1. Original Feature Set and Performance Evaluation
3.2. Feature Selection Using PCA
3.3. Feature Selection Using GA
3.4. Evaluation on UIBVFED Dataset
4. Discussion
4.1. Overall Performance of the Classifiers and Feature Sets
4.2. Number of Features
4.3. Comparison of Our Results with Previous Studies
4.3.1. Comparison on Bosphorus Dataset with Handcrafted Features
4.3.2. Comparison on Bosphorus Dataset with Deep Features
4.3.3. Comparison with Handcrafted Feature Methods on Other Datasets
5. Conclusions and Future Work
Author Contributions
Funding
Conflicts of Interest
Abbreviations
FACS | Facial Action Coding System |
FER | Facial Emotional Recognition |
LD | Large Displacements |
HPV | Head Pose Variations |
DL | Deep Learning |
AUs | Action Units |
SVM | Support Vector Machine |
k-NN | k-Nearest Neighbors |
AN | Anger |
DI | Disgust |
FE | Fear |
HA | Happiness |
SA | Sadness |
SU | Surprise |
FAP | Face Animation Parameters |
FDP | Facial Definition Parameters |
FAPUs | Facial Animation Parameter Units |
FA | Factor Analysis |
PCA | Principal Component Analysis |
FLD | Fisher Linear Discriminant |
GA | Genetic Algorithm |
E-KNN | Ensemble classifier with subspace using k-NN |
SVM3 | Cubic Support Vector Machines |
SVM2 | Quadratic Support Vector Machines |
A | Angle |
Dist | Distance |
References
- Darwin, C.; Prodger, P. The Expression of the Emotions in Man and Animals; Oxford University Press, Inc.: New York, NY, USA, 1998. [Google Scholar]
- Suwa, M.; Sugie, N.; Fujimora, K. A Preliminary Note on Pattern Recognition of Human Emotional Expression. In Proceedings of the International Joint Conference on Pattern Recognition, Kyoto, Japan, 7–10 November 1978; pp. 408–410. [Google Scholar]
- Dalgleish, T.; Power, M.J. Handbook of Cognition and Emotion; Wiley Online Library: Chichester, UK, 1999. [Google Scholar]
- Mandal, M.K.; Awasthi, A. Understanding Facial Expressions in Communication; Springer: Delhi, India, 2015. [Google Scholar]
- García-Ramírez, J.; Olvera-López, J.A.; Olmos-Pineda, I.; Martín-Ortíz, M. Mouth and eyebrow segmentation for emotion recognition using interpolated polynomials. J. Intell. Fuzzy Syst. 2018, 34, 1–13. [Google Scholar]
- Rajan, S.; Chenniappan, P.; Devaraj, S.; Madian, N. Facial expression recognition techniques: A comprehensive survey. IET Image Process. 2019, 13, 1031–1040. [Google Scholar]
- Huang, Y.; Chen, F.; Lv, S.; Wang, X. Facial Expression Recognition: A Survey. Symmetry 2019, 11, 1189. [Google Scholar]
- Tian, Y.L.; Kanade, T.; Cohn, J.F. Facial Expression Analysis. In Handbook of Face Recognition; Springer: New York, NY, USA, 2005; pp. 487–519. [Google Scholar]
- Salahshoor, S.; Faez, K. 3D Face Recognition Using an Expression Insensitive Dynamic Mask. In International Conference on Image and Signal Processing; Springer: New York, NY, USA, 2012; pp. 253–260. [Google Scholar]
- Ujir, H. 3D Facial Expression Classification Using a Statistical Model of Surface Normals and a Modular Approach. Ph.D. Thesis, University of Birmingham, Birmingham, UK, 2013. [Google Scholar]
- Zhang, Y.; Zhang, L.; Hossain, M.A. Adaptive 3D Facial Action Intensity Estimation and Emotion Recognition. Expert Syst. Appl. 2015, 42, 1446–1464. [Google Scholar]
- Belmonte, R.; Ihaddadene, N.; Tirilly, P.; Bilasco, I.M.; Djeraba, C. Video-based Face Alignment with Local Motion Modeling. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 7–11 January 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2106–2115. [Google Scholar]
- Allaert, B.; Mennesson, J.; Bilasco, I.M.; Djeraba, C. Impact of the face registration techniques on facial expressions recognition. Signal Process. Image Commun. 2018, 61, 44–53. [Google Scholar]
- Cambria, E.; Hupont, I.; Hussain, A.; Cerezo, E.; Baldassarri, S. Sentic Avatar: Multimodal Affective Conversational Agent with Common Sense. In Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. Theoretical and Practical Issues; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6456, pp. 81–95. [Google Scholar]
- Kahraman, Y. Facial Expression Recognition Using Geometric Features. In Proceedings of the Systems, Signals and Image Processing (IWSSIP), 2016 International Conference, Bratislava, Slovakia, 23–25 May 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–5. [Google Scholar]
- Li, X.; Ruan, Q.; Ming, Y. 3D Facial Expression Recognition Based on Basic Geometric Features. In Proceedings of the IEEE 10th International Conference on Signal Processing Proceedings, Beijing, China, 24–28 October 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 1366–1369. [Google Scholar]
- Tang, H.; Huang, T.S. 3D Facial Expression Recognition Based on Properties of Line Segments Connecting Facial Feature Points. In Proceedings of the 8th IEEE International Conference on Automatic Face & Gesture Recognition, 2008, FG’08, Amsterdam, The Netherlands, 17–19 September 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–6. [Google Scholar]
- Allaert, B.; Bilasco, I.M.; Djeraba, C. Micro and macro facial expression recognition using advanced Local Motion Patterns. IEEE Trans. Affect. Comput. 2019. [Google Scholar] [CrossRef] [Green Version]
- Ko, B.C. A brief review of facial emotion recognition based on visual information. Sensors 2018, 18, 401. [Google Scholar]
- Corneanu, C.A.; Simón, M.O.; Cohn, J.F.; Guerrero, S.E. Survey on rgb, 3D, thermal, and multimodal approaches for facial expression recognition: History, trends, and affect-related applications. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 1548–1568. [Google Scholar]
- Li, S.; Deng, W. Deep facial expression recognition: A survey. IEEE Trans. Affect. Comput. 2020. [Google Scholar] [CrossRef] [Green Version]
- Savran, A.; Alyüz, N.; Dibeklioğlu, H.; Çeliktutan, O.; Gökberk, B.; Sankur, B.; Akarun, L. Bosphorus Database for 3D Face Analysis. In Biometrics and Identity Management; Springer: Berlin/Heidelberg, Germany, 2008; pp. 47–56. [Google Scholar]
- Oliver, M.M.; Amengual Alcover, E. UIBVFED: Virtual facial expression dataset. PLoS ONE 2020, 15, e0231266. [Google Scholar]
- Savran, A.; Sankur, B.; Bilge, M.T. Regression-Based Intensity Estimation of Facial Action Units. Image Vis. Comput. 2012, 30, 774–784. [Google Scholar]
- Konar, A.; Chakraborty, A. Emotion Recognition: A Pattern Analysis Approach; John Wiley & Sons: Kolkata, India, 2014. [Google Scholar]
- Du, S.; Tao, Y.; Martinez, A.M. Compound Facial Expressions of Emotion. Proc. Natl. Acad. Sci. USA 2014, 111, 1454–1462. [Google Scholar]
- Hemalatha, G.; Sumathi, C. A Study of Techniques for Facial Detection and Expression Classification. Int. J. Comput. Sci. Eng. Surv. 2014, 5, 27–37. [Google Scholar]
- Pandzic, I.S.; Forchheimer, R. MPEG-4 Facial Animation: The Standard, Implementation and Applications; John Wiley & Sons: New York, NY, USA, 2003. [Google Scholar]
- Tekalp, A.M.; Ostermann, J. Face and 2-D Mesh Animation in MPEG-4. Signal Process. Image Commun. 2000, 15, 387–421. [Google Scholar]
- Xue, B.; Zhang, M.; Browne, W.N.; Yao, X. A survey on evolutionary computation approaches to feature selection. IEEE Trans. Evol. Comput. 2015, 20, 606–626. [Google Scholar]
- Gui, J.; Sun, Z.; Ji, S.; Tao, D.; Tan, T. Feature selection based on structured sparsity: A comprehensive study. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 1490–1507. [Google Scholar]
- Dunteman, G.H. Principal Components Analysis; Number 69; Sage: Chichester, UK, 1989. [Google Scholar]
- Mitchell, M. An Introduction to Genetic Algorithms; MIT Press: London, UK, 1998. [Google Scholar]
- Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006. [Google Scholar]
- MathWorks. Ensemble Algorithms. Available online: https://www.mathworks.com/help/stats/ensemble-algorithms.html (accessed on 1 June 2017).
- Murphy, K.P. Machine Learning, A Probabilistic Perspective; The MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
- Li, H.; Sun, J.; Xu, Z.; Chen, L. Multimodal 2D + 3D facial expression recognition with deep fusion convolutional neural network. IEEE Trans. Multimed. 2017, 19, 2816–2831. [Google Scholar]
- Tian, K.; Zeng, L.; McGrath, S.; Yin, Q.; Wang, W. 3D Facial Expression Recognition Using Deep Feature Fusion CNN. In Proceedings of the 2019 30th Irish Signals and Systems Conference (ISSC), Maynooth, Ireland, 17–18 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar]
- Goulart, C.; Valadão, C.; Delisle-Rodriguez, D.; Funayama, D.; Favarato, A.; Baldo, G.; Binotte, V.; Caldeira, E.; Bastos-Filho, T. Visual and Thermal Image Processing for Facial Specific Landmark Detection to Infer Emotions in a Child-Robot Interaction. Sensors 2019, 19, 2844. [Google Scholar]
- Oh, S.G.; Kim, T. Facial Expression Recognition by Regional Weighting with Approximated Q-Learning. Symmetry 2020, 12, 319. [Google Scholar]
Database | Samples | Subjects | Content | Temporality |
---|---|---|---|---|
Bosphorus [22] | 4652 | 105 | Poses with different occlusion conditions and the six basic expressions and the neutral state | Static |
UIBVFED: Virtual facial expression dataset [23] | 640 | 20 | 32 expressions | Static |
Facial Expression | Instances |
---|---|
Surprise | 63 |
Sadness | 66 |
Happiness | 99 |
Fear | 62 |
Disgust | 64 |
Anger | 70 |
Total | 424 |
Facial Expression | Eyebrows | Inner Eyebrows | Eyes | Mouth | Jaw |
---|---|---|---|---|---|
Anger | - | Pulled downward and together | Wide open | Lips are pressed against each other or opened to expose the teeth | - |
Disgust | Relaxed | - | Eyelids: relaxed | Upper lip: raised and curled, frequently asymmetric | - |
Fear | Raised and pulled together | Bent upward | Tense and alert | - | - |
Happiness | Relaxed | - | - | Open. Corners of the mouth: pulled back toward the ears | - |
Sadness | - | Bent upward | Slightly closed | Relaxed | - |
Surprise | Raised | - | Upper eyelids: wide open Lower eyelids: relaxed | - | Open |
# | 3D Dist | 3D Angle | # | 2D Dist | 2D Angle |
---|---|---|---|---|---|
1 | 1D3 | 1A3 | 1 | 1D2 | 1A2 |
2 | 2D3 | 2A3 | 2 | 2D2 | 2A2 |
3 | 3D3 | 3A3 | 3 | 3D2 | 3A2 |
4 | 4D3 | 4A3 | 4 | 4D2 | 4A2 |
5 | 5D3 | 5A3 | 5 | 5D2 | 5A2 |
6 | 6D3 | 6A3 | 6 | 6D2 | 6A2 |
7 | 7D3 | 7A3 | 7 | 7D2 | 7A2 |
8 | 8D3 | 8A3 | 8 | 8D2 | 8A2 |
9 | 9D3 | 9A3 | 9 | 9D2 | 9A2 |
10 | 10D3 | 10A3 | 10 | 10D2 | 10A2 |
11 | 11D3 | 11A3 | 11 | 11D2 | 11A2 |
12 | 12D3 | 12A3 | 12 | 12D2 | 12A2 |
13 | 13D3 | 13A3 | 13 | 13D2 | 13A2 |
14 | 14D3 | 14A3 | 14 | 14D2 | 14A2 |
15 | 15D3 | 15A3 | 15 | 15D2 | 15A2 |
16 | 16D3 | 16A3 | 16 | 16D2 | 16A2 |
17 | 17D3 | 17A3 | 17 | 17D2 | 17A2 |
18 | 18D3 | 18A3 | 18 | 18D2 | 18A2 |
19 | 19D3 | 19A3 | 19 | 19D2 | 19A2 |
20 | | 20A3 | 20 | 20D2 | 20A2 |
21 | | 21A3 | 21 | 21D2 | |
22 | | 22A3 | 22 | 22D2 | |
23 | | 23A3 | 23 | 23D2 | |
24 | | 24A3 | 24 | | |
25 | | 25A3 | 25 | | |
26 | | 26A3 | 26 | | |
27 | | 27A3 | 27 | | |
Measure | SVM3 | SVM2 | kNN | E-KNN |
---|---|---|---|---|
Standard deviation | 0.50 | 0.38 | 0.55 | 0.48 |
Median accuracy | 85.25 | 83.17 | 83.01 | 84.61 |
Mean accuracy | 85.11 | 83.17 | 83.07 | 84.65 |
Maximum accuracy | 85.73 | 83.81 | 84.29 | 85.41 |
Minimum accuracy | 84.29 | 82.53 | 82.21 | 83.81 |
% | SU | SA | HA | FE | DI | AN |
---|---|---|---|---|---|---|
SU | 79 | 0 | 1 | 18 | 2 | 0 |
SA | 0 | 90 | 0 | 1 | 3 | 6 |
HA | 0 | 0 | 97 | 1 | 2 | 0 |
FE | 12 | 3 | 0 | 78 | 7 | 1 |
DI | 0 | 5 | 1 | 4 | 84 | 7 |
AN | 0 | 12 | 0 | 1 | 4 | 84 |
Reduced features using PCA:

% Variance | 97% | 98% | 99% |
---|---|---|---|
Accuracy | 75.48% | 77.24% | 81.25% |
Features | 21 | 27 | 39 |
Measure | Value |
---|---|
Standard deviation | 1.03 |
Median accuracy | 81.08 |
Mean accuracy | 81.20 |
Maximum accuracy | 82.85 |
Minimum accuracy | 79.16 |
% | SU | SA | HA | FE | DI | AN |
---|---|---|---|---|---|---|
SU | 81 | 0 | 0 | 15 | 4 | 0 |
SA | 0 | 83 | 0 | 1 | 9 | 8 |
HA | 0 | 0 | 91 | 1 | 8 | 0 |
FE | 15 | 4 | 0 | 75 | 5 | 1 |
DI | 2 | 11 | 2 | 4 | 76 | 6 |
AN | 0 | 12 | 0 | 2 | 5 | 82 |
Feature Set | 3D Angle | 3D Dist | 3D Total | 2D Angle | 2D Dist | 2D Total | Number of Features | Average Accuracy |
---|---|---|---|---|---|---|---|---|
Worst fit | 12 | 9 | 21 | 12 | 13 | 25 | 46 | 87.82% |
Best fit | 14 | 12 | 26 | 8 | 13 | 21 | 47 | 89.58% |
# | 3D Dist | 3D Angle | # | 2D Dist | 2D Angle |
---|---|---|---|---|---|
1 | 2D3 | 2A3 | 1 | 1D2 | 5A2 |
2 | 4D3 | 3A3 | 2 | 2D2 | 9A2 |
3 | 6D3 | 4A3 | 3 | 4D2 | 10A2 |
4 | 8D3 | 5A3 | 4 | 5D2 | 12A2 |
5 | 9D3 | 7A3 | 5 | 7D2 | 14A2 |
6 | 10D3 | 11A3 | 6 | 10D2 | 15A2 |
7 | 11D3 | 12A3 | 7 | 12D2 | 18A2 |
8 | 12D3 | 15A3 | 8 | 14D2 | 20A2 |
9 | 13D3 | 19A3 | 9 | 16D2 | |
10 | 14D3 | 21A3 | 10 | 17D2 | |
11 | 16D3 | 23A3 | 11 | 19D2 | |
12 | 18D3 | 24A3 | 12 | 20D2 | |
13 | | 26A3 | 13 | 21D2 | |
14 | | 27A3 | 14 | | |
Measure | Value |
---|---|
Standard deviation | 0.73 |
Median accuracy | 86.69 |
Mean accuracy | 86.62 |
Maximum accuracy | 87.17 |
Minimum accuracy | 85.25 |
% | SU | SA | HA | FE | DI | AN |
---|---|---|---|---|---|---|
SU | 81 | 0 | 0 | 16 | 3 | 0 |
SA | 0 | 93 | 0 | 1 | 5 | 1 |
HA | 0 | 0 | 96 | 1 | 3 | 0 |
FE | 16 | 1 | 0 | 77 | 6 | 0 |
DI | 0 | 4 | 0 | 2 | 89 | 5 |
AN | 0 | 11 | 0 | 0 | 6 | 84 |
Measure | Value |
---|---|
Standard deviation | 1.11 |
Median accuracy | 93.75 |
Mean accuracy | 93.92 |
Maximum accuracy | 95.83 |
Minimum accuracy | 92.50 |
% | SU | SA | HA | FE | DI | AN |
---|---|---|---|---|---|---|
SU | 85 | 0 | 0 | 15 | 0 | 0 |
SA | 0 | 85 | 0 | 0 | 15 | 0 |
HA | 0 | 0 | 100 | 0 | 0 | 0 |
FE | 0 | 0 | 0 | 100 | 0 | 0 |
DI | 0 | 5 | 0 | 0 | 95 | 0 |
AN | 0 | 0 | 0 | 0 | 0 | 100 |
Feature Set | Accuracy | Features | Database |
---|---|---|---|
Original feature set | 85.11% | 89 | Bosphorus |
Best PCA | 81.2% | 39 | Bosphorus |
GA | 86.62% | 47 | Bosphorus |
GA | 93.92% | 47 | UIBVFED |
Feature Set | 3D Angle | 3D Dist | 2D Angle | 2D Dist | Total | Time to Classify New Instance (ms) |
---|---|---|---|---|---|---|
Original feature set | 27 | 19 | 20 | 23 | 89 | 0.00046364 |
Reduced feature set obtained via a GA | 14 | 12 | 8 | 13 | 47 | 0.00032031 |
Reduction percentage | 48.15% | 36.84% | 60% | 43.48% | 47.2% | |
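The reduction percentages in the table above follow directly from the feature counts; a one-line check (`reduction_pct` is an illustrative helper):

```python
def reduction_pct(before, after):
    """Percentage of features eliminated when going from `before` to `after`."""
    return round(100.0 * (before - after) / before, 2)
```

For example, going from 89 to 47 features gives 47.19%, which the table reports rounded as 47.2%.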
Criterion | Proposed GA | Salahshoor & Faez (2012) | Ujir (2013) | Zhang et al. (2015) |
---|---|---|---|---|
Approach | Static | Static | Static | Dynamic |
Classes | 6 | 6 | 6 | 6 |
Feature Selection Methods | GA | None | mRMR | mRMR |
Features | 47 | 21600 | 115 | 64 |
Decision Methods | SVM | Modified k-NN | Voting scheme (SVMs) | Adaptive ensemble classifier |
Accuracy (%) | 86.62% | 85.36% | 66% | 92.2% |
Criterion | Proposed GA | Li et al. (2017) | Tian et al. (2019) |
---|---|---|---|
Data | 2D + 3D | 2D + 3D | 2D + 3D |
Features | 47 | 32-D Deep Feature | Deep Feature Fusion |
Classifier | SVM3 | SVM | CNN |
Accuracy | 86.62% | 79.17% | 80.28% |
Criterion | Proposed GA (Bosphorus) | Proposed GA (UIBVFED) | Goulart et al. (2019) | Oh & Kim (2020) |
---|---|---|---|---|
Approach | Static | Static | Static | Dynamic |
Database | Bosphorus | UIBVFED | Cohn–Kanade + preprocessing | Own |
Classes | 6 | 6 | 7 | 5 |
Feature Selection Methods | GA | GA | PCA + FNCA | Grid Map |
Features | 47 | 47 | 60 | 2912 |
Decision Methods | SVM | SVM | SVM | ECOC-SVM |
Accuracy (%) | 86.62% | 93.92% | 89.98% | 98.47% |
Perez-Gomez, V.; Rios-Figueroa, H.V.; Rechy-Ramirez, E.J.; Mezura-Montes, E.; Marin-Hernandez, A. Feature Selection on 2D and 3D Geometric Features to Improve Facial Expression Recognition. Sensors 2020, 20, 4847. https://doi.org/10.3390/s20174847