Search Results (45)

Search Parameters:
Keywords = facial geometry

8 pages, 978 KB  
Article
Integrative Innovation in Genioplasty: Advanced 3D Plate Design: Promoting Stability, Aesthetics, and Harmony Excellence
by Bruno Nifossi Prado, Lucas Cavalieri Pereira, Bianca Pulino and Raphael Capelli Guerra
Craniomaxillofac. Trauma Reconstr. 2025, 18(3), 42; https://doi.org/10.3390/cmtr18030042 - 22 Sep 2025
Viewed by 330
Abstract
Background: Genioplasty is a well-established surgical technique for reshaping the chin and enhancing facial harmony. However, conventional fixation methods may present biomechanical and aesthetic limitations. Objective: This study introduces and evaluates a novel Anatomical Chin Plate (ACP), designed to enhance mechanical performance and facial aesthetics compared to the conventional chin plate (CP). Methods: A three-dimensional finite element analysis (FEA) was conducted to compare stress distribution in ACP and CP models under a standardized oblique load of 60 N, simulating muscle forces from the mentalis and digastric muscles. Plates were modeled using Blender and analyzed using ANSYS 2025 R2 software. Mechanical behavior was assessed based on von Mises stress, stress concentration sites, and potential for plastic deformation or fatigue failure. Results: The ACP demonstrated a significantly lower maximum von Mises stress (77.19 MPa) compared to the CP (398.48 MPa). Stress distribution in the ACP was homogeneous, particularly around the lateral fixation holes, while the CP exhibited concentrated stress between central screw holes. These findings indicate that the anatomical geometry of the ACP enhances load dispersion, reduces critical stress concentrations, and minimizes fatigue risk. Conclusions: The ACP design offers superior biomechanical behavior and improved aesthetic potential for genioplasty procedures. Its optimized shape allows for better integration with facial anatomy while providing stable fixation. Further studies are recommended to validate in vitro performance and explore clinical applicability in advanced genioplasty and complex osteotomies. Full article
(This article belongs to the Special Issue Innovation in Oral- and Cranio-Maxillofacial Reconstruction)
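As a minimal illustration of the FEA post-processing step reported above (not the authors' ANSYS workflow), the Python sketch below computes the von Mises equivalent stress from a symmetric Cauchy stress tensor; the example tensor values are hypothetical.

```python
import numpy as np

def von_mises(stress):
    """Von Mises equivalent stress from a symmetric 3x3 Cauchy stress tensor."""
    sx, sy, sz = stress[0, 0], stress[1, 1], stress[2, 2]
    txy, tyz, tzx = stress[0, 1], stress[1, 2], stress[2, 0]
    return np.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                   + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Hypothetical stress state (MPa) at one element -- not a value from the study.
stress = np.array([[60.0, 12.0,  5.0],
                   [12.0, 20.0,  8.0],
                   [ 5.0,  8.0, 10.0]])
print(f"von Mises stress: {von_mises(stress):.2f} MPa")
```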

22 pages, 3733 KB  
Article
AI-Assisted Fusion Technique for Orthodontic Diagnosis Between Cone-Beam Computed Tomography and Face Scan Data
by Than Trong Khanh Dat, Jang-Hoon Ahn, Hyunkyo Lim and Jonghun Yoon
Bioengineering 2025, 12(9), 975; https://doi.org/10.3390/bioengineering12090975 - 14 Sep 2025
Viewed by 816
Abstract
This study presents a deep learning-based approach that integrates cone-beam computed tomography (CBCT) with facial scan data, aiming to enhance diagnostic accuracy and treatment planning in medical imaging, particularly in cosmetic surgery and orthodontics. The method combines facial mesh detection with the iterative closest point (ICP) algorithm to address common challenges such as differences in data acquisition times and extraneous details in facial scans. By leveraging a deep learning model, the system achieves more precise facial mesh detection, thereby enabling highly accurate initial alignment. Experimental results demonstrate average registration errors of approximately 0.3 mm (inlier RMSE), even when CBCT and facial scans are acquired independently. These results should be regarded as preliminary, representing a feasibility study rather than conclusive evidence of clinical accuracy. Nevertheless, the approach demonstrates consistent performance across different scan orientations, suggesting potential for future clinical application. Furthermore, the deep learning framework effectively handles diverse and complex facial geometries, thereby improving the reliability of the alignment process. This integration not only enhances the precision of 3D facial recognition but also improves the efficiency of clinical workflows. Future developments will aim to reduce processing time and enable simultaneous data capture to further improve accuracy and operational efficiency. Overall, this approach provides a powerful tool for practitioners, contributing to improved diagnostic outcomes and optimized treatment strategies in medical imaging. Full article
(This article belongs to the Section Biosignal Processing)
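The registration step pairs learned facial-mesh detection (for coarse initialization) with ICP refinement. The sketch below is a generic point-to-point ICP in Python, using a Kabsch/SVD rigid fit and reporting an RMSE over nearest-neighbour correspondences; it is a simplified stand-in for the paper's pipeline, not the authors' code, and assumes pre-cropped facial point clouds as input.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(A, B):
    """Least-squares rotation R and translation t mapping points A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(source, target, iters=50, tol=1e-6):
    """Point-to-point ICP; returns the aligned source cloud and the final RMSE."""
    tree, src, prev = cKDTree(target), source.copy(), np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)            # nearest-neighbour correspondences
        R, t = rigid_fit(src, target[idx])
        src = src @ R.T + t
        rmse = float(np.sqrt(np.mean(dist ** 2)))
        if abs(prev - rmse) < tol:
            break
        prev = rmse
    return src, rmse

# Usage with hypothetical clouds: aligned, err = icp(face_scan_points, cbct_surface_points)
```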

13 pages, 706 KB  
Article
Enhancing 3D Face Recognition: Achieving Significant Gains via 2D-Aided Generative Augmentation
by Cuican Yu, Zihui Zhang, Huibin Li and Chang Liu
Sensors 2025, 25(16), 5049; https://doi.org/10.3390/s25165049 - 14 Aug 2025
Viewed by 705
Abstract
The development of deep learning-based 3D face recognition has been constrained by the limited availability of large-scale 3D facial datasets, which are costly and labor-intensive to acquire. To address this challenge, we propose a novel 2D-aided framework that reconstructs 3D face geometries from abundant 2D images, enabling scalable and cost-effective data augmentation for 3D face recognition. Our pipeline integrates 3D face reconstruction with normal component image encoding and fine-tunes a deep face recognition model to learn discriminative representations from synthetic 3D data. Experimental results on four public benchmarks, i.e., the BU-3DFE, FRGC v2, Bosphorus, and BU-4DFE databases, demonstrate competitive rank-1 accuracies of 99.2%, 98.4%, 99.3%, and 96.5%, respectively, despite the absence of real 3D training data. We further evaluate the impact of alternative reconstruction methods and empirically demonstrate that higher-fidelity 3D inputs improve recognition performance. While synthetic 3D face data may lack certain fine-grained geometric details, our results validate their effectiveness for practical recognition tasks under diverse expressions and demographic conditions. This work provides an efficient and scalable paradigm for 3D face recognition by leveraging widely available face images, offering new insights into data-efficient training strategies for biometric systems. Full article
(This article belongs to the Special Issue Computer Vision and Pattern Recognition Based on Sensing Technology)
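The pipeline encodes reconstructed 3D faces as normal component images before feeding a 2D face recognition network. As a rough sketch of that encoding idea (assuming a per-pixel depth map as input, which is a simplification of the authors' reconstruction output), the snippet below estimates surface normals from depth gradients and packs the components into an RGB image.

```python
import numpy as np

def depth_to_normal_image(depth):
    """Encode per-pixel surface normals of a depth map as an RGB uint8 image."""
    depth = depth.astype(np.float64)
    dzdy, dzdx = np.gradient(depth)                 # gradients along rows (y) and cols (x)
    normals = np.dstack((-dzdx, -dzdy, np.ones_like(depth)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return ((normals + 1.0) * 0.5 * 255.0).astype(np.uint8)   # [-1, 1] -> [0, 255]

# Hypothetical depth map standing in for a reconstructed face surface.
normal_img = depth_to_normal_image(np.random.rand(224, 224))
```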

13 pages, 3914 KB  
Article
Biomechanical Analysis of Different Pacifiers and Their Effects on the Upper Jaw and Tongue
by Luca Levrini, Luigi Paracchini, Luigia Ricci, Maria Sparaco, Stefano Saran and Giulia Mulè
Appl. Sci. 2025, 15(15), 8624; https://doi.org/10.3390/app15158624 - 4 Aug 2025
Cited by 1 | Viewed by 2298
Abstract
Aim: Pacifiers play a critical role in the early stages of craniofacial and palate development during infancy. While they provide comfort and aid in soothing, their use can also have significant impacts on the growth and function of the oral cavity. This study aimed to simulate and predict the behavior of six different types of pacifiers and their functional interaction with the tongue and palate, with the goal of understanding their potential effects on orofacial growth and development. Materials and Methods: Biomechanical analysis using Finite Element Analysis (FEA) mathematical models was employed to evaluate the behavior of six different commercial pacifiers in contact with the palate and tongue. Three-dimensional solid models of the palate and tongue were based on the mathematical framework from a 2007 publication. This allowed for a detailed investigation into how various pacifier designs interact with soft and hard oral tissues, particularly the implications on dental and skeletal development. Results: The findings of this study demonstrate that pacifiers exhibit different interactions with the oral cavity depending on their geometry. Anatomical–functional pacifiers, for instance, tend to exert lateral compressions near the palatine vault, which can influence the hard palate and contribute to changes in craniofacial growth. In contrast, other pacifiers apply compressive forces primarily in the anterior region of the palate, particularly in the premaxilla area. Furthermore, the deformation of the tongue varied significantly across different pacifier types: while some pacifiers caused the tongue to flatten, others allowed it to adapt more favorably by assuming a concave shape. These variations highlight the importance of selecting a pacifier that aligns with the natural development of both soft and hard oral tissues. Conclusions: The results of this study underscore the crucial role of pacifier geometry in shaping both the palate and the tongue. These findings suggest that pacifiers have a significant influence not only on facial bone growth but also on the stimulation of oral functions such as suction and feeding. The geometry of the pacifier affects the soft tissues (tongue and muscles) and hard tissues (palate and jaw) differently, which emphasizes the need for careful selection of pacifiers during infancy. Choosing the right pacifier is essential to avoid potential negative effects on craniofacial development and to ensure that the benefits of proper oral function are maintained. Therefore, healthcare professionals and parents should consider these biomechanical factors when introducing pacifiers to newborns. Full article

20 pages, 3386 KB  
Article
Design of Realistic and Artistically Expressive 3D Facial Models for Film AIGC: A Cross-Modal Framework Integrating Audience Perception Evaluation
by Yihuan Tian, Xinyang Li, Zuling Cheng, Yang Huang and Tao Yu
Sensors 2025, 25(15), 4646; https://doi.org/10.3390/s25154646 - 26 Jul 2025
Viewed by 743
Abstract
The rise of virtual production has created an urgent need for efficient, high-fidelity 3D face generation for cinema and immersive media, but existing methods are often limited by lighting–geometry coupling, multi-view dependency, and insufficient artistic quality. To address these limitations, this study proposes a cross-modal 3D face generation framework based on single-view semantic masks. It uses a Swin Transformer for multi-level feature extraction and combines it with NeRF for illumination-decoupled rendering. Physical rendering equations are used to explicitly separate surface reflectance from ambient lighting, achieving robust adaptation to complex lighting variations. In addition, to address geometric errors across illumination scenes, geometric prior constraint networks are constructed by mapping 2D facial features into the 3D parameter space as regularization terms with the help of semantic masks. On the CelebAMask-HQ dataset, the method achieves a leading score of SSIM = 0.892 (a 37.6% improvement over the baseline) with FID = 40.6. The generated faces excel in symmetry and detail fidelity, with realism and aesthetic scores of 8/10 and 7/10, respectively, in a perceptual evaluation with 1000 viewers. By combining physical illumination decoupling with semantic geometric priors, this paper establishes a quantifiable feedback mechanism between objective metrics and human aesthetic evaluation, providing a new paradigm for aesthetic quality assessment of AI-generated content. Full article
(This article belongs to the Special Issue Convolutional Neural Network Technology for 3D Imaging and Sensing)
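SSIM is one of the paper's two headline metrics. As a small illustrative sketch (using scikit-image's structural_similarity, an assumed tooling choice rather than the authors' evaluation code, and assuming a recent scikit-image with the channel_axis argument), the snippet below scores a generated image against a reference; the arrays are random stand-ins for real face renders.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Random stand-ins for a reference face image and a generated one, values in [0, 1].
reference = np.random.rand(256, 256, 3).astype(np.float32)
generated = np.clip(reference + 0.05 * np.random.randn(256, 256, 3).astype(np.float32), 0, 1)

score = structural_similarity(reference, generated, data_range=1.0, channel_axis=-1)
print(f"SSIM: {score:.3f}")
```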

46 pages, 6649 KB  
Review
Matrix Wave™ System for Mandibulo-Maxillary Fixation—Just Another Variation on the MMF Theme?—Part II: In Context to Self-Made Hybrid Erich Arch Bars and Commercial Hybrid MMF Systems—Literature Review and Analysis of Design Features
by Carl-Peter Cornelius, Paris Georgios Liokatis, Timothy Doerr, Damir Matic, Stefano Fusetti, Michael Rasse, Nils Claudius Gellrich, Max Heiland, Warren Schubert and Daniel Buchbinder
Craniomaxillofac. Trauma Reconstr. 2025, 18(3), 33; https://doi.org/10.3390/cmtr18030033 - 15 Jul 2025
Viewed by 1084
Abstract
Study design: Trends in the utilization of Mandibulo-Maxillary Fixation (MMF) are shifting from tooth-borne devices, via specialized screws, to hybrid MMF devices. Hybrid MMF devices come as self-made Erich arch bar modifications and as commercial hybrid MMF systems (CHMMFSs). Objective: We survey the available technical/clinical data. Hypothetically, the risk of tooth root damage by transalveolar screws is diminished by a targeting function of the screw holes/slots. Methods: We use a literature review and graphic displays to disclose parallels and dissimilarities in design and functionality, with an in-depth look at the targeting properties. Results: Self-made hybrid arch bars are limited in their ability to meet low-risk interradicular screw insertion sites. Technical/clinical information on CHMMFSs is unevenly distributed in favor of the SMARTLock System: positive outcome variables include increased speed of application/removal, the possibility of eliminating wiring and stick injuries, and screw fixation with standoff of the device body along the attached gingiva. Inferred from the SMARTLock System, all four CHMMFSs have the potential to effectively prevent tooth root injuries, but this depends on their design features and on targeting with the screw-receiving holes. The height profile and geometry of a CHMMFS may restrict three-dimensional spatial orientation and reach during placement. Bridging between interradicular spaces and tooth equators, where hooks or tie-up cleats for intermaxillary cerclages should ideally be positioned from a biomechanical standpoint, can be problematic. The mobility of the screw-receiving holes across all six degrees of freedom also differs between systems. Conclusion: CHMMFSs allow simple immobilization of facial fractures involving the dental occlusion. Their performance in avoiding tooth root damage is a matter of design subtleties. Full article

28 pages, 12965 KB  
Review
Matrix Wave™ System for Mandibulo-Maxillary Fixation—Just Another Variation on the MMF Theme? Part I: A Review on the Provenance, Evolution and Properties of the System
by Carl-Peter Cornelius, Paris Georgios Liokatis, Timothy Doerr, Damir Matic, Stefano Fusetti, Michael Rasse, Nils Claudius Gellrich, Max Heiland, Warren Schubert and Daniel Buchbinder
Craniomaxillofac. Trauma Reconstr. 2025, 18(3), 32; https://doi.org/10.3390/cmtr18030032 - 12 Jul 2025
Cited by 1 | Viewed by 1805
Abstract
Study design: The advent of the Matrix Wave™ System (DePuy Synthes)—a bone-anchored Mandibulo-Maxillary Fixation (MMF) system—merits closer consideration because of its peculiarities. Objective: This study describes two preliminary stages in the evolution of the Matrix Wave™ MMF System and details its technical and functional features. Results: The Matrix Wave™ System (MWS) is characterized by a smoothed, square-shaped titanium rod profile with a flexible undulating geometry, distinct from the flat plate framework of Erich arch bars. Single MWS segments are Omega-shaped and carry a tie-up cleat for interarch linkage to the opposite jaw. The ends at the troughs of each MWS segment are equipped with threaded screw holes that receive locking screws for attachment to the underlying mandibular or maxillary bone. An MWS can be partitioned into segments of various lengths, from single Omega-shaped elements, through incremental chains of interconnected units, up to a horseshoe-shaped bracing of the dental arches. The sinus-wave design of each segment allows for stretch, compression and torque movements, so the entire MWS device can conform to distinctive spatial anatomic relationships. Displaced fragments can be reduced by in-situ bending of the screw-fixated MWS/Omega segments to obtain accurate realignment of the jaw fragments for the best possible occlusion. Conclusion: The Matrix Wave™ MMF System is an easy-to-apply modular MMF system that can be assembled according to individual demands. Its versatility allows it to address most facial fracture scenarios in adults. The option of "omnidirectional" in-situ bending provides a distinctive feature not found in alternative MMF solutions. Full article

41 pages, 5112 KB  
Article
Deepfake Face Detection and Adversarial Attack Defense Method Based on Multi-Feature Decision Fusion
by Shanzhong Lei, Junfang Song, Feiyang Feng, Zhuyang Yan and Aixin Wang
Appl. Sci. 2025, 15(12), 6588; https://doi.org/10.3390/app15126588 - 11 Jun 2025
Cited by 1 | Viewed by 3255
Abstract
The rapid advancement in deep forgery technology in recent years has created highly deceptive face video content, posing significant security risks. Detecting these fakes is increasingly urgent and challenging. To improve the accuracy of deepfake face detection models and strengthen their resistance to adversarial attacks, this manuscript introduces a method for detecting forged faces and defending against adversarial attacks based on a multi-feature decision fusion. This approach allows for rapid detection of fake faces while effectively countering adversarial attacks. Firstly, an improved IMTCCN network was employed to precisely extract facial features, complemented by a diffusion model for noise reduction and artifact removal. Subsequently, the FG-TEFusionNet (Facial-geometry and Texture enhancement fusion-Net) model was developed for deepfake face detection and assessment. This model comprises two key modules: one for extracting temporal features between video frames and another for spatial features within frames. Initially, a facial geometry landmark calibration module based on the LRNet baseline framework ensured an accurate representation of facial geometry. A SENet attention mechanism was then integrated into the dual-stream RNN to enhance the model’s capability to extract inter-frame information and derive preliminary assessment results based on inter-frame relationships. Additionally, a Gram image texture feature module was designed and integrated into EfficientNet and the attention maps of WSDAN (Weakly Supervised Data Augmentation Network). This module aims to extract deep-level feature information from the texture structure of image frames, addressing the limitations of purely geometric features. The final decisions from both modules were integrated using a voting method, completing the deepfake face detection process. Ultimately, the model’s robustness was validated by generating adversarial samples using the I-FGSM algorithm and optimizing model performance through adversarial training. Extensive experiments demonstrated the superior performance and effectiveness of the proposed method across four subsets of FaceForensics++ and the Celeb-DF dataset. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
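Robustness is evaluated against I-FGSM adversarial samples followed by adversarial training. A minimal PyTorch sketch of the I-FGSM attack itself is given below; it assumes inputs normalized to [0, 1], and the eps/alpha/step values are illustrative rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def i_fgsm(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterative FGSM: perturb x inside an L-infinity ball of radius eps to raise the loss."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project back into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                           # keep a valid pixel range
    return x_adv.detach()

# Usage with a hypothetical detector: adversarial = i_fgsm(detector, frames, labels)
```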

24 pages, 7075 KB  
Article
Visual Geometry Group-SwishNet-Based Asymmetric Facial Emotion Recognition for Multi-Face Engagement Detection in Online Learning Environments
by Qiaohong Yao, Mengmeng Wang and Yubin Li
Symmetry 2025, 17(5), 711; https://doi.org/10.3390/sym17050711 - 7 May 2025
Viewed by 1034
Abstract
In the contemporary global educational environment, the automatic assessment of students’ online engagement has garnered widespread attention. A substantial number of studies have demonstrated that facial expressions are a crucial indicator for measuring engagement. However, due to the asymmetry inherent in facial expressions and the varying degrees of deviation of students’ faces from a camera, significant challenges have been posed to accurate emotion recognition in the online learning environment. To address these challenges, this work proposes a novel VGG-SwishNet model, which is based on the VGG-16 model and aims to enhance the recognition ability of asymmetric facial expressions, thereby improving the reliability of student engagement assessment in online education. The Swish activation function is introduced into the model due to its smoothness and self-gating mechanism. Its smoothness aids in stabilizing gradient updates during backpropagation and facilitates better handling of minor variations in input data. This enables the model to more effectively capture subtle differences and asymmetric variations in facial expressions. Additionally, the self-gating mechanism allows the function to automatically adjust its degree of nonlinearity. This helps the model to learn more effective asymmetric feature representations and mitigates the vanishing gradient problem to some extent. Subsequently, this model was applied to the assessment of engagement and provided a visualization of the results. In terms of performance, the proposed method achieved high recognition accuracy on the JAFFE, KDEF, and CK+ datasets. Specifically, under 80–20% and 10-fold cross-validation (CV) scenarios, the recognition accuracy exceeded 95%. According to the obtained results, the proposed approach demonstrates higher accuracy and robust stability. Full article
(This article belongs to the Section Computer)
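The core architectural change is replacing ReLU with the Swish activation inside a VGG-16 backbone. The sketch below shows the activation and a single VGG-style convolutional block using it in PyTorch; it illustrates the substitution only and is not the authors' full VGG-SwishNet.

```python
import torch
import torch.nn as nn

class Swish(nn.Module):
    """Swish activation: x * sigmoid(x); smooth, with a self-gating mechanism."""
    def forward(self, x):
        return x * torch.sigmoid(x)

# One VGG-16-style convolutional block with ReLU replaced by Swish.
block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), Swish(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), Swish(),
    nn.MaxPool2d(kernel_size=2),
)
print(block(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 64, 112, 112])
```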

11 pages, 1198 KB  
Article
The Effect of Upper Arch Expansion by Clear Aligners on Nasal Airway Volume in Children: A Preliminary Study
by Boyu Pan, Delaney MacIntosh, Rabia Njie, Adelaide Lui, Lindsey Westover and Tarek El-Bialy
Appl. Sci. 2025, 15(4), 2134; https://doi.org/10.3390/app15042134 - 18 Feb 2025
Viewed by 1942
Abstract
Adjustments to the anatomy of the facial region, such as maxillary expansion, may impact the geometry of the nasal airway and may increase nasal airway volume. The purpose of this study was to investigate the possible effect of maxillary dentoalveolar expansion using clear aligners on the nasal airway’s volume and intermolar distance in pediatric patients. Before and after maxillary expansion treatment using clear aligners, cone-beam computed tomography (CBCT) radiographs were taken as part of the diagnostic and progress records of 11 children (6–13 years) with constricted maxilla (the experimental group). The CBCT scans of 7 children (7–12 years) who had no treatment were considered to be the control group. The changes in nasal airway volume and intermolar distance between the experimental and control groups were compared and analyzed. Correlation analysis between nasal airway volume and intermolar distance changes was also performed. Compared with the control group, the nasal airway volume of the patients in the experimental group showed a significant increase (1595.6 ± 804.1 mm3; p < 0.001), and the intermolar distance also increased significantly (2.4 ± 0.4 mm; p < 0.001). However, there was little correlation between the change in intermolar distance and the change in nasal airway volume in the experimental group (r = −0.029) and a negative correlation in the control group (r = −0.768). This study showed increased maxillary intermolar width and increased nasal airway volume in children with constricted maxilla who underwent orthodontic maxillary expansion using clear aligners. Further studies with larger sample sizes and long follow-ups are needed. Due to the study design and small sample size, the results should be interpreted with caution and no causal relationship can be drawn between maxillary expansion using clear aligners and obstructive sleep apnea. Full article
(This article belongs to the Special Issue Applications of Digital Dental Technology in Orthodontics)
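The reported r values are Pearson correlations between per-patient changes in intermolar distance and nasal airway volume. A brief sketch of that computation with SciPy is shown below; the numbers are hypothetical stand-ins, not the study's measurements.

```python
from scipy.stats import pearsonr

# Hypothetical per-patient changes (NOT the study's data): intermolar distance (mm)
# and nasal airway volume (mm^3) after expansion.
delta_intermolar = [2.1, 2.6, 2.2, 2.9, 2.4, 2.0, 2.7]
delta_airway_vol = [1500.0, 900.0, 2100.0, 1300.0, 1750.0, 2000.0, 1100.0]

r, p_value = pearsonr(delta_intermolar, delta_airway_vol)
print(f"r = {r:.3f}, p = {p_value:.3f}")
```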

41 pages, 1802 KB  
Review
A Systematic Review of CNN Architectures, Databases, Performance Metrics, and Applications in Face Recognition
by Andisani Nemavhola, Colin Chibaya and Serestina Viriri
Information 2025, 16(2), 107; https://doi.org/10.3390/info16020107 - 5 Feb 2025
Cited by 5 | Viewed by 6724
Abstract
This study provides a comparative evaluation of face recognition databases and Convolutional Neural Network (CNN) architectures used in training and testing face recognition systems. The databases span from early datasets like Olivetti Research Laboratory (ORL) and Facial Recognition Technology (FERET) to more recent collections such as MegaFace and Ms-Celeb-1M, offering a range of sizes, subject diversity, and image quality. Older databases, such as ORL and FERET, are smaller and cleaner, while newer datasets enable large-scale training with millions of images but pose challenges like inconsistent data quality and high computational costs. The study also examines CNN architectures, including FaceNet and Visual Geometry Group 16 (VGG16), which show strong performance on large datasets like Labeled Faces in the Wild (LFW) and VGGFace, achieving accuracy rates above 98%. In contrast, earlier models like Support Vector Machine (SVM) and Gabor Wavelets perform well on smaller datasets but lack scalability for larger, more complex datasets. The analysis highlights the growing importance of multi-task learning and ensemble methods, as seen in Multi-Task Cascaded Convolutional Networks (MTCNNs). Overall, the findings emphasize the need for advanced algorithms capable of handling large-scale, real-world challenges while optimizing accuracy and computational efficiency in face recognition systems. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)

40 pages, 20840 KB  
Article
Facial Biosignals Time–Series Dataset (FBioT): A Visual–Temporal Facial Expression Recognition (VT-FER) Approach
by João Marcelo Silva Souza, Caroline da Silva Morais Alves, Jés de Jesus Fiais Cerqueira, Wagner Luiz Alves de Oliveira, Orlando Mota Pires, Naiara Silva Bonfim dos Santos, Andre Brasil Vieira Wyzykowski, Oberdan Rocha Pinheiro, Daniel Gomes de Almeida Filho, Marcelo Oliveira da Silva and Josiane Dantas Viana Barbosa
Electronics 2024, 13(24), 4867; https://doi.org/10.3390/electronics13244867 - 10 Dec 2024
Cited by 1 | Viewed by 1540
Abstract
Visual biosignals can be used to analyze human behavioral activities and serve as a primary resource for Facial Expression Recognition (FER). FER computational systems face significant challenges, arising from both spatial and temporal effects. Spatial challenges include deformations or occlusions of facial geometry, while temporal challenges involve discontinuities in motion observation due to high variability in poses and dynamic conditions such as rotation and translation. To enhance the analytical precision and validation reliability of FER systems, several datasets have been proposed. However, most of these datasets focus primarily on spatial characteristics, rely on static images, or consist of short videos captured in highly controlled environments. These constraints significantly reduce the applicability of such systems in real-world scenarios. This paper proposes the Facial Biosignals Time–Series Dataset (FBioT), a novel dataset providing temporal descriptors and features extracted from common videos recorded in uncontrolled environments. To automate dataset construction, we propose Visual–Temporal Facial Expression Recognition (VT-FER), a method that stabilizes temporal effects using normalized measurements based on the principles of the Facial Action Coding System (FACS) and generates signature patterns of expression movements for correlation with real-world temporal events. To demonstrate feasibility, we applied the method to create a pilot version of the FBioT dataset. This pilot resulted in approximately 10,000 s of public videos captured under real-world facial motion conditions, from which we extracted 22 direct and virtual metrics representing facial muscle deformations. During this process, we preliminarily labeled and qualified 3046 temporal events representing two emotion classes. As a proof of concept, these emotion classes were used as input for training neural networks, with results summarized in this paper and available in an open-source online repository. Full article
(This article belongs to the Section Artificial Intelligence)
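VT-FER builds time series of FACS-inspired, normalized facial measurements rather than raw pixel features. As a loose illustration of one such metric (the 68-point landmark indices and the interocular normalization are assumptions, not the FBioT definitions), the snippet below turns a landmark sequence into a scale-invariant mouth-opening signal.

```python
import numpy as np

def mouth_opening_ratio(landmarks):
    """Per-frame mouth opening normalised by inter-ocular distance.

    `landmarks` has shape (frames, 68, 2); the indices below (outer eye corners
    36/45, inner lip midpoints 62/66) follow the common 68-point convention and
    are an assumption, not the FBioT metric definitions.
    """
    eye_dist = np.linalg.norm(landmarks[:, 36] - landmarks[:, 45], axis=1)
    lip_gap = np.linalg.norm(landmarks[:, 62] - landmarks[:, 66], axis=1)
    return lip_gap / eye_dist            # scale-invariant time series

# Hypothetical 100-frame landmark sequence.
signal = mouth_opening_ratio(np.random.rand(100, 68, 2))
```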

17 pages, 5414 KB  
Article
Evaluation of Mechanical Properties of ABS-like Resin for Stereolithography Versus ABS for Fused Deposition Modeling in Three-Dimensional Printing Applications for Odontology
by Victor Paes Dias Gonçalves, Carlos Maurício Fontes Vieira, Noan Tonini Simonassi, Felipe Perissé Duarte Lopes, George Youssef and Henry A. Colorado
Polymers 2024, 16(20), 2921; https://doi.org/10.3390/polym16202921 - 17 Oct 2024
Cited by 7 | Viewed by 3091
Abstract
This study investigates the differences in mechanical properties between acrylonitrile butadiene styrene (ABS) samples produced using fused deposition modeling (FDM) and stereolithography (SLA) using ABS filaments and ABS-like resin, respectively. The central question is to determine how these distinct printing techniques affect the properties of ABS and ABS-like resin and which method delivers superior performance for specific applications, particularly in dental treatments. The evaluation methods used in this study included Shore D hardness, accelerated aging, tensile testing, Izod impact testing, flexural resistance measured by a 3-point bending test, and compression testing. Poisson’s ratio was also assessed, along with microstructure characterization, density measurement, confocal microscopy, dilatometry, wettability, Fourier-transform infrared spectroscopy (FTIR), and nanoindentation. It was concluded that ABS has the same hardness in both manufacturing methods; however, the FDM process results in significantly superior mechanical properties compared to SLA. Microscopy demonstrates a more accurate sample geometry when fabricated with SLA. It is also concluded that printable ABS is suitable for applications in dentistry to fabricate models and surgical guides using the SLA and FDM methods, as well as facial protectors for sports using the FDM method. Full article
(This article belongs to the Special Issue Resins for Additive Manufacturing)

13 pages, 14978 KB  
Article
Lester: Rotoscope Animation through Video Object Segmentation and Tracking
by Ruben Tous
Algorithms 2024, 17(8), 330; https://doi.org/10.3390/a17080330 - 30 Jul 2024
Cited by 1 | Viewed by 2310
Abstract
This article introduces Lester, a novel method to automatically synthesize retro-style 2D animations from videos. The method approaches the challenge mainly as an object segmentation and tracking problem. Video frames are processed with the Segment Anything Model (SAM) and the resulting masks are tracked through subsequent frames with DeAOT, a method of hierarchical propagation for semi-supervised video object segmentation. The geometry of the masks’ contours is simplified with the Douglas–Peucker algorithm. Finally, facial traits, pixelation and a basic rim light effect can be optionally added. The results show that the method exhibits an excellent temporal consistency and can correctly process videos with different poses and appearances, dynamic shots, partial shots and diverse backgrounds. The proposed method provides a more simple and deterministic approach than diffusion models based video-to-video translation pipelines, which suffer from temporal consistency problems and do not cope well with pixelated and schematic outputs. The method is also more feasible than techniques based on 3D human pose estimation, which require custom handcrafted 3D models and are very limited with respect to the type of scenes they can process. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
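After SAM segmentation and DeAOT tracking, mask contours are simplified with the Douglas-Peucker algorithm to obtain the schematic, retro look. A small OpenCV sketch of that simplification step on a synthetic mask is shown below; it approximates the idea rather than reproducing Lester's implementation.

```python
import cv2
import numpy as np

# Synthetic binary mask standing in for one frame's segmentation output.
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(mask, (320, 240), 120, 255, -1)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contour = max(contours, key=cv2.contourArea)

# Douglas-Peucker simplification; tolerance set as a fraction of the contour perimeter.
epsilon = 0.01 * cv2.arcLength(contour, True)
simplified = cv2.approxPolyDP(contour, epsilon, True)
print(len(contour), "->", len(simplified), "contour points")
```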

25 pages, 5122 KB  
Article
Human Emotion Recognition Based on Spatio-Temporal Facial Features Using HOG-HOF and VGG-LSTM
by Hajar Chouhayebi, Mohamed Adnane Mahraz, Jamal Riffi, Hamid Tairi and Nawal Alioua
Computers 2024, 13(4), 101; https://doi.org/10.3390/computers13040101 - 16 Apr 2024
Cited by 5 | Viewed by 3183
Abstract
Human emotion recognition is crucial in various technological domains, reflecting our growing reliance on technology. Facial expressions play a vital role in conveying and preserving human emotions. While deep learning has been successful in recognizing emotions in video sequences, it struggles to effectively model spatio-temporal interactions and identify salient features, limiting its accuracy. This research paper proposed an innovative algorithm for facial expression recognition which combined a deep learning algorithm and dynamic texture methods. In the initial phase of this study, facial features were extracted using the Visual-Geometry-Group (VGG19) model and input into Long-Short-Term-Memory (LSTM) cells to capture spatio-temporal information. Additionally, the HOG-HOF descriptor was utilized to extract dynamic features from video sequences, capturing changes in facial appearance over time. Combining these models using the Multimodal-Compact-Bilinear (MCB) model resulted in an effective descriptor vector. This vector was then classified using a Support Vector Machine (SVM) classifier, chosen for its simpler interpretability compared to deep learning models. This choice facilitates better understanding of the decision-making process behind emotion classification. In the experimental phase, the fusion method outperformed existing state-of-the-art methods on the eNTERFACE05 database, with an improvement margin of approximately 1%. In summary, the proposed approach exhibited superior accuracy and robust detection capabilities. Full article
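The pipeline fuses VGG19-LSTM embeddings with HOG-HOF dynamic-texture features and classifies with an SVM. The sketch below covers only the appearance half of that idea (HOG descriptors on face crops fed to an SVM); the data is random stand-in data, and the HOG parameters are illustrative rather than the paper's settings.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

# Random stand-ins for 48x48 grayscale face crops and six emotion labels.
images = np.random.rand(200, 48, 48)
labels = np.random.randint(0, 6, size=200)

# Appearance descriptor per frame; a fuller pipeline would add HOF motion
# features and VGG19-LSTM embeddings before fusion.
features = np.array([hog(img, orientations=8, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

clf = SVC(kernel="rbf").fit(features[:150], labels[:150])
print("held-out accuracy:", clf.score(features[150:], labels[150:]))
```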