Search Results (8)

Search Parameters:
Keywords = facial muscle tracking

15 pages, 3563 KB  
Article
Implementation of an Interactive Clinical Simulator Based on Facial Anatomy: An Enhanced Model for Injection Training
by Ji-Young Son, Sang-Chul Choi, Hyeong-Seok Choi, Il Kim, Byeong-Ha Kim, Donghun Yang and Seung-Ho Han
Appl. Sci. 2025, 15(24), 13047; https://doi.org/10.3390/app152413047 - 11 Dec 2025
Viewed by 840
Abstract
Minimally invasive facial procedures are widely performed in clinical medicine but remain associated with severe complications such as necrosis or blindness, often resulting from insufficient anatomical understanding and limited procedural training. To address these challenges, this study developed an anatomically accurate clinical simulator for facial injection training. A three-dimensional polygonal facial model was constructed using standardized anatomical datasets reflecting the skeletal dimensions, soft tissue characteristics, and average arterial distribution of East Asian faces. This model was integrated into simulation software connected to a facial silicone dummy with realistic tissue texture and an optical tracking system providing sub-millimeter precision. Each anatomical structure, including muscles, vessels, and nerves, was digitally annotated and linked to interactive visualization tools. During training, the simulator displayed the needle trajectory and insertion depth in real time; when the needle tip approached a high-risk structure, such as the supraorbital artery, alerts were triggered automatically. This feedback enabled trainees to recognize unsafe injection zones and adjust their technique accordingly. The system provided a realistic, repeatable, and safe environment for improving anatomical comprehension and procedural accuracy. This study proposes an innovative applied simulation system that may enhance medical education and clinical safety in facial injection procedures.
(This article belongs to the Section Biomedical Engineering)
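The alert mechanism described above amounts to a distance check between the tracked needle tip and annotated vascular geometry. The following minimal Python sketch illustrates one way such a check could work, assuming the artery is stored as a polyline of 3D points (in millimeters) from the anatomical model; the function names and the 3 mm threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical sketch: proximity alert between a tracked needle tip and a
# high-risk structure. The 3 mm threshold is an illustrative assumption.
RISK_THRESHOLD_MM = 3.0

def min_distance_to_structure(tip_xyz: np.ndarray, polyline: np.ndarray) -> float:
    """Smallest distance (mm) from the needle tip to any segment of the polyline."""
    a, b = polyline[:-1], polyline[1:]           # segment endpoints
    ab = b - a
    t = np.einsum('ij,ij->i', tip_xyz - a, ab) / np.einsum('ij,ij->i', ab, ab)
    t = np.clip(t, 0.0, 1.0)                     # clamp projection onto each segment
    closest = a + t[:, None] * ab
    return float(np.linalg.norm(closest - tip_xyz, axis=1).min())

def check_alert(tip_xyz, artery_polyline):
    d = min_distance_to_structure(np.asarray(tip_xyz), np.asarray(artery_polyline))
    if d < RISK_THRESHOLD_MM:
        print(f"ALERT: needle tip {d:.1f} mm from high-risk structure")
    return d
```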

23 pages, 28831 KB  
Article
Micro-Expression-Based Facial Analysis for Automated Pain Recognition in Dairy Cattle: An Early-Stage Evaluation
by Shuqiang Zhang, Kashfia Sailunaz and Suresh Neethirajan
AI 2025, 6(9), 199; https://doi.org/10.3390/ai6090199 - 22 Aug 2025
Cited by 1 | Viewed by 2640
Abstract
Timely, objective pain recognition in dairy cattle is essential for welfare assurance, productivity, and ethical husbandry, yet it remains elusive because evolutionary pressure renders bovine distress signals brief and inconspicuous. Without verbal self-reporting, cows suppress overt cues, so automated vision is indispensable for on-farm triage. Although earlier systems tracked whole-body posture or static grimace scales, frame-level detection of facial micro-expressions has not been fully explored in livestock. We translate micro-expression analytics from automotive driver monitoring to the barn, linking modern computer vision with veterinary ethology. Our two-stage pipeline first detects faces and 30 landmarks using a custom You Only Look Once (YOLO) version 8-Pose network, achieving 96.9% mean average precision (mAP) at an Intersection over Union (IoU) threshold of 0.50 for detection and 83.8% Object Keypoint Similarity (OKS) for keypoint placement. Cropped eye, ear, and muzzle patches are encoded with a pretrained MobileNetV2, generating 3840-dimensional descriptors that capture millisecond muscle twitches. Sequences of five consecutive frames are fed into a 128-unit Long Short-Term Memory (LSTM) classifier that outputs pain probabilities. On a held-out validation set of 1700 frames, the system records 99.65% accuracy and an F1-score of 0.997, with only three false positives and three false negatives. Tested on 14 unseen barn videos, it attains 64.3% clip-level accuracy (i.e., overall accuracy for the whole video clip) and 83% precision for the pain class, using a hybrid aggregation rule that combines a 30% mean-probability threshold with micro-burst counting to temper false alarms. As an early exploration from our proof-of-concept study on a subset of our custom dairy farm datasets, these results show that micro-expression mining can deliver scalable, non-invasive pain surveillance across variations in illumination, camera angle, background, and individual morphology. Future work will explore attention-based temporal pooling, curriculum learning for variable window lengths, domain-adaptive fine-tuning, and multimodal fusion with accelerometry on the complete datasets to raise performance toward clinical deployment.
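The clip-level decision rule is the most transferable detail in this pipeline. Below is a hedged Python sketch of a hybrid aggregation of that general shape; only the 30% mean-probability threshold comes from the abstract, while the burst probability, burst length, and burst count are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: flag a clip as "pain" if the mean frame probability exceeds
# 0.30 (from the abstract) OR if enough short runs ("micro-bursts") of
# consecutive high-probability frames occur. burst_prob, burst_len, and
# burst_count are illustrative assumptions, not the paper's values.
def classify_clip(frame_probs, mean_thresh=0.30, burst_prob=0.5,
                  burst_len=3, burst_count=2):
    p = np.asarray(frame_probs, dtype=float)
    if p.mean() >= mean_thresh:
        return "pain"
    # Count micro-bursts: maximal runs of >= burst_len consecutive frames
    # whose pain probability exceeds burst_prob.
    high = np.concatenate(([0], (p > burst_prob).astype(int), [0]))
    edges = np.diff(high)                       # +1 at run starts, -1 at ends
    run_lengths = np.flatnonzero(edges == -1) - np.flatnonzero(edges == 1)
    bursts = int((run_lengths >= burst_len).sum())
    return "pain" if bursts >= burst_count else "no pain"
```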

14 pages, 2219 KB  
Article
Digital Image Speckle Correlation (DISC): Facial Muscle Tracking for Neurological and Psychiatric Disorders
by Shi Fu, Pawel Polak, Susan Fiore, Justin N. Passman, Raphael Davis, Lucian M. Manu and Miriam Rafailovich
Diagnostics 2025, 15(13), 1574; https://doi.org/10.3390/diagnostics15131574 - 20 Jun 2025
Cited by 1 | Viewed by 1339
Abstract
Background/Objectives: Quantitative assessments of facial muscle function and cognitive responses can enhance clinical evaluations in neuromuscular disorders such as Bell’s palsy and in psychiatric conditions including anxiety and depression. This study explored the application of Digital Image Speckle Correlation (DISC) in detecting enervation of facial musculature and assessing reaction times in response to visual stimuli. Methods: A consistent video recording setup was used to capture the facial movements of human subjects in response to visual stimuli from a calibrated database. The DISC method uses the displacement of naturally occurring skin pores to map the specific locus of underlying muscular movement. The technique was applied to two distinct case studies: Patient 1 had unilateral Bell’s palsy and was monitored over 1 month of recovery; Patient 2 had comorbid refractory depression and anxiety disorders treated with ketamine and was assessed over 3 consecutive weekly visits. For Patient 1, facial asymmetry was calculated by comparing left-to-right displacement signals. For Patient 2, visual reaction time was measured, and facial motion intensity and response rate were compared with self-reported depression and anxiety scales. Results: DISC effectively mapped the biomechanical properties of facial motions, providing detailed spatial and temporal resolution of muscle activity. In a control cohort of 10 subjects executing a facial expression, the degree of left/right facial asymmetry was 13.2 (8)%, and subjects responded robustly, in an average of 275 (81) milliseconds, to all five of the images shown. Patient 1 showed an initial asymmetry of nearly 100%, which decreased steadily to 20% over one month, demonstrating progressive recovery. Patient 2 exhibited a prolonged reaction time of 518 (93) milliseconds, versus 275 (81) milliseconds for controls, and a reduced overall response rate relative to the control group. The data obtained before treatment across the three visits correlated strongly with selected depression and anxiety scores. Conclusions: These findings highlight the utility of DISC in enhancing clinical monitoring, complementing traditional examinations and self-reported measures.
(This article belongs to the Section Medical Imaging and Theranostics)
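As a rough illustration of the left-to-right comparison described in the Methods, the sketch below computes a percent asymmetry from a DISC-style displacement field; the specific formula (normalized difference of mean displacement magnitudes about the facial midline) is our assumption, not the paper's exact definition.

```python
import numpy as np

# Illustrative sketch, assuming DISC yields a per-pixel displacement field
# of shape (H, W, 2) already aligned so that midline_col splits the face
# into left and right halves. The formula is an assumption for illustration.
def lr_asymmetry(displacement: np.ndarray, midline_col: int) -> float:
    """Percent asymmetry between left and right facial displacement magnitudes."""
    mag = np.linalg.norm(displacement, axis=-1)   # (H, W) displacement magnitudes
    left = mag[:, :midline_col].mean()
    right = mag[:, midline_col:].mean()
    return 100.0 * abs(left - right) / max(left, right, 1e-9)
```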

8 pages, 4005 KB  
Article
Anatomical Study of the Superficial Musculoaponeurotic System in Relation to the Zygomaticus Major
by Hyun-Jin Park and Mi-Sun Hur
Diagnostics 2024, 14(18), 2066; https://doi.org/10.3390/diagnostics14182066 - 18 Sep 2024
Cited by 1 | Viewed by 4513
Abstract
Background: The superficial musculoaponeurotic system (SMAS) is crucial for the structural integrity and dynamics of facial expressions and is a particularly important consideration during facelift surgeries. This study investigated the anatomical structure and continuity of the SMAS at the site where the zygomaticus major (Zmj) originates, which is where the SMAS extends from the lateral to the anterior aspects of the face. Knowledge of these aspects is crucial for understanding the mechanics of facial movements and also the aging process. Methods: Dissections of 66 specimens and histological analyses were used to explore the intricate relationships and attachments between the SMAS and facial muscles. Results: The findings indicated that at the Zmj origin site, the SMAS—connected to the inferior margin of the orbicularis oculi—covered the superficial surface of the Zmj fibers. As it tracked downward, the SMAS was observed to split into two layers lateral to the Zmj fibers, enveloping them both superficially and deeply. Additionally, as the SMAS continued forward, it ceased to be distinctly visible in the buccal area. Conclusions: These results provide a deeper understanding of the complex layering and interconnectivity of the SMAS, which supports facial dynamics and structural integrity. This information could be particularly useful in surgical and aesthetic procedures in the midfacial area.
(This article belongs to the Special Issue Advances in Anatomy—Third Edition)

19 pages, 1902 KB  
Article
A Fusion Algorithm Based on a Constant Velocity Model for Improving the Measurement of Saccade Parameters with Electrooculography
by Palpolage Don Shehan Hiroshan Gunawardane, Raymond Robert MacNeil, Leo Zhao, James Theodore Enns, Clarence Wilfred de Silva and Mu Chiao
Sensors 2024, 24(2), 540; https://doi.org/10.3390/s24020540 - 15 Jan 2024
Cited by 3 | Viewed by 3264
Abstract
Electrooculography (EOG) is a widely employed technique for tracking saccadic eye movements in a diverse array of applications, encompassing the identification of various medical conditions and the development of interfaces for human–computer interaction. Nonetheless, EOG signals are often met with skepticism due to multiple sources of noise interference, including electroencephalography, electromyography linked to facial and extraocular muscle activity, electrical noise, signal artifacts, skin-electrode drift, impedance fluctuations over time, and a host of associated challenges. Traditional remedies such as bandpass filtering have been frequently applied but have the drawback of altering the inherent characteristics of EOG signals, including their shape, magnitude, peak velocity, and duration, all of which are pivotal parameters in research studies. In prior work, several model-based adaptive denoising strategies have been introduced, incorporating mechanical and electrical model-based state estimators. However, these approaches are highly complex and rely on brain and neural control models that have difficulty processing EOG signals in real time. In the present investigation, we introduce a real-time denoising method grounded in a constant velocity model, adopting a physics-based, model-oriented approach. This approach rests on the assumption that there is a consistent rate of change in the corneo-retinal potential during saccadic movements. Empirical findings reveal that this approach preserves EOG saccade signals remarkably well, yielding an enhancement of up to 29% in signal preservation during denoising compared with alternative techniques such as bandpass filters, constant acceleration models, and model-based fusion methods.
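A constant velocity model of this kind is commonly realized as a two-state Kalman filter over the measured potential and its rate of change. The following Python sketch shows such a filter under that assumption; the process and measurement noise values are illustrative, not the authors' tuned parameters.

```python
import numpy as np

# Minimal sketch of constant-velocity denoising: a generic 1D Kalman filter
# with state [potential, rate of change], matching the abstract's assumption
# of a steady rate of change during a saccade. q and r are illustrative.
def cv_kalman_denoise(samples, dt, q=1e-3, r=1e-1):
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition
    H = np.array([[1.0, 0.0]])               # we observe the potential only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x = np.array([[samples[0]], [0.0]])      # initial state estimate
    P = np.eye(2)
    out = []
    for z in samples:
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)          # update with sample
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))                     # denoised potential
    return np.array(out)
```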

31 pages, 391 KB  
Review
On the Cranial Nerves
by Hugo M. Libreros-Jiménez, Jorge Manzo, Fausto Rojas-Durán, Gonzalo E. Aranda-Abreu, Luis I. García-Hernández, Genaro A. Coria-Ávila, Deissy Herrera-Covarrubias, César A. Pérez-Estudillo, María Rebeca Toledo-Cárdenas and María Elena Hernández-Aguilar
NeuroSci 2024, 5(1), 8-38; https://doi.org/10.3390/neurosci5010002 - 28 Dec 2023
Cited by 9 | Viewed by 26885
Abstract
The twelve cranial nerves play a crucial role in the nervous system, orchestrating a myriad of functions vital for our everyday life. These nerves are each specialized for particular tasks. Cranial nerve I, known as the olfactory nerve, is responsible for our sense of smell, allowing us to perceive and distinguish various scents. Cranial nerve II, or the optic nerve, is dedicated to vision, transmitting visual information from the eyes to the brain. Eye movements are governed by cranial nerves III, IV, and VI, ensuring our ability to track objects and focus. Cranial nerve V controls facial sensations and jaw movements, while cranial nerve VII, the facial nerve, facilitates facial expressions and taste perception. Cranial nerve VIII, or the vestibulocochlear nerve, plays a critical role in hearing and balance. Cranial nerve IX, the glossopharyngeal nerve, affects throat sensations and taste perception. Cranial nerve X, the vagus nerve, is a far-reaching nerve, influencing numerous internal organs, such as the heart, lungs, and digestive system. Cranial nerve XI, the accessory nerve, is responsible for neck muscle control, contributing to head movements. Finally, cranial nerve XII, the hypoglossal nerve, manages tongue movements, essential for speaking, swallowing, and breathing. Understanding these cranial nerves is fundamental in comprehending the intricate workings of our nervous system and the functions that sustain our daily lives.
21 pages, 3201 KB  
Article
A Convolutional Neural Network for Compound Micro-Expression Recognition
by Yue Zhao and Jiancheng Xu
Sensors 2019, 19(24), 5553; https://doi.org/10.3390/s19245553 - 16 Dec 2019
Cited by 34 | Viewed by 28247
Abstract
Human beings are particularly inclined to express real emotions through micro-expressions of subtle amplitude and short duration. Though people regularly recognize many distinct emotions, for the most part research studies have been limited to six basic categories: happiness, surprise, sadness, anger, fear, and disgust. As with normal expressions (i.e., macro-expressions), most current research into micro-expression recognition focuses on these six basic emotions. This paper describes an important group of micro-expressions, which we call compound emotion categories. Compound micro-expressions are constructed by combining two basic micro-expressions and reflect more complex mental states and more abundant human facial emotions. In this study, we first synthesized a Compound Micro-expression Database (CMED) based on existing spontaneous micro-expression datasets. The subtle features of micro-expressions make it difficult to observe their motion tracks and characteristics, so synthesizing compound micro-expression images presents many challenges and limitations. The proposed method first applied the Eulerian Video Magnification (EVM) method to enhance the facial motion features of basic micro-expressions for generating compound images. The consistent and differential facial muscle articulations (typically referred to as action units) associated with each emotion category were labeled to form the foundation for generating compound micro-expressions. Secondly, we extracted the apex frames of the CMED using the 3D Fast Fourier Transform (3D-FFT). Moreover, the proposed method calculated the optical flow between the onset frame and the apex frame to produce an optical flow feature map. Finally, we designed a shallow network to extract high-level features from these optical flow maps. In this study, we combined four existing databases of spontaneous micro-expressions (CASME I, CASME II, CAS(ME)2, SAMM) to generate the CMED and test the validity of our network. The results show that the deep network framework designed in this study can effectively recognize the emotional information of basic and compound micro-expressions.
(This article belongs to the Special Issue MEMS Technology Based Sensors for Human Centered Applications)
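To make the onset-to-apex flow step concrete, here is a small Python sketch using OpenCV's Farneback dense optical flow as a stand-in (the abstract does not name the flow algorithm used); the parameter values are common defaults, and the three-channel stacking is an illustrative choice of feature map.

```python
import cv2
import numpy as np

# Hedged sketch of an onset-to-apex optical flow feature map. Farneback flow
# and the channel layout are our substitutions for whatever the authors used.
def flow_feature_map(onset_bgr: np.ndarray, apex_bgr: np.ndarray) -> np.ndarray:
    onset = cv2.cvtColor(onset_bgr, cv2.COLOR_BGR2GRAY)
    apex = cv2.cvtColor(apex_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        onset, apex, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag = cv2.magnitude(flow[..., 0], flow[..., 1])
    # Stack horizontal flow, vertical flow, and magnitude as a 3-channel map
    # that a shallow CNN could consume.
    return np.dstack([flow[..., 0], flow[..., 1], mag]).astype(np.float32)
```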

26 pages, 1395 KB  
Article
Computational Analysis of Deep Visual Data for Quantifying Facial Expression Production
by Marco Leo, Pierluigi Carcagnì, Cosimo Distante, Pier Luigi Mazzeo, Paolo Spagnolo, Annalisa Levante, Serena Petrocchi and Flavia Lecciso
Appl. Sci. 2019, 9(21), 4542; https://doi.org/10.3390/app9214542 - 25 Oct 2019
Cited by 38 | Viewed by 5151
Abstract
The computational analysis of facial expressions is an emerging research topic that could overcome the limitations of human perception and provide quick, objective outcomes in the assessment of neurodevelopmental disorders (e.g., Autism Spectrum Disorders, ASD). Unfortunately, there have been only a few attempts to quantify facial expression production, and most of the scientific literature addresses the easier task of recognizing whether or not a facial expression is present. Some attempts to tackle this challenging task exist, but they do not provide a comprehensive study based on comparing human and automatic outcomes in quantifying children’s ability to produce basic emotions. Furthermore, these works do not exploit the latest solutions in computer vision and machine learning. Finally, they generally focus only on a group of individuals that is homogeneous in terms of cognitive capabilities. To fill this gap, this paper integrates advanced computer vision and machine learning strategies into a framework that computationally analyzes how both ASD and typically developing children produce facial expressions. The framework locates and tracks a number of landmarks (virtual electromyography sensors) to monitor the facial muscle movements involved in facial expression production. The output of these virtual sensors is then fused to model the individual’s ability to produce facial expressions. The gathered computational outcomes were correlated with evaluations provided by psychologists, and evidence is presented showing that the proposed framework can be effectively exploited to analyze in depth the emotional competence of ASD children in producing facial expressions.
(This article belongs to the Special Issue Home Ambient Intelligent System)
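The "virtual electromyography sensors" idea can be sketched as per-landmark displacement tracking followed by a simple fusion. The Python snippet below is a hedged illustration; the fusion rule (mean of peak displacements, normalized by inter-ocular distance) is our assumption rather than the paper's model.

```python
import numpy as np

# Illustrative sketch: treat each tracked landmark as a virtual sensor whose
# signal is its displacement from the neutral first frame, then fuse the
# per-landmark peaks into one scale-invariant production score (assumed rule).
def expression_production_score(landmarks: np.ndarray,
                                left_eye: int, right_eye: int) -> float:
    """landmarks: (T, N, 2) array of N tracked points over T frames."""
    rest = landmarks[0]                                  # neutral first frame
    iod = np.linalg.norm(rest[left_eye] - rest[right_eye])
    disp = np.linalg.norm(landmarks - rest, axis=-1)     # (T, N) displacements
    peak_per_landmark = disp.max(axis=0)                 # strongest activation
    return float(peak_per_landmark.mean() / iod)         # normalized score
```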
