Search Results (102)

Search Parameters:
Keywords = mobile eye tracking

23 pages, 28830 KB  
Article
Micro-Expression-Based Facial Analysis for Automated Pain Recognition in Dairy Cattle: An Early-Stage Evaluation
by Shuqiang Zhang, Kashfia Sailunaz and Suresh Neethirajan
AI 2025, 6(9), 199; https://doi.org/10.3390/ai6090199 - 22 Aug 2025
Viewed by 218
Abstract
Timely, objective pain recognition in dairy cattle is essential for welfare assurance, productivity, and ethical husbandry, yet it remains elusive because evolutionary pressure renders bovine distress signals brief and inconspicuous. Without verbal self-reporting, cows suppress overt cues, so automated vision is indispensable for on-farm triage. Although earlier systems tracked whole-body posture or static grimace scales, frame-level detection of facial micro-expressions has not been fully explored in livestock. We translate micro-expression analytics from automotive driver monitoring to the barn, linking modern computer vision with veterinary ethology. Our two-stage pipeline first detects faces and 30 landmarks using a custom You Only Look Once (YOLO) version 8-Pose network, achieving 96.9% mean average precision (mAP) at an Intersection over Union (IoU) threshold of 0.50 for detection and 83.8% Object Keypoint Similarity (OKS) for keypoint placement. Cropped eye, ear, and muzzle patches are encoded using a pretrained MobileNetV2, generating 3840-dimensional descriptors that capture millisecond muscle twitches. Sequences of five consecutive frames are fed into a 128-unit Long Short-Term Memory (LSTM) classifier that outputs pain probabilities. On a held-out validation set of 1700 frames, the system records 99.65% accuracy and an F1-score of 0.997, with only three false positives and three false negatives. Tested on 14 unseen barn videos, it attains 64.3% clip-level accuracy (i.e., overall accuracy for the whole video clip) and 83% precision for the pain class, using a hybrid aggregation rule that combines a 30% mean probability threshold with micro-burst counting to temper false alarms. As an early exploration from our proof-of-concept study on a subset of our custom dairy farm datasets, these results show that micro-expression mining can deliver scalable, non-invasive pain surveillance across variations in illumination, camera angle, background, and individual morphology. Future work will explore attention-based temporal pooling, curriculum learning for variable window lengths, domain-adaptive fine-tuning, and multimodal fusion with accelerometry on the complete datasets to raise performance toward clinical deployment.
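The abstract fully specifies the temporal half of this pipeline (three facial patches, 3840-dimensional MobileNetV2 descriptors, five-frame windows, a 128-unit LSTM), so it can be sketched directly. The following is a minimal PyTorch sketch under those stated dimensions; the YOLOv8-Pose detector and all training details are omitted, and the patch size and pretrained-weight choice are illustrative assumptions.

```python
# Sketch of the described patch-encoder + LSTM pain classifier.
import torch
import torch.nn as nn
from torchvision import models

class PainClassifier(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        backbone = models.mobilenet_v2(weights="IMAGENET1K_V1")  # pretrained encoder
        self.encoder = nn.Sequential(backbone.features, nn.AdaptiveAvgPool2d(1))
        # 3 facial patches (eye, ear, muzzle) x 1280 features = 3840-dim descriptor
        self.lstm = nn.LSTM(input_size=3 * 1280, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, patches):
        # patches: (batch, time=5, patch=3, channels, height, width)
        b, t, p, c, h, w = patches.shape
        feats = self.encoder(patches.view(b * t * p, c, h, w)).flatten(1)
        feats = feats.view(b, t, p * 1280)            # (batch, 5, 3840)
        out, _ = self.lstm(feats)
        return torch.sigmoid(self.head(out[:, -1]))   # pain probability per window

model = PainClassifier()
dummy = torch.randn(2, 5, 3, 3, 224, 224)  # 2 windows, 5 frames, 3 RGB patches
print(model(dummy).shape)                  # torch.Size([2, 1])
```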

20 pages, 3244 KB  
Article
SOUTY: A Voice Identity-Preserving Mobile Application for Arabic-Speaking Amyotrophic Lateral Sclerosis Patients Using Eye-Tracking and Speech Synthesis
by Hessah A. Alsalamah, Leena Alhabrdi, May Alsebayel, Aljawhara Almisned, Deema Alhadlaq, Loody S. Albadrani, Seetah M. Alsalamah and Shada AlSalamah
Electronics 2025, 14(16), 3235; https://doi.org/10.3390/electronics14163235 - 14 Aug 2025
Viewed by 267
Abstract
Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disorder that progressively impairs motor and communication abilities. Globally, the prevalence of ALS was estimated at approximately 222,800 cases in 2015 and is projected to increase by nearly 70% to 376,700 cases by 2040, primarily driven by demographic shifts in aging populations; the lifetime risk of developing ALS is 1 in 350–420. Despite international advancements in assistive technologies, a recent national survey in Saudi Arabia revealed that 100% of ALS care providers lack access to eye-tracking communication tools, and 92% reported communication aids as inconsistently available. While assistive technologies such as speech-generating devices and gaze-based control systems have made strides in recent decades, they primarily support English speakers, leaving Arabic-speaking ALS patients underserved. This paper presents SOUTY, a cost-effective, mobile-based application that empowers ALS patients to communicate using gaze-controlled interfaces combined with a text-to-speech (TTS) feature in Arabic, one of the five most widely spoken languages in the world. SOUTY (i.e., “my voice”) utilizes a personalized, pre-recorded voice bank of the ALS patient and integrated eye-tracking technology to support the formation and vocalization of custom phrases in Arabic. This study describes the full development life cycle of SOUTY, from conceptualization and requirements gathering to system architecture, implementation, evaluation, and refinement. Validation included interviews with experts in Human–Computer Interaction (HCI) and speech pathology, as well as a public survey assessing awareness and technological readiness. The results support SOUTY as a culturally and linguistically relevant innovation that enhances autonomy and quality of life for Arabic-speaking ALS patients. This approach may serve as a replicable model for developing inclusive Augmentative and Alternative Communication (AAC) tools in other underrepresented languages. The system achieved 100% task completion during internal walkthroughs, with mean phrase selection times under 5 s and audio playback latency below 0.3 s.
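Gaze-driven AAC interfaces of this kind typically select a phrase when the user's gaze dwells on an on-screen button for a fixed time, then play the matching voice-bank recording. Below is a minimal dwell-time selection sketch; the dwell threshold, button layout, and sample format are illustrative assumptions, not details taken from SOUTY itself.

```python
# Hypothetical dwell-based selection over a stream of (t, x, y) gaze samples.
from dataclasses import dataclass

@dataclass
class Button:
    label: str                       # phrase tied to a pre-recorded voice clip
    x0: float; y0: float; x1: float; y1: float
    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

DWELL_SECONDS = 1.0  # gaze must rest this long on one button to select it

def dwell_select(gaze_samples, buttons):
    """Yield a button label each time gaze dwells on it long enough."""
    current, since = None, None
    for t, x, y in gaze_samples:
        hit = next((b for b in buttons if b.contains(x, y)), None)
        if hit is not current:
            current, since = hit, t          # gaze moved to a new target
        elif hit is not None and t - since >= DWELL_SECONDS:
            yield hit.label                  # trigger TTS / voice-bank playback
            current, since = None, None

btn = Button("مرحبا", 0.0, 0.0, 0.5, 0.5)
stream = [(0.0, 0.1, 0.1), (0.6, 0.2, 0.2), (1.2, 0.15, 0.15)]
print(list(dwell_select(stream, [btn])))     # ['مرحبا']
```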

9 pages, 2776 KB  
Proceeding Paper
Analysis of Elementary Student Engagement Patterns in Science Class Using Eye Tracking and Object Detection: Attention and Mind Wandering
by Ilho Yang and Daol Park
Eng. Proc. 2025, 103(1), 10; https://doi.org/10.3390/engproc2025103010 - 8 Aug 2025
Viewed by 319
Abstract
This study aims to explore the individual engagement of two elementary students in science class to derive educational implications. Using mobile eye trackers and an object detection model, gaze data were collected to identify educational objects and analyze attention, mind wandering, and off-task periods. The data were analyzed in the context of class and student behaviors. Interviews with the students enabled an understanding of their engagement patterns. The first student demonstrated an average attention ratio of 21.42% and a mind wandering ratio of 21.54%, characterized by inconsistent mind wandering and frequent off-task behaviors, resulting in low attention. In contrast, the second student showed an average attention ratio of 32.35% and a mind wandering ratio of 11.53%, maintaining consistent engagement throughout the class. While the two students exhibited differences in attention, mind wandering, and off-task behaviors, common factors influencing engagement were identified. Both students showed higher attention during active learning activities, such as experiments and inquiry tasks, while group interactions and visual/auditory stimuli supported sustained attention or transitions from mind wandering to attention. However, repetitive or passive tasks were associated with increased mind wandering. Such results highlight differences in individual engagement patterns and emphasize the value of integrating eye tracking and object detection with qualitative data, which provides a reference for tailoring educational strategies and improving learning environments.
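The attention and mind-wandering ratios reported above are, at their core, time shares of classified gaze states. A rough sketch of that bookkeeping follows; the classification rule (gaze on a detected educational object counts as attention) is a deliberate simplification of the study's coding scheme, and the frame format is hypothetical.

```python
# Turn per-frame gaze/object-detection labels into engagement ratios.
def engagement_ratios(frames):
    """frames: iterable of {"gaze_on_task_object": bool, "off_task": bool}."""
    frames = list(frames)
    attention = sum(f["gaze_on_task_object"] and not f["off_task"] for f in frames)
    off_task = sum(f["off_task"] for f in frames)
    mind_wandering = len(frames) - attention - off_task
    n = len(frames)
    return {"attention": attention / n,
            "mind_wandering": mind_wandering / n,
            "off_task": off_task / n}

print(engagement_ratios([
    {"gaze_on_task_object": True,  "off_task": False},
    {"gaze_on_task_object": False, "off_task": False},
    {"gaze_on_task_object": False, "off_task": True},
]))   # each state covers one third of the frames
```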

25 pages, 1318 KB  
Article
Mobile Reading Attention of College Students in Different Reading Environments: An Eye-Tracking Study
by Siwei Xu, Mingyu Xu, Qiyao Kang and Xiaoqun Yuan
Behav. Sci. 2025, 15(7), 953; https://doi.org/10.3390/bs15070953 - 14 Jul 2025
Viewed by 565
Abstract
With the widespread adoption of mobile reading across diverse scenarios, understanding environmental impacts on attention has become crucial for reading performance optimization. Building on this premise, this study examined the impacts of different reading environments on attention during mobile reading, utilizing a mixed-methods approach that combined eye-tracking experiments with semi-structured interviews. Thirty-two college students participated in the study. Quantitative attention metrics, including total fixation duration and fixation count, were collected through eye-tracking, while qualitative data regarding perceived environmental influences were obtained through interviews. The results indicated that the impact of different environments on mobile reading attention varies significantly, with this variation primarily attributable to environmental complexity and individual interest. Environments characterized by multisensory inputs or dynamic disturbances, such as fluctuating noise and visual motion, were found to induce greater attentional dispersion than monotonous, low-variation environments. Notably, more complex, potentially task-like disturbances (e.g., answering calls, conversations) were found to cause the greatest distraction. Moreover, stimuli aligned with an individual's interests were more likely to divert attention than those that did not. These findings contribute methodological insights for optimizing mobile reading experiences across diverse environmental contexts.
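Total fixation duration and fixation count, the two quantitative metrics above, are usually derived from raw gaze samples with a dispersion-threshold algorithm (I-DT). Here is a compact, self-contained sketch; the 30-pixel dispersion and 100 ms duration thresholds are common illustrative defaults, not the study's settings.

```python
# I-DT fixation detection over (t_seconds, x_px, y_px) gaze samples.
def dispersion(window):
    xs, ys = [s[1] for s in window], [s[2] for s in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples, max_disp=30.0, min_dur=0.1):
    """Return a list of (start_time, end_time) fixations."""
    fixations, i, n = [], 0, len(samples)
    while i < n:
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_dur:
            j += 1
        if j >= n:
            break
        if dispersion(samples[i:j + 1]) <= max_disp:
            while j + 1 < n and dispersion(samples[i:j + 2]) <= max_disp:
                j += 1                      # grow the window while it stays compact
            fixations.append((samples[i][0], samples[j][0]))
            i = j + 1
        else:
            i += 1                          # slide past noisy / saccadic samples
    return fixations

samples = [(k * 0.02, 100, 100) for k in range(10)] + \
          [(0.2 + k * 0.02, 400, 300) for k in range(10)]
fixes = idt_fixations(samples)
print(len(fixes), sum(e - s for s, e in fixes))   # fixation count, total duration
```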

35 pages, 2865 KB  
Article
eyeNotate: Interactive Annotation of Mobile Eye Tracking Data Based on Few-Shot Image Classification
by Michael Barz, Omair Shahzad Bhatti, Hasan Md Tusfiqur Alam, Duy Minh Ho Nguyen, Kristin Altmeyer, Sarah Malone and Daniel Sonntag
J. Eye Mov. Res. 2025, 18(4), 27; https://doi.org/10.3390/jemr18040027 - 7 Jul 2025
Viewed by 602
Abstract
Mobile eye tracking is an important tool in psychology and human-centered interaction design for understanding how people process visual scenes and user interfaces. However, analyzing recordings from head-mounted eye trackers, which typically include an egocentric video of the scene and a gaze signal, is a time-consuming and largely manual process. To address this challenge, we develop eyeNotate, a web-based annotation tool that enables semi-automatic data annotation and learns to improve from corrective user feedback. Users can manually map fixation events to areas of interest (AOIs) in a video-editing-style interface (baseline version). Further, our tool can generate fixation-to-AOI mapping suggestions based on a few-shot image classification model (IML-support version). We conduct an expert study with trained annotators (n = 3) to compare the baseline and IML-support versions. We measure the perceived usability, annotations' validity and reliability, and efficiency during a data annotation task. We ask our participants to re-annotate data from a single individual using an existing dataset (n = 48). Further, we conduct a semi-structured interview to understand how participants used the provided IML features and to assess our design decisions. In a post hoc experiment, we investigate the performance of three image classification models in annotating data of the remaining 47 individuals.
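Few-shot fixation-to-AOI mapping of the kind described here can be illustrated with a nearest-class-mean classifier: a handful of labeled fixation patches per AOI define class prototypes in an embedding space, and each new fixation patch is assigned the label of the nearest prototype. This is a generic sketch of the technique, not eyeNotate's actual model; the two-dimensional embeddings stand in for CNN features.

```python
# Nearest-class-mean few-shot classification of fixation patches to AOIs.
import numpy as np

def fit_prototypes(embeddings, labels):
    """Average the support embeddings of each AOI into one prototype."""
    return {c: np.mean([e for e, l in zip(embeddings, labels) if l == c], axis=0)
            for c in set(labels)}

def predict_aoi(prototypes, embedding):
    """Assign the AOI whose prototype is closest in Euclidean distance."""
    return min(prototypes, key=lambda c: np.linalg.norm(embedding - prototypes[c]))

protos = fit_prototypes(
    embeddings=[np.array([0.9, 0.1]), np.array([0.8, 0.2]), np.array([0.1, 0.9])],
    labels=["poster", "poster", "table"],
)
print(predict_aoi(protos, np.array([0.85, 0.15])))   # -> poster
```

Corrective feedback fits naturally into this scheme: each user-confirmed annotation can be appended to the support set and the prototypes re-averaged.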

21 pages, 1696 KB  
Article
Cognitive Insights into Museum Engagement: A Mobile Eye-Tracking Study on Visual Attention Distribution and Learning Experience
by Wenjia Shi, Kenta Ono and Liang Li
Electronics 2025, 14(11), 2208; https://doi.org/10.3390/electronics14112208 - 29 May 2025
Cited by 1 | Viewed by 1219
Abstract
Recent advancements in Mobile Eye-Tracking (MET) technology have enabled the detailed examination of visitors' embodied visual behaviors as they navigate exhibition spaces. This study employs MET to investigate visual attention patterns in an archeological museum, with a particular focus on identifying “hotspots” of attention. Through a multi-phase research design, we explore the relationship between visitor gaze behavior and museum learning experiences in a real-world setting. Using three key eye movement metrics (Time to First Fixation (TFF), Average Fixation Duration (AFD), and Total Fixation Duration (TFD)), we analyze the distribution of visual attention across predefined Areas of Interest (AOIs). Time to First Fixation varied substantially by element, occurring most rapidly for artifacts and most slowly for labels, while video screens showed the shortest mean latency but greatest inter-individual variability, reflecting sequential exploration and heterogeneous strategies toward dynamic versus static media. Total Fixation Duration was highest for video screens and picture panels, intermediate yet variable for artifacts and text panels, and lowest for labels, indicating that dynamic and pictorial content most effectively sustain attention. Finally, Average Fixation Duration peaked on artifacts and labels, suggesting in-depth processing of descriptive elements, and it was shortest on video screens, consistent with rapid, distributed fixations in response to dynamic media. The results provide novel insights into the spatial and contextual factors that influence visitor engagement and knowledge acquisition in museum environments. Based on these findings, we discuss strategic implications for museum research and propose practical recommendations for optimizing exhibition design to enhance visitor experience and learning outcomes.
(This article belongs to the Special Issue New Advances in Human-Robot Interaction)
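All three metrics are simple aggregates over a fixation list once each fixation is assigned to an AOI. A small sketch of that computation, assuming fixations arrive as dictionaries with AOI labels and start/end times:

```python
# Per-AOI TFF / TFD / AFD from labeled fixations.
from collections import defaultdict

def aoi_metrics(fixations, trial_start=0.0):
    """fixations: list of {"aoi": str, "start": seconds, "end": seconds}."""
    per_aoi = defaultdict(list)
    for f in fixations:
        per_aoi[f["aoi"]].append(f)
    out = {}
    for aoi, fs in per_aoi.items():
        durations = [f["end"] - f["start"] for f in fs]
        out[aoi] = {
            "TFF": min(f["start"] for f in fs) - trial_start,  # time to first fixation
            "TFD": sum(durations),                             # total fixation duration
            "AFD": sum(durations) / len(durations),            # average fixation duration
        }
    return out

print(aoi_metrics([{"aoi": "artifact", "start": 2.0, "end": 2.4},
                   {"aoi": "artifact", "start": 3.0, "end": 3.2},
                   {"aoi": "label",    "start": 5.0, "end": 5.2}]))
```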

11 pages, 2106 KB  
Article
AI-Powered Smartphone Diagnostics for Convergence Insufficiency
by Ahmad Khatib, Shmuel Raz, Haia Nasser, Haneen Jabaly-Habib and Ilan Shimshoni
J. Clin. Transl. Ophthalmol. 2025, 3(2), 8; https://doi.org/10.3390/jcto3020008 - 22 Apr 2025
Viewed by 971
Abstract
Background: This study innovatively combines Artificial Intelligence (AI) algorithms with smartphone technology, automatically detecting the Near Point of Convergence (NPC) and diagnosing Convergence Insufficiency (CI) without the need for extra diagnostic tools and, notably, without having to rely on the subject's vocal response, marking an unprecedented approach in the field to the best of our knowledge. Methods: This was a prospective study that enrolled 86 participants. The real-time tracking of eye structures and movements was conducted using AI technologies integrated with a mobile application (MobileS). Participants brought the smartphone closer while focusing on a target displayed on the screen. The system calculated pupillary distance (PD) and phone-to-face distance, incorporating a unique feature called the exodeviation episode counter (ExoCounter) to determine the NPC. Additionally, participants underwent testing using the RAF Ruler test (RulerT), which served as the ground truth. Results: MobileS demonstrated a significant correlation with the RulerT, as evidenced by a Pearson correlation coefficient of 0.74 (p < 0.001) and an Intraclass Correlation Coefficient (ICC) of 0.73 (p < 0.001), highlighting its reliability and consistency with conventional ophthalmic testing. Additionally, the system exhibited notable sensitivity and specificity in diagnosing CI. Notably, user feedback indicated a preference for MobileS, with 71% of participants favouring it for its ease of use and comfort. Conclusions: MobileS is a precise, user-friendly tool for independent NPC measurement, applicable in tele-ophthalmology and home-based care. Its versatility extends beyond CI diagnosis, marking a significant advancement in ophthalmic diagnostics for accessible and efficient eye care.
(This article belongs to the Special Issue Augmented and Artificial Intelligence in Ophthalmology)
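The two agreement statistics reported here, Pearson's r and the ICC, can be reproduced from paired measurements with a few lines of NumPy. The sketch below implements a two-way random, absolute-agreement, single-measure ICC(2,1), a common choice for method-comparison studies; whether the authors used exactly this ICC form is not stated in the abstract, and the sample values are invented.

```python
# Pearson correlation and ICC(2,1) for paired NPC measurements.
import numpy as np

def pearson_r(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def icc_2_1(a, b):
    x = np.column_stack([a, b]).astype(float)   # n subjects x k=2 methods
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between subjects
    ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between methods
    ss_e = ((x - x.mean(axis=1, keepdims=True)
               - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = ss_e / ((n - 1) * (k - 1))                            # residual
    return float((ms_r - ms_e) /
                 (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n))

mobile = [6.0, 8.5, 7.0, 11.0, 9.5]   # hypothetical NPC values (cm) from MobileS
ruler = [6.5, 8.0, 7.5, 10.0, 9.0]    # hypothetical RulerT values (cm)
print(pearson_r(mobile, ruler), icc_2_1(mobile, ruler))
```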

26 pages, 1003 KB  
Systematic Review
From Gaze to Game: A Systematic Review of Eye-Tracking Applications in Basketball
by Michela Alemanno, Ilaria Di Pompeo, Martina Marcaccio, Daniele Canini, Giuseppe Curcio and Simone Migliore
Brain Sci. 2025, 15(4), 421; https://doi.org/10.3390/brainsci15040421 - 20 Apr 2025
Cited by 1 | Viewed by 1066
Abstract
Background/Objectives: Eye-tracking technology has gained increasing attention in sports science, as it provides valuable insights into visual attention, decision-making, and motor planning. This systematic review examines the application of eye-tracking technology in basketball, highlighting its role in analyzing cognitive and perceptual strategies in players, referees, and coaches. Methods: A systematic search was conducted following PRISMA guidelines. Studies published up until December 2024 were retrieved from PubMed and Web of Science using keywords related to basketball, eye tracking, and visual search. The inclusion criteria focused on studies using eye-tracking technology to assess athletes, referees, and coaches. A total of 1706 articles were screened, of which 19 met the eligibility criteria. Results: Eye-tracking studies have shown that expert basketball players exhibit longer quiet eye (QE) durations and more efficient gaze behaviors compared to novices. In high-pressure situations, skilled players maintain more stable QE characteristics, leading to better shot accuracy. Referees rely on efficient gaze strategies to make split-second decisions, although less experienced referees tend to neglect key visual cues. In coaching, eye-tracking studies suggest that guided gaze techniques improve tactical understanding in novice players but have limited effects on experienced athletes. Conclusions: Eye tracking is a powerful tool for studying cognitive and behavioral functioning in basketball, offering valuable insights for performance enhancement and training strategies. Future research should explore real-game settings using mobile eye trackers and integrate artificial intelligence to further refine gaze-based training methods.
(This article belongs to the Section Neuropsychology)

15 pages, 531 KB  
Article
Differences in Gaze Behavior Between Male and Female Elite Handball Goalkeepers During Penalty Throws
by Wojciech Jedziniak, Krystian Panek, Piotr Lesiakowski, Beata Florkiewicz and Teresa Zwierko
Brain Sci. 2025, 15(3), 312; https://doi.org/10.3390/brainsci15030312 - 15 Mar 2025
Viewed by 1001
Abstract
Background: Recent research suggests that an athlete's gaze behavior plays a significant role in expert sport performance. However, there is a lack of studies investigating sex differences in gaze behavior during technical and tactical actions. Objectives: Therefore, the purpose of this study was to analyze the eye movements of elite female and male handball goalkeepers during penalty throws. Methods: In total, 40 handball goalkeepers participated in the study (female: n = 20; male: n = 20). Eye movements were recorded during a series of five penalty throws in real-time conditions. The number of fixations and dwell time, including quiet eye, for selected areas of interest were recorded using a mobile eye-tracking system. Results: Significant differences were found in quiet-eye duration between effective and ineffective goalkeeper interventions (females: mean difference (MD) = 92.26; p = 0.005; males: MD = 122.83; p < 0.001). Significant differences in gaze behavior between female and male handball goalkeepers were observed, specifically in the number of fixations and fixation duration on the selected areas of interest (AOIs). Male goalkeepers primarily observed AOIs on the throwing limb and the ball, namely the throwing forearm (MD = 15.522; p < 0.001), the throwing arm (MD = 6.83; p < 0.001), and the ball (MD = 7.459; z = 3.47; p < 0.001), whereas female goalkeepers mainly observed the torso AOI (MD = 14.264; p < 0.001) and the head AOI (MD = 11.91; p < 0.001) of the throwing player. Conclusions: The results suggest that female goalkeepers' gaze behavior is based on a relatively constant observation of body areas to recall task-specific information from memory, whilst male goalkeepers mainly observe moving objects in spatio-temporal areas. From a practical perspective, these results can be used to develop perceptual training programs tailored to athletes' sex.
(This article belongs to the Special Issue Advances in Assessment and Training of Perceptual-Motor Performance)
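Quiet eye, the key variable here, is conventionally defined as the final fixation on a task-relevant target that begins before movement onset; its duration is the expertise marker being compared. A minimal sketch of that computation, assuming fixations are already AOI-labeled and time-ordered:

```python
# Quiet-eye duration: last target fixation starting before movement onset.
def quiet_eye_duration(fixations, movement_onset, target_aois):
    """fixations: time-ordered list of {"aoi": str, "start": s, "end": s}."""
    qe = None
    for f in fixations:
        if f["start"] <= movement_onset and f["aoi"] in target_aois:
            qe = f                            # keep the latest qualifying fixation
    return None if qe is None else qe["end"] - qe["start"]

fixations = [{"aoi": "torso", "start": 0.00, "end": 0.30},
             {"aoi": "ball",  "start": 0.35, "end": 0.90}]
print(quiet_eye_duration(fixations, movement_onset=0.5, target_aois={"ball"}))  # 0.55
```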

21 pages, 2540 KB  
Article
The Influence of the Relationship Between Landmark Symbol Types, Annotations, and Colors on Search Performance in Mobile Maps Based on Eye Tracking
by Hao Fang, Hongyun Guo, Zhangtong Song, Nai Yang, Rui Wang and Fen Guo
ISPRS Int. J. Geo-Inf. 2025, 14(3), 129; https://doi.org/10.3390/ijgi14030129 - 14 Mar 2025
Viewed by 1154
Abstract
Mobile map landmark symbols are pivotal in conveying spatial semantics and enhancing users' perception of digital maps. This study employs a three-factor hybrid experimental design to investigate the effects of different landmark symbol types and their color associations with annotations on search performance using eye-tracking methods. Utilizing the Tobii X2-60 eye tracker, 40 participants engaged in a visual search task across three symbol types (icons, indexes, and symbols) and two color conditions (consistent and inconsistent). This study also examines the impact of gender on search performance. The results indicate that index-type symbols, which emphasize the landmarks' functions and key features, most effectively improve search accuracy and efficiency while demanding the least cognitive effort. In contrast, symbol-type characters, with clear semantics and minimal information, require less visual attention, facilitating faster preliminary processing. Additionally, cognitive style differences between genders affect these symbols' effectiveness in visual searches. A careful selection of symbol types and color combinations can significantly enhance user interaction with mobile maps.
(This article belongs to the Special Issue Spatial Information for Improved Living Spaces)

19 pages, 1902 KB  
Article
Facial Features Controlled Smart Vehicle for Disabled/Elderly People
by Yijun Hu, Ruiheng Wu, Guoquan Li, Zhilong Shen and Jin Xie
Electronics 2025, 14(6), 1088; https://doi.org/10.3390/electronics14061088 - 10 Mar 2025
Viewed by 815
Abstract
Mobility limitations due to congenital disabilities, accidents, or illnesses pose significant challenges to the daily lives of individuals with disabilities. This study presents a novel design for a multifunctional intelligent vehicle, integrating head recognition, eye-tracking, Bluetooth control, and ultrasonic obstacle avoidance to offer an innovative mobility solution. The smart vehicle supports three driving modes: (1) a nostril-based control system using MediaPipe to track displacement for movement commands, (2) an eye-tracking control system based on the Viola–Jones algorithm processed via an Arduino Nano board, and (3) a Bluetooth-assisted mode for caregiver intervention. Additionally, an ultrasonic sensor system ensures real-time obstacle detection and avoidance, enhancing user safety. Extensive experimental evaluations were conducted to validate the effectiveness of the system. The results indicate that the proposed vehicle achieves 85% accuracy in nostril tracking, over 90% precision in eye direction detection, and efficient obstacle avoidance within a 1 m range. These findings demonstrate the robustness and reliability of the system in real-world applications. Compared to existing assistive mobility solutions, this vehicle offers non-invasive, cost-effective, and adaptable control mechanisms that cater to a diverse range of disabilities. By enhancing accessibility and promoting user independence, this research contributes to the development of inclusive mobility solutions for disabled and elderly individuals.
(This article belongs to the Special Issue Active Mobility: Innovations, Technologies, and Applications)
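Driving mode (1) can be pictured as a small control loop: MediaPipe Face Mesh tracks a nose-region landmark, and its displacement from a calibrated rest position is mapped to drive commands. The sketch below is illustrative only; the landmark index, dead zone, and command set are assumptions rather than the paper's parameters.

```python
# Hypothetical nostril-displacement steering with MediaPipe Face Mesh.
import cv2
import mediapipe as mp

NOSE_IDX = 1        # a nose-region point in Face Mesh's 468-landmark topology
DEAD_ZONE = 0.03    # normalized displacement treated as "no command"

def command_from_offset(dx, dy):
    if abs(dx) < DEAD_ZONE and abs(dy) < DEAD_ZONE:
        return "STOP"
    if abs(dx) > abs(dy):
        return "RIGHT" if dx > 0 else "LEFT"
    return "BACKWARD" if dy > 0 else "FORWARD"

cap = cv2.VideoCapture(0)
rest = None
with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            lm = result.multi_face_landmarks[0].landmark[NOSE_IDX]
            rest = rest or (lm.x, lm.y)      # first detection defines rest pose
            print(command_from_offset(lm.x - rest[0], lm.y - rest[1]))
cap.release()
```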

12 pages, 4365 KB  
Article
Modulating Perception in Interior Architecture Through Décor: An Eye-Tracking Study of a Living Room Scene
by Weronika Wlazły and Agata Bonenberg
Buildings 2025, 15(1), 48; https://doi.org/10.3390/buildings15010048 - 26 Dec 2024
Viewed by 1783
Abstract
The visual perception of interior architecture plays a crucial role in real estate marketing, influencing the decisions of buyers, interior architects, and real estate agents. These professionals rely on personal assessments of space, often drawing from their experience of using décor to influence how interiors are perceived. While intuition may validate some approaches, this study explores an under-examined aspect of interior design using a mobile eye-tracking device. It investigates how decorative elements affect spatial perception and offers insights into how individuals visually engage with interior environments. By integrating décor into the analysis of interior architecture, this study broadens the traditional scope of the field, demonstrating how décor composition can modulate spatial perception. Results show that effective styling can redirect attention from key architectural elements, sometimes causing them to be overlooked during the critical first moments of observation commonly known as the “first impression”. These findings have important implications for interior design practice and architectural education.

19 pages, 6356 KB  
Article
An Objective Handling Qualities Assessment Framework of Electric Vertical Takeoff and Landing
by Yuhan Li, Shuguang Zhang, Yibing Wu, Sharina Kimura, Michael Zintl and Florian Holzapfel
Aerospace 2024, 11(12), 1020; https://doi.org/10.3390/aerospace11121020 - 11 Dec 2024
Cited by 1 | Viewed by 1220
Abstract
Assessing handling qualities is crucial for ensuring the safety and operational efficiency of aircraft control characteristics. The growing interest in Urban Air Mobility (UAM) has increased the focus on electric Vertical Takeoff and Landing (eVTOL) aircraft; however, a comprehensive assessment of eVTOL handling qualities remains a challenge. This paper proposes a framework for assessing eVTOL handling qualities that integrates pilot compensation, task performance, and qualitative comments. An experiment was conducted in which eye-tracking data and subjective ratings were collected from 16 participants as they performed various Mission Task Elements (MTEs) in an eVTOL simulator. The relationship between pilot compensation and task workload was investigated based on eye metrics. Data mining results revealed that pilots' eye movement patterns and workload perception change when performing MTEs that involve aircraft deficiencies. Additionally, pupil size, pupil diameter, iris diameter, interpupillary distance, iris-to-pupil ratio, and gaze entropy are found to be correlated with both handling qualities and task workload. Furthermore, a handling qualities and pilot workload recognition model is developed based on Long Short-Term Memory (LSTM), which is subsequently trained and evaluated with experimental data, achieving an accuracy of 97%. A case study was conducted to validate the effectiveness of the proposed framework. Overall, the proposed framework addresses the limitations of the existing Handling Qualities Rating Method (HQRM), offering a more comprehensive approach to handling qualities assessment.
(This article belongs to the Section Aeronautics)
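Among the listed eye metrics, gaze entropy has a particularly compact definition: bin the gaze points over the visual scene and take the Shannon entropy of the bin distribution, so that tightly focused gaze scores low and widely scattered scanning scores high. A short sketch, with an arbitrary 8x8 grid standing in for whatever binning the study used:

```python
# Stationary gaze entropy over normalized gaze coordinates.
import numpy as np

def gaze_entropy(xs, ys, bins=8):
    """xs, ys: gaze coordinates normalized to [0, 1]; returns entropy in bits."""
    hist, _, _ = np.histogram2d(xs, ys, bins=bins, range=[[0, 1], [0, 1]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())   # higher = more dispersed gaze

rng = np.random.default_rng(0)
focused = gaze_entropy(rng.normal(0.5, 0.02, 500).clip(0, 1),
                       rng.normal(0.5, 0.02, 500).clip(0, 1))
scanning = gaze_entropy(rng.random(500), rng.random(500))
print(focused, scanning)   # the scanning pattern yields the higher entropy
```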

21 pages, 10733 KB  
Article
CNN-Based Multi-Object Detection and Segmentation in 3D LiDAR Data for Dynamic Industrial Environments
by Danilo Giacomin Schneider and Marcelo Ricardo Stemmer
Robotics 2024, 13(12), 174; https://doi.org/10.3390/robotics13120174 - 9 Dec 2024
Cited by 3 | Viewed by 2457
Abstract
Autonomous navigation in dynamic environments presents a significant challenge for mobile robotic systems. This paper proposes a novel approach utilizing Convolutional Neural Networks (CNNs) for multi-object detection in 3D space and 2D segmentation using bird’s eye view (BEV) maps derived from 3D Light Detection and Ranging (LiDAR) data. Our method aims to enable mobile robots to localize movable objects and their occupancy, which is crucial for safe and efficient navigation. To address the scarcity of labeled real-world datasets, a synthetic dataset based on a simulation environment is generated to train and evaluate our model. Additionally, we employ a subset of the NVIDIA r2b dataset for evaluation in the real world. Furthermore, we integrate our CNN-based detection and segmentation model into a Robot Operating System 2 (ROS2) framework, facilitating communication between mobile robots and a centralized node for data aggregation and map creation. Our experimental results demonstrate promising performance, showcasing the potential applicability of our approach in future assembly systems. While further validation with real-world data is warranted, our work contributes to advancing perception systems by proposing a solution for multi-source, multi-object tracking and mapping.
(This article belongs to the Section AI in Robotics)
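The BEV maps that feed such a CNN are produced by rasterizing the 3D point cloud onto a ground-plane grid. A minimal sketch of that projection, with illustrative ranges and resolution rather than the paper's configuration:

```python
# Rasterize a LiDAR point cloud into a bird's-eye-view occupancy grid.
import numpy as np

def lidar_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.2):
    """points: (N, 3) array of x, y, z in meters -> (H, W) occupancy grid."""
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((h, w), dtype=np.float32)
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]
    cols = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    rows = ((pts[:, 1] - y_range[0]) / cell).astype(int)
    grid[rows, cols] = 1.0   # mark occupied cells; a height map would store max z
    return grid

cloud = np.random.rand(1000, 3) * [40, 40, 3] + [0, -20, 0]
print(lidar_to_bev(cloud).shape)   # (200, 200)
```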

13 pages, 1294 KB  
Proceeding Paper
IoT-Enabled Intelligent Health Care Screen System for Long-Time Screen Users
by Subramanian Vijayalakshmi, Joseph Alwin and Jayabal Lekha
Eng. Proc. 2024, 82(1), 96; https://doi.org/10.3390/ecsa-11-20364 - 25 Nov 2024
Viewed by 437
Abstract
With the rapid rise in technological advancements, health can be tracked and monitored in multiple ways. Tracking and monitoring health makes it possible to deliver precise interventions, enabling people to pursue healthier lifestyles by minimising the health issues associated with long screen time. Artificial Intelligence (AI) techniques such as Large Language Models (LLMs) enable intelligent assistants on mobile and other devices. The proposed system uses the power of IoT and LLMs to create a virtual personal assistant for long-time screen users: sensors monitor seating posture, heartbeat, stress levels, and eye movements in real time, and the assistant continuously tracks these parameters, gives necessary advice, and makes sure that the user's vitals stay within safe limits. The system combines AI and Natural Language Processing (NLP) to build a virtual assistant embedded into the screens of mobile devices, laptops, desktops, and other screen devices used by employees across various workspaces. The intelligent screen, with its integrated sensors, tracks the users' vitals along with other necessary health parameters and alerts them to take breaks, drink water, and refresh, ensuring that they stay healthy while using the system for work; it also suggests exercises for the eyes, head, and other body parts. User recognition identifies the current user and tailors advisory actions accordingly, and the system adapts to ensure proper relaxation and focus, providing a flexible and personalised experience. By monitoring and improving the health of employees who must work at screens for long periods, the intelligent screen system enhances their productivity and concentration across organisations.