Search Results (296)

Search Parameters:
Keywords = gazing behavior

25 pages, 13849 KB  
Article
When Action Speaks Louder than Words: Exploring Non-Verbal and Paraverbal Features in Dyadic Collaborative VR
by Dennis Osei Tutu, Sepideh Habibiabad, Wim Van den Noortgate, Jelle Saldien and Klaas Bombeke
Sensors 2025, 25(17), 5498; https://doi.org/10.3390/s25175498 - 4 Sep 2025
Abstract
Soft skills such as communication and collaboration are vital in both professional and educational settings, yet difficult to train and assess objectively. Traditional role-playing scenarios rely heavily on subjective trainer evaluations—either in real time, where subtle behaviors are missed, or through time-intensive post hoc analysis. Virtual reality (VR) offers a scalable alternative by immersing trainees in controlled, interactive scenarios while simultaneously capturing fine-grained behavioral signals. This study investigates how task design in VR shapes non-verbal and paraverbal behaviors during dyadic collaboration. We compared two puzzle tasks: Task 1, which provided shared visual access and dynamic gesturing, and Task 2, which required verbal coordination through separation and turn-taking. From multimodal tracking data, we extracted features including gaze behaviors (eye contact, joint attention), hand gestures, facial expressions, and speech activity, and compared them across tasks. A clustering analysis explored whether or not tasks could be differentiated by their behavioral profiles. Results showed that Task 2, the more constrained condition, led participants to focus more visually on their own workspaces, suggesting that interaction difficulty can reduce partner-directed attention. Gestures were more frequent in shared-visual tasks, while speech became longer and more structured when turn-taking was enforced. Joint attention increased when participants relied on verbal descriptions rather than on a visible shared reference. These findings highlight how VR can elicit distinct soft skill behaviors through scenario design, enabling data-driven analysis of collaboration. This work contributes to scalable assessment frameworks with applications in training, adaptive agents, and human-AI collaboration. Full article
(This article belongs to the Special Issue Sensing Technology to Measure Human-Computer Interactions)
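The clustering analysis mentioned in this abstract can be sketched as a k-means grouping of per-dyad behavioral feature vectors. The feature names, numbers, and the hand-rolled k-means below are hypothetical illustrations of the general technique, not the paper's actual data or method.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means: assign points to the nearest centroid, then recompute centroids."""
    # Deterministic seeding for the sketch: k evenly spaced rows of X
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # Distance from every point to every centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical per-dyad behavioral profiles:
# [eye-contact ratio, gestures per minute, mean utterance length in seconds]
task1 = np.array([[0.60, 12.0, 2.0], [0.55, 11.0, 2.2], [0.62, 13.0, 1.8]])
task2 = np.array([[0.20, 3.0, 4.5], [0.25, 2.5, 4.8], [0.18, 4.0, 5.0]])
X = np.vstack([task1, task2])

labels = kmeans(X, k=2)
# If the tasks elicit distinct behavioral profiles, dyads from the same
# task should fall into the same cluster.
```

If such a clustering recovers the task split without being told the task labels, that supports the claim that the two scenarios can be differentiated by behavior alone.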

24 pages, 2242 KB  
Article
Attention Allocation and Gaze Behavior While Driving: A Comparison Among Young, Middle-Aged and Elderly Drivers
by Anamarija Poll, Tomaž Tollazzi and Chiara Gruden
Sustainability 2025, 17(17), 7927; https://doi.org/10.3390/su17177927 - 3 Sep 2025
Abstract
In 2023, 95.5 million Europeans were aged over 65, falling within the definition of the “elderly population”. According to statistics, this number will rise to 129.8 million by 2050, making Europe the oldest continent in the world. One of the consequences of such growth is a sharp increase in the number of elderly drivers. Although they have more experience, which can positively impact road safety, their performance and health generally decline, limiting some of the physical and mental abilities required for safe vehicle control. The main objective of this research was to shed light on the behavior of elderly drivers by comparing three different drivers’ age groups: young, middle-aged and elderly drivers. Based on analysis of road accidents involving elderly drivers, the road safety situation for elderly drivers in Slovenia was highlighted, a questionnaire was developed to understand how elderly drivers perceive traffic, and an experiment was conducted where 30 volunteers were tested using a driving simulator and eye-tracking glasses. Objective driving and gaze behavior data were obtained, and very different performance was found among the three age groups, with elderly drivers having poorer reaction times and overlooking many elements compared to younger drivers. Full article

18 pages, 2897 KB  
Article
Multimodal Analyses and Visual Models for Qualitatively Understanding Digital Reading and Writing Processes
by Amanda Yoshiko Shimizu, Michael Havazelet, Blaine E. Smith and Amanda P. Goodwin
Educ. Sci. 2025, 15(9), 1135; https://doi.org/10.3390/educsci15091135 - 1 Sep 2025
Abstract
As technology continues to shape how students read and write, digital literacy practices have become increasingly multimodal and complex—posing new challenges for researchers seeking to understand these processes in authentic educational settings. This paper presents three qualitative studies that use multimodal analyses and visual modeling to examine digital reading and writing across age groups, learning contexts, and literacy activities. The first study introduces collaborative composing snapshots, a method that visually maps third graders’ digital collaborative writing processes and highlights how young learners blend spoken, written, and visual modes in real-time online collaboration. The second study uses digital reading timescapes to track the multimodal reading behaviors of fifth graders—such as highlighting, re-reading, and gaze patterns—offering insights into how these actions unfold over time to support comprehension. The third study explores multimodal composing timescapes and transmediation visualizations to analyze how bilingual high school students compose across languages and modes, including text, image, and sounds. Together, these innovative methods illustrate the power of multimodal analysis and visual modeling for capturing the complexity of digital literacy development. They offer valuable tools for designing more inclusive, equitable, and developmentally responsive digital learning environments—particularly for culturally and linguistically diverse learners. Full article

17 pages, 3896 KB  
Article
HFGAD: Hierarchical Fine-Grained Attention Decoder for Gaze Estimation
by Shaojie Huang, Tianzhong Wang, Weiquan Liu, Yingchao Piao, Jinhe Su, Guorong Cai and Huilin Xu
Algorithms 2025, 18(9), 538; https://doi.org/10.3390/a18090538 - 24 Aug 2025
Abstract
Gaze estimation is a cornerstone of applications such as human–computer interaction and behavioral analysis, e.g., for intelligent transport systems. Nevertheless, existing methods predominantly rely on coarse-grained features from deep layers of visual encoders, overlooking the critical role that fine-grained details from shallow layers play in gaze estimation. To address this gap, we propose a novel Hierarchical Fine-Grained Attention Decoder (HFGAD), a lightweight fine-grained decoder that emphasizes the importance of shallow-layer information in gaze estimation. Specifically, HFGAD integrates a fine-grained amplifier MSCSA that employs multi-scale spatial-channel attention to direct focus toward gaze-relevant regions, and also incorporates a shallow-to-deep fusion module SFM to facilitate interaction between coarse-grained and fine-grained information. Extensive experiments on three benchmark datasets demonstrate the superiority of HFGAD over existing methods, achieving a remarkable 1.13° improvement in gaze estimation accuracy for in-car scenarios. Full article
(This article belongs to the Topic Applications of NLP, AI, and ML in Software Engineering)

14 pages, 733 KB  
Article
Investigating Foreign Language Vocabulary Recognition in Children with ADHD and Autism with the Use of Eye Tracking Technology
by Georgia Andreou and Ariadni Argatzopoulou
Brain Sci. 2025, 15(8), 876; https://doi.org/10.3390/brainsci15080876 - 18 Aug 2025
Abstract
Background: Neurodivergent students, including those with Autism Spectrum Disorder (ASD) and Attention Deficit/Hyperactivity Disorder (ADHD), frequently encounter challenges in several areas of foreign language (FL) learning, including vocabulary acquisition. This exploratory study aimed to investigate real-time English as a Foreign Language (EFL) word recognition using eye tracking within the Visual World Paradigm (VWP). Specifically, it examined whether gaze patterns could serve as indicators of successful word recognition, how these patterns varied across three distractor types (semantic, phonological, unrelated), and whether age and vocabulary knowledge influenced visual attention during word processing. Methods: Eye-tracking data were collected from 17 children aged 6–10 years with ADHD or ASD while they completed EFL word recognition tasks. Analyses focused on gaze metrics across target and distractor images to identify percentile-based thresholds as potential data-driven markers of recognition. Group differences (ADHD vs. ASD) and the roles of age and vocabulary knowledge were also examined. Results: Children with ADHD exhibited increased fixations on phonological distractors, indicating higher susceptibility to interference, whereas children with ASD demonstrated more distributed attention, often attracted by semantic cues. Older participants and those with higher vocabulary scores showed more efficient gaze behavior, characterized by increased fixations on target images, greater attention to relevant stimuli, and reduced attention to distractors. Conclusions: Percentile-based thresholds in gaze metrics may provide useful markers of word recognition in neurodivergent learners. Findings underscore the importance of differentiated instructional strategies in EFL education for children with ADHD and ASD. 
The study further supports the integration of eye tracking with behavioral assessments to advance understanding of language processing in atypical developmental contexts. Full article
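The percentile-based thresholds this abstract proposes as recognition markers can be illustrated with a short sketch. The fixation counts and the 75th-percentile criterion below are invented for the example and are not the study's actual data or cutoff.

```python
import numpy as np

# Hypothetical fixation counts on the target image, one value per trial
fixations_on_target = np.array([2, 3, 3, 4, 5, 5, 6, 7, 8, 11])

# Data-driven marker: flag trials at or above the 75th percentile of the
# sample's own distribution as likely successful recognition (assumed criterion)
threshold = np.percentile(fixations_on_target, 75)
recognized = fixations_on_target >= threshold

print(threshold)              # 6.75 with NumPy's default linear interpolation
print(int(recognized.sum()))  # 3 trials exceed the threshold
```

Because the cutoff comes from the observed distribution rather than a fixed constant, it adapts to each learner group, which is the appeal of percentile-based markers for heterogeneous samples.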

23 pages, 7524 KB  
Article
Analyzing Visual Attention in Virtual Crime Scene Investigations Using Eye-Tracking and VR: Insights for Cognitive Modeling
by Wen-Chao Yang, Chih-Hung Shih, Jiajun Jiang, Sergio Pallas Enguita and Chung-Hao Chen
Electronics 2025, 14(16), 3265; https://doi.org/10.3390/electronics14163265 - 17 Aug 2025
Abstract
Understanding human perceptual strategies in high-stakes environments, such as crime scene investigations, is essential for developing cognitive models that reflect expert decision-making. This study presents an immersive experimental framework that utilizes virtual reality (VR) and eye-tracking technologies to capture and analyze visual attention during simulated forensic tasks. A 360° panoramic crime scene, constructed using the Nikon KeyMission 360 camera, was integrated into a VR system with HTC Vive and Tobii Pro eye-tracking components. A total of 46 undergraduate students aged 19 to 24 (23 from the National University of Singapore and 23 from the Central Police University in Taiwan) participated in the study, generating over 2.6 million gaze samples (IRB No. 23-095-B). The collected eye-tracking data were analyzed using statistical summarization, temporal alignment techniques (Earth Mover's Distance and Needleman-Wunsch algorithms), and machine learning models, including K-means clustering, random forest regression, and support vector machines (SVMs). Clustering achieved a classification accuracy of 78.26%, revealing distinct visual behavior patterns across participant groups. Proficiency prediction models reached optimal performance with a random forest regression (R2 = 0.7034), highlighting scan-path variability and fixation regularity as key predictive features. These findings demonstrate that eye-tracking metrics—particularly sequence-alignment-based features—can effectively capture differences linked to both experiential training and cultural context. Beyond its immediate forensic relevance, the study contributes a structured methodology for encoding visual attention strategies into analyzable formats, offering valuable insights for cognitive modeling, training systems, and human-centered design in future perceptual intelligence applications.
Furthermore, our work advances the development of autonomous vehicles by modeling how humans visually interpret complex and potentially hazardous environments. By examining expert and novice gaze patterns during simulated forensic investigations, we provide insights that can inform the design of autonomous systems required to make rapid, safety-critical decisions in similarly unstructured settings. The extraction of human-like visual attention strategies not only enhances scene understanding, anomaly detection, and risk assessment in autonomous driving scenarios, but also supports accelerated learning of response patterns for rare, dangerous, or otherwise exceptional conditions—enabling autonomous driving systems to better anticipate and manage unexpected real-world challenges. Full article
(This article belongs to the Special Issue Autonomous and Connected Vehicles)
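Of the alignment techniques this abstract lists, the Needleman-Wunsch step can be sketched compactly: each scan path becomes a string of area-of-interest (AOI) labels, and two paths are globally aligned with match/mismatch/gap scores. The AOI letters, example sequences, and scoring weights below are hypothetical, not taken from the paper.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two scan-path strings of AOI labels."""
    n, m = len(a), len(b)
    # score[i][j] = best alignment score of a[:i] against b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

# Hypothetical scan paths: each letter is an AOI (e.g. B = body, W = weapon, D = door)
expert = "BWWD"
novice = "BDWD"
similarity = needleman_wunsch(expert, novice)
print(similarity)  # 2
```

Higher alignment scores indicate more similar viewing orders, so pairwise scores like this can feed directly into the clustering and proficiency models the abstract describes.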

13 pages, 742 KB  
Article
Recognition of Authentic Happy and Sad Facial Expressions in Chinese Elementary School Children: Evidence from Behavioral and Eye-Movement Studies
by Qin Wang, Huifang Xu, Xia Zhou, Wanjala Bakari and Huifang Gao
Behav. Sci. 2025, 15(8), 1099; https://doi.org/10.3390/bs15081099 - 13 Aug 2025
Abstract
Accurately discerning the authenticity of facial expressions is crucial for inferring others’ psychological states and behavioral intentions, particularly in shaping interpersonal trust dynamics among elementary school children. While existing literature remains inconclusive regarding school-aged children’s capability to differentiate between genuine and posed facial expressions, this study employed happy and sad facial stimuli to systematically evaluate their discrimination accuracy. Parallel to behavioral measures, children’s gaze patterns during authenticity judgments were recorded using eye-tracking technology. Results revealed that participants demonstrated higher accuracy in identifying genuine versus posed happy expressions, whereas discrimination of sad expressions proved more challenging, especially among lower-grade students. Overall, facial expression recognition accuracy exhibited a positive correlation with grade progression, with visual attention predominantly allocated to the Eye-region. Notably, no grade-dependent differences emerged in region-specific gaze preferences. These findings suggest that school-aged children display emotion-specific recognition competencies, while improvements in accuracy operate independently of gaze strategy development. Full article
(This article belongs to the Section Cognition)

11 pages, 1041 KB  
Article
Evidence for Semantic Communication in Alarm Calls of Wild Sichuan Snub-Nosed Monkeys
by Fang-Jun Cao, James R. Anderson, Wei-Wei Fu, Ni-Na Gou, Jie-Na Shen, Fu-Shi Cen, Yi-Ran Tu, Min Mao, Kai-Feng Wang, Bin Yang and Bao-Guo Li
Biology 2025, 14(8), 1028; https://doi.org/10.3390/biology14081028 - 11 Aug 2025
Cited by 1 | Correction
Abstract
The alarm calls of non-human primates help us to understand the evolution of animal vocal communication and the origin of human language. However, as there is a lack of research on alarm calls in primates living in multilevel societies, we studied these calls in wild Sichuan snub-nosed monkeys. By means of playback experiments, we analyzed whether call receivers understood the meaning of the alarm calls, making appropriate behavioral responses. Results showed that receivers made appropriate and specific anti-predator responses to two types of alarm calls. After hearing the aerial predator alarm call (“GEGEGE”), receivers’ first gaze direction was usually upward (towards the sky), and upward gaze duration was longer than the last gaze before playback. After hearing the terrestrial predator alarm call (“O-GA”), the first gaze direction was usually downward (towards the ground), and this downward gaze duration was longer than the gaze before playback. These reactions provide evidence for external referentiality of alarm calls in Sichuan snub-nosed monkeys, that is, information about the type of predator or the appropriate response is encoded acoustically in the calls. Full article
(This article belongs to the Section Behavioural Biology)

9 pages, 2776 KB  
Proceeding Paper
Analysis of Elementary Student Engagement Patterns in Science Class Using Eye Tracking and Object Detection: Attention and Mind Wandering
by Ilho Yang and Daol Park
Eng. Proc. 2025, 103(1), 10; https://doi.org/10.3390/engproc2025103010 - 8 Aug 2025
Abstract
This study aims to explore the individual engagement of two elementary students in science class to derive educational implications. Using mobile eye trackers and an object detection model, gaze data were collected to identify educational objects and analyze attention, mind wandering, and off-task periods. The data were analyzed in the context of class and student behaviors. Interviews with the students enabled an understanding of their engagement patterns. The first student demonstrated an average attention ratio of 21.42% and a mind wandering ratio of 21.54%, characterized by inconsistent mind wandering and frequent off-task behaviors, resulting in low attention. In contrast, the second student showed an average attention ratio of 32.35% and a mind wandering ratio of 11.53%, maintaining consistent engagement throughout the class. While the two students exhibited differences in attention, mind wandering, and off-task behaviors, common factors influencing engagement were identified. Both students showed higher attention during active learning activities, such as experiments and inquiry tasks, while group interactions and visual/auditory stimuli supported sustained attention or transitions from mind wandering to attention. However, repetitive or passive tasks were associated with increased mind wandering. Such results highlight differences in individual engagement patterns and emphasize the value of integrating eye tracking and object detection with qualitative data, which provides a reference for tailoring educational strategies and improving learning environments. Full article

17 pages, 886 KB  
Article
Predicting Cartographic Symbol Location with Eye-Tracking Data and Machine Learning Approach
by Paweł Cybulski
J. Eye Mov. Res. 2025, 18(4), 35; https://doi.org/10.3390/jemr18040035 - 7 Aug 2025
Abstract
Visual search is a core component of map reading, influenced by both cartographic design and human perceptual processes. This study investigates whether the location of a target cartographic symbol—central or peripheral—can be predicted using eye-tracking data and machine learning techniques. Two datasets were analyzed, each derived from separate studies involving visual search tasks with varying map characteristics. A comprehensive set of eye movement features, including fixation duration, saccade amplitude, and gaze dispersion, were extracted and standardized. Feature selection and polynomial interaction terms were applied to enhance model performance. Twelve supervised classification algorithms were tested, including Random Forest, Gradient Boosting, and Support Vector Machines. The models were evaluated using accuracy, precision, recall, F1-score, and ROC-AUC. Results show that models trained on the first dataset achieved higher accuracy and class separation, with AdaBoost and Gradient Boosting performing best (accuracy = 0.822; ROC-AUC > 0.86). In contrast, the second dataset presented greater classification challenges, despite high recall in some models. Feature importance analysis revealed that fixation standard deviation as a proxy for gaze dispersion, particularly along the vertical axis, was the most predictive metric. These findings suggest that gaze behavior can reliably indicate the spatial focus of visual search, providing valuable insight for the development of adaptive, gaze-aware cartographic interfaces. Full article
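ROC-AUC, used above to evaluate the twelve classifiers, reduces to the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. The labels and scores below are invented for illustration; in this made-up coding, 1 marks a trial whose target symbol was central and 0 a peripheral one.

```python
def roc_auc(y_true, scores):
    """ROC-AUC via the rank statistic: P(score of positive > score of negative),
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical trials: 1 = target in the map center, 0 = in the periphery
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]  # invented classifier scores
auc = roc_auc(y_true, scores)
print(auc)  # 8/9 ≈ 0.889
```

An AUC of 0.5 means the scores carry no class information, which is why the paper reports ROC-AUC alongside accuracy: it is insensitive to class imbalance between central and peripheral trials.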

22 pages, 553 KB  
Article
What Drives “Group Roaming”? A Study on the Pathway of “Digital Persuasion” in Media-Constructed Landscapes Behind Chinese Conformist Travel
by Chao Zhang, Di Jin and Jingwen Li
Behav. Sci. 2025, 15(8), 1056; https://doi.org/10.3390/bs15081056 - 4 Aug 2025
Abstract
In the era of digital intelligence, digital media landscapes increasingly influence cultural tourism consumption. Consumerism capitalizes on tourists’ superficial aesthetic commonalities, constructing a homogenized media imagination that leads to collective convergence in travel decisions, which obscures aspects of local culture, poses safety risks, and results in fleeting local tourism booms. In this study, semistructured interviews were conducted with 36 tourists, and NVivo12.0 was used for three-level node coding in a qualitative analysis to explore the digital media attributions of conformist travel behavior. The findings indicate that digital media landscapes exert a “digital persuasion” effect by reconstructing self-experience models, directing the individual gaze, and projecting idealized self-images. These mechanisms drive tourists to follow digital traffic trends and engage in imitative behaviors, ultimately shaping the phenomenon of “group roaming”, grounded in the psychological effect of herd behavior. Full article

15 pages, 2879 KB  
Article
Study on the Eye Movement Transfer Characteristics of Drivers Under Different Road Conditions
by Zhenxiang Hao, Jianping Hu, Xiaohui Sun, Jin Ran, Yuhang Zheng, Binhe Yang and Junyao Tang
Appl. Sci. 2025, 15(15), 8559; https://doi.org/10.3390/app15158559 - 1 Aug 2025
Abstract
Given the severe global traffic safety challenges—including threats to human lives and socioeconomic impacts—this study analyzes visual behavior to promote sustainable transportation, improve road safety, and reduce resource waste and pollution caused by accidents. Four typical road sections, namely, turning, straight ahead, uphill, and downhill, were selected, and the eye movement data of 23 drivers in different driving stages were collected with an aSee Glasses eye-tracking device to analyze the visual gaze characteristics of the drivers and their transfer patterns in each road section. Using Markov chain theory, the probability of staying at each gaze point and the transfer probability distribution between gaze points were investigated. The results showed significant differences in drivers' visual behaviors across road sections: drivers in the turning section had the largest percentage of fixation on the near front, with a fixation duration and frequency of 29.99% and 28.80%, respectively; in the straight-ahead section, on the other hand, drivers mainly focused on the right side of the road, with a fixation duration of 31.57% and a fixation frequency of 19.45%; on the uphill section, drivers' fixation duration on the left and right roads was more balanced, with 24.36% of fixation duration on the left side of the road and 25.51% on the right side of the road; drivers on the downhill section looked more frequently at the distance ahead, with a total fixation frequency of 23.20%, while paying higher attention to the right side of the road environment, with a fixation duration of 27.09%.
In terms of visual fixation, the fixation shift in the turning road section was mainly concentrated between the near and distant parts of the road ahead and frequently turned to the left and right sides; the straight road section mainly showed a shift between the distant parts of the road ahead and the dashboard; the uphill road section was concentrated on the shift between the near parts of the road ahead and the two sides of the road, while the downhill road section mainly occurred between the distant parts of the road ahead and the rearview mirror. Although drivers’ fixations on the front of the road were most concentrated under the four road sections, with an overall fixation stability probability exceeding 67%, there were significant differences in fixation smoothness between different road sections. Through this study, this paper not only reveals the laws of drivers’ visual behavior under different driving environments but also provides theoretical support for behavior-based traffic safety improvement strategies. Full article
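The Markov-chain treatment of gaze transfer described above can be sketched by estimating a first-order transition distribution from a sequence of fixated areas of interest. The sequence and AOI labels below are hypothetical stand-ins, not the study's data.

```python
from collections import Counter

def transition_probs(gaze_seq):
    """First-order Markov estimate: P(next AOI | current AOI) from observed pairs."""
    pair_counts = Counter(zip(gaze_seq, gaze_seq[1:]))   # consecutive fixation pairs
    from_counts = Counter(gaze_seq[:-1])                 # visits with a successor
    return {(a, b): n / from_counts[a] for (a, b), n in pair_counts.items()}

# Hypothetical fixation sequence: N = near road ahead, L = left side, R = right side
seq = ["N", "L", "N", "R", "N", "N", "L", "N"]
P = transition_probs(seq)
# e.g. P[("N", "L")] is the estimated probability of shifting gaze
# from the near road ahead to the left side of the road
```

Each row of the resulting matrix sums to 1, and the diagonal entries (such as P[("N", "N")]) correspond to the "probability of staying at each gaze point" that the abstract mentions.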

43 pages, 190510 KB  
Article
From Viewing to Structure: A Computational Framework for Modeling and Visualizing Visual Exploration
by Kuan-Chen Chen, Chang-Franw Lee, Teng-Wen Chang, Cheng-Gang Wang and Jia-Rong Li
Appl. Sci. 2025, 15(14), 7900; https://doi.org/10.3390/app15147900 - 15 Jul 2025
Abstract
This study proposes a computational framework that transforms eye-tracking analysis from statistical description to cognitive structure modeling, aiming to reveal the organizational features embedded in the viewing process. Using the designers’ observation of a traditional Chinese landscape painting as an example, the study draws on the goal-oriented nature of design thinking to suggest that such visual exploration may exhibit latent structural tendencies, reflected in patterns of fixation and transition. Rather than focusing on traditional fixation hotspots, our four-dimensional framework (Region, Relation, Weight, Time) treats viewing behavior as structured cognitive networks. To operationalize this framework, we developed a data-driven computational approach that integrates fixation coordinate transformation, K-means clustering, extremum point detection, and linear interpolation. These techniques identify regions of concentrated visual attention and define their spatial boundaries, allowing for the modeling of inter-regional relationships and cognitive organization among visual areas. An adaptive buffer zone method is further employed to quantify the strength of connections between regions and to delineate potential visual nodes and transition pathways. Three design-trained participants were invited to observe the same painting while performing a think-aloud task, with one participant selected for the detailed demonstration of the analytical process. The framework’s applicability across different viewers was validated through consistent structural patterns observed across all three participants, while simultaneously revealing individual differences in their visual exploration strategies. These findings demonstrate that the proposed framework provides a replicable and generalizable method for systematically analyzing viewing behavior across individuals, enabling rapid identification of both common patterns and individual differences in visual exploration. 
This approach opens new possibilities for discovering structural organization within visual exploration data and analyzing goal-directed viewing behaviors. Although this study focuses on method demonstration, it proposes a preliminary hypothesis that designers’ gaze structures are significantly more clustered and hierarchically organized than those of novices, providing a foundation for future confirmatory testing. Full article
(This article belongs to the Special Issue New Insights into Computer Vision and Graphics)
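The region-identification step named in this abstract (K-means clustering of fixation coordinates) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the synthetic fixation data, the cluster count `k=3`, and the function names are hypothetical, and the authors' actual pipeline (coordinate transformation, extremum detection, interpolation, and the adaptive buffer-zone method) is not reproduced here.

```python
import numpy as np

def kmeans_fixations(points, k=3, iters=50, seed=0):
    """Cluster (x, y) fixation coordinates into k attention regions (plain k-means)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each fixation to the nearest region center
        d = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its assigned fixations
        new_centers = np.array([points[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# synthetic fixations scattered around three areas of a painting (pixel coordinates)
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([200, 300], 20, (40, 2)),
                 rng.normal([600, 320], 20, (40, 2)),
                 rng.normal([400, 100], 20, (40, 2))])
labels, centers = kmeans_fixations(pts, k=3)
```

The resulting region labels could then feed a transition analysis (counting saccades between regions) to build the kind of inter-regional network the framework describes.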


15 pages, 1027 KB  
Article
Parent–Child Eye Gaze Congruency to Emotional Expressions Mediated by Child Aesthetic Sensitivity
by Antonios I. Christou, Kostas Fanti, Ioannis Mavrommatis and Georgia Soursou
Children 2025, 12(7), 839; https://doi.org/10.3390/children12070839 - 25 Jun 2025
Cited by 1 | Viewed by 480
Abstract
Background/Objectives: Sensory Processing Sensitivity (SPS), particularly its aesthetic subcomponent (Aesthetic Sensitivity; AES), has been linked to individual differences in emotional processing. This study examined whether parental visual attention to emotional facial expressions predicts corresponding attentional patterns in their children, and whether this intergenerational concordance is mediated by child AES and moderated by child empathy. Methods: A sample of 124 Greek Cypriot parent–child dyads (children aged 7–12 years) participated in an eye-tracking experiment. Both parents and children viewed static emotional facial expressions (angry, sad, fearful, happy). Parents also completed questionnaires assessing their child’s SPS, empathy (cognitive and affective), and emotional functioning. Regression analyses and moderated mediation models were employed to explore associations between parental and child gaze patterns. Results: Children’s fixation on angry eyes was significantly predicted by parental fixation duration on the same region, as well as by child AES and empathy levels. Moderated mediation analyses revealed that the association between parent and child gaze to angry eyes was significantly mediated by child AES. However, neither cognitive nor affective empathy significantly moderated this mediation effect. Conclusions: Findings suggest that child AES plays a key mediating role in the intergenerational transmission of attentional biases to emotional stimuli. While empathy was independently associated with children’s gaze behavior, it did not moderate the AES-mediated pathway. These results highlight the importance of trait-level child sensitivity in shaping shared emotional attention patterns within families. Full article
(This article belongs to the Section Global Pediatric Health)
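The moderated mediation models mentioned above rest on estimating an indirect (a × b) effect with a bootstrap confidence interval. The sketch below shows that general technique, not the study's actual analysis: the variable names (`parent_gaze`, `child_aes`, `child_gaze`), the simulated data, and the use of plain OLS are all assumptions made for illustration.

```python
import numpy as np

def ols_slope(x, y):
    """Slope from a simple OLS regression of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def indirect_effect(x, m, y, n_boot=500, seed=0):
    """Estimate the a*b indirect effect of x on y through mediator m, with a bootstrap CI."""
    a = ols_slope(x, m)                           # path a: predictor -> mediator
    X = np.column_stack([np.ones_like(x), x, m])  # path b: mediator -> outcome, controlling for x
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    b = beta[2]
    rng = np.random.default_rng(seed)
    n, boots = len(x), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)               # resample dyads with replacement
        a_i = ols_slope(x[idx], m[idx])
        Xi = np.column_stack([np.ones(n), x[idx], m[idx]])
        bi, *_ = np.linalg.lstsq(Xi, y[idx], rcond=None)
        boots.append(a_i * bi[2])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return a * b, (lo, hi)

# simulated dyads: parent gaze -> child aesthetic sensitivity -> child gaze
rng = np.random.default_rng(42)
parent_gaze = rng.normal(size=200)
child_aes = 0.5 * parent_gaze + rng.normal(scale=0.5, size=200)
child_gaze = 0.6 * child_aes + rng.normal(scale=0.5, size=200)
ab, (lo, hi) = indirect_effect(parent_gaze, child_aes, child_gaze)
```

A 95% bootstrap interval excluding zero is the usual evidence for mediation; the published analysis would additionally model the empathy moderator, which is omitted here for brevity.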

38 pages, 1275 KB  
Review
Ins and Outs of Applied Behavior Analysis (ABA) Intervention in Promoting Social Communicative Abilities and Theory of Mind in Children and Adolescents with ASD: A Systematic Review
by Marco Esposito, Roberta Fadda, Orlando Ricciardi, Paolo Mirizzi, Monica Mazza and Marco Valenti
Behav. Sci. 2025, 15(6), 814; https://doi.org/10.3390/bs15060814 - 13 Jun 2025
Viewed by 3683
Abstract
Social-communicative abilities and theory of mind (ToM) are crucial for successful social interactions. The developmental trajectories of social and communicative skills in individuals with Autism Spectrum Disorder (ASD) are complex and multidimensional, and include components related to ToM. Because of its mentalistic nature, ToM has rarely been addressed as an outcome of Applied Behavior Analysis (ABA) intervention in children and adolescents with ASD. However, there is evidence that ABA intervention can be effective in promoting social abilities in individuals with ASD, so this topic is worth investigating. We present a systematic review exploring the Ins and Outs of an ABA approach to promoting social and communicative abilities and ToM in children and adolescents with ASD. We applied a PRISMA checklist to studies published up to December 2024, searching Scopus, Google Scholar, and Medline with the keywords ToM, perspective-taking, false belief, social cognition, and mental states, in combination with ABA intervention and ASD (up to age 18). We included twenty studies on perspective-taking, emotion identification, helping, eye-gaze detection, and social engagement, fifteen of which were dedicated to teaching the interpretation of mental states (involving 49 children and 10 adolescents). ToM was addressed with multiple baseline designs on target behaviors associated with ToM components such as emotion identification, helping behaviors, and mental states. The interventions included behavioral packages consisting of Behavioral Skills Training, derived relations, video modeling, and role playing. The results indicated that a significant number of participants who received ABA intervention mastered the target behaviors in ToM tasks; however, maintenance and generalization across trials and settings remained problematic. The role of predictors was highlighted.
However, such studies are still rare and exhibit specific methodological limitations, as well as raising clinical and ethical considerations. More research is needed to define best practices in ABA intervention for promoting social abilities in individuals with ASD. Full article
