Search Results (5)

Search Parameters:
Keywords = ambient/focal attention

19 pages, 1779 KB  
Article
Through the Eyes of the Viewer: The Cognitive Load of LLM-Generated vs. Professional Arabic Subtitles
by Hussein Abu-Rayyash and Isabel Lacruz
J. Eye Mov. Res. 2025, 18(4), 29; https://doi.org/10.3390/jemr18040029 - 14 Jul 2025
Viewed by 642
Abstract
As streaming platforms adopt artificial intelligence (AI)-powered subtitle systems to satisfy global demand for instant localization, the cognitive impact of these automated translations on viewers remains largely unexplored. This study used a web-based eye-tracking protocol to compare the cognitive load that GPT-4o-generated Arabic subtitles impose with that of professional human translations among 82 native Arabic speakers who viewed a 10 min episode (“Syria”) from the BBC comedy drama series State of the Union. Participants were randomly assigned to view the same episode with either professionally produced Arabic subtitles (Amazon Prime’s human translations) or machine-generated GPT-4o Arabic subtitles. In a between-subjects design, with English proficiency entered as a moderator, we collected fixation count, mean fixation duration, gaze distribution, and attention concentration (K-coefficient) as indices of cognitive processing. GPT-4o subtitles raised cognitive load on every metric; viewers produced 48% more fixations in the subtitle area, recorded 56% longer fixation durations, and spent 81.5% more time reading the automated subtitles than the professional subtitles. The subtitle area K-coefficient tripled (0.10 to 0.30), a shift from ambient scanning to focal processing. Viewers with advanced English proficiency showed the largest disruptions, which indicates that higher linguistic competence increases sensitivity to subtle translation shortcomings. These results challenge claims that large language models (LLMs) lighten viewer burden; despite fluent surface quality, GPT-4o subtitles demand far more cognitive resources than expert human subtitles and therefore reinforce the need for human oversight in audiovisual translation (AVT) and media accessibility.
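
The attention-concentration measure used here, the coefficient K of Krejtz et al., contrasts each standardized fixation duration with the standardized amplitude of the saccade that follows it; positive values indicate focal processing, negative values ambient scanning. A minimal sketch of the per-fixation computation, assuming fixation durations and following-saccade amplitudes have already been extracted from the gaze data (all names and values illustrative):

```python
import numpy as np

def k_series(durations, amplitudes):
    """Per-fixation ambient/focal coefficient K_i (after Krejtz et al., 2016).

    durations[i] is the duration of fixation i; amplitudes[i] is the
    amplitude of the saccade that follows it. Both are z-scored over the
    whole recording, so K_i averaged over a subset of interest (a time
    window or an area of interest such as the subtitle region) comes out
    positive for focal processing and negative for ambient scanning.
    """
    d = np.asarray(durations, dtype=float)
    a = np.asarray(amplitudes, dtype=float)
    z_d = (d - d.mean()) / d.std()
    z_a = (a - a.mean()) / a.std()
    return z_d - z_a

# Hypothetical recording: slow inspection early, rapid scanning late.
rng = np.random.default_rng(0)
durations = np.concatenate([rng.normal(400, 40, 50),    # ms, long fixations
                            rng.normal(180, 30, 50)])   # ms, short fixations
amplitudes = np.concatenate([rng.normal(1.0, 0.3, 50),  # deg, small saccades
                             rng.normal(6.0, 1.0, 50)]) # deg, large saccades
k = k_series(durations, amplitudes)
print(k[:50].mean())  # > 0: focal phase
print(k[50:].mean())  # < 0: ambient phase
```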

19 pages, 3586 KB  
Article
Effect of Stimulus Regularities on Eye Movement Characteristics
by Bilyana Genova, Nadejda Bocheva and Ivan Hristov
Appl. Sci. 2024, 14(21), 10055; https://doi.org/10.3390/app142110055 - 4 Nov 2024
Viewed by 1270
Abstract
Humans have the unique ability to discern spatial and temporal regularities in their surroundings. However, the effect of learning these regularities on eye movement characteristics has not been studied enough. In the present study, we investigated the effect of the frequency of occurrence and the presence of common chunks in visual images on eye movement characteristics like the fixation duration, saccade amplitude and number, and gaze number across sequential experimental epochs. The participants had to discriminate the patterns presented in pairs as the same or different. The order of pairs was repeated six times. Our results show an increase in fixation duration and a decrease in saccade amplitude in the sequential epochs, suggesting a transition from ambient to focal information processing as participants acquire knowledge. This transition indicates deeper cognitive engagement and extended analysis of the stimulus information. Interestingly, contrary to our expectations, the saccade number increased, and the gaze number decreased. These unexpected results might imply a reduction in the memory load and a narrowing of attentional focus when the relevant stimulus characteristics are already determined.
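
The epoch-level measures this abstract compares can be derived from a basic fixation log. A hedged sketch, assuming fixation centroids and durations are available and approximating saccade amplitude as the distance between consecutive fixation centroids (column names hypothetical):

```python
import numpy as np

def epoch_metrics(fix_x, fix_y, fix_dur, epoch_ids):
    """Per-epoch summaries from a fixation log.

    Saccade amplitude is approximated as the Euclidean distance between
    consecutive fixation centroids; each saccade is credited to the epoch
    of the fixation it starts from.
    """
    x, y = np.asarray(fix_x, float), np.asarray(fix_y, float)
    dur, ep = np.asarray(fix_dur, float), np.asarray(epoch_ids)
    amp = np.hypot(np.diff(x), np.diff(y))  # one amplitude per saccade
    out = {}
    for e in np.unique(ep):
        m = ep == e
        out[e] = {
            "n_fixations": int(m.sum()),
            "mean_fix_dur": dur[m].mean(),
            "mean_sacc_amp": amp[m[:-1]].mean() if m[:-1].any() else float("nan"),
        }
    return out

# Toy log: two epochs; durations in ms, positions in degrees.
print(epoch_metrics(fix_x=[0, 1, 5, 6], fix_y=[0, 0, 3, 3],
                    fix_dur=[200, 240, 310, 330], epoch_ids=[1, 1, 2, 2]))
```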
(This article belongs to the Special Issue Latest Research on Eye Tracking Applications)

17 pages, 3792 KB  
Article
Adaptation of Object Detection Algorithms for Road Indicator Lights in Complex Scenes
by Ziyang Yao, Zunhao Hu, Peng Tian and Jun Sun
Appl. Sci. 2024, 14(21), 10012; https://doi.org/10.3390/app142110012 - 2 Nov 2024
Cited by 1 | Viewed by 1429
Abstract
In the realm of autonomous driving, practical driving scenarios are fraught with numerous complexities, including inclement weather conditions, nighttime blurriness, and ambient light sources that significantly hinder drivers’ ability to discern road indicators. Furthermore, the dynamic nature of road indicators, which are constantly evolving, poses additional challenges for computer vision-based detection systems. To address these issues, this paper introduces a road indicator light detection model, leveraging the advanced capabilities of YOLOv8. We have ingeniously integrated the robust backbone of YOLOv8 with four distinct attention mechanism modules—Convolutional Block Attention Module (CBAM), Efficient Channel Attention (ECA), Shuffle Attention (SA), and Global Attention Mechanism (GAM)—to significantly enhance the model performance in capturing nuanced features of road indicators and boosting the accuracy of detecting minute objects. Additionally, we employ the Asymptotic Feature Pyramid Network (AFPN) strategy, which optimizes the fusion of features across different scales, ensuring not only an enhanced performance but also maintaining real-time capability. These innovative attention modules empower the model by recalibrating the significance of both channel and spatial dimensions within the feature maps, enabling it to hone in on the most pertinent object characteristics. To tackle the challenges posed by samples rich in small, occluded, background-similar objects, and those that are inherently difficult to recognize, we have incorporated the Focaler-IOU loss function. This loss function deftly reduces the contribution of easily detectable samples to the overall loss, thereby intensifying the focus on challenging samples. This strategic balancing of hard-to-detect versus easy-to-detect samples effectively elevates the model’s detection performance. Experimental evaluations conducted on both a public traffic signal dataset and a proprietary headlight dataset have yielded impressive results, with both mAP50 and mAP50:95 metrics experiencing significant improvements exceeding two percentage points. Notably, the enhancements observed in the headlight dataset are particularly profound, signifying a significant step forward toward realizing safer and more reliable assisted driving technologies.
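
The Focaler-IoU idea mentioned above linearly remaps IoU over an interval [d, u] and clamps it, so that easily detected boxes stop contributing loss and the gradient concentrates on harder samples. A sketch of that remapping; the interval bounds are chosen for illustration, not taken from the paper:

```python
import torch

def focaler_iou_loss(iou, d=0.0, u=0.95):
    """Focaler-IoU-style loss: IoU is linearly remapped on [d, u] and
    clamped to [0, 1], then L = 1 - IoU_focaler. Lowering u emphasizes
    hard (low-IoU) boxes; raising d de-emphasizes easy ones.
    """
    iou_focaler = ((iou - d) / (u - d)).clamp(0.0, 1.0)
    return 1.0 - iou_focaler

# An easy detection (IoU 0.97) is zeroed out; a hard one still drives training.
print(focaler_iou_loss(torch.tensor([0.97, 0.40])))  # tensor([0.0000, 0.5789])
```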

23 pages, 9763 KB  
Article
Attention-Enhanced Urban Fugitive Dust Source Segmentation in High-Resolution Remote Sensing Images
by Xiaoqing He, Zhibao Wang, Lu Bai, Meng Fan, Yuanlin Chen and Liangfu Chen
Remote Sens. 2024, 16(20), 3772; https://doi.org/10.3390/rs16203772 - 11 Oct 2024
Cited by 1 | Viewed by 1544
Abstract
Fugitive dust is an important source of total suspended particulate matter in urban ambient air. The existing segmentation methods for dust sources face challenges in distinguishing key and secondary features, and they exhibit poor segmentation at the image edge. To address these issues, this paper proposes the Dust Source U-Net (DSU-Net), enhancing the U-Net model by incorporating VGG16 for feature extraction, and integrating the shuffle attention module into the jump connection branch to enhance feature acquisition. Furthermore, we combine Dice Loss, Focal Loss, and Activate Boundary Loss to improve the boundary extraction accuracy and reduce the loss oscillation. To evaluate the effectiveness of our model, we selected Jingmen City, Jingzhou City, and Yichang City in Hubei Province as the experimental area and established two dust source datasets from 0.5 m high-resolution remote sensing imagery acquired by the Jilin-1 satellite. Our created datasets include dataset HDSD-A for dust source segmentation and dataset HDSD-B for distinguishing the dust control measures. Comparative analyses of our proposed model with other typical segmentation models demonstrated that our proposed DSU-Net has the best detection performance, achieving a mIoU of 93% on dataset HDSD-A and 92% on dataset HDSD-B. In addition, we verified that it can be successfully applied to detect dust sources in urban areas.
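
A combined segmentation loss of the kind described, Dice plus Focal with an optional boundary term, can be assembled as below. This is a minimal sketch: the weights and the boundary-loss interface are assumptions, since the abstract does not give the exact formulation.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1.0):
    """Soft Dice loss for binary masks (logits and target: N x H x W)."""
    p = torch.sigmoid(logits).flatten(1)
    t = target.float().flatten(1)
    inter = (p * t).sum(1)
    return (1 - (2 * inter + eps) / (p.sum(1) + t.sum(1) + eps)).mean()

def focal_loss(logits, target, gamma=2.0, alpha=0.25):
    """Focal loss: down-weights well-classified pixels."""
    t = target.float()
    bce = F.binary_cross_entropy_with_logits(logits, t, reduction="none")
    p_t = torch.exp(-bce)                     # probability of the true class
    a_t = alpha * t + (1 - alpha) * (1 - t)
    return (a_t * (1 - p_t) ** gamma * bce).mean()

def combined_loss(logits, target, weights=(1.0, 1.0, 1.0), boundary_term=None):
    """Dice + Focal, plus an optional boundary term (e.g. Active Boundary
    Loss) supplied as a callable; the equal weighting is an assumption."""
    loss = (weights[0] * dice_loss(logits, target)
            + weights[1] * focal_loss(logits, target))
    if boundary_term is not None:
        loss = loss + weights[2] * boundary_term(logits, target)
    return loss

# Toy check on a random batch of two single-channel 64x64 masks.
logits = torch.randn(2, 64, 64)
target = (torch.rand(2, 64, 64) > 0.5).float()
print(combined_loss(logits, target))
```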

13 pages, 8034 KB  
Article
Using Coefficient K to Distinguish Ambient/Focal Visual Attention During Cartographic Tasks
by Krzysztof Krejtz, Arzu Çöltekin, Andrew T. Duchowski and Anna Niedzielska
J. Eye Mov. Res. 2017, 10(2), 1-13; https://doi.org/10.16910/jemr.10.2.3 - 3 Apr 2017
Cited by 31 | Viewed by 207
Abstract
We demonstrate the use of the ambient/focal coefficient Κ for studying the dynamics of visual behavior when performing cartographic tasks. Participants viewed a cartographic map and satellite image of Barcelona while performing a number of map-related tasks. Cartographic maps can be viewed as summary representations of reality, while satellite images are typically more veridical, and contain considerably more information. Our analysis of traditional eye movement metrics suggests that the satellite representation facilitates longer fixation durations, requiring greater scrutiny of the map. The cartographic map affords greater peripheral scanning, as evidenced by larger saccade amplitudes. Evaluation of Κ elucidates task dependence of ambient/focal attention dynamics when working with geographic visualizations: localization progresses from ambient to focal attention; route planning fluctuates in an ambient-focal-ambient pattern characteristic of the three stages of route end point localization, route following, and route confirmation.
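
Dynamics like the ambient-focal-ambient pattern reported here are typically traced by averaging the per-fixation K_i within a sliding window rather than over the whole trial. A self-contained sketch (window size illustrative):

```python
import numpy as np

def k_dynamics(durations, amplitudes, window=10):
    """Sliding-window mean of the per-fixation coefficient K_i, for tracing
    how attention shifts between ambient and focal over the course of a task."""
    d = np.asarray(durations, float)
    a = np.asarray(amplitudes, float)
    k = (d - d.mean()) / d.std() - (a - a.mean()) / a.std()
    half = window // 2
    # Centered moving average; windows shrink near the edges.
    return np.array([k[max(0, i - half):i + half + 1].mean()
                     for i in range(len(k))])
```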
