Search Results (1,115)

Search Parameters: Keywords = bright light

11 pages, 720 KB  
Article
Super-Resolution Parameter Estimation Using Machine Learning-Assisted Spatial Mode Demultiplexing
by David R. Gozzard, John S. Wallis, Alex M. Frost, Joshua J. Collier, Nicolas Maron, Benjamin P. Dix-Matthews and Kevin Vinsen
Sensors 2025, 25(17), 5395; https://doi.org/10.3390/s25175395 - 1 Sep 2025
Abstract
We present the use of a lightweight machine learning (ML) model to estimate the separation and relative brightness of two incoherent light sources below the diffraction limit. We use a multi-planar light converter (MPLC) to implement spatial mode demultiplexing (SPADE) imaging. The ML model is trained, validated, and tested on data generated experimentally in the laboratory. The ML model accurately estimates the separation of the sources down to two orders of magnitude below the diffraction limit when the sources are of comparable brightness, and provides accurate sub-diffraction separation resolution even when the sources differ in brightness by four orders of magnitude. The present results are limited by crosstalk in the MPLC and support the potential use of ML-assisted SPADE for astronomical imaging below the diffraction limit.
(This article belongs to the Section Optical Sensors)

28 pages, 6643 KB  
Article
MINISTAR to STARLITE: Evolution of a Miniaturized Prototype for Testing Attitude Sensors
by Vanni Nardino, Cristian Baccani, Massimo Ceccherini, Massimo Cecchi, Francesco Focardi, Enrico Franci, Donatella Guzzi, Fabrizio Manna, Vasco Milli, Jacopo Pini, Lorenzo Salvadori and Valentina Raimondi
Sensors 2025, 25(17), 5360; https://doi.org/10.3390/s25175360 - 29 Aug 2025
Abstract
Star trackers are critical electro-optical devices used for satellite attitude determination, typically tested using Optical Ground Support Equipment (OGSE). Within the POR FESR 2014–2020 program (funded by Regione Toscana), we developed MINISTAR, a compact electro-optical prototype designed to generate synthetic star fields in apparent motion for realistic ground-based testing of star trackers. MINISTAR supports simultaneous testing of up to three units, assessing optical, electronic, and on-board software performance. Its reduced size and weight allow for direct integration on the satellite platform, enabling testing in assembled configurations. The system can simulate bright celestial bodies (Sun, Earth, Moon), user-defined objects, and disturbances such as cosmic rays and stray light. Radiometric and geometric calibrations were successfully validated in laboratory conditions. Under the PR FESR TOSCANA 2021–2027 initiative (also funded by Regione Toscana), the concept was further developed into STARLITE (STAR tracker LIght Test Equipment), a next-generation OGSE with a higher Technology Readiness Level (TRL). Based largely on commercial off-the-shelf (COTS) components, STARLITE targets commercial maturity and enhanced functionality, meeting the increasing demand for compact, high-fidelity OGSE systems for pre-launch verification of attitude sensors. This paper describes the working principles of a generic system, as well as its main characteristics and the early advancements enabling the transition from the initial MINISTAR prototype to the next-generation STARLITE system.
(This article belongs to the Section Physical Sensors)

14 pages, 1235 KB  
Article
The Acute Effects of Morning Bright Light on the Human White Adipose Tissue Transcriptome: Exploratory Post Hoc Analysis
by Anhui Wang, Jeroen Vreijling, Aldo Jongejan, Valentina S. Rumanova, Ruth I. Versteeg, Andries Kalsbeek, Mireille J. Serlie, Susanne E. la Fleur, Peter H. Bisschop, Frank Baas and Dirk J. Stenvers
Clocks & Sleep 2025, 7(3), 45; https://doi.org/10.3390/clockssleep7030045 - 27 Aug 2025
Abstract
The circadian rhythm of the central brain clock in the suprachiasmatic nucleus (SCN) is synchronized by light. White adipose tissue (WAT) is one of the metabolic endocrine organs containing a molecular clock, and it is synchronized by the SCN. Excess WAT is a risk factor for health issues including type 2 diabetes mellitus (DM2). We hypothesized that bright-light exposure would affect the human WAT transcriptome. Therefore, we analyzed WAT biopsies from two previously performed randomized cross-over trials (trial 1: n = 8 lean, healthy men, and trial 2: n = 8 men with obesity and DM2). From 7:30 h onwards, all the participants were exposed to either bright or dim light. Five hours later, we performed a subcutaneous abdominal WAT biopsy. RNA-sequencing results showed major group differences between men with obesity and DM2 and lean, healthy men as well as a differential effect of bright-light exposure. For example, gene sets encoding proteins involved in oxidative phosphorylation or respiratory chain complexes were down-regulated under bright-light conditions in lean, healthy men but up-regulated in men with obesity and DM2. In addition to evident group differences between men with obesity and DM2 and healthy lean subjects, autonomic or neuroendocrine signals resulting from bright-light exposure also differentially affect the WAT transcriptome.
(This article belongs to the Section Impact of Light & other Zeitgebers)

17 pages, 2310 KB  
Article
High-Performance X-Ray Detection and Optical Information Storage via Dual-Mode Luminescent Modulation in Na3KMg7(PO4)6:Eu
by Yanshuo Han, Yucheng Li, Xue Yang, Yibo Hu, Yuandong Ning, Meng Gu, Guibin Zhai, Sihan Yang, Jingkun Chen, Naixin Li, Kuan Ren, Jingtai Zhao and Qianli Li
Molecules 2025, 30(17), 3495; https://doi.org/10.3390/molecules30173495 - 26 Aug 2025
Abstract
Lanthanide-doped inorganic luminescent materials have been extensively studied and applied in X-ray detection and imaging, anti-counterfeiting, and optical information storage. However, many reported rare-earth-based luminescent materials show only single-mode optical responses, which limits their applications in complex scenarios. Here, we report a novel Na3KMg7(PO4)6:Eu phosphor synthesized by a simple high-temperature solid-state method. The multi-color luminescence of Eu2+ and Eu3+ ions in a single matrix of Na3KMg7(PO4)6:Eu, known as radio-photoluminescence, is achieved through X-ray-induced ion reduction. It demonstrates a good linear response (R² = 0.9897) and stable signal storage (signal retained for more than 50 days) over a wide range of X-ray doses (maximum dose > 200 Gy). In addition, after X-ray irradiation, this material exhibits photochromic properties ranging from white to brown in a bright field and shows remarkable bleaching and recovery capabilities under 254 nm ultraviolet light or thermal stimulation. This dual-modal luminescent phosphor Na3KMg7(PO4)6:Eu, which combines photochromism and radio-photoluminescence, presents a dual-mode X-ray detection and imaging strategy and offers a comprehensive and novel solution for applications in anti-counterfeiting and optical information encryption.
(This article belongs to the Special Issue Organic and Inorganic Luminescent Materials, 2nd Edition)

19 pages, 7241 KB  
Article
RICNET: Retinex-Inspired Illumination Curve Estimation for Low-Light Enhancement in Industrial Welding Scenes
by Chenbo Shi, Xiangyu Zhang, Delin Wang, Changsheng Zhu, Aiping Liu, Chun Zhang and Xiaobing Feng
Sensors 2025, 25(16), 5192; https://doi.org/10.3390/s25165192 - 21 Aug 2025
Abstract
Feature tracking is essential for welding crawler robots’ trajectory planning. As welding often occurs in dark environments like pipelines or ship hulls, the system requires low-light image capture for laser tracking. However, such images typically have poor brightness and contrast, degrading both weld seam feature extraction and trajectory anomaly detection accuracy. To address this, we propose a Retinex-based low-light enhancement network tailored for cladding scenarios. The network features an illumination curve estimation module and requires no paired or unpaired reference images during training, alleviating the need for cladding-specific datasets. It adaptively adjusts brightness, restores image details, and effectively suppresses noise. Extensive experiments on public (LOLv1 and LOLv2) and self-collected weld datasets show that our method outperforms existing approaches in PSNR, SSIM, and LPIPS. Additionally, weld seam segmentation under low-light conditions achieved 95.1% IoU and 98.9% accuracy, confirming the method’s effectiveness for downstream tasks in robotic welding.
(This article belongs to the Section Optical Sensors)
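As a reference for the fidelity metric cited in the abstract above (standard definition only; the authors' exact evaluation settings are not given in this listing), PSNR is computed as

$\mathrm{PSNR} = 10\,\log_{10}\!\left(\dfrac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}}\right),$

where MAX_I is the peak pixel value and MSE is the mean squared error between the enhanced image and its reference.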

19 pages, 2717 KB  
Article
EASD: Exposure Aware Single-Step Diffusion Framework for Monocular Depth Estimation in Autonomous Vehicles
by Chenyuan Zhang and Deokwoo Lee
Appl. Sci. 2025, 15(16), 9130; https://doi.org/10.3390/app15169130 - 19 Aug 2025
Abstract
Monocular depth estimation (MDE) is a cornerstone of computer vision and is applied to diverse practical areas such as autonomous vehicles, robotics, etc., yet even the latest methods suffer substantial errors in high-dynamic-range (HDR) scenes where over- or under-exposure erases critical texture. To address this challenge in real-world autonomous driving scenarios, we propose the Exposure-Aware Single-Step Diffusion Framework for Monocular Depth Estimation (EASD). EASD leverages a pre-trained Stable Diffusion variational auto-encoder, freezing its encoder to extract exposure-robust latent RGB and depth representations. A single-step diffusion process then predicts the clean depth latent vector, eliminating iterative error accumulation and enabling real-time inference suitable for autonomous vehicle perception pipelines. To further enhance robustness under extreme lighting conditions, EASD introduces an Exposure-Aware Feature Fusion (EAF) module—an attention-based pyramid that dynamically modulates multi-scale features according to global brightness statistics. This mechanism suppresses bias in saturated regions while restoring detail in under-exposed areas. Furthermore, an Exposure-Balanced Loss (EBL) jointly optimises global depth accuracy, local gradient coherence and reliability in exposure-extreme regions—key metrics for safety-critical perception tasks such as obstacle detection and path planning. Experimental results on NYU-v2, KITTI, and related benchmarks demonstrate that EASD reduces absolute relative error by an average of 20% under extreme illumination, using only 60,000 labelled images. The framework achieves real-time performance (<50 ms per frame) and strikes a superior balance between accuracy, computational efficiency, and data efficiency, offering a promising solution for robust monocular depth estimation in challenging automotive lighting conditions such as tunnel transitions, night driving and sun glare.

17 pages, 2482 KB  
Article
Coastline Identification with ASSA-Resnet Based Segmentation for Marine Navigation
by Yuhan Wang, Weixian Li, Zhengxun Zhou and Ning Wu
Appl. Sci. 2025, 15(16), 9113; https://doi.org/10.3390/app15169113 - 19 Aug 2025
Abstract
Real-time and accurate segmentation of coastlines is of paramount importance for the safe navigation of unmanned surface vessels (USVs). Classical methods such as U-Net and DeepLabV3 have been proven to be effective in coastline segmentation tasks. However, their performance substantially degrades in real-world scenarios due to variations in lighting and environmental conditions, particularly from water surface reflections. This paper proposes an enhanced ResNet-50 model, namely ASSA-ResNet, for coastline segmentation for vision-based marine navigation. ASSA-ResNet integrates Atrous Spatial Pyramid Pooling (ASPP) to expand the model’s receptive field and incorporates a Global Channel Spatial Attention (GCSA) module to suppress interference from water reflections. Through feature pyramid fusion, ASSA-ResNet reinforces the semantic representation of features at various scales to ensure precise boundary delineation. The performance of ASSA-ResNet is validated with a dataset encompassing diverse brightness conditions and scenarios. Notably, mean Pixel Accuracy (mPA) and mean Intersection over Union (mIoU) of 98.90% and 98.17%, respectively, have been achieved on the self-constructed dataset, with corresponding values of 99.18% and 98.39% observed on the USVInland unmanned vessel dataset. Comparative analyses on the self-constructed dataset reveal that ASSA-ResNet outperforms the U-Net model by 1.78% in mPA and 2.9% in mIoU, and the DeepLabV3 model by 1.85% in mPA and 3.19% in mIoU. On the USVInland dataset, ASSA-ResNet exhibits superior performance compared to U-Net, with improvements of 0.41% in mPA and 0.12% in mIoU, while surpassing DeepLabV3 by 0.33% in mPA and 0.21% in mIoU.
(This article belongs to the Section Marine Science and Engineering)
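For context on the two segmentation metrics reported above, the standard per-class definitions (assuming the usual confusion-matrix notation with TP, FP, and FN counts for each class c over C classes; the paper may average slightly differently) are

$\mathrm{mPA} = \dfrac{1}{C}\sum_{c=1}^{C}\dfrac{TP_c}{TP_c + FN_c}, \qquad \mathrm{mIoU} = \dfrac{1}{C}\sum_{c=1}^{C}\dfrac{TP_c}{TP_c + FP_c + FN_c}.$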

21 pages, 2657 KB  
Article
A Lightweight Multi-Stage Visual Detection Approach for Complex Traffic Scenes
by Xuanyi Zhao, Xiaohan Dou, Jihong Zheng and Gengpei Zhang
Sensors 2025, 25(16), 5014; https://doi.org/10.3390/s25165014 - 13 Aug 2025
Abstract
In complex traffic environments, image degradation due to adverse factors such as haze, low illumination, and occlusion significantly compromises the performance of object detection systems in recognizing vehicles and pedestrians. To address these challenges, this paper proposes a robust visual detection framework that integrates multi-stage image enhancement with a lightweight detection architecture. Specifically, an image preprocessing module incorporating ConvIR and CIDNet is designed to perform defogging and illumination enhancement, thereby substantially improving the perceptual quality of degraded inputs. Furthermore, a novel enhancement strategy based on the Horizontal/Vertical-Intensity color space is introduced to decouple brightness and chromaticity modeling, effectively enhancing structural details and visual consistency in low-light regions. In the detection phase, a lightweight state-space modeling network, Mamba-Driven Lightweight Detection Network with RT-DETR Decoding, is proposed for object detection in complex traffic scenes. This architecture integrates VSSBlock and XSSBlock modules to enhance detection performance, particularly for multi-scale and occluded targets. Additionally, a VisionClueMerge module is incorporated to strengthen the perception of edge structures by effectively fusing multi-scale spatial features. Experimental evaluations on traffic surveillance datasets demonstrate that the proposed method surpasses the mainstream YOLOv12s model in terms of mAP@50–90, achieving a performance gain of approximately 1.0 percentage point (from 0.759 to 0.769). While ensuring competitive detection accuracy, the model exhibits reduced parameter complexity and computational overhead, thereby demonstrating superior deployment adaptability and robustness. This framework offers a practical and effective solution for object detection in intelligent transportation systems operating under visually challenging conditions.
(This article belongs to the Section Sensing and Imaging)

18 pages, 4583 KB  
Article
Bright Blue Light Emission of ZnCl2-Doped CsPbCl1Br2 Perovskite Nanocrystals with High Photoluminescence Quantum Yield
by Bo Feng, Youbin Fang, Jin Wang, Xi Yuan, Jihui Lang, Jian Cao, Jie Hua and Xiaotian Yang
Micromachines 2025, 16(8), 920; https://doi.org/10.3390/mi16080920 - 9 Aug 2025
Abstract
The future development of perovskite light-emitting diodes (LEDs) is significantly limited by the poor stability and low brightness of the pure-blue emission in the wavelength range of 460–470 nm. In this study, the Cl/Br element ratio in CsPbClxBr3−x perovskite nanocrystals (NCs) was modulated to precisely control their blue emission in the 428–512 nm spectral region. Then, the undoped CsPbCl1Br2 and the ZnCl2-doped CsPbCl1Br2 perovskite NCs were synthesized via the hot-injection method and investigated using variable-temperature photoluminescence (PL) spectroscopy. The PL emission peak of the ZnCl2-doped CsPbCl1Br2 perovskite NCs exhibits a blue shift from 475 nm to 460 nm with increasing ZnCl2 doping concentration. Additionally, the ZnCl2-doped CsPbCl1Br2 perovskite NCs show a high photoluminescence quantum yield (PLQY). The variable-temperature PL spectroscopy results show that the ZnCl2-doped CsPbCl1Br2 perovskite NCs have a larger exciton binding energy than the CsPbCl1Br2 perovskite NCs, which is indicative of a potentially higher PL intensity. To assess the stability of the perovskite NCs, high-temperature experiments and ultraviolet-irradiation experiments were conducted. The results indicate that zinc doping is beneficial for improving the stability of the perovskite NCs. The ZnCl2-doped CsPbCl1Br2 perovskite NCs were post-treated using didodecylammonium bromide, and after the post-treatment, the PLQY increased to 83%. This is a high PLQY value for perovskite NC-LEDs in the blue spectral range, and it satisfies the requirements of practical display applications. This work thus provides a simple preparation method for pure blue light-emitting materials.
(This article belongs to the Special Issue Advanced Optoelectronic Materials/Devices and Their Applications)

18 pages, 1730 KB  
Article
Knowledge Distillation with Geometry-Consistent Feature Alignment for Robust Low-Light Apple Detection
by Yuanping Shi, Yanheng Ma, Liang Geng, Lina Chu, Bingxuan Li and Wei Li
Sensors 2025, 25(15), 4871; https://doi.org/10.3390/s25154871 - 7 Aug 2025
Abstract
Apple-detection performance in orchards degrades markedly under low-light conditions, where intensified noise and non-uniform exposure blur edge cues critical for precise localisation. We propose Knowledge Distillation with Geometry-Consistent Feature Alignment (KDFA), a compact end-to-end framework that couples image enhancement and detection through the following two complementary components: (i) Cross-Domain Mutual-Information-Bound Knowledge Distillation, which maximises an InfoNCE lower bound between daylight-teacher and low-light-student region embeddings; (ii) Geometry-Consistent Feature Alignment, which imposes Laplacian smoothness and bipartite graph correspondences across multiscale feature lattices. Trained on 1200 pixel-aligned bright/low-light image pairs, KDFA achieves 51.3% mean Average Precision (mAP@[0.50:0.95]) on a challenging low-light apple-detection benchmark, setting a new state of the art by simultaneously bridging the illumination-domain gap and preserving geometric consistency.
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)

27 pages, 4681 KB  
Article
Gecko-Inspired Robots for Underground Cable Inspection: Improved YOLOv8 for Automated Defect Detection
by Dehai Guan and Barmak Honarvar Shakibaei Asli
Electronics 2025, 14(15), 3142; https://doi.org/10.3390/electronics14153142 - 6 Aug 2025
Abstract
To enable intelligent inspection of underground cable systems, this study presents a gecko-inspired quadruped robot that integrates multi-degree-of-freedom motion with a deep learning-based visual detection system. Inspired by the gecko’s flexible spine and leg structure, the robot exhibits strong adaptability to confined and uneven tunnel environments. The motion system is modeled using the standard Denavit–Hartenberg (D–H) method, with both forward and inverse kinematics derived analytically. A zero-impact foot trajectory is employed to achieve stable gait planning. For defect detection, the robot incorporates a binocular vision module and an enhanced YOLOv8 framework. The key improvements include a lightweight feature fusion structure (SlimNeck), a multidimensional coordinate attention (MCA) mechanism, and a refined MPDIoU loss function, which collectively improve the detection accuracy of subtle defects such as insulation aging, micro-cracks, and surface contamination. A variety of data augmentation techniques—such as brightness adjustment, Gaussian noise, and occlusion simulation—are applied to enhance robustness under complex lighting and environmental conditions. The experimental results validate the effectiveness of the proposed system in both kinematic control and vision-based defect recognition. This work demonstrates the potential of integrating bio-inspired mechanical design with intelligent visual perception to support practical, efficient cable inspection in confined underground environments.
(This article belongs to the Special Issue Robotics: From Technologies to Applications)

21 pages, 49475 KB  
Article
NRGS-Net: A Lightweight Uformer with Gated Positional and Local Context Attention for Nighttime Road Glare Suppression
by Ruoyu Yang, Huaixin Chen, Sijie Luo and Zhixi Wang
Appl. Sci. 2025, 15(15), 8686; https://doi.org/10.3390/app15158686 - 6 Aug 2025
Abstract
Existing nighttime visibility enhancement methods primarily focus on improving overall brightness under low-light conditions. However, nighttime road images are also affected by glare, glow, and flare from complex light sources such as streetlights and headlights, making it challenging to suppress locally overexposed regions and recover fine details. To address these challenges, we propose a Nighttime Road Glare Suppression Network (NRGS-Net) for glare removal and detail restoration. Specifically, to handle diverse glare disturbances caused by the uncertainty in light source positions and shapes, we designed a gated positional attention (GPA) module that integrates positional encoding with local contextual information to guide the network in accurately locating and suppressing glare regions, thereby enhancing the visibility of affected areas. Furthermore, we introduced an improved Uformer backbone named LCAtransformer, in which the downsampling layers adopt efficient depthwise separable convolutions to reduce computational cost while preserving critical spatial information. The upsampling layers incorporate a residual PixelShuffle module to achieve effective restoration in glare-affected regions. Additionally, channel attention is introduced within the Local Context-Aware Feed-Forward Network (LCA-FFN) to enable adaptive adjustment of feature weights, effectively suppressing irrelevant and interfering features. To advance research in nighttime glare suppression, we constructed and publicly released the Night Road Glare Dataset (NRGD), captured in real nighttime road scenarios, enriching the evaluation system for this task. Experiments on the Flare7K++ and NRGD datasets, using five evaluation metrics and comparing against six state-of-the-art methods, show that our approach achieves superior performance in both subjective and objective terms.
(This article belongs to the Special Issue Computational Imaging: Algorithms, Technologies, and Applications)

13 pages, 2224 KB  
Article
Digital Eye Strain Monitoring for One-Hour Smartphone Engagement Through Eye Activity Measurement System
by Bhanu Priya Dandumahanti, Prithvi Krishna Chittoor and Murali Subramaniyam
J. Eye Mov. Res. 2025, 18(4), 34; https://doi.org/10.3390/jemr18040034 - 5 Aug 2025
Abstract
Smartphones have revolutionized our daily lives, becoming portable pocket computers with easy internet access. India, which has the world's second-largest population of smartphone and internet users, experienced a significant rise in smartphone usage between 2013 and 2024. Prolonged smartphone use, exceeding 20 min at a time, can lead to physical and mental health issues, including psychophysiological disorders. Extended exposure to the blue light emitted by digital devices causes digital eye strain, sleep disorders, and other vision-related problems. This research examines the impact of 1 h of smartphone usage on visual fatigue among young Indian adults. To address this, a portable, low-cost visual activity measurement system was developed; it records blink rate, inter-blink interval, and pupil diameter. Eye activity was recorded during 1 h of smartphone use spent on e-book reading, video watching, and social-media reels (short videos). Social-media reels involve the greatest screen variation, and their continuous changes in brightness and intensity affected pupil dilation and reduced blink rate. This reduction in blink rate, together with an increase in inter-blink interval and pupil dilation, could lead to visual fatigue.

14 pages, 1959 KB  
Article
Influence of Molecular Weight of Anthraquinone Acid Dyes on Color Strength, Migration, and UV Protection of Polyamide 6 Fabrics
by Nawshin Farzana, Abu Naser Md Ahsanul Haque, Shamima Akter Smriti, Abu Sadat Muhammad Sayem, Fahmida Siddiqa, Md Azharul Islam, Md Nasim and S M Kamrul Hasan
Physchem 2025, 5(3), 31; https://doi.org/10.3390/physchem5030031 - 4 Aug 2025
Abstract
Anthraquinone acid dyes are widely used in dyeing polyamide due to their good exhaustion and brightness. While ionic interactions primarily govern dye–fiber bonding, the molecular weight (Mw) of these dyes can significantly influence migration, apparent color strength, and fastness behavior. This study offers comparative insight into how the Mw of structurally similar anthraquinone acid dyes impacts their diffusion, fixation, and functional outcomes (e.g., UV protection) on polyamide 6 fabric, using Acid Blue 260 (Mw ~564) and Acid Blue 127:1 (Mw ~845) as representative low- and high-Mw dyes. The effects of dye concentration, pH, and temperature on color strength (K/S) were evaluated, migration index and zeta potential were measured, and UV protection factor (UPF) and FTIR analyses were used to assess fabric functionality. Results showed that the lower-Mw dye exhibited a higher migration tendency, particularly at increased dye concentrations, while the higher-Mw dye demonstrated greater color strength and superior wash fastness. Additionally, improved UPF ratings were associated with the higher-Mw dye due to its enhanced light absorption. These findings offer practical insights for optimizing acid dye selection in polyamide coloration to balance color performance and functional attributes.
(This article belongs to the Section Surface Science)
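The color strength K/S referred to above is conventionally obtained from the Kubelka–Munk relation (a standard definition; the measurement wavelength and instrument details are not given in this listing):

$K/S = \dfrac{(1 - R)^{2}}{2R},$

where R is the reflectance of the dyed fabric, typically taken at the wavelength of maximum absorption.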

19 pages, 2733 KB  
Article
Quantifying Threespine Stickleback Gasterosteus aculeatus L. (Perciformes: Gasterosteidae) Coloration for Population Analysis: Method Development and Validation
by Ekaterina V. Nadtochii, Anna S. Genelt-Yanovskaya, Evgeny A. Genelt-Yanovskiy, Mikhail V. Ivanov and Dmitry L. Lajus
Hydrobiology 2025, 4(3), 20; https://doi.org/10.3390/hydrobiology4030020 - 31 Jul 2025
Abstract
Fish coloration plays an important role in reproduction and camouflage, yet capturing color variation under field conditions remains challenging. We present a standardized, semi-automated protocol for measuring body coloration in the popular model fish threespine stickleback (Gasterosteus aculeatus). Individuals are photographed in a controlled light box within minutes of capture, and color is sampled from eight anatomically defined standard sites in human-perception-based CIELAB space. Analyses combine univariate color metrics, multivariate statistics, and the ΔE* perceptual difference index to detect subtle shifts in hue and brightness. Validation on pre-spawning fish shows the method reliably distinguishes males and females well before full breeding colors develop. Although it currently omits ultraviolet signals and fine-scale patterning, the approach scales efficiently to large sample sizes and varying lighting conditions, making it well suited for population-level surveys of camouflage dynamics, sexual dimorphism, and environmental influences on coloration in sticklebacks.
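The ΔE* perceptual difference index mentioned above, in its simplest CIE76 form, is the Euclidean distance between two colors in CIELAB space (the authors may instead use a weighted variant such as CIEDE2000):

$\Delta E^{*}_{ab} = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}}.$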