Search Results (142)

Search Parameters:
Keywords = optical frequency domain imaging

31 pages, 6459 KB  
Article
Cooperative Hybrid Domain Network for Salient Object Detection in Optical Remote Sensing Images
by Yi Gu, Jianhang Zhou and Lelei Yan
Remote Sens. 2026, 18(7), 1087; https://doi.org/10.3390/rs18071087 - 4 Apr 2026
Viewed by 225
Abstract
Salient Object Detection (SOD) in Optical Remote Sensing Images (ORSIs) aims to localize and segment visually prominent objects amidst complex backgrounds and extreme scale variations. However, we observe that current frequency-aware methods typically rely on a naive feature aggregation paradigm, merging frequency and spatial features via simple concatenation, addition, or direct combination. This shallow interaction overlooks the inherent semantic misalignment between the two domains, resulting in feature redundancy and poor boundary delineation. To address this limitation, we propose the Cooperative Hybrid Domain Network (CHDNet), a framework designed to facilitate synergistic cooperation between heterogeneous domains. Specifically, we propose the Cross-Domain Multi-Head Self-Attention (CD-MHSA) mechanism as a semantic bridge following the encoder. It employs a dimension expansion strategy to construct a Unified Interaction Manifold and utilizes a Frequency Anchor Interaction mechanism to achieve precise modulation of spatial textures using global spectral cues. Furthermore, to address the dual challenges of lacking explicit interpretation mechanisms for semantic co-occurrence and the susceptibility of topological structures to fracture in complex scenes during the decoding phase, we design a Multi-Branch Cooperative Decoder (MBCD) comprising three parallel paths: edge semantics, global relations, and reverse correction. This module dynamically integrates these heterogeneous clues through a Cooperative Fusion Strategy, combining explicit global dependency modeling with dual-domain reverse mining. Extensive experiments on multiple benchmark datasets demonstrate that the proposed CHDNet achieves performance superior to state-of-the-art (SOTA) methods. Full article

15 pages, 5004 KB  
Article
Designing Reproducible Test Environments for rPPG: A System for Camera Sensor Response Validation
by Lieke Dorine van Putten, Ivan Veleslavov, Ayman Ahmed, Aristide Mathieu and Simon Wegerif
Lights 2026, 2(2), 3; https://doi.org/10.3390/lights2020003 - 25 Mar 2026
Viewed by 295
Abstract
Remote photoplethysmography (rPPG) enables non-contact vital sign measurements using standard smart device cameras, opening up the potential of scalable health applications on consumer smart devices. However, rPPG signal quality is highly sensitive to camera sensor characteristics and image processing pipelines, which can vary between devices. This variation limits reproducibility and generalisation of rPPG-based algorithms beyond specific hardware platforms. This work presents a reproducible test environment for the validation of the camera sensor response in the context of rPPG signals. A microcontroller-driven illumination system and mechanically constrained setup are used to generate controlled, repeatable optical signals. Two characterisation tests are introduced: a time domain morphology analysis and a frequency domain attenuation analysis. Pulse timing consistency, pulse waveform morphology and normalised frequency responses are compared to assess sensor similarity. This method is applied to selected consumer devices and demonstrates consistent camera response patterns under the controlled test conditions. By explicitly addressing validation of the camera sensor and image processing pipeline, this work supports the development of more robust and transferable rPPG-based vital sign applications across a wider range of consumer devices. Full article
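The frequency-domain attenuation analysis described above can be sketched in a few lines: compute a peak-normalised magnitude spectrum for each camera's recording of the same controlled optical signal, then compare the responses. This is a minimal illustration under an assumed uniform sampling rate, not the authors' pipeline; the function names are mine.

```python
import numpy as np

def normalized_frequency_response(signal, fs):
    """Magnitude spectrum of a camera-recorded pulse signal, peak-normalised
    so that responses from different sensors can be compared directly."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    return freqs, spectrum / spectrum.max()

def response_similarity(resp_a, resp_b):
    """Mean absolute difference between two normalised responses
    (0 = identical attenuation behaviour)."""
    return float(np.mean(np.abs(resp_a - resp_b)))
```

For a 30 fps camera recording a 1.2 Hz (72 bpm) modulated light source, the normalised response should peak at the pulse frequency, and identical sensors should score a similarity of zero.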

13 pages, 3307 KB  
Article
A Frequency-Aware Self-Supervised Framework for MEMS-OCT Denoising
by Gaolin Zhang, Zonghao Li, Hui Zhao, Zhe Peng and Huikai Xie
Biosensors 2026, 16(3), 177; https://doi.org/10.3390/bios16030177 - 21 Mar 2026
Viewed by 415
Abstract
Optical coherence tomography (OCT) is a key biological sensing and imaging tool widely used in biomedical detection, and its images are often degraded by multiplicative speckle noises—especially when micro-electro-mechanical system (MEMS) mirrors are employed in endoscopic OCT imaging, which reduces visual quality and affects the accuracy of subsequent analysis. Traditional denoising algorithms and supervised deep learning approaches have shown some effectiveness, but they are limited by their reliance on paired noisy–clean data and their insufficient modeling of global structural dependencies. To address these issues, this paper proposes a frequency-domain enhanced UNet based on the Neighbor2Neighbor (N2N) framework (FEN2N). The proposed FEN2N integrates wavelet-guided spectral pooling modules (WSPMs) and frequency-domain enhanced receptive field blocks (FE-RFBs). In this work, OCT images are obtained in a self-constructed MEMS-OCT system. Then the FEN2N is applied to the OCT image dataset. Results show that FEN2N achieves a more than 2.3 dB PSNR improvement over the N2N baseline, while the incorporation of FE-RFB contributes to a 0.02 improvement in SSIM. In addition, FEN2N outperforms several state-of-the-art methods, effectively suppressing speckle noise while preserving fine structural details that are important for clinical diagnosis. Full article
(This article belongs to the Section Optical and Photonic Biosensors)
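For reference, the PSNR gain quoted above (more than 2.3 dB over the N2N baseline) is computed from the mean squared error between a denoised image and a clean reference. A minimal sketch:

```python
import numpy as np

def psnr(reference, denoised, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    denoised estimate; higher means the estimate is closer to the reference."""
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(denoised, dtype=float)
    mse = np.mean((ref - est) ** 2)
    if mse == 0:
        return float("inf")                       # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A uniform error of 0.1 on a [0, 1] image gives exactly 20 dB, and any closer estimate scores higher.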

41 pages, 8829 KB  
Review
Mechanisms, Sensors, and Signals for Defect Formation and In Situ Monitoring in Metal Additive Manufacturing
by Sanae Tajalli Nobari, Fabian Hanning, Yongcui Mi and Joerg Volpp
Eng 2026, 7(3), 129; https://doi.org/10.3390/eng7030129 - 11 Mar 2026
Viewed by 685
Abstract
Metal additive manufacturing (AM) facilitates the production of geometrically complex components, yet its broader industrial use remains limited by the risk of defect formation and uncertainties in their detection, originating from the highly dynamic and high-temperature process environment. To make additive manufacturing more reliable and establish high-quality parts, it is important to understand how these defects form and how their characteristics appear during the process. This review explains the main causes of common defects, such as cracking, porosity, lack of fusion, and inclusions in metal AM processes, including Powder Bed Fusion and Directed Energy Deposition. It also connects main defect formation mechanisms to the optical, thermal, acoustic, and spectroscopic signals that can be measured during the process. Moreover, it is described how commonly used in situ monitoring systems work and how their signals correspond to melt pool dynamics, vapor plume, particle movement, and the solidification process for each kind of defect. An overview is provided of how data from these systems are analyzed, including the extraction of features from images, the evaluation of temperature fields, and the use of time and frequency domain techniques for various signals. By linking the physics of defect formation to measurable process signals, the interpretation of sensor data is enabled, and potential strategies for monitoring specific problems are outlined. Finally, recent developments are examined, including the integration of multiple sensors, advanced feature-representation approaches, and real-time data interpretation coupled with adaptive control. Together, these directions represent promising advances towards more intelligent and reliable monitoring systems for the future of metal AM. Full article
(This article belongs to the Section Materials Engineering)
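As a toy illustration of the time- and frequency-domain signal analysis the review surveys, the sketch below extracts two common features from a monitoring signal: the dominant frequency and the fraction of spectral energy inside a chosen band. The band limits and function names are illustrative assumptions, not values from the review.

```python
import numpy as np

def spectral_features(signal, fs, band=(20e3, 100e3)):
    """Dominant frequency (Hz) and fraction of spectral energy inside `band`:
    two simple features often used to flag defect-related signatures in
    acoustic or optical process signals."""
    sig = np.asarray(signal, dtype=float) - np.mean(signal)
    spec = np.abs(np.fft.rfft(sig)) ** 2            # power spectrum
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    dominant = freqs[np.argmax(spec)]
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return dominant, float(spec[in_band].sum() / spec.sum())
```

A 50 kHz tone sampled at 1 MHz, for example, should report a 50 kHz dominant frequency with nearly all its energy inside the 20–100 kHz band.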

26 pages, 26398 KB  
Article
WEMFusion: Wavelet-Driven Hybrid-Modality Enhancement and Discrepancy-Aware Mamba for Optical–SAR Image Fusion
by Jinwei Wang, Yongjin Zhao, Liang Ma, Bo Zhao, Fujun Song and Zhuoran Cai
Remote Sens. 2026, 18(4), 612; https://doi.org/10.3390/rs18040612 - 15 Feb 2026
Viewed by 596
Abstract
Optical and synthetic aperture radar (SAR) imagery are highly complementary in terms of texture details and structural scattering characterization. However, their imaging mechanisms and statistical distributions differ substantially. In particular, pseudo-high-frequency components introduced by SAR coherent speckle can be easily entangled with genuine optical edges, leading to texture mismatch, structural drift, and noise diffusion. To address these issues, we propose WEMFusion, a wavelet-prior-driven framework for frequency-domain decoupling and discrepancy-aware state-space fusion. Specifically, a multi-scale discrete wavelet transform (DWT) explicitly decomposes the inputs into low-frequency structural components and directional high-frequency sub-bands, providing an interpretable frequency-domain constraint for cross-modality alignment. We design a hybrid-modality enhancement (HME) module: in the high-frequency branch, it effectively injects optical edges and directional textures while suppressing the propagation of pseudo-high-frequency artifacts, and in the low-frequency branch, it reinforces global structural consistency and prevents speckle perturbations from leaking into the structural component, thereby mitigating structural drift. Furthermore, we introduce a discrepancy-aware gated Mamba fusion (DAG-MF) block, which generates dynamic gates from modality differences and complementary responses to modulate the parameters of a directionally scanned two-dimensional state-space model, so that long-range dependency modeling focuses on discrepant regions while preserving directional coherence. Extensive quantitative evaluations and qualitative comparisons demonstrate that WEMFusion consistently improves structural fidelity and edge detail preservation across multiple optical–SAR datasets, achieving superior fusion quality with lower computational overhead. Full article
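The multi-scale DWT decomposition WEMFusion builds on can be illustrated with a single unnormalised Haar level, which splits an image into a low-frequency structural component and three directional high-frequency sub-bands. This is a sketch under the assumption of even image dimensions, not the paper's implementation, which applies the transform at multiple scales.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT (unnormalised averages/differences).
    Returns the low-frequency structural component LL and three directional
    high-frequency sub-bands; naming conventions for LH/HL vary in the
    literature."""
    x = np.asarray(img, dtype=float)
    # pairwise averages and differences along columns (horizontal direction)
    lo_r = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi_r = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # then along rows (vertical direction)
    ll = (lo_r[0::2] + lo_r[1::2]) / 2.0   # structure
    hl = (lo_r[0::2] - lo_r[1::2]) / 2.0   # vertical detail
    lh = (hi_r[0::2] + hi_r[1::2]) / 2.0   # horizontal detail
    hh = (hi_r[0::2] - hi_r[1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh
```

A constant image lands entirely in the LL band, which is the sense in which LL carries "structure" and the other bands carry edges and texture.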

18 pages, 1413 KB  
Article
Interpreting Modulation Transfer Function in Endoscopic Imaging: Spatial-Frequency Conversion Across Imaging Spaces and the Digital Image Domain with Case Studies
by Quanzeng Wang
Sensors 2026, 26(3), 827; https://doi.org/10.3390/s26030827 - 27 Jan 2026
Viewed by 386
Abstract
Endoscopes are widely used in medicine, making objective evaluation of imaging performance essential for device development and quality assurance. Image resolution is commonly characterized by the modulation transfer function (MTF); however, its interpretation depends critically on how spatial frequency is defined and reported. Because spatial frequency is directly tied to sampling, it can be expressed in different units across the imaging chain, including the object plane, image sensor plane, and digital image domain. Inconsistent conversion between these spaces and domains can mislead comparisons and even alter the apparent ranking of regions of interest (ROIs) or imaging systems. This work presents a systematic analysis of spatial-frequency relationships along the endoscopic imaging chain and provides a practical conversion and interpretation workflow for MTF analysis. The framework accounts for sensor sampling, in-camera processing, resampling or scaling, and geometric distortion. Because geometric distortion introduces position-dependent sampling across the field of view, ROI-specific local-magnification measurements are incorporated to convert measured MTFs to a consistent object space spatial-frequency axis. Two case studies illustrate the implications. First, an off-axis ROI may appear to outperform the image center when MTF is expressed in digital image domain cycles per pixel, but this conclusion reverses after conversion to object space cycles per millimeter using local magnification. Second, resampled image outputs can yield inflated MTF curves unless scaling differences between formats are explicitly incorporated into the spatial-frequency axis. 
Overall, the proposed conversion and reporting workflow enables consistent and physically meaningful MTF comparison across devices, ROIs, and acquisition configurations when geometric distortion, sampling, or resampling differs, clarifying how optics, sensor characteristics, and image processing jointly determine reported MTF results. Full article
(This article belongs to the Section Biomedical Sensors)
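The conversion chain described above, from digital-image-domain cycles per pixel back to object-space cycles per millimetre, can be sketched as follows. The parameter names are mine; the essential inputs are the resampling ratio, the sensor pixel pitch, and the ROI-specific local magnification.

```python
def to_object_space(f_digital_cyc_per_px, sensor_px_per_digital_px,
                    pixel_pitch_mm, local_mag):
    """Convert a digital-image-domain spatial frequency (cycles per digital
    pixel) to object-space cycles/mm.

    sensor_px_per_digital_px : physical sensor pixels per digital pixel
                               (< 1 when the output image was upscaled)
    pixel_pitch_mm           : sensor pixel pitch in mm
    local_mag                : ROI-specific magnification (image / object)
    """
    # cycles per digital pixel -> cycles per sensor pixel
    f_sensor = f_digital_cyc_per_px / sensor_px_per_digital_px
    # cycles per sensor pixel -> cycles per mm in the image (sensor) plane
    f_image_plane = f_sensor / pixel_pitch_mm
    # image plane -> object plane via local magnification
    return f_image_plane * local_mag
```

The resampling term is what prevents the inflated-MTF pitfall in the second case study: an image upscaled 2x reads half the cycles per digital pixel for the same physical pattern, and both readings must map to the same object-space frequency.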

13 pages, 1430 KB  
Article
Autofocusing Method Based on Dynamic Modulation Transfer Function Feedback
by Zhijing Fang, Yuanzhang Song, Bing Han, Anbang Wang, Jian Song and Hangyu Yue
Photonics 2026, 13(2), 107; https://doi.org/10.3390/photonics13020107 - 24 Jan 2026
Viewed by 384
Abstract
Accurate measurement of key optical system parameters (such as focal length, distortion, and modulation transfer function (MTF)) depends critically on obtaining sharp images. Conventional autofocus methods are susceptible to noise in complex imaging environments, prone to convergence to local optima, and often exhibit low efficiency. To address these limitations, this paper proposes a high-precision autofocus method based on dynamic MTF feedback. The method employs frequency-domain MTF as a real-time image sharpness metric, enhancing robustness in noisy conditions. For the search mechanism, particle swarm optimization (PSO) is combined with the golden-section search to establish a hybrid optimization framework of “global coarse localization–local fine search,” balancing convergence speed and focusing accuracy. Experimental results show that the proposed method achieves stable and efficient autofocus, providing reliable imaging assurance for high-precision measurement of optical system parameters and demonstrating strong engineering applicability. Full article
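The "global coarse localization–local fine search" idea can be sketched on a one-dimensional sharpness function: a coarse scan brackets the peak, and a golden-section search refines it. Here a deterministic grid scan stands in for the paper's PSO stage, and a simple analytic function stands in for the MTF-based sharpness metric; both substitutions are mine.

```python
def golden_section_max(f, a, b, tol=1e-5):
    """Golden-section search for the maximum of a unimodal f on [a, b]."""
    invphi = (5 ** 0.5 - 1) / 2                  # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) > f(d):                          # maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                    # maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

def coarse_then_fine(f, lo, hi, n_coarse=25):
    """Coarse grid scan (stand-in for PSO) to bracket the sharpest focus
    position, then golden-section refinement inside that bracket."""
    step = (hi - lo) / n_coarse
    xs = [lo + i * step for i in range(n_coarse + 1)]
    best_i = max(range(len(xs)), key=lambda i: f(xs[i]))
    a = xs[max(best_i - 1, 0)]
    b = xs[min(best_i + 1, n_coarse)]
    return golden_section_max(f, a, b)
```

For a unimodal sharpness curve the grid point with the highest score and its two neighbours are guaranteed to bracket the true peak, which is what makes the local refinement safe.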

33 pages, 23667 KB  
Article
Full-Wave Optical Modeling of Leaf Internal Light Scattering for Early-Stage Fungal Disease Detection
by Da-Young Lee and Dong-Yeop Na
Agriculture 2026, 16(2), 286; https://doi.org/10.3390/agriculture16020286 - 22 Jan 2026
Viewed by 562
Abstract
Modifications in leaf architecture disrupt optical properties and internal light-scattering dynamics. Accurate modeling of leaf-scale light scattering is therefore essential not only for understanding how disease affects the availability of light for chlorophyll absorption, but also for evaluating its potential as an early optical marker for plant disease detection prior to visible symptom development. Conventional ray-tracing and radiative-transfer models rely on high-frequency approximations and thus fail to capture diffraction and coherent multiple-scattering effects when internal leaf structures are comparable to optical wavelengths. To overcome these limitations, we present a GPU-accelerated finite-difference time-domain (FDTD) framework for full-wave simulation of light propagation within plant leaves, using anatomically realistic dicot and monocot leaf cross-section geometries. Microscopic images acquired from publicly available sources were segmented into distinct tissue regions and assigned wavelength-dependent complex refractive indices to construct realistic electromagnetic models. The proposed FDTD framework successfully reproduced characteristic reflectance and transmittance spectra of healthy leaves across the visible and near-infrared (NIR) ranges. Quantitative agreement between the FDTD-computed spectral reflectance and transmittance and those predicted by the reference PROSPECT leaf optical model was evaluated using Lin’s concordance correlation coefficient. Higher concordance was observed for dicot leaves (Cb=0.90) than for monocot leaves (Cb=0.79), indicating a stronger agreement for anatomically complex dicot structures. Furthermore, simulations mimicking an early-stage fungal infection in a dicot leaf—modeled by the geometric introduction of melanized hyphae penetrating the cuticle and upper epidermis—revealed a pronounced reduction in visible green reflectance and a strong suppression of the NIR reflectance plateau. 
These trends are consistent with experimental observations reported in previous studies. Overall, this proof-of-concept study represents the first full-wave FDTD-based optical modeling of internal light scattering in plant leaves. The proposed framework enables direct electromagnetic analysis of pre- and post-penetration light-scattering dynamics during early fungal infection and establishes a foundation for exploiting leaf-scale light scattering as a next-generation, pre-symptomatic diagnostic indicator for plant fungal diseases. Full article
(This article belongs to the Special Issue Exploring Sustainable Strategies That Control Fungal Plant Diseases)

23 pages, 5986 KB  
Article
Modulation and Perturbation in Frequency Domain for SAR Ship Detection
by Mengqin Fu, Wencong Zhang, Xiaochen Quan, Dahu Shi, Luowei Tan, Jia Zhang, Yinghui Xing and Shizhou Zhang
Remote Sens. 2026, 18(2), 338; https://doi.org/10.3390/rs18020338 - 20 Jan 2026
Viewed by 342
Abstract
Synthetic Aperture Radar (SAR) has unique advantages in ship monitoring at sea due to its all-weather imaging capability. However, its unique imaging mechanism presents two major challenges. First, speckle noise in the frequency domain reduces the contrast between the target and the background. Second, side-lobe scattering blurs the ship outline, especially in nearshore complex scenes, and strong scattering characteristics make it difficult to separate the target from the background. These two challenges significantly limit the performance of CNN-based detection models tailored to optical images when they are applied directly to SAR images. To address these challenges, this paper proposes a modulation and perturbation mechanism in the frequency domain based on a lightweight CNN detector. Specifically, the wavelet transform is first used to extract high-frequency features in different directions, and feature expression is dynamically adjusted according to global statistical information to realize selective enhancement of ship edges and detail information. For frequency-domain perturbation, a perturbation mechanism guided by frequency-domain weights is introduced to effectively suppress background interference while preserving key target characteristics, which improves the robustness of the model in complex scenes. Extensive experiments on four widely adopted benchmark datasets, namely LS-SSDD-v1.0, SSDD, SAR-Ship-Dataset, and AIR-SARShip-2.0, demonstrate that our FMP-Net significantly outperforms 18 existing state-of-the-art methods, especially in complex nearshore scenes and sea-surface interference scenes. Full article

13 pages, 2012 KB  
Article
Sub-Diffraction Photoacoustic Microscopy Enabled by a Novel Phase-Shifted Excitation Strategy: A Numerical Study
by George J. Tserevelakis
Sensors 2026, 26(2), 498; https://doi.org/10.3390/s26020498 - 12 Jan 2026
Viewed by 724
Abstract
This numerical simulation study introduces a novel phase-shifted Gaussian and donut beam excitation strategy for frequency-domain photoacoustic microscopy, capable of achieving optical sub-diffraction-limited lateral resolution. We demonstrate that the spatial overlapping of Gaussian and donut beams with π-radian phase-shifted intensity modulation may confine the effective photoacoustic excitation region, substantially reducing the beam-waist-normalized full width at half maximum value from 1.177 to 0.828 units. This effect corresponds to a ~1.42-fold lateral resolution enhancement compared with conventional focused Gaussian beam excitation. Furthermore, the influence of the optical power ratio between the beams was systematically analyzed, revealing an optimal value of 1.16, balancing excitation confinement and side-lobe suppression. Within this framework, the presented simulation results establish a basis for the experimental realization of phase-shifted dual-beam excitation photoacoustic microscopy systems, with a potential impact on high-resolution biomedical imaging of subcellular and microvascular structures using low-cost continuous-wave optical sources such as laser diodes. Full article
(This article belongs to the Special Issue Recent Trends and Advances in Biomedical Optics and Imaging)
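The beam-waist-normalised FWHM of 1.177 quoted above is the analytic value sqrt(2 ln 2) for a focused Gaussian intensity profile. The sketch below reproduces it numerically and shows how subtracting a pi-shifted donut (LG01) contribution narrows the effective excitation profile. The 0.8 peak-intensity ratio is an illustrative choice of mine, not the paper's optimal 1.16 power ratio, so the resulting width differs from the reported 0.828.

```python
import numpy as np

r = np.linspace(0.0, 3.0, 30001)                 # radius in units of waist w
gauss = np.exp(-2 * r**2)                        # focused Gaussian intensity
donut = 2 * np.e * r**2 * np.exp(-2 * r**2)      # LG01 donut, peak-normalised

def fwhm(radius, profile):
    """FWHM of a profile that peaks on-axis: twice the largest radius
    at which the profile still reaches half its maximum."""
    half = profile.max() / 2.0
    return 2.0 * radius[profile >= half].max()

# pi-shifted modulation: the donut contribution subtracts from the Gaussian;
# 0.8 is an illustrative peak-intensity ratio (assumption, not the paper's)
effective = np.clip(gauss - 0.8 * donut, 0.0, None)
```

Because the donut is zero on axis and positive off axis, the subtraction leaves the peak untouched while pulling the wings down, which is exactly the confinement mechanism the study exploits.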

22 pages, 3276 KB  
Article
AFR-CR: An Adaptive Frequency Domain Feature Reconstruction-Based Method for Cloud Removal via SAR-Assisted Remote Sensing Image Fusion
by Xiufang Zhou, Qirui Fang, Xunqiang Gong, Shuting Yang, Tieding Lu, Yuting Wan, Ailong Ma and Yanfei Zhong
Remote Sens. 2026, 18(2), 201; https://doi.org/10.3390/rs18020201 - 8 Jan 2026
Viewed by 613
Abstract
Optical imagery is often contaminated by clouds to varying degrees, which greatly affects the interpretation and analysis of images. Synthetic Aperture Radar (SAR) possesses the characteristic of penetrating clouds and mist, and a common strategy in SAR-assisted cloud removal involves fusing SAR and optical data and leveraging deep learning networks to reconstruct cloud-free optical imagery. However, these methods do not fully consider the characteristics of the frequency domain when processing feature integration, resulting in blurred edges of the generated cloudless optical images. Therefore, an adaptive frequency domain feature reconstruction-based cloud removal method is proposed to solve the problem. The proposed method comprises four key sequential stages. First, shallow features are extracted by fusing optical and SAR images. Second, a Transformer-based encoder captures multi-scale semantic features. Subsequently, the Frequency Domain Decoupling Module (FDDM) is employed. Utilizing a Dynamic Mask Generation mechanism, it explicitly decomposes features into low-frequency structures and high-frequency details, effectively suppressing cloud interference while preserving surface textures. Finally, robust information interaction is facilitated by the Cross-Frequency Reconstruction Module (CFRM) via transposed cross-attention, ensuring precise fusion and reconstruction. Experimental evaluation on the M3R-CR dataset confirms that the proposed approach achieves the best results on all four evaluated metrics, surpassing the performance of the eight other State-of-the-Art methods. It has demonstrated its effectiveness and advanced capabilities in the task of SAR-optical fusion for cloud removal. Full article
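The frequency-domain decoupling step can be illustrated with a fixed circular mask in the Fourier domain, splitting a feature map into low-frequency structure and high-frequency detail. The paper's FDDM generates this mask dynamically, so the sketch below is a simplified stand-in with an assumed cutoff.

```python
import numpy as np

def frequency_decouple(feat, cutoff=0.1):
    """Split a 2-D feature map into low-frequency structure and
    high-frequency detail using a circular mask in the Fourier domain
    (a fixed-mask stand-in for a learned Dynamic Mask Generation step)."""
    F = np.fft.fftshift(np.fft.fft2(feat))
    h, w = feat.shape
    yy, xx = np.mgrid[:h, :w]
    # normalised radial frequency, DC at the centre after fftshift
    rad = np.hypot((yy - h // 2) / h, (xx - w // 2) / w)
    mask = (rad <= cutoff).astype(float)
    low = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(F * (1 - mask))).real
    return low, high
```

Since the two masks partition the spectrum, the branches recombine exactly to the input, and a constant map has no high-frequency component, which is the sanity check for any such decoupling.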

32 pages, 4104 KB  
Review
Toward Active Distributed Fiber-Optic Sensing: A Review of Distributed Fiber-Optic Photoacoustic Non-Destructive Testing Technology
by Yuliang Wu, Xuelei Fu, Jiapu Li, Xin Gui, Jinxing Qiu and Zhengying Li
Sensors 2026, 26(1), 59; https://doi.org/10.3390/s26010059 - 21 Dec 2025
Cited by 1 | Viewed by 1005
Abstract
Distributed fiber-optic photoacoustic non-destructive testing (DFP-NDT) represents a paradigm shift from passive sensing to active probing, fundamentally transforming structural health monitoring through integrated fiber-based ultrasonic generation and detection capabilities. This review systematically examines DFP-NDT’s evolution by following the technology’s natural progression from fundamental principles to practical implementations. Unlike conventional approaches that require external excitation mechanisms, DFP-NDT leverages photoacoustic transducers as integrated active components where fiber-optical devices themselves generate and detect ultrasonic waves. Central to this technology are photoacoustic materials engineered to maximize conversion efficiency, from carbon nanotube-polymer composites achieving 2.74 × 10⁻² conversion efficiency to innovative MXene-based systems that combine high photothermal conversion with structural protection functionality. These materials operate within sophisticated microstructural frameworks, including tilted fiber Bragg gratings, collapsed photonic crystal fibers, and functionalized polymer coatings, that enable precise control over optical-to-thermal-to-acoustic energy conversion. Six primary distributed fiber-optic photoacoustic transducer array (DFOPTA) methodologies have been developed to transform single-point transducers into multiplexed systems, with low-frequency variants significantly extending penetration capability while maintaining high spatial resolution. Recent advances in imaging algorithms place particular emphasis on techniques specifically adapted for distributed photoacoustic data, including innovative computational frameworks that overcome traditional algorithmic limitations through sophisticated statistical modeling.
Documented applications demonstrate DFP-NDT’s exceptional versatility across structural monitoring scenarios, achieving impressive performance metrics including 90 × 54 cm2 coverage areas, sub-millimeter resolution, and robust operation under complex multimodal interference conditions. Despite these advances, key challenges remain in scaling multiplexing density, expanding operational robustness for extreme environments, and developing algorithms specifically optimized for simultaneous multi-source excitation. This review establishes a clear roadmap for future development where enhanced multiplexed architectures, domain-specific material innovations, and purpose-built computational frameworks will transition DFP-NDT from promising laboratory demonstrations to deployable industrial solutions for comprehensive structural integrity assessment. Full article
(This article belongs to the Special Issue FBG and UWFBG Sensing Technology)

21 pages, 1279 KB  
Article
Visible Light Communication vs. Optical Camera Communication: A Security Comparison Using the Risk Matrix Methodology
by Ignacio Marin-Garcia, Victor Guerra, Jose Rabadan and Rafael Perez-Jimenez
Photonics 2025, 12(12), 1201; https://doi.org/10.3390/photonics12121201 - 5 Dec 2025
Cited by 1 | Viewed by 911
Abstract
Optical Wireless Communication (OWC) technologies are emerging as promising complements to radio-frequency systems, offering high bandwidth, spatial confinement, and license-free operation. Within this domain, Visible Light Communication (VLC) and Optical Camera Communication (OCC) represent two distinct paradigms with divergent performance and security profiles. While VLC leverages LED-photodiode links for high-speed data transfer, OCC exploits ubiquitous image sensors to decode modulated light patterns, enabling flexible but lower-rate communication. Despite their potential, both remain vulnerable to various attacks, including eavesdropping, jamming, spoofing, and privacy breaches. This work applies—and extends—the Risk Matrix (RM) methodology to systematically evaluate the security of VLC and OCC across reconnaissance, denial, and exploitation phases. Unlike prior literature, which treats VLC and OCC separately and under incompatible threat definitions, we introduce a unified, domain-specific risk framework that maps empirical channel behavior and attack feasibility into a common set of impact and likelihood indices. A normalized risk rank (NRR) is proposed to enable a direct, quantitative comparison of heterogeneous attacks and technologies under a shared reference scale. By quantifying risks for representative threats—including war driving, Denial of Service (DoS) attacks, preshared key cracking, and Evil Twin attacks—our analysis shows that neither VLC nor OCC is intrinsically more secure; rather, their vulnerabilities are context-dependent, shaped by physical constraints, receiver architectures, and deployment environments. VLC tends to concentrate confidentiality-driven exposure due to optical leakage paths, whereas OCC is more sensitive to availability-related degradation under adversarial load. 
Overall, the main contribution of this work is the first unified, standards-aligned, and empirically grounded risk-assessment framework capable of comparing VLC and OCC on a common security scale. The findings highlight the need for technology-aware security strategies in future OWC deployments and demonstrate how an adapted RM methodology can identify priority areas for mitigation, design, and resource allocation. Full article
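The abstract does not give the paper's actual impact/likelihood scales or the NRR formula, but the kind of scoring it describes can be sketched minimally: a risk-matrix entry reduces to impact times likelihood, normalized against the maximum attainable score so that heterogeneous attacks land on one shared scale. All index values and threat names below are hypothetical illustrations.

```python
# Hypothetical sketch of a risk-matrix score with a normalized risk rank (NRR).
# The paper's actual indices, scales, and normalization may differ.

def risk_rank(impact: int, likelihood: int) -> int:
    """Raw risk-matrix score: impact index times likelihood index."""
    return impact * likelihood

def normalized_risk_rank(impact: int, likelihood: int,
                         max_impact: int = 5, max_likelihood: int = 5) -> float:
    """Map a raw score onto a common [0, 1] scale so attacks on different
    technologies (e.g. VLC vs. OCC) become directly comparable."""
    return risk_rank(impact, likelihood) / (max_impact * max_likelihood)

# Example: two illustrative threats compared on the shared scale.
threats = {
    "VLC eavesdropping (optical leakage)": (4, 3),
    "OCC DoS (adversarial load)": (3, 4),
}
for name, (i, l) in threats.items():
    print(f"{name}: NRR = {normalized_risk_rank(i, l):.2f}")
```

On a 5x5 matrix both example threats score 0.48, illustrating how the normalization makes attacks with different impact/likelihood mixes commensurable.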
17 pages, 36863 KB  
Article
Multi-Feature Fusion for Fiber Optic Vibration Identification Based on Denoising Diffusion Probabilistic Models
by Keju Zhang, Tingshuo Wang, Jianwei Wu, Qin Zheng, Caiyi Chen and Jiaxiang Lin
Sensors 2025, 25(22), 7085; https://doi.org/10.3390/s25227085 - 20 Nov 2025
Abstract
Fiber optic vibration identification has significant applications in engineering fields such as security surveillance and structural health assessment. However, present methods rely on either temporal–frequency-domain features or image features alone, making it difficult to account simultaneously for image attributes and the temporal dependencies of vibration signals. Consequently, the performance of fiber optic vibration recognition remains open to improvement, and it degrades further when the data distribution is uneven. This study therefore integrates residual neural networks, long short-term memory networks, and denoising diffusion probabilistic models into DR-LSTM, a fiber optic vibration recognition method that incorporates both image and temporal features while maintaining high recognition accuracy under balanced and imbalanced data distributions. First, Mel-spectrogram image features and temporal characteristics of fiber optic vibration events are extracted. Next, dedicated neural network models are trained for categories with scarce data to generate similar images for data augmentation. Finally, the resulting composite features are used to train recognition models, thereby improving recognition accuracy. Experiments were performed on datasets of natural-environment and anthropogenic vibrations, under both balanced and imbalanced data distributions. The results show that on the two balanced datasets, the proposed model improves classification accuracy by at least 0.67% and 7.4% over conventional methods; on the two imbalanced datasets, its accuracy exceeds that of conventional models by at least 18.79% and 2.4%. This validates the effectiveness and feasibility of DR-LSTM in enhancing recognition accuracy and addressing imbalanced data distributions. Full article
(This article belongs to the Section Optical Sensors)
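The abstract's first step, turning a 1-D vibration signal into a Mel-spectrogram "image" for the CNN branch, can be sketched in plain NumPy. The parameters below (8 kHz sampling, 256-point FFT, 40 Mel bands) are illustrative assumptions, not the paper's configuration; production code would typically use a tested front-end such as librosa or torchaudio instead.

```python
import numpy as np

def mel_filter_bank(n_mels, n_fft, sr):
    """Triangular Mel filters (illustrative; librosa/torchaudio offer tested versions)."""
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, c):       # rising slope of the triangle
            fb[m - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):       # falling slope
            fb[m - 1, k] = (hi - k) / max(hi - c, 1)
    return fb

def mel_spectrogram(signal, sr=8000, n_fft=256, hop=128, n_mels=40):
    """Log-Mel image from a 1-D signal: frame -> window -> |FFT|^2 -> Mel bands."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft, axis=1)) ** 2
    mel = power @ mel_filter_bank(n_mels, n_fft, sr).T
    return np.log(mel + 1e-10)  # (n_frames, n_mels) "image" for the CNN branch

sig = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)  # 1 s synthetic 440 Hz tone
img = mel_spectrogram(sig)
print(img.shape)  # (61, 40)
```

The resulting 2-D array is what a ResNet-style branch would consume, while the raw frame sequence feeds the LSTM branch for temporal dependencies.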
19 pages, 4480 KB  
Article
FE-WRNet: Frequency-Enhanced Network for Visible Watermark Removal in Document Images
by Zhengli Chen, Yuwei Zhang, Jielu Yan, Xuekai Wei, Weizhi Xian, Qin Mao, Yi Qin and Tong Gao
Appl. Sci. 2025, 15(22), 12216; https://doi.org/10.3390/app152212216 - 18 Nov 2025
Abstract
In video pipelines, document content in recorded lectures, surveillance footage, and broadcast materials is often overlaid with persistent visible watermarks. Such overlays greatly reduce the readability of document images and interfere with downstream tasks such as optical character recognition (OCR). Despite extensive studies, no prior work has concurrently addressed the diverse text layouts and watermark styles commonly encountered in real-world scenarios. To address this gap, we introduce TextLogo, the first benchmark dataset specifically designed for this comprehensive setting. TextLogo comprises 2000 training pairs and 200 test pairs, spanning a wide array of text layouts and 30 distinct watermark styles. Building on this foundation, we propose the frequency-enhanced watermark-removal network (FE-WRNet), a generative network that fuses information from the spatial and wavelet domains. Our Fused Wavelet Convolution Mixer (FWCM) effectively captures both the body and the edge components of watermarks, thereby enhancing removal performance. Training is guided by a hybrid loss function combining pixel, perceptual, and wavelet-domain objectives to preserve fine details and edge structures. Moreover, while this work focuses on single-image document watermark removal, the proposed spatial–wavelet fusion and high-frequency-aware loss are directly relevant to video processing tasks such as frame-wise watermark removal and temporal restoration, because watermarks in video often persist across frames and require fidelity-preserving, temporally consistent restoration. Extensive experiments on TextLogo demonstrate that FE-WRNet outperforms the strongest baseline, reducing perceptual error by 10.6%. The proposed model also generalizes effectively to natural-image watermark datasets. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
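The wavelet-domain loss idea in the abstract can be illustrated with a one-level 2-D Haar transform: the high-frequency sub-bands isolate exactly the edge structure (watermark outlines, text strokes) that a plain spatial loss tends to blur. This is a minimal sketch under assumed definitions, not FE-WRNet's actual loss; the function names are hypothetical, and real code would likely use a library such as PyWavelets.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: split an image (even H and W) into a low-pass
    band (LL) and three high-frequency bands (LH, HL, HH) holding edge detail."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # local averages (coarse content)
    lh = (a + b - c - d) / 4.0   # horizontal edges
    hl = (a - b + c - d) / 4.0   # vertical edges
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def wavelet_l1_loss(pred, target):
    """Hypothetical wavelet-domain term: L1 distance summed over the four
    sub-bands, weighting the edge structure a pixel-space L1 under-penalizes."""
    return sum(np.mean(np.abs(p - t))
               for p, t in zip(haar_dwt2(pred), haar_dwt2(target)))

pred = np.random.default_rng(0).random((64, 64))
print(wavelet_l1_loss(pred, pred))  # 0.0 for identical images
```

In a training loop this term would be added to the pixel and perceptual objectives the abstract mentions, with a weight chosen on a validation set.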