Search Results (435)

Search Parameters:
Keywords = multichannel image

40 pages, 48075 KB  
Article
Directional Lighting-Based Deep Learning Models for Crack and Spalling Classification
by Sanjeetha Pennada, Jack McAlorum, Marcus Perry, Hamish Dow and Gordon Dobie
J. Imaging 2025, 11(9), 288; https://doi.org/10.3390/jimaging11090288 - 25 Aug 2025
Viewed by 396
Abstract
External lighting is essential for autonomous inspections of concrete structures in low-light environments. However, previous studies have primarily relied on uniformly diffused lighting to illuminate images and faced challenges in detecting complex crack patterns. This paper proposes two novel algorithms that use directional lighting to classify concrete defects. The first method, named fused neural network, uses the maximum intensity pixel-level image fusion technique and selects the maximum intensity pixel values from all directional images for each pixel to generate a fused image. The second proposed method, named multi-channel neural network, generates a five-channel image, with each channel representing the grayscale version of images captured in the Right (R), Down (D), Left (L), Up (U), and Diffused (A) directions, respectively. The proposed multi-channel neural network model achieved the best performance, with accuracy, precision, recall, and F1 score of 96.6%, 96.3%, 97%, and 96.6%, respectively. It also outperformed the FusedNet and other models found in the literature, with no significant change in evaluation time. The results from this work have the potential to improve concrete crack classification in environments where external illumination is required. Future research focuses on extending the concepts of multi-channel and image fusion to white-box techniques. Full article
(This article belongs to the Section AI in Imaging)
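The two input representations named in this abstract (maximum-intensity pixel-level fusion and a five-channel directional stack) can be illustrated with a short sketch. This is a minimal NumPy example assuming five pre-aligned grayscale captures under Right, Down, Left, Up, and Diffused lighting; array sizes are placeholders, not the authors' implementation.

```python
import numpy as np

def fuse_max_intensity(images):
    """Pixel-level fusion: keep the brightest value seen across all
    directional images at each pixel (input to the fused network)."""
    stack = np.stack(images, axis=0)          # (5, H, W)
    return stack.max(axis=0)                  # (H, W)

def make_five_channel(images):
    """Stack the R, D, L, U and diffused grayscale captures as separate
    channels (input to the multi-channel network)."""
    return np.stack(images, axis=-1)          # (H, W, 5)

# Usage with random stand-ins for the five directional captures
rng = np.random.default_rng(0)
directional = [rng.integers(0, 256, (224, 224), dtype=np.uint8) for _ in range(5)]
fused = fuse_max_intensity(directional)       # (224, 224) fused image
multi = make_five_channel(directional)        # (224, 224, 5) multi-channel input
```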

22 pages, 9740 KB  
Article
A Novel Error Correction Method for Airborne HRWS SAR Based on Azimuth-Variant Attitude and Range-Variant Doppler Domain Pattern
by Yihao Xu, Fubo Zhang, Longyong Chen, Yangliang Wan and Tao Jiang
Remote Sens. 2025, 17(16), 2831; https://doi.org/10.3390/rs17162831 - 14 Aug 2025
Viewed by 387
Abstract
In high-resolution and wide-swath (HRWS) synthetic aperture radar (SAR) imaging, the azimuth multi-channel technique effectively suppresses azimuth ambiguity, serving as a reliable approach for achieving wide-swath imaging. However, due to mechanical vibrations of the platform and airflow instabilities, airborne SAR may experience errors in attitude and flight path during operation. Furthermore, errors also exist in the antenna patterns, frequency stability, and phase noise among the azimuth multi-channels. The presence of these errors can cause azimuth multi-channel reconstruction failure, resulting in azimuth ambiguity and significantly degrading the quality of HRWS images. This article presents a novel error correction method for airborne HRWS SAR based on azimuth-variant attitude and range-variant Doppler domain pattern, which simultaneously considers the effects of various errors, including channel attitude errors and Doppler domain antenna pattern errors, on azimuth reconstruction. Attitude errors are the primary cause of azimuth-variant errors between channels. This article uses the vector method and attitude transformation matrix to calculate and compensate for the attitude errors of azimuth multi-channels, and employs the two-dimensional frequency-domain echo interferometry method to calculate the fixed delay errors and fixed phase errors. To better achieve channel error compensation, this scheme also considers the estimation and compensation of Doppler domain antenna pattern errors in wide-swath scenes. Finally, the effectiveness of the proposed scheme is confirmed through simulations and processing of airborne real data. Full article

19 pages, 7157 KB  
Article
Fault Diagnosis Method of Micro-Motor Based on Jump Plus AM-FM Mode Decomposition and Symmetrized Dot Pattern
by Zhengyang Gu, Yufang Bai, Junsong Yu and Junli Chen
Actuators 2025, 14(8), 405; https://doi.org/10.3390/act14080405 - 13 Aug 2025
Viewed by 341
Abstract
Micro-motors are essential for power drive systems, and efficient fault diagnosis is crucial to reduce safety risks and economic losses caused by failures. However, the fault signals from micro-motors typically exhibit weak and unclear characteristics. To address this challenge, this paper proposes a novel fault diagnosis method that integrates jump plus AM-FM mode decomposition (JMD), symmetrized dot pattern (SDP) visualization, and an improved convolutional neural network (ICNN). Firstly, we employed the jump plus AM-FM mode decomposition technique to decompose the mixed fault signals, addressing the problem of mode mixing in traditional decomposition methods. Then, the intrinsic mode functions (IMFs) decomposed by JMD serve as the multi-channel inputs for symmetrized dot pattern, constructing a two-dimensional polar coordinate petal image. This process achieves both signal reconstruction and visual enhancement of fault features simultaneously. Finally, this paper designed an ICNN method with LeakyReLU activation function to address the vanishing gradient problem and enhance classification accuracy and training efficiency for fault diagnosis. Experimental results indicate that the proposed JMD-SDP-ICNN method outperforms traditional methods with a significantly superior fault classification accuracy of up to 99.2381%. It can offer a potential solution for the monitoring of electromechanical structures under complex conditions. Full article
(This article belongs to the Section Actuators for Manufacturing Systems)
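The symmetrized dot pattern (SDP) step described above maps a 1-D signal to mirrored polar "petals". The sketch below shows the standard SDP mapping under assumed lag, gain, and symmetry-axis parameters; it is illustrative only and not the paper's exact configuration.

```python
import numpy as np

def sdp_points(x, n_mirrors=6, lag=1, gain=30.0):
    """Map a 1-D signal to symmetrized-dot-pattern polar points:
    the radius scales the current sample, the angle offset scales the
    lagged sample, mirrored about each symmetry axis."""
    x = np.asarray(x, dtype=float)
    xmin, xmax = x.min(), x.max()
    norm = (x[:-lag] - xmin) / (xmax - xmin)        # radius term
    norm_lag = (x[lag:] - xmin) / (xmax - xmin)     # angle term
    pts = []
    for k in range(n_mirrors):
        theta = 360.0 * k / n_mirrors
        pts.append((norm, theta + gain * norm_lag))   # counter-clockwise petal
        pts.append((norm, theta - gain * norm_lag))   # mirrored petal
    return pts  # list of (radius, angle-in-degrees) array pairs

# Example: SDP of a noisy sine, one petal pair per symmetry axis
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
petals = sdp_points(signal)
```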

17 pages, 3354 KB  
Article
Quantitative Analysis of Adulteration in Anoectochilus roxburghii Powder Using Hyperspectral Imaging and Multi-Channel Convolutional Neural Network
by Ziyuan Liu, Tingsong Zhang, Haoyuan Ding, Zhangting Wang, Hongzhen Wang, Lu Zhou, Yujia Dai and Yiqing Xu
Agronomy 2025, 15(8), 1894; https://doi.org/10.3390/agronomy15081894 - 6 Aug 2025
Viewed by 373
Abstract
Adulteration detection in medicinal plant powders remains a critical challenge in quality control. In this study, we propose a hyperspectral imaging (HSI)-based method combined with deep learning models to quantitatively analyze adulteration levels in Anoectochilus roxburghii powder. After preprocessing the spectral data using raw, first-order, and second-order Savitzky–Golay derivatives, we systematically evaluated the performance of traditional machine learning models (Random Forest, Support Vector Regression, Partial Least Squares Regression) and deep learning architectures. While traditional models achieved reasonable accuracy (R2 up to 0.885), their performance was limited by feature extraction and generalization ability. A single-channel convolutional neural network (CNN) utilizing individual spectral representations improved performance marginally (maximum R2 = 0.882), but still failed to fully capture the multi-scale spectral features. To overcome this, we developed a multi-channel CNN that simultaneously integrates raw, SG-1, and SG-2 spectra, effectively leveraging complementary spectral information. This architecture achieved a significantly higher prediction accuracy (R2 = 0.964, MSE = 0.005), demonstrating superior robustness and generalization. The findings highlight the potential of multi-channel deep learning models in enhancing quantitative adulteration detection and ensuring the authenticity of herbal products. Full article
(This article belongs to the Section Precision and Digital Agriculture)
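The multi-channel input described above stacks the raw spectrum with its first- and second-order Savitzky–Golay derivatives. A minimal sketch using SciPy, with illustrative window and polynomial-order settings rather than the paper's values:

```python
import numpy as np
from scipy.signal import savgol_filter

def build_multichannel_spectra(spectra, window=11, polyorder=3):
    """Stack raw, first-derivative (SG-1) and second-derivative (SG-2)
    spectra as three channels for a multi-channel CNN input."""
    raw = np.asarray(spectra, dtype=float)                      # (N, bands)
    sg1 = savgol_filter(raw, window, polyorder, deriv=1, axis=-1)
    sg2 = savgol_filter(raw, window, polyorder, deriv=2, axis=-1)
    return np.stack([raw, sg1, sg2], axis=1)                    # (N, 3, bands)

# Example with synthetic hyperspectral pixels (200 samples x 256 bands)
X = np.random.rand(200, 256)
X_mc = build_multichannel_spectra(X)    # ready for a Conv1d(in_channels=3, ...) model
```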

14 pages, 1971 KB  
Article
High-Density Arrayed Spectrometer with Microlens Array Grating for Multi-Channel Parallel Spectral Analysis
by Fangyuan Zhao, Zhigang Feng and Shuonan Shan
Sensors 2025, 25(15), 4833; https://doi.org/10.3390/s25154833 - 6 Aug 2025
Viewed by 477
Abstract
To enable multi-channel parallel spectral analysis in array-based devices such as micro-light-emitting diodes (Micro-LEDs) and line-scan spectral confocal systems, the development of compact array spectrometers has become increasingly important. In this work, a novel spectrometer architecture based on a microlens array grating (MLAG) is proposed, which addresses the major limitations of conventional spectrometers, including limited parallel detection capability, bulky structures, and insufficient spatial resolution. By integrating dispersion and focusing within a monolithic device, the system enables simultaneous acquisition across more than 2000 parallel channels within a 10 mm × 10 mm unit consisting of an f = 4 mm microlens and a 600 lines/mm blazed grating. Optimized microlens and aperture alignment allows for flexible control of the divergence angle of the incident light, and the system theoretically achieves nanometer-scale spectral resolution across a 380–780 nm wavelength range, with inter-channel measurement deviation below 1.25%. Experimental results demonstrate that this spectrometer system can theoretically support up to 2070 independently addressable subunits. At a wavelength of 638 nm, the coefficient of variation (CV) of spot spacing among array elements is as low as 1.11%, indicating high uniformity. The spectral repeatability precision is better than 1.0 nm, and after image enhancement, the standard deviation of the diffracted light shift is reduced to just 0.26 nm. The practical spectral resolution achieved is as fine as 3.0 nm. This platform supports wafer-level spectral screening of high-density Micro-LEDs, offering a practical hardware solution for high-precision industrial inline sorting, such as Micro-LED defect inspection. Full article

30 pages, 15717 KB  
Article
Channel Amplitude and Phase Error Estimation of Fully Polarimetric Airborne SAR with 0.1 m Resolution
by Jianmin Hu, Yanfei Wang, Jinting Xie, Guangyou Fang, Huanjun Chen, Yan Shen, Zhenyu Yang and Xinwen Zhang
Remote Sens. 2025, 17(15), 2699; https://doi.org/10.3390/rs17152699 - 4 Aug 2025
Viewed by 408
Abstract
In order to achieve 0.1 m resolution and fully polarimetric observation capabilities for airborne SAR systems, the adoption of stepped-frequency modulation waveform combined with the polarization time-division transmit/receive (T/R) technique proves to be an effective technical approach. Considering the issue of range resolution degradation and paired echoes caused by multichannel amplitude–phase mismatch in fully polarimetric airborne SAR with 0.1 m resolution, an amplitude–phase error estimation algorithm based on echo data is proposed in this paper. Firstly, the subband amplitude spectrum correction curve is obtained by the statistical average of the subband amplitude spectrum. Secondly, the paired-echo broadening function is obtained by selecting high-quality sample points after single-band imaging and the nonlinear phase error within the subbands is estimated via Sinusoidal Frequency Modulation Fourier Transform (SMFT). Thirdly, based on the minimum entropy criterion of the synthesized compressed pulse image, residual linear phase errors between subbands are quickly acquired. Finally, two-dimensional cross-correlation of the image slice is utilized to estimate the positional deviation between polarization channels. This method only requires high-quality data samples from the echo data, then rapidly estimates both intra-band and inter-band amplitude/phase errors by using SMFT and the minimum entropy criterion, respectively, with the characteristics of low computational complexity and fast convergence speed. The effectiveness of this method is verified by the imaging results of the experimental data. Full article

17 pages, 3439 KB  
Article
Delay Prediction Through Multi-Channel Traffic and Weather Scene Image: A Deep Learning-Based Method
by Ligang Yuan, Linghua Kong and Haiyan Chen
Appl. Sci. 2025, 15(15), 8604; https://doi.org/10.3390/app15158604 - 3 Aug 2025
Viewed by 367
Abstract
Accurate prediction of airport delays under convective weather conditions is essential for effective traffic coordination and improving overall airport efficiency. Traditional methods mainly rely on numerical weather and traffic indicators, but they often fail to capture the spatial distribution of traffic flows within the terminal area. To address this limitation, we propose a novel image-based representation named Multi-Channel Traffic and Weather Scene Image (MTWSI), which maps both meteorological and traffic information onto a two-dimensional airspace grid, thereby preserving spatial relationships. Based on the MTWSI, we develop a delay prediction model named ADLCNN. This model first uses a convolutional neural network to extract deep spatial features from the scene images and then classifies each sample into a delay level. Using real operational data from Guangzhou Baiyun Airport, this paper shows that ADLCNN achieves significantly higher prediction accuracy compared to traditional machine learning methods. The results confirm that MTWSI provides a more accurate representation of real traffic conditions under convective weather. Full article
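The MTWSI idea of rasterizing weather and traffic onto a shared airspace grid can be sketched as follows. Channel layout, grid resolution, and normalization are assumptions for illustration; the function and field names are hypothetical, not the paper's data pipeline.

```python
import numpy as np

def build_mtwsi(radar_echo, flight_points, grid_shape=(64, 64)):
    """Rasterize weather and traffic onto a common 2-D airspace grid:
    channel 0 = convective-weather intensity, channel 1 = aircraft count
    per cell (normalized). Sizes and scaling are illustrative only."""
    h, w = grid_shape
    scene = np.zeros((2, h, w), dtype=float)
    scene[0] = radar_echo                     # already-gridded reflectivity, (h, w)
    for lat_idx, lon_idx in flight_points:    # pre-binned aircraft positions
        scene[1, lat_idx, lon_idx] += 1.0
    scene[1] /= max(scene[1].max(), 1.0)      # normalize traffic density
    return scene                              # (channels, h, w) CNN input

echo = np.random.rand(64, 64)
flights = [(10, 12), (10, 13), (30, 40)]
mtwsi = build_mtwsi(echo, flights)
```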

21 pages, 4688 KB  
Article
Nondestructive Inspection of Steel Cables Based on YOLOv9 with Magnetic Flux Leakage Images
by Min Zhao, Ning Ding, Zehao Fang, Bingchun Jiang, Jiaming Zhong and Fuqin Deng
J. Sens. Actuator Netw. 2025, 14(4), 80; https://doi.org/10.3390/jsan14040080 - 1 Aug 2025
Viewed by 635
Abstract
The magnetic flux leakage (MFL) method is widely acknowledged as a highly effective non-destructive evaluation (NDE) technique for detecting local damage in ferromagnetic structures such as steel wire ropes. In this study, a multi-channel MFL sensor module was developed, incorporating a purpose-designed Hall sensor array and magnetic yokes specifically shaped for steel cables. To validate the proposed damage detection method, artificial damages of varying degrees were inflicted on wire rope specimens through experimental testing. The MFL sensor module facilitated the scanning of the damaged specimens and measurement of the corresponding MFL signals. In order to improve the signal-to-noise ratio, a comprehensive set of signal processing steps, including channel equalization and normalization, was implemented. Subsequently, the detected MFL distribution surrounding wire rope defects was transformed into MFL images. These images were then analyzed and processed utilizing an object detection method, specifically employing the YOLOv9 network, which enables accurate identification and localization of defects. Furthermore, a quantitative defect detection method based on image size was introduced, which is effective for quantifying defects using the dimensions of the anchor frame. The experimental results demonstrated the effectiveness of the proposed approach in detecting and quantifying defects in steel cables, which combines deep learning-based analysis of MFL images with the non-destructive inspection of steel cables. Full article
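The channel equalization, normalization, and image-conversion steps mentioned above might look like the sketch below. It assumes a (channels x scan positions) array of Hall-sensor readings and uses simple per-channel standardization; it is a plausible preprocessing chain, not the authors' exact signal processing.

```python
import numpy as np

def mfl_to_image(raw, out_dtype=np.uint8):
    """Equalize and normalize a multi-channel MFL scan, then map it to a
    grayscale image (channels x scan positions) for the object detector."""
    x = np.asarray(raw, dtype=float)                  # (channels, samples)
    x = x - x.mean(axis=1, keepdims=True)             # channel equalization:
    x = x / (x.std(axis=1, keepdims=True) + 1e-9)     # remove per-sensor offset/gain
    x = (x - x.min()) / (x.max() - x.min() + 1e-9)    # global 0-1 normalization
    return (255 * x).astype(out_dtype)                # 8-bit image, e.g. for YOLO input

scan = np.random.randn(16, 2048)     # 16 Hall channels, 2048 scan positions
img = mfl_to_image(scan)
```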

20 pages, 4093 KB  
Article
CNN Input Data Configuration Method for Fault Diagnosis of Three-Phase Induction Motors Based on D-Axis Current in D-Q Synchronous Reference Frame
by Yeong-Jin Goh
Appl. Sci. 2025, 15(15), 8380; https://doi.org/10.3390/app15158380 - 28 Jul 2025
Viewed by 296
Abstract
This study proposes a novel approach to input data configuration for the fault diagnosis of three-phase induction motors. Convolutional neural network (CNN)-based diagnostic methods often employ three-phase current signals and apply various image transformation techniques, such as RGB mapping, wavelet transforms, and short-time Fourier transform (STFT), to construct multi-channel input data. While such approaches outperform 1D-CNNs or grayscale-based 2D-CNNs due to their rich informational content, they require multi-channel data and involve increased computational complexity. Accordingly, this study transforms the three-phase currents into the D-Q synchronous reference frame and utilizes the D-axis current (Id) for image transformation. The Id is used to generate input data using the same image processing techniques, allowing for a direct performance comparison under identical CNN architectures. Experiments were conducted under consistent conditions using both three-phase-based and Id-based methods, each applied to RGB mapping, DWT, and STFT. The classification accuracy was evaluated using a ResNet50-based CNN. Results showed that the Id-STFT achieved the highest performance, with a validation accuracy of 99.6% and a test accuracy of 99.0%. While the RGB representation of three-phase signals has traditionally been favored for its information richness and diagnostic performance, this study demonstrates that a high-performance CNN-based fault diagnosis is achievable even with grayscale representations of a single current. Full article
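The D-Q projection referred to above is the standard (amplitude-invariant) Park transform. A minimal sketch, assuming the rotor angle theta is known; for a balanced set of phase currents, Id settles to the phase-current amplitude and Iq to zero.

```python
import numpy as np

def dq_transform(ia, ib, ic, theta):
    """Amplitude-invariant Park transform: project the three phase
    currents onto the synchronously rotating d-q reference frame."""
    i_d = (2.0 / 3.0) * (ia * np.cos(theta)
                         + ib * np.cos(theta - 2 * np.pi / 3)
                         + ic * np.cos(theta + 2 * np.pi / 3))
    i_q = -(2.0 / 3.0) * (ia * np.sin(theta)
                          + ib * np.sin(theta - 2 * np.pi / 3)
                          + ic * np.sin(theta + 2 * np.pi / 3))
    return i_d, i_q

# Balanced 50 Hz example: i_d is approximately 10, i_q approximately 0
t = np.linspace(0, 0.1, 5000)
theta = 2 * np.pi * 50 * t
ia = 10 * np.cos(theta)
ib = 10 * np.cos(theta - 2 * np.pi / 3)
ic = 10 * np.cos(theta + 2 * np.pi / 3)
i_d, i_q = dq_transform(ia, ib, ic, theta)
```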

18 pages, 5806 KB  
Article
Optical Flow Magnification and Cosine Similarity Feature Fusion Network for Micro-Expression Recognition
by Heyou Chang, Jiazheng Yang, Kai Huang, Wei Xu, Jian Zhang and Hao Zheng
Mathematics 2025, 13(15), 2330; https://doi.org/10.3390/math13152330 - 22 Jul 2025
Viewed by 364
Abstract
Recent advances in deep learning have significantly advanced micro-expression recognition, yet most existing methods process the entire facial region holistically, struggling to capture subtle variations in facial action units, which limits recognition performance. To address this challenge, we propose the Optical Flow Magnification and Cosine Similarity Feature Fusion Network (MCNet). MCNet introduces a multi-facial action optical flow estimation module that integrates global motion-amplified optical flow with localized optical flow from the eye and mouth–nose regions, enabling precise capture of facial expression nuances. Additionally, an enhanced MobileNetV3-based feature extraction module, incorporating Kolmogorov–Arnold networks and convolutional attention mechanisms, effectively captures both global and local features from optical flow images. A novel multi-channel feature fusion module leverages cosine similarity between Query and Key token sequences to optimize feature integration. Extensive evaluations on four public datasets—CASME II, SAMM, SMIC-HS, and MMEW—demonstrate MCNet’s superior performance, achieving state-of-the-art results with 92.88% UF1 and 86.30% UAR on the composite dataset, surpassing the best prior method by 1.77% in UF1 and 6.0% in UAR. Full article
(This article belongs to the Special Issue Representation Learning for Computer Vision and Pattern Recognition)
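The cosine-similarity fusion between Query and Key token sequences mentioned above can be sketched with a small attention-style module. This is a hedged illustration of the idea in PyTorch, not MCNet's exact fusion block; tensor shapes are placeholders.

```python
import torch
import torch.nn.functional as F

def cosine_similarity_fusion(query, key, value):
    """Fuse token sequences by weighting Value tokens with the
    softmax-normalized cosine similarity between Query and Key tokens."""
    q = F.normalize(query, dim=-1)            # (B, Nq, C) unit-norm tokens
    k = F.normalize(key, dim=-1)              # (B, Nk, C)
    sim = torch.bmm(q, k.transpose(1, 2))     # cosine similarity, (B, Nq, Nk)
    weights = sim.softmax(dim=-1)
    return torch.bmm(weights, value)          # (B, Nq, C) fused features

# Example: fuse global optical-flow tokens with eye/mouth-region tokens
B, Nq, Nk, C = 2, 49, 49, 128
fused = cosine_similarity_fusion(torch.randn(B, Nq, C),
                                 torch.randn(B, Nk, C),
                                 torch.randn(B, Nk, C))
```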

15 pages, 2900 KB  
Article
A Three-Dimensional Convolutional Neural Network for Dark Web Traffic Classification Based on Multi-Channel Image Deep Learning
by Junwei Li, Zhisong Pan and Kaolin Jiang
Computers 2025, 14(8), 295; https://doi.org/10.3390/computers14080295 - 22 Jul 2025
Viewed by 454
Abstract
Dark web traffic classification is an important research direction in cybersecurity; however, traditional classification methods have many limitations. Although deep learning architectures like CNN and LSTM, as well as multi-structural fusion frameworks, have demonstrated partial success, they remain constrained by shallow feature representation, localized decision boundaries, and poor generalization capacity. To improve the prediction accuracy and classification precision of dark web traffic, we propose a novel dark web traffic classification model integrating multi-channel image deep learning and a three-dimensional convolutional neural network (3D-CNN). The proposed framework leverages spatial–temporal feature fusion to enhance discriminative capability, while the 3D-CNN structure effectively captures complex traffic patterns across multiple dimensions. The experimental results show that compared to common 2D-CNN and 1D-CNN classification models, the dark web traffic classification method based on multi-channel image visual features and 3D-CNN can improve classification by 5.1% and 3.3% while maintaining a smaller total number of parameters and feature recognition parameters, effectively reducing the computational complexity of the model. In comparative experiments, 3D-CNN validates the model’s superiority in accuracy and computational efficiency compared to state-of-the-art methods, offering a promising solution for dark web traffic monitoring and security applications. Full article
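One plausible way to realize the multi-channel traffic image plus 3D-CNN pipeline described above is sketched below. The byte-to-image packing, image size, channel count, and network depth are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn

def flow_to_multichannel_image(payload: bytes, channels=3, side=28):
    """Pack a traffic flow's first channels*side*side bytes into a
    (channels, side, side) image; zero-pad if the flow is shorter."""
    need = channels * side * side
    buf = np.frombuffer(payload[:need], dtype=np.uint8)
    buf = np.pad(buf, (0, need - buf.size))
    return buf.reshape(channels, side, side).astype(np.float32) / 255.0

class Tiny3DCNN(nn.Module):
    """Minimal 3D-CNN: treats the image-channel axis as the depth dimension."""
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):               # x: (batch, 1, channels, side, side)
        return self.head(self.features(x).flatten(1))

img = flow_to_multichannel_image(b"\x16\x03\x01" * 900)
x = torch.from_numpy(img).unsqueeze(0).unsqueeze(0)   # (1, 1, 3, 28, 28)
logits = Tiny3DCNN()(x)
```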

13 pages, 3516 KB  
Article
Research on Fault Diagnosis of High-Voltage Circuit Breakers Using Gramian-Angular-Field-Based Dual-Channel Convolutional Neural Network
by Mingkun Yang, Liangliang Wei, Pengfeng Qiu, Guangfu Hu, Xingfu Liu, Xiaohui He, Zhaoyu Peng, Fangrong Zhou, Yun Zhang, Xiangyu Tan and Xuetong Zhao
Energies 2025, 18(14), 3837; https://doi.org/10.3390/en18143837 - 18 Jul 2025
Viewed by 293
Abstract
The challenge of accurately diagnosing mechanical failures in high-voltage circuit breakers is exacerbated by the non-stationary characteristics of vibration signals. This study proposes a Dual-Channel Convolutional Neural Network (DC-CNN) framework based on the Gramian Angular Field (GAF) transformation, which effectively captures both global and local information about faults. Specifically, vibration signals from circuit breaker sensors are firstly transformed into Gramian Angular Summation Field (GASF) and Gramian Angular Difference Field (GADF) images. These images are then combined into multi-channel inputs for parallel CNN modules to extract and fuse complementary features. Experimental validation under six operational conditions of a 220 kV high-voltage circuit breaker demonstrates that the GAF-DC-CNN method achieves a fault diagnosis accuracy of 99.02%, confirming the model’s effectiveness. This work provides substantial support for high-precision and reliable fault diagnosis in high-voltage circuit breakers within power systems. Full article
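The GASF/GADF transformation named above follows the standard Gramian Angular Field construction: rescale the signal to [-1, 1], take the angular encoding phi = arccos(x), and form cos(phi_i + phi_j) and sin(phi_i - phi_j). A minimal NumPy sketch, with the two fields stacked as a two-channel input for illustration:

```python
import numpy as np

def gramian_angular_fields(x):
    """Compute GASF and GADF images from a 1-D vibration signal."""
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1.0    # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))               # angular encoding
    gasf = np.cos(phi[:, None] + phi[None, :])           # summation field
    gadf = np.sin(phi[:, None] - phi[None, :])           # difference field
    return gasf, gadf

# Stack both fields as channels, e.g. as input to the dual-channel CNN
sig = np.random.randn(256)
gasf, gadf = gramian_angular_fields(sig)
dual_channel = np.stack([gasf, gadf])                    # (2, 256, 256)
```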

30 pages, 8543 KB  
Article
Multi-Channel Coupled Variational Bayesian Framework with Structured Sparse Priors for High-Resolution Imaging of Complex Maneuvering Targets
by Xin Wang, Jing Yang and Yong Luo
Remote Sens. 2025, 17(14), 2430; https://doi.org/10.3390/rs17142430 - 13 Jul 2025
Viewed by 349
Abstract
High-resolution ISAR (Inverse Synthetic Aperture Radar) imaging plays a crucial role in dynamic target monitoring for aerospace, maritime, and ground surveillance. Among various remote sensing techniques, ISAR is distinguished by its ability to produce high-resolution images of non-cooperative maneuvering targets. To meet the increasing demands for resolution and robustness, modern ISAR systems are evolving toward wideband and multi-channel architectures. In particular, multi-channel configurations based on large-scale receiving arrays have gained significant attention. In such systems, each receiving element functions as an independent spatial channel, acquiring observations from distinct perspectives. These multi-angle measurements enrich the available echo information and enhance the robustness of target imaging. However, this setup also brings significant challenges, including inter-channel coupling, high-dimensional joint signal modeling, and non-Gaussian, mixed-mode interference, which often degrade image quality and hinder reconstruction performance. To address these issues, this paper proposes a Hybrid Variational Bayesian Multi-Interference (HVB-MI) imaging algorithm based on a hierarchical Bayesian framework. The method jointly models temporal correlations and inter-channel structure, introducing a coupled processing strategy to reduce dimensionality and computational complexity. To handle complex noise environments, a Gaussian mixture model (GMM) is used to represent nonstationary mixed noise. A variational Bayesian inference (VBI) approach is developed for efficient parameter estimation and robust image recovery. Experimental results on both simulated and real-measured data demonstrate that the proposed method achieves significantly improved image resolution and noise robustness compared with existing approaches, particularly under conditions of sparse sampling or strong interference. Quantitative evaluation further shows that under the continuous sparse mode with a 75% sampling rate, the proposed method achieves a significantly higher Laplacian Variance (LV), outperforming PCSBL and CPESBL by 61.7% and 28.9%, respectively and thereby demonstrating its superior ability to preserve fine image details. Full article

24 pages, 4465 KB  
Article
A Deep Learning-Based Echo Extrapolation Method by Fusing Radar Mosaic and RMAPS-NOW Data
by Shanhao Wang, Zhiqun Hu, Fuzeng Wang, Ruiting Liu, Lirong Wang and Jiexin Chen
Remote Sens. 2025, 17(14), 2356; https://doi.org/10.3390/rs17142356 - 9 Jul 2025
Viewed by 516
Abstract
Radar echo extrapolation is a critical forecasting tool in the field of meteorology, playing an especially vital role in nowcasting and weather modification operations. In recent years, spatiotemporal sequence prediction models based on deep learning have garnered significant attention and achieved notable progress in radar echo extrapolation. However, most of these extrapolation network architectures are built upon convolutional neural networks, using radar echo images as input. Typically, radar echo intensity values ranging from −5 to 70 dBZ with a resolution of 5 dBZ are converted into 0–255 grayscale images from pseudo-color representations, which inevitably results in the loss of important echo details. Furthermore, as the extrapolation time increases, the smoothing effect inherent to convolution operations leads to increasingly blurred predictions. To address the algorithmic limitations of deep learning-based echo extrapolation models, this study introduces three major improvements: (1) A Deep Convolutional Generative Adversarial Network (DCGAN) is integrated into the ConvLSTM-based extrapolation model to construct a DCGAN-enhanced architecture, significantly improving the quality of radar echo extrapolation; (2) Considering that the evolution of radar echoes is closely related to the surrounding meteorological environment, the study incorporates specific physical variable products from the initial zero-hour field of RMAPS-NOW (the Rapid-update Multiscale Analysis and Prediction System—NOWcasting subsystem), developed by the Institute of Urban Meteorology, China. These variables are encoded jointly with high-resolution (0.5 dB) radar mosaic data to form multiple radar cells as input. A multi-channel radar echo extrapolation network architecture (MR-DCGAN) is then designed based on the DCGAN framework; (3) Since radar echo decay becomes more prominent over longer extrapolation horizons, this study departs from previous approaches that use a single model to extrapolate 120 min. Instead, it customizes time-specific loss functions for spatiotemporal attenuation correction and independently trains 20 separate models to achieve the full 120 min extrapolation. The dataset consists of radar composite reflectivity mosaics over North China within the range of 116.10–117.50°E and 37.77–38.77°N, collected from June to September during 2018–2022. A total of 39,000 data samples were matched with the initial zero-hour fields from RMAPS-NOW, with 80% (31,200 samples) used for training and 20% (7800 samples) for testing. Based on the ConvLSTM and the proposed MR-DCGAN architecture, 20 extrapolation models were trained using four different input encoding strategies. The models were evaluated using the Critical Success Index (CSI), Probability of Detection (POD), and False Alarm Ratio (FAR). Compared to the baseline ConvLSTM-based extrapolation model without physical variables, the models trained with the MR-DCGAN architecture achieved, on average, 18.59%, 8.76%, and 11.28% higher CSI values, 19.46%, 19.21%, and 19.18% higher POD values, and 19.85%, 11.48%, and 9.88% lower FAR values under the 20 dBZ, 30 dBZ, and 35 dBZ reflectivity thresholds, respectively. Among all tested configurations, the model that incorporated three physical variables—relative humidity (rh), u-wind, and v-wind—demonstrated the best overall performance across various thresholds, with CSI and POD values improving by an average of 16.75% and 24.75%, respectively, and FAR reduced by 15.36%. 
Moreover, the SSIM of the MR-DCGAN models demonstrates a more gradual decline and maintains higher overall values, indicating superior capability in preserving echo structural features. Meanwhile, the comparative experiments demonstrate that the MR-DCGAN (u, v + rh) model outperforms the MR-ConvLSTM (u, v + rh) model in terms of evaluation metrics. In summary, the model trained with the MR-DCGAN architecture effectively enhances the accuracy of radar echo extrapolation. Full article
(This article belongs to the Special Issue Advance of Radar Meteorology and Hydrology II)
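The joint encoding of the high-resolution radar mosaic with RMAPS-NOW physical variables (rh, u, v) as channels of one input frame could look like the sketch below. The clipping ranges and grid size are illustrative assumptions, not the paper's preprocessing.

```python
import numpy as np

def encode_radar_cell(reflectivity, rh, u_wind, v_wind):
    """Encode one time step as a multi-channel grid: radar mosaic plus
    co-located RMAPS-NOW fields (relative humidity, u-wind, v-wind)."""
    refl = np.clip(reflectivity, -5.0, 70.0) / 70.0     # reflectivity in dBZ, scaled
    rh_n = np.clip(rh, 0.0, 100.0) / 100.0              # relative humidity (%)
    u_n = np.clip(u_wind, -50.0, 50.0) / 50.0           # u wind component (m/s)
    v_n = np.clip(v_wind, -50.0, 50.0) / 50.0           # v wind component (m/s)
    return np.stack([refl, rh_n, u_n, v_n], axis=0)     # (4, H, W) input frame

# One input frame for the extrapolation network
H, W = 256, 256
frame = encode_radar_cell(np.random.uniform(-5, 70, (H, W)),
                          np.random.uniform(0, 100, (H, W)),
                          np.random.uniform(-30, 30, (H, W)),
                          np.random.uniform(-30, 30, (H, W)))
```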

9 pages, 1553 KB  
Communication
Orthogonally Polarized Pr:LLF Red Laser at 698 nm with Tunable Power Ratio
by Haotian Huang, Menghan Jia, Yuzhao Li, Jing Xia, Nguyentuan Anh and Yanfei Lü
Photonics 2025, 12(7), 666; https://doi.org/10.3390/photonics12070666 - 1 Jul 2025
Viewed by 224
Abstract
A continuous-wave (CW) orthogonally polarized single-wavelength red laser (OPSRL) at 698 nm with a tunable power ratio within a wide range between the two polarized components was demonstrated using two Pr3+:LiLuF4 (Pr:LLF) crystals for the first time. Through control of the waist location of the pump beam in the active media, the output power ratio of the two polarized components of the OPSRL could be adjusted. Under pumping by a 20 W, 444 nm InGaN laser diode (LD), a maximum total output power of 4.12 W was achieved with equal powers for both polarized components, corresponding to an optical conversion efficiency of 23.8% relative to the absorbed pump power. Moreover, by a type-II critical phase-matched (CPM) BBO crystal, a CW ultraviolet (UV) second-harmonic generation (SHG) at 349 nm was also obtained with a maximum output power of 723 mW. OPSRLs can penetrate deep tissues and demonstrate polarization-controlled interactions, and are used in bio-sensing and industrial cutting with minimal thermal distortion, etc. The dual-polarized capability of OPSRLs also supports multi-channel imaging and high-speed interferometry. Full article
(This article belongs to the Section Lasers, Light Sources and Sensors)