Search Results (429)

Search Parameters:
Keywords = fog generation

31 pages, 9679 KB  
Article
Weather-Corrupted Image Enhancement with Removal-Raindrop Diffusion and Mutual Image Translation Modules
by Young-Ho Go and Sung-Hak Lee
Mathematics 2025, 13(19), 3176; https://doi.org/10.3390/math13193176 - 3 Oct 2025
Abstract
Artificial intelligence-based image processing is critical for sensor fusion and image transformation in mobility systems. Advanced driver assistance functions such as forward monitoring and digital side mirrors are essential for driving safety. Degradation due to raindrops, fog, and high-dynamic range (HDR) imbalance caused by lighting changes impairs visibility and reduces object recognition and distance estimation accuracy. This paper proposes a diffusion framework to enhance visibility under multi-degradation conditions. The denoising diffusion probabilistic model (DDPM) offers more stable training and high-resolution restoration than generative adversarial networks, but it relies on large-scale paired datasets, which are difficult to obtain for raindrop scenarios. This framework applies the Palette diffusion model, comprising data augmentation and raindrop-removal modules. The data augmentation module generates raindrop image masks and learns inpainting-based raindrop synthesis. Synthetic masks simulate raindrop patterns and HDR imbalance scenarios. The raindrop-removal module reconfigures the Palette architecture for image-to-image translation, incorporating the augmented synthetic dataset for raindrop removal learning. Loss functions and normalization strategies improve restoration stability and removal performance. During inference, the framework operates with a single conditional input, and an efficient sampling strategy is introduced to significantly accelerate the process. In post-processing, tone adjustment and chroma compensation enhance visual consistency. The proposed method preserves fine structural details and outperforms existing approaches in visual quality, improving the robustness of vision systems under adverse conditions. Full article
(This article belongs to the Special Issue Deep Learning in Image Processing and Scientific Computing)

20 pages, 3740 KB  
Article
Wildfire Target Detection Algorithms in Transmission Line Corridors Based on Improved YOLOv11_MDS
by Guanglun Lei, Jun Dong, Yi Jiang, Li Tang, Li Dai, Dengyong Cheng, Chuang Chen, Daochun Huang, Tianhao Peng, Biao Wang and Yifeng Lin
Appl. Sci. 2025, 15(19), 10688; https://doi.org/10.3390/app151910688 - 3 Oct 2025
Abstract
To address the issues of small-target missed detection, false alarms from cloud/fog interference, and low computational efficiency in traditional wildfire detection for transmission line corridors, this paper proposes a YOLOv11_MDS detection model by integrating Multi-Scale Convolutional Attention (MSCA) and Distribution-Shifted Convolution (DSConv). The MSCA module is embedded in the backbone and neck to enhance multi-scale dynamic feature extraction of flame and smoke through collaborative depth strip convolution and channel attention. The DSConv with a quantized dynamic shift mechanism is introduced to significantly reduce computational complexity while maintaining detection accuracy. The improved model, as shown in experiments, achieves an mAP@0.5 of 88.21%, which is 2.93 percentage points higher than the original YOLOv11. It also demonstrates a 3.33% increase in recall and a frame rate of 242 FPS, with notable improvements in detecting small targets (pixel occupancy < 1%). Generalization tests demonstrate mAP improvements of 0.4% and 0.7% on benchmark datasets, effectively resolving false/missed detection in complex backgrounds. This study provides an engineering solution for real-time wildfire monitoring in transmission lines with balanced accuracy and efficiency. Full article

24 pages, 2319 KB  
Article
Droplet-Laden Flows in Multistage Compressors: An Overview of the Impact of Modeling Depth on Calculated Compressor Performance
by Silvio Geist and Markus Schatz
Int. J. Turbomach. Propuls. Power 2025, 10(4), 36; https://doi.org/10.3390/ijtpp10040036 - 2 Oct 2025
Abstract
There are various mechanisms through which water droplets can be present in compressor flows, e.g., rain ingestion in aeroengines or overspray fogging used in heavy-duty gas turbines to boost power output. For the latter, droplet evaporation within the compressor leads to a cooling of the flow as well as to a shift in the fluid properties, which is beneficial to the overall process. However, due to their inertia, the majority of droplets are deposited in the first stages of a multistage compressor. While this phenomenon is generally considered in CFD computations of droplet-laden flows, the subsequent re-entrainment of collected water, the formation of new droplets, and the impact on the overall evaporation are mostly neglected because of the additional computational effort required, especially with regard to the modeling of films formed by the deposited water. The work presented here shows an approach that allows for the integration of the process of droplet deposition and re-entrainment based on relatively simple correlations and experimental observations from the literature. Thus, the two-phase flow in multistage compressors can be modelled and analyzed very efficiently. In this paper, the models and assumptions used are described first, then the results of a study performed based on a generic multistage compressor are presented, whereby the various models are integrated step by step to allow an assessment of their impact on the droplet evaporation throughout the compressor and overall performance. It can be shown that evaporation becomes largely independent of droplet size when deposition on both rotor and stator and subsequent re-entrainment of collected water is considered. In addition, open issues with regard to the future improvement of models and correlations of two-phase flow phenomena are highlighted based on the results of the current investigation. Full article
32 pages, 3761 KB  
Review
Alternative and Sustainable Technologies for Freshwater Generation: From Fog Harvesting to Novel Membrane-Based Systems
by Musaddaq Azeem, Muhammad Tayyab Noman, Nesrine Amor and Michal Petru
Textiles 2025, 5(4), 43; https://doi.org/10.3390/textiles5040043 - 30 Sep 2025
Abstract
Water scarcity is an escalating global challenge, driven by climate change and population growth. With only 2.5% of Earth’s freshwater readily accessible, there is an urgent need to explore sustainable alternatives. Textile-based fog collectors are advanced tools which have shown great potential and have gained remarkable attention across the world. This review critically evaluates emerging technologies for freshwater generation, including desalination (thermal and reverse osmosis (RO)), fog and dew harvesting, atmospheric water extraction, greywater reuse, and solar desalination systems, e.g., WaterSeer and Desolenator. Key performance metrics, e.g., water yield, energy input, and water collection efficiency, are summarized. For instance, textile-based fog harvesting devices can yield up to 103 mL/min/m², and modern desalination systems offer 40–60% water recovery. This work provides a comparative framework to guide future implementation of water-scarcity solutions, particularly in arid and semi-arid regions. Full article

22 pages, 4173 KB  
Article
A Novel Nighttime Sea Fog Detection Method Based on Generative Adversarial Networks
by Wuyi Qiu, Xiaoqun Cao and Shuo Ma
Remote Sens. 2025, 17(19), 3285; https://doi.org/10.3390/rs17193285 - 24 Sep 2025
Abstract
Nighttime sea fog exhibits high frequency and prolonged duration, posing significant risks to maritime navigation safety. Current detection methods primarily rely on the dual-infrared channel brightness temperature difference technique, which faces challenges such as threshold selection difficulties and a tendency toward overestimation. In contrast, the VIIRS Day/Night Band (DNB) offers exceptional nighttime visible-like cloud imaging capabilities, offering a new solution to alleviate the overestimation issues inherent in infrared detection algorithms. Recent advances in artificial intelligence have further addressed the threshold selection problem in traditional detection methods. Leveraging these developments, this study proposes a novel generative adversarial network model incorporating attention mechanisms (SEGAN) to achieve accurate nighttime sea fog detection using DNB data. Experimental results demonstrate that SEGAN achieves satisfactory performance, with probability of detection, false alarm rate, and critical success index reaching 0.8708, 0.0266, and 0.7395, respectively. Compared with the operational infrared detection algorithm, these metrics show improvements of 0.0632, 0.0287, and 0.1587. Notably, SEGAN excels at detecting sea fog obscured by thin cloud cover, a scenario where conventional infrared detection algorithms typically fail. SEGAN emphasizes semantic consistency in its output, endowing it with enhanced robustness across varying sea fog concentrations. Full article
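The probability of detection (POD), false alarm rate (FAR), and critical success index (CSI) reported above are standard contingency-table skill scores. A minimal sketch of how they are computed; the counts below are illustrative only, not the paper's confusion matrix:

```python
def skill_scores(hits, misses, false_alarms):
    """Contingency-table skill scores common in fog/weather detection.
    hits: fog present and detected; misses: fog present but not detected;
    false_alarms: fog detected but not present."""
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm rate
    csi = hits / (hits + misses + false_alarms)     # critical success index
    return pod, far, csi

# Illustrative counts (hypothetical, chosen only to exercise the formulas)
pod, far, csi = skill_scores(hits=871, misses=129, false_alarms=24)
```

Note that CSI penalizes both misses and false alarms, which is why it is the strictest of the three scores.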

17 pages, 2436 KB  
Article
Deep Learning System for Speech Command Recognition
by Dejan Vujičić, Đorđe Damnjanović, Dušan Marković and Zoran Stamenković
Electronics 2025, 14(19), 3793; https://doi.org/10.3390/electronics14193793 - 24 Sep 2025
Abstract
We present a deep learning model for the recognition of speech commands in the English language. The dataset is based on the Google Speech Commands Dataset by Warden P., version 0.01, and it consists of ten distinct commands (“left”, “right”, “go”, “stop”, “up”, “down”, “on”, “off”, “yes”, and “no”) along with additional “silence” and “unknown” classes. The dataset is split in a speaker-independent manner, with 70% of speakers assigned to the training set and 15% each to the test and validation sets. All audio clips are sampled at 16 kHz, for a total of 46,146 clips. Audio files are converted into Mel spectrogram representations, which are then used as input to a deep learning model composed of a four-layer convolutional neural network followed by two fully connected layers. The model employs Rectified Linear Unit (ReLU) activation, the Adam optimizer, and dropout regularization to improve generalization. The achieved testing accuracy is 96.05%. Micro- and macro-averaged precision, recall, and F1-score of 95% are reported to reflect class-wise performance, and a confusion matrix is also provided. The proposed model has been deployed on a Raspberry Pi 5 as a Fog computing device for real-time speech recognition applications. Full article
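Micro- and macro-averaged precision differ in how per-class scores are pooled; a small sketch using a hypothetical 2-class confusion matrix (the paper's task has 12 classes, and these counts are invented for illustration):

```python
import numpy as np

def micro_macro_precision(conf):
    """conf[i, j] = count of samples with true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    pred_totals = conf.sum(axis=0).astype(float)  # predictions per class
    # Micro-averaging pools all decisions; for single-label multi-class
    # classification, micro precision equals overall accuracy.
    micro = tp.sum() / conf.sum()
    # Macro-averaging computes precision per class, then weights each
    # class equally regardless of its support.
    macro = float(np.mean(tp / pred_totals))
    return micro, macro

conf = np.array([[90, 10],
                 [ 5, 95]])
micro, macro = micro_macro_precision(conf)
```

Macro-averaging is the more informative of the two when classes are imbalanced, e.g. the over-represented “unknown” class.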
(This article belongs to the Special Issue Data-Centric Artificial Intelligence: New Methods for Data Processing)

25 pages, 522 KB  
Article
Artificial Intelligence-Based Methods and Algorithms in Fog and Atmospheric Low-Visibility Forecasting
by Sancho Salcedo-Sanz, David Guijo-Rubio, Jorge Pérez-Aracil, César Peláez-Rodríguez, Antonio Manuel Gomez-Orellana and Pedro Antonio Gutiérrez-Peña
Atmosphere 2025, 16(9), 1073; https://doi.org/10.3390/atmos16091073 - 11 Sep 2025
Abstract
The accurate prediction of atmospheric low-visibility events due to fog, haze or atmospheric pollution is an extremely important problem, with major consequences for transportation systems, and with alternative applications in agriculture, forest ecology and ecosystems management. In this paper, we provide a comprehensive literature review and analysis of AI-based methods applied to fog and low-visibility event forecasting. We also discuss the main general issues which arise when dealing with AI-based techniques in this kind of problem, open research questions, novel AI approaches and data sources which can be exploited. Finally, the most important new AI-based methodologies which can improve atmospheric visibility forecasting are also reviewed, including computational experiments on the application of ordinal classification approaches to a problem of low-visibility event prediction at two Spanish airports from METAR data. Full article
(This article belongs to the Special Issue Numerical Simulation and Forecast of Fog)

30 pages, 6751 KB  
Article
Web System for Solving the Inverse Kinematics of 6DoF Robotic Arm Using Deep Learning Models: CNN and LSTM
by Mayra A. Torres-Hernández, Teodoro Ibarra-Pérez, Eduardo García-Sánchez, Héctor A. Guerrero-Osuna, Luis O. Solís-Sánchez and Ma. del Rosario Martínez-Blanco
Technologies 2025, 13(9), 405; https://doi.org/10.3390/technologies13090405 - 5 Sep 2025
Abstract
This work presents the development of a web system using deep learning (DL) neural networks to solve the inverse kinematics problem of the Quetzal robotic arm, designed for academic and research purposes. Two architectures, LSTM and CNN, were designed, trained, and evaluated using data generated through the Denavit–Hartenberg (D-H) model, considering the robot’s workspace. The evaluation employed the mean squared error (MSE) as the loss metric and mean absolute error (MAE) and accuracy as performance metrics. The CNN model, featuring four convolutional layers and an input of 4 timesteps, achieved the best overall performance (95.9% accuracy, MSE of 0.003, and MAE of 0.040), significantly outperforming the LSTM model in training time. A hybrid web application was implemented, allowing offline training and real-time online inference in under one second via an interactive interface developed with Streamlit 1.16. The solution integrates tools such as TensorFlow™ 2.15, Python 3.10, and Anaconda Distribution 2023.03-1, ensuring portability to fog or cloud computing environments. The proposed system stands out for its fast response times (1 s), low computational cost, and high scalability to collaborative robotics environments. It is a viable alternative for applications in educational or research settings, particularly in projects focused on industrial automation. Full article
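The training data are generated through the Denavit–Hartenberg model, i.e. by forward kinematics. A generic sketch of the standard D-H link transform and its chaining; the Quetzal arm's actual D-H parameter table is not given here, so the example joint values are illustrative:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit–Hartenberg link transform (4x4 homogeneous matrix)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ ct, -st * ca,  st * sa, a * ct],
        [ st,  ct * ca, -ct * sa, a * st],
        [0.0,       sa,       ca,      d],
        [0.0,      0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Chain per-joint transforms; returns the end-effector pose as a 4x4 matrix.
    dh_params: iterable of (theta, d, a, alpha) tuples, one per joint."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# A single revolute joint rotated 90 degrees with link length a = 1
pose = forward_kinematics([(np.pi / 2, 0.0, 1.0, 0.0)])
```

Sampling joint angles over the workspace and recording the resulting poses yields the (pose, angles) pairs the networks invert.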
(This article belongs to the Special Issue AI Robotics Technologies and Their Applications)

16 pages, 5430 KB  
Article
An Optimization Placement Method of Sensors for Water Film Thickness Estimation of the Entire Airport Runway
by Juewei Cai, Rongxin Zhao, Wei Ouyang, Dehuai Yang and Mengyuan Zeng
Appl. Sci. 2025, 15(17), 9476; https://doi.org/10.3390/app15179476 - 29 Aug 2025
Abstract
This study presents an optimized methodology for the placement of water film thickness sensors, integrating information theory with experimental validation. Initially, the two-dimensional shallow-water equations are employed to simulate the spatiotemporal evolution of water film thickness across the entire runway, providing a comprehensive foundational dataset. By applying information entropy theory, the total information content at each runway grid point is quantified. Analysis indicates that grid points with higher total information content generally correspond to regions of greater water film thickness. The optimal placement for a single sensor is determined by identifying the location that maximizes total information content, and its effectiveness is validated through controlled rain–fog experiments. The results demonstrate that positioning a single sensor at a site with higher water film thickness reduces the overall mean estimation error by 57%, thereby enhancing prediction accuracy. By extending the single-sensor placement framework, the total information content across all runway points is recalculated, and additional rain–fog experiments are conducted to verify the optimal locations. By incorporating a correlation coefficient–distance (C–D) model to define each sensor’s influence radius, a collaborative multi-sensor placement strategy is developed and implemented at Seletar Airport, Singapore. The findings show that sensor locations with higher water film thickness correspond to increased total information content, and that expanding the number of deployed sensors further improves estimation accuracy. Compared with conventional placement approaches, which rely on subjective judgment and long-term operational experience, the proposed method enhances estimation accuracy by over 23% when deploying two sensors. 
These results provide a robust basis for the strategic placement of runway water film thickness sensors and contribute to more precise assessments of pavement surface conditions. Full article
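The entropy-based selection step can be sketched as follows, with synthetic thickness histories standing in for the shallow-water simulation output; the bin count, data, and shared binning are assumptions for illustration:

```python
import numpy as np

def total_information(series, edges):
    """Shannon entropy (bits) of one grid point's water-film-thickness
    history, binned on shared edges so points are comparable."""
    hist, _ = np.histogram(series, bins=edges)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# thickness[t, k]: synthetic water film thickness at time t, grid point k.
# Point 2 varies most, mimicking a region of greater water film thickness.
rng = np.random.default_rng(0)
thickness = np.abs(rng.normal(0.0, [0.2, 1.0, 3.0], size=(500, 3)))

edges = np.linspace(0.0, thickness.max(), 17)
info = [total_information(thickness[:, k], edges) for k in range(3)]
best = int(np.argmax(info))  # candidate single-sensor location
```

Consistent with the abstract's observation, the grid point whose thickness history spans the widest range carries the highest total information content and is selected first.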
(This article belongs to the Section Transportation and Future Mobility)

24 pages, 4538 KB  
Article
CNN–Transformer-Based Model for Maritime Blurred Target Recognition
by Tianyu Huang, Chao Pan, Jin Liu and Zhiwei Kang
Electronics 2025, 14(17), 3354; https://doi.org/10.3390/electronics14173354 - 23 Aug 2025
Abstract
In maritime blurred image recognition, ship collision accidents frequently result from three primary blur types: (1) motion blur from vessel movement in complex sea conditions, (2) defocus blur due to water vapor refraction, and (3) scattering blur caused by sea fog interference. This paper proposes a dual-branch recognition method specifically designed for motion blur, which represents the most prevalent blur type in maritime scenarios. Conventional approaches exhibit constrained computational efficiency and limited adaptability across different modalities. To overcome these limitations, we propose a hybrid CNN–Transformer architecture: the CNN branch captures local blur characteristics, while the enhanced Transformer module models long-range dependencies via attention mechanisms. The CNN branch employs a lightweight ResNet variant, in which conventional residual blocks are substituted with Multi-Scale Gradient-Aware Residual Block (MSG-ARB). This architecture employs learnable gradient convolution for explicit local gradient feature extraction and utilizes gradient content gating to strengthen blur-sensitive region representation, significantly improving computational efficiency compared to conventional CNNs. The Transformer branch incorporates a Hierarchical Swin Transformer (HST) framework with Shifted Window-based Multi-head Self-Attention for global context modeling. The proposed method incorporates blur invariant Positional Encoding (PE) to enhance blur spectrum modeling capability, while employing DyT (Dynamic Tanh) module with learnable α parameters to replace traditional normalization layers. This architecture achieves a significant reduction in computational costs while preserving feature representation quality. Moreover, it efficiently computes long-range image dependencies using a compact 16 × 16 window configuration. 
The proposed feature fusion module synergistically integrates CNN-based local feature extraction with Transformer-enabled global representation learning, achieving comprehensive feature modeling across different scales. To evaluate the model’s performance and generalization ability, we conducted comprehensive experiments on four benchmark datasets: VAIS, GoPro, Mini-ImageNet, and Open Images V4. Experimental results show that our method achieves superior classification accuracy compared to state-of-the-art approaches, while simultaneously enhancing inference speed and reducing GPU memory consumption. Ablation studies confirm that the DyT module effectively suppresses outliers and improves computational efficiency, particularly when processing low-quality input data. Full article
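The DyT (Dynamic Tanh) module mentioned above replaces a normalization layer with a tanh of learnable gain, which bounds activations and thus suppresses outliers without computing batch statistics. A minimal numpy sketch; the affine weight and bias terms are assumptions:

```python
import numpy as np

def dyt(x, alpha, weight=1.0, bias=0.0):
    """Dynamic Tanh: y = weight * tanh(alpha * x) + bias.
    Stands in for a normalization layer; tanh squashes extreme values
    instead of dividing by batch statistics. alpha is learned per layer."""
    return weight * np.tanh(alpha * x) + bias

x = np.array([-100.0, -1.0, 0.0, 1.0, 100.0])
y = dyt(x, alpha=0.5)  # outliers at ±100 are squashed to roughly ±1
```

Because the output is bounded by the (scaled) tanh range, extreme inputs from low-quality frames cannot dominate downstream attention scores.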

22 pages, 2839 KB  
Article
Multi-Scale Image Defogging Network Based on Cauchy Inverse Cumulative Function Hybrid Distribution Deformation Convolution
by Lu Ji and Chao Chen
Sensors 2025, 25(16), 5088; https://doi.org/10.3390/s25165088 - 15 Aug 2025
Abstract
The aim of this study was to address the issue of significant performance degradation in existing defogging algorithms under extreme fog conditions. Traditional Taylor series-based deformable convolutions are limited by local approximation errors, while the heavy-tailed characteristics of the Cauchy distribution can more successfully model outliers in fog images. The following improvements are made: (1) A displacement generator based on the inverse cumulative distribution function (ICDF) of the Cauchy distribution is designed to transform uniform noise into sampling points with a long-tailed distribution. A novel double-peak Cauchy ICDF is proposed to dynamically balance the heavy-tailed characteristics of the Cauchy ICDF, enhancing the modeling capability for sudden changes in fog concentration. (2) An innovative Cauchy–Gaussian fusion module is proposed to dynamically learn and generate hybrid coefficients, combining the complementary advantages of the two distributions to dynamically balance the representation of smooth regions and edge details. (3) Tree-based multi-path and cross-resolution feature aggregation is introduced, achieving local–global feature adaptive fusion through adjustable window sizes (3/5/7/11) for parallel paths. Experiments on the RESIDE dataset demonstrate that the proposed method achieves a 2.26 dB improvement in the peak signal-to-noise ratio compared to that obtained with the TaylorV2 expansion attention mechanism, with an improvement of 0.88 dB in heavily hazy regions (fog concentration > 0.8). Ablation studies validate the effectiveness of Cauchy distribution convolution in handling dense fog and conventional lighting conditions. This study provides a new theoretical perspective for modeling in computer vision tasks, introducing a novel attention mechanism and multi-path encoding approach. Full article
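The displacement generator's core operation, mapping uniform noise through the inverse cumulative distribution function of the Cauchy distribution, can be sketched as below; the location and scale defaults are assumptions, and the paper's double-peak variant is not reproduced here:

```python
import numpy as np

def cauchy_icdf(u, x0=0.0, gamma=1.0):
    """Inverse CDF of the Cauchy distribution: maps uniform u in (0, 1)
    to heavy-tailed values, x = x0 + gamma * tan(pi * (u - 1/2))."""
    return x0 + gamma * np.tan(np.pi * (u - 0.5))

# Transform uniform noise into long-tailed sampling offsets
rng = np.random.default_rng(0)
u = rng.uniform(1e-6, 1 - 1e-6, size=10_000)
offsets = cauchy_icdf(u)
```

The heavy tails mean a small fraction of sampling points land far from the kernel center, which is what lets the deformable convolution reach across sudden changes in fog concentration.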

13 pages, 14213 KB  
Article
All-Weather Drone Vision: Passive SWIR Imaging in Fog and Rain
by Alexander Bessonov, Aleksei Rozanov, Richard White, Galih Suwito, Ivonne Medina-Salazar, Marat Lutfullin, Dmitrii Gusev and Ilya Shikov
Drones 2025, 9(8), 553; https://doi.org/10.3390/drones9080553 - 7 Aug 2025
Abstract
Short-wave-infrared (SWIR) imaging can extend drone operations into fog and rain, yet the optimum spectral strategy remains unclear. We evaluated a drone-borne quantum-dot SWIR camera inside a climate-controlled tunnel that generated calibrated advection fog, radiation fog, and rain. Images were captured with a broadband 400–1700 nm setting and three sub-band filters, each at four lens apertures (f/1.8–5.6). Entropy, structural-similarity index (SSIM), and peak signal-to-noise ratio (PSNR) were computed for every weather–aperture–filter combination. Broadband SWIR consistently outperformed all filtered configurations. The gain stems from higher photon throughput, which outweighs the modest scattering reduction offered by narrowband selection. Under passive illumination, broadband SWIR therefore represents the most robust single-camera choice for unmanned aerial vehicles (UAVs), enhancing situational awareness and flight safety in fog and rain. Full article
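Two of the three image-quality metrics used in the weather–aperture–filter comparison can be computed directly; a sketch with illustrative image data (SSIM is omitted here, as it is usually taken from a library such as scikit-image):

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and test image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram; higher entropy
    indicates more tonal structure surviving the fog."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

ref = np.full((8, 8), 100, dtype=np.uint8)  # flat illustrative patch
img = ref + 16                              # uniform offset -> MSE = 256
score = psnr(ref, img)
```

Entropy needs no reference image, which makes it convenient for field footage where no clear-weather ground truth exists.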
(This article belongs to the Section Drone Design and Development)

24 pages, 4519 KB  
Article
Aerial Autonomy Under Adversity: Advances in Obstacle and Aircraft Detection Techniques for Unmanned Aerial Vehicles
by Cristian Randieri, Sai Venkata Ganesh, Rayappa David Amar Raj, Rama Muni Reddy Yanamala, Archana Pallakonda and Christian Napoli
Drones 2025, 9(8), 549; https://doi.org/10.3390/drones9080549 - 4 Aug 2025
Abstract
Unmanned Aerial Vehicles (UAVs) have rapidly expanded into a range of essential applications, including surveillance, disaster response, agriculture, and urban monitoring. However, for UAVs to navigate safely and autonomously, the ability to detect obstacles and nearby aircraft remains crucial, especially under harsh environmental conditions. This study comprehensively analyzes the recent landscape of obstacle and aircraft detection techniques tailored for UAVs operating in difficult scenarios such as fog, rain, smoke, low light, motion blur, and cluttered environments. It starts with a detailed discussion of key detection challenges and continues with an evaluation of different sensor types, from RGB and infrared cameras to LiDAR, radar, sonar, and event-based vision sensors. Both classical computer vision methods and deep learning-based detection techniques are examined in detail, highlighting their performance strengths and limitations under degraded sensing conditions. The paper additionally offers an overview of suitable UAV-specific datasets and the evaluation metrics generally used to assess detection systems. Finally, the paper examines open problems and future research directions, emphasizing the need for lightweight, adaptive, and weather-resilient detection systems suitable for real-time onboard processing. This study aims to guide students and engineers in developing more robust and intelligent detection systems for next-generation UAV operations. Full article

20 pages, 9888 KB  
Article
WeatherClean: An Image Restoration Algorithm for UAV-Based Railway Inspection in Adverse Weather
by Kewen Wang, Shaobing Yang, Zexuan Zhang, Zhipeng Wang, Limin Jia, Mengwei Li and Shengjia Yu
Sensors 2025, 25(15), 4799; https://doi.org/10.3390/s25154799 - 4 Aug 2025
Abstract
UAV-based inspections are an effective way to ensure railway safety and have gained significant attention. However, images captured during complex weather conditions, such as rain, snow, or fog, often suffer from severe degradation, affecting image recognition accuracy. Existing algorithms for removing rain, snow, and fog have two main limitations: they do not adaptively learn features under varying weather complexities and struggle with managing complex noise patterns in drone inspections, leading to incomplete noise removal. To address these challenges, this study proposes a novel framework for removing rain, snow, and fog from drone images, called WeatherClean. This framework introduces a Weather Complexity Adjustment Factor (WCAF) in a parameterized adjustable network architecture to process weather degradation of varying degrees adaptively. It also employs a hierarchical multi-scale cropping strategy to enhance the recovery of fine noise and edge structures. Additionally, it incorporates a degradation synthesis method based on atmospheric scattering physical models to generate training samples that align with real-world weather patterns, thereby mitigating data scarcity issues. Experimental results show that WeatherClean outperforms existing methods by effectively removing noise particles while preserving image details. This advancement provides more reliable high-definition visual references for drone-based railway inspections, significantly enhancing inspection capabilities under complex weather conditions and ensuring the safety of railway operations. Full article
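The degradation synthesis the abstract mentions is based on the standard atmospheric scattering (Koschmieder) model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)). A minimal sketch; β, the airlight A, and the depth map are assumed inputs, and the paper's full pipeline also covers rain and snow:

```python
import numpy as np

def synthesize_haze(J, depth, beta=1.0, A=0.9):
    """Atmospheric scattering model for hazy-image synthesis.
    J: clean image in [0, 1], shape (H, W, 3); depth: per-pixel scene
    depth, shape (H, W); beta: scattering coefficient; A: airlight."""
    t = np.exp(-beta * depth)[..., None]  # transmission, broadcast over RGB
    return J * t + A * (1.0 - t)

rng = np.random.default_rng(0)
J = rng.uniform(0.0, 1.0, size=(4, 4, 3))   # illustrative clean image
depth = np.full((4, 4), 2.0)                # illustrative constant depth
I = synthesize_haze(J, depth, beta=1.0, A=0.9)
```

Varying β per sample produces the range of fog densities needed for training, which is how physics-based synthesis mitigates the scarcity of real paired data.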
(This article belongs to the Section Sensing and Imaging)

13 pages, 1870 KB  
Article
Study on the Spatiotemporal Distribution Characteristics and Constitutive Relationship of Foggy Airspace in Mountainous Expressways
by Xiaolei Li, Yinxia Zhan, Tingsong Cheng and Qianghui Song
Appl. Sci. 2025, 15(15), 8615; https://doi.org/10.3390/app15158615 - 4 Aug 2025
Abstract
To study the generation and dissipation of agglomerate fog on mountainous expressways and to understand the hazard mechanisms of fog-prone sections, an airspace constitutive model of agglomerate fog was constructed on the basis of Newtonian constitutive theory, drawing on the geographical characteristics of mountainous expressways and the spatiotemporal distribution of agglomerate fog. Firstly, the properties of Newtonian fluids and agglomerate fog were compared, and the influence of environmental factors such as altitude difference, topography, water systems, valley effects, and vegetation on the generation and dissipation of agglomerate fog was analyzed. A constitutive model relating temperature, humidity, wind speed, and agglomerate fog points in the foggy airspace of the mountainous expressway was then established. The temporal and spatial distribution of fog in Chongqing and Guizhou from 2021 to 2023 was analyzed, and the model was verified using meteorological data and fog warning data for Liupanshui City, Guizhou Province, in 2023. The results show that the foggy airspace of a mountainous expressway can be defined as “the space occupied by the agglomerate fog that occurs above the mountain expressway”. Its temporal and spatial distribution is closely related to the topography, water systems, vegetation distribution, and the local microclimate formed by thermal radiation, while the horizontal and vertical movements of the atmosphere have little influence.
Temporally, agglomerate fog is concentrated from November to April of the following year, with daily occurrences mainly between 4:00–8:00 and 18:00–22:00. The constitutive model indicates that under low or no surface radiation the fogging value falls in the range [90, 100], whereas under high surface radiation (>200) it falls in [50, 70]; fog generally does not occur in other intervals. These results can provide a theoretical basis for traffic safety management and control of fog-prone sections of mountainous expressways. Full article
(This article belongs to the Section Transportation and Future Mobility)
