Search Results (286)

Search Parameters:
Keywords = densely connected convolutional networks

28 pages, 2961 KB  
Article
An Improved Capsule Network for Image Classification Using Multi-Scale Feature Extraction
by Wenjie Huang, Ruiqing Kang, Lingyan Li and Wenhui Feng
J. Imaging 2025, 11(10), 355; https://doi.org/10.3390/jimaging11100355 - 10 Oct 2025
Viewed by 257
Abstract
In image classification, the capsule network is a network topology that packs extracted features into many capsules, screens those capsules with a dynamic routing mechanism, and finally associates each capsule with a category-level feature. Compared with previous network topologies, the capsule network performs more sophisticated operations, uses a large number of parameter matrices and vectors to express image attributes, and offers stronger image classification capability. In practical applications, however, the capsule network has always been constrained by the computational load of its complicated structure: on simple datasets it is prone to over-fitting and poor generalization, and on complex datasets it often cannot meet the high computational overhead. To address these problems, this research proposes a novel enhanced capsule network topology. The upgraded network boosts feature extraction by incorporating a multi-scale feature extraction module, based on a proprietary star-structure convolution, into the standard capsule network. Other structural parts of the capsule network are also modified, combining optimization approaches such as dense connections, an attention mechanism, and low-rank matrix operations. Image classification experiments on different datasets show that the proposed structure achieves good classification performance on the CIFAR-10, CIFAR-100, and CUB datasets. It also achieves 98.21% and 95.38% classification accuracy on two complicated datasets, an ISIC-derived skin cancer dataset and Forged Face EXP. Full article
(This article belongs to the Section Image and Video Processing)
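As background to this entry, a minimal sketch of the standard capsule "squash" nonlinearity from the original capsule-network literature (not this paper's improved variant): it shrinks a capsule's output vector to length below 1 while preserving its direction, so the length can be read as a class probability.

```python
import math

def squash(s):
    """Capsule 'squash' nonlinearity: v = (|s|^2 / (1 + |s|^2)) * (s / |s|).
    Keeps orientation, maps length into [0, 1)."""
    norm_sq = sum(x * x for x in s)
    norm = math.sqrt(norm_sq)
    if norm == 0.0:
        return [0.0] * len(s)
    scale = norm_sq / (1.0 + norm_sq) / norm
    return [x * scale for x in s]

v = squash([3.0, 4.0])                        # input length 5
length = math.sqrt(sum(x * x for x in v))     # squashed length = 25/26
```

A long input vector (length 5 here) is squashed to length 25/26 ≈ 0.96, i.e. "this feature is almost certainly present", which is what dynamic routing then weighs between capsules.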

20 pages, 162180 KB  
Article
Annotation-Efficient and Domain-General Segmentation from Weak Labels: A Bounding Box-Guided Approach
by Ammar M. Okran, Hatem A. Rashwan, Sylvie Chambon and Domenec Puig
Electronics 2025, 14(19), 3917; https://doi.org/10.3390/electronics14193917 - 1 Oct 2025
Viewed by 381
Abstract
Manual pixel-level annotation remains a major bottleneck in deploying deep learning models for dense prediction and semantic segmentation tasks across domains. This challenge is especially pronounced in applications involving fine-scale structures, such as cracks in infrastructure or lesions in medical imaging, where annotations are time-consuming, expensive, and subject to inter-observer variability. To address these challenges, this work proposes a weakly supervised and annotation-efficient segmentation framework that integrates sparse bounding-box annotations with a limited subset of strong (pixel-level) labels to train robust segmentation models. The fundamental element of the framework is a lightweight Bounding Box Encoder that converts weak annotations into multi-scale attention maps. These maps guide a ConvNeXt-Base encoder, and a lightweight U-Net–style convolutional neural network (CNN) decoder—using nearest-neighbor upsampling and skip connections—reconstructs the final segmentation mask. This design enables the model to focus on semantically relevant regions without relying on full supervision, drastically reducing annotation cost while maintaining high accuracy. We validate our framework on two distinct domains, road crack detection and skin cancer segmentation, demonstrating that it achieves performance comparable to fully supervised segmentation models using only 10–20% of strong annotations. Given the ability of the proposed framework to generalize across varied visual contexts, it has strong potential as a general annotation-efficient segmentation tool for domains where strong labeling is costly or infeasible. Full article
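To make the weak-label idea concrete, here is a minimal sketch of turning one bounding box into a binary mask and max-pooling it into a small multi-scale pyramid, as a stand-in for the paper's (unspecified) Bounding Box Encoder; the box format and pooling factor are illustrative assumptions.

```python
def box_to_mask(h, w, box):
    """Rasterize a bounding box (x0, y0, x1, y1), end-exclusive,
    into a binary h x w mask -- the weak label the framework starts from."""
    x0, y0, x1, y1 = box
    return [[1.0 if (y0 <= r < y1 and x0 <= c < x1) else 0.0
             for c in range(w)] for r in range(h)]

def downsample2(mask):
    """Max-pool the mask by 2 in each dimension: one coarser guidance scale."""
    h, w = len(mask), len(mask[0])
    return [[max(mask[r][c], mask[r][c + 1],
                 mask[r + 1][c], mask[r + 1][c + 1])
             for c in range(0, w, 2)] for r in range(0, h, 2)]

mask = box_to_mask(8, 8, (2, 2, 6, 6))      # a 4x4 box in an 8x8 image
scales = [mask]
while len(scales[-1]) > 2:
    scales.append(downsample2(scales[-1]))  # 8x8 -> 4x4 -> 2x2 pyramid
```

Each pyramid level plays the role of an attention map injected at the matching encoder resolution; a real encoder would learn soft maps rather than these hard 0/1 masks.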

21 pages, 4721 KB  
Article
Automated Brain Tumor MRI Segmentation Using ARU-Net with Residual-Attention Modules
by Erdal Özbay and Feyza Altunbey Özbay
Diagnostics 2025, 15(18), 2326; https://doi.org/10.3390/diagnostics15182326 - 13 Sep 2025
Viewed by 650
Abstract
Background/Objectives: Accurate segmentation of brain tumors in Magnetic Resonance Imaging (MRI) scans is critical for diagnosis and treatment planning due to their life-threatening nature. This study aims to develop a robust and automated method capable of precisely delineating heterogeneous tumor regions while improving segmentation accuracy and generalization. Methods: We propose Attention Res-UNet (ARU-Net), a novel Deep Learning (DL) architecture integrating residual connections, Adaptive Channel Attention (ACA), and Dimensional-space Triplet Attention (DTA) modules. The encoding module efficiently extracts and refines relevant feature information by applying ACA to the lower layers of convolutional and residual blocks. The DTA is fixed to the upper layers of the decoding module, decoupling channel weights to better extract and fuse multi-scale features, enhancing both performance and efficiency. Input MRI images are pre-processed using Contrast Limited Adaptive Histogram Equalization (CLAHE) for contrast enhancement, denoising filters, and Linear Kuwahara filtering to preserve edges while smoothing homogeneous regions. The network is trained using categorical cross-entropy loss with the Adam optimizer on the BTMRII dataset, and comparative experiments are conducted against baseline U-Net, DenseNet121, and Xception models. Performance is evaluated using accuracy, precision, recall, F1-score, Dice Similarity Coefficient (DSC), and Intersection over Union (IoU) metrics. Results: Baseline U-Net showed significant performance gains after adding residual connections and ACA modules, with DSC improving by approximately 3.3%, accuracy by 3.2%, IoU by 7.7%, and F1-score by 3.3%. ARU-Net further enhanced segmentation performance, achieving 98.3% accuracy, 98.1% DSC, 96.3% IoU, and a superior F1-score, representing additional improvements of 1.1–2.0% over the U-Net + Residual + ACA variant. 
Visualizations confirmed smoother boundaries and more precise tumor contours across all six tumor classes, highlighting ARU-Net’s ability to capture heterogeneous tumor structures and fine structural details more effectively than both baseline U-Net and other conventional DL models. Conclusions: ARU-Net, combined with an effective pre-processing strategy, provides a highly reliable and precise solution for automated brain tumor segmentation. Its improvements across multiple evaluation metrics over U-Net and other conventional models highlight its potential for clinical application and contribute novel insights to medical image analysis research. Full article
(This article belongs to the Special Issue Advances in Functional and Structural MR Image Analysis)
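The two segmentation metrics reported above, DSC and IoU, are standard overlap measures; a minimal sketch on flat binary masks (the toy masks are illustrative):

```python
def dice_and_iou(pred, target):
    """Dice Similarity Coefficient and Intersection over Union for two
    binary masks given as flat 0/1 lists.
    DSC = 2|A∩B| / (|A| + |B|),  IoU = |A∩B| / |A∪B|."""
    inter = sum(p * t for p, t in zip(pred, target))
    a, b = sum(pred), sum(target)
    union = a + b - inter
    dice = 2.0 * inter / (a + b) if a + b else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred   = [1, 1, 1, 0, 0, 0]
target = [0, 1, 1, 1, 0, 0]
dsc, iou = dice_and_iou(pred, target)  # overlap 2, sizes 3 and 3
```

For the same prediction DSC is always at least IoU (here 2/3 vs 1/2), related by IoU = DSC / (2 − DSC), which is worth remembering when comparing papers that report only one of the two.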

21 pages, 9664 KB  
Article
A Detection Approach for Wheat Spike Recognition and Counting Based on UAV Images and Improved Faster R-CNN
by Donglin Wang, Longfei Shi, Huiqing Yin, Yuhan Cheng, Shaobo Liu, Siyu Wu, Guangguang Yang, Qinge Dong, Jiankun Ge and Yanbin Li
Plants 2025, 14(16), 2475; https://doi.org/10.3390/plants14162475 - 9 Aug 2025
Cited by 1 | Viewed by 625
Abstract
This study presents an innovative unmanned aerial vehicle (UAV)-based intelligent detection method utilizing an improved Faster Region-based Convolutional Neural Network (Faster R-CNN) architecture to address the inefficiency and inaccuracy inherent in manual wheat spike counting. We systematically collected a high-resolution image dataset (2000 images, 4096 × 3072 pixels) covering key growth stages (heading, grain filling, and maturity) of winter wheat (Triticum aestivum L.) during 2022–2023 using a DJI M300 RTK equipped with multispectral sensors. The dataset encompasses diverse field scenarios under five fertilization treatments (organic-only, organic–inorganic 7:3 and 3:7 ratios, inorganic-only, and no fertilizer) and two irrigation regimes (full and deficit irrigation), ensuring representativeness and generalizability. For model development, we replaced conventional VGG16 with ResNet-50 as the backbone network, incorporating residual connections and channel attention mechanisms to achieve 92.1% mean average precision (mAP) while reducing parameters from 135 M to 77 M (43% decrease). The GFLOPs of the improved model were reduced from 1.9 to 1.7, a decrease of 10.53%, improving computational efficiency. Performance tests demonstrated a 15% reduction in missed detection rate compared to YOLOv8 in dense canopies, with spike count regression analysis yielding R² = 0.88 (p < 0.05) against manual measurements and yield prediction errors below 10% for optimal treatments. To validate robustness, we established a dedicated 500-image test set (25% of total data) spanning density gradients (30–80 spikes/m²) and varying illumination conditions, maintaining >85% accuracy even under cloudy weather. 
Furthermore, by integrating spike recognition with agronomic parameters (e.g., grain weight), we developed a comprehensive yield estimation model achieving 93.5% accuracy under optimal water–fertilizer management (70% ETc irrigation with 3:7 organic–inorganic ratio). This work systematically addresses key technical challenges in automated spike detection through standardized data acquisition, lightweight model design, and field validation, offering significant practical value for smart agriculture development. Full article
(This article belongs to the Special Issue Plant Phenotyping and Machine Learning)
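The R² figure quoted for the spike-count regression is the coefficient of determination; a minimal sketch with made-up count pairs (the numbers below are illustrative, not the paper's data):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination between reference counts and
    predicted counts: R² = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

manual   = [30.0, 45.0, 60.0, 80.0]   # hypothetical manual spike counts
detected = [32.0, 44.0, 57.0, 78.0]   # hypothetical detector counts
r2 = r_squared(manual, detected)
```

R² near 1 means the detector's counts track the manual counts almost perfectly; the paper's 0.88 indicates a strong but imperfect linear agreement.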

11 pages, 2515 KB  
Article
DynseNet: A Dynamic Dense-Connection Neural Network for Land–Sea Classification of Radar Targets
by Jingang Wang, Tong Xiao, Kang Chen and Peng Liu
Appl. Sci. 2025, 15(15), 8703; https://doi.org/10.3390/app15158703 - 6 Aug 2025
Viewed by 400
Abstract
Radar is one of the primary means of monitoring maritime targets. Compared to electro-optical systems, radar offers the advantage of all-weather, day-and-night operation. However, existing radar target detection algorithms predominantly achieve binary detection (i.e., determining the presence or absence of a target) and are unable to accurately classify target types. This limitation is particularly significant for coastal-deployed maritime surveillance radars, which must contend with not only maritime vessels but also various land-based and island targets within their monitoring range. This paper aims to enhance the informational breadth of existing binary detection methods by proposing a land–sea classification method for radar targets based on dynamic dense connections. The core idea behind this method is to merge the interlayer output features of the network and to augment and weight them through dynamic convolutional combinations, improving the feature extraction capability of the network. The experimental results demonstrate that the proposed attribute recognition method outperforms current deep network architectures. Full article
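The "merge the interlayer output features" idea follows the DenseNet pattern, where each layer consumes the concatenation of all earlier feature maps. A minimal channel-bookkeeping sketch of a generic dense block (growth rate and depth are illustrative, not this paper's configuration):

```python
def dense_block_channels(c_in, growth_rate, num_layers):
    """Track channel counts through a dense block: every layer sees the
    concatenation of all earlier outputs and emits `growth_rate` new
    channels, so layer inputs grow linearly with depth."""
    inputs = []
    c = c_in
    for _ in range(num_layers):
        inputs.append(c)   # channels this layer receives
        c += growth_rate   # its output is concatenated onto the running stack
    return inputs, c

per_layer, c_out = dense_block_channels(c_in=64, growth_rate=32, num_layers=4)
# per_layer == [64, 96, 128, 160], block output has 192 channels
```

The dynamic part described in the abstract would replace the fixed concatenation with input-dependent convolutional weighting; the linear channel growth shown here is what any dense connection scheme has to manage.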

29 pages, 9069 KB  
Article
Prediction of Temperature Distribution with Deep Learning Approaches for SM1 Flame Configuration
by Gökhan Deveci, Özgün Yücel and Ali Bahadır Olcay
Energies 2025, 18(14), 3783; https://doi.org/10.3390/en18143783 - 17 Jul 2025
Viewed by 631
Abstract
This study investigates the application of deep learning (DL) techniques for predicting temperature fields in the SM1 swirl-stabilized turbulent non-premixed flame. Two distinct DL approaches were developed using a comprehensive CFD database generated via the steady laminar flamelet model coupled with the SST k-ω turbulence model. The first approach employs a fully connected dense neural network to directly map scalar input parameters—fuel velocity, swirl ratio, and equivalence ratio—to high-resolution temperature contour images. In addition, a comparison was made with other deep learning networks, namely ResNet, EfficientNetB0, and Inception-V3, to better understand model performance. In this first approach, the Inception-V3 model and the developed dense model outperformed ResNet and EfficientNet; file sizes and usability were also examined. The second framework employs a U-Net-based convolutional neural network enhanced by an RGB Fusion preprocessing technique, which integrates multiple scalar fields from non-reacting (cold flow) conditions into composite images, significantly improving spatial feature extraction. The training and validation processes for both models were conducted using 80% of the CFD data for training and 20% for testing, which helped assess their ability to generalize to new input conditions. In the second approach, as in the first, experiments were conducted with different deep learning models, namely ResNet, EfficientNet, and Inception-V3, to evaluate model performance. The developed U-Net model stands out with its low error and small file size. The dense network is appropriate for direct parametric analyses, while the image-based U-Net model provides a rapid and scalable option for utilizing the cold-flow CFD images. 
This framework can be further refined in future research to estimate more flow factors and tested against experimental measurements for enhanced applicability. Full article

15 pages, 1816 KB  
Article
A Framework for User Traffic Prediction and Resource Allocation in 5G Networks
by Ioannis Konstantoulas, Iliana Loi, Dimosthenis Tsimas, Kyriakos Sgarbas, Apostolos Gkamas and Christos Bouras
Appl. Sci. 2025, 15(13), 7603; https://doi.org/10.3390/app15137603 - 7 Jul 2025
Viewed by 1339
Abstract
Fifth-Generation (5G) networks deal with dynamic fluctuations in user traffic and the demands of each connected user and application. This creates a need for optimized resource allocation to reduce network congestion in densely populated urban centers and further ensure Quality of Service (QoS) in (5G) environments. To address this issue, we present a framework for both predicting user traffic and allocating users to base stations in 5G networks using neural network architectures. This framework consists of a hybrid approach utilizing a Long Short-Term Memory (LSTM) network or a Transformer architecture for user traffic prediction in base stations, as well as a Convolutional Neural Network (CNN) to allocate users to base stations in a realistic scenario. The models show high accuracy in the tasks performed, especially in the user traffic prediction task, where the models show an accuracy of over 99%. Overall, our framework is capable of capturing long-term temporal features and spatial features from 5G user data, taking a significant step towards a holistic approach in data-driven resource allocation and traffic prediction in 5G networks. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
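Traffic predictors like the LSTM/Transformer above are typically trained on sliding windows over the per-station traffic series. A minimal sketch of that windowing step (the lookback length, horizon, and toy series are illustrative assumptions, not the paper's setup):

```python
def make_windows(series, lookback, horizon=1):
    """Turn a 1-D traffic series into (input window, target) pairs:
    X[i] covers `lookback` consecutive steps, y[i] is the value
    `horizon` steps after the window ends."""
    xs, ys = [], []
    for i in range(len(series) - lookback - horizon + 1):
        xs.append(series[i:i + lookback])
        ys.append(series[i + lookback + horizon - 1])
    return xs, ys

traffic = [10, 12, 15, 14, 18, 21, 19, 23]  # hypothetical load per time step
X, y = make_windows(traffic, lookback=3)
```

Each pair (X[i], y[i]) is one supervised example; stacking the windows for every base station gives the spatio-temporal tensor a CNN allocator can consume alongside the per-station forecasts.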

21 pages, 7615 KB  
Article
A Glacier Ice Thickness Estimation Method Based on Deep Convolutional Neural Networks
by Zhiqiang Li, Jia Li, Xuyan Ma, Lei Guo, Long Li, Jiahao Dian, Lingshuai Kong and Huiguo Ye
Geosciences 2025, 15(7), 242; https://doi.org/10.3390/geosciences15070242 - 27 Jun 2025
Viewed by 850
Abstract
Ice thickness is a key parameter for glacier mass estimations and glacier dynamics simulations. Multiple physical models have been developed by glaciologists to estimate glacier ice thickness. However, obtaining internal and basal glacier parameters required by physical models is challenging, often leading to simplified models that struggle to capture the nonlinear characteristics of ice flow and resulting in significant uncertainties. To address this, this study proposes a convolutional neural network (CNN)-based deep learning model for glacier ice thickness estimation, named the Coordinate-Attentive Dense Glacier Ice Thickness Estimate Model (CADGITE). Based on in situ ice thickness measurements in the Swiss Alps, a CNN is designed to estimate glacier ice thickness by incorporating a new architecture that includes a Residual Coordinate Attention Block together with a Dense Connected Block, using the distance to glacier boundaries as a complement to inputs that include surface velocity, slope, and hypsometry. Taking ground-penetrating radar (GPR) measurements as a reference, the proposed model achieves a mean absolute deviation (MAD) of 24.28 m and a root mean square error (RMSE) of 37.95 m in Switzerland, outperforming mainstream physical models. When applied to 14 glaciers in High Mountain Asia, the model achieves an MAD of 20.91 m and an RMSE of 27.26 m compared to reference measurements, also exhibiting better performance than mainstream physical models. These comparisons demonstrate the good accuracy and cross-regional transferability of our approach, highlighting the potential of using deep learning-based methods for larger-scale glacier ice thickness estimation. Full article
(This article belongs to the Section Climate and Environment)
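The two error metrics the abstract reports, MAD and RMSE, can be computed in a few lines; the thickness values below are illustrative, not the paper's measurements:

```python
import math

def mad_and_rmse(pred, ref):
    """Mean absolute deviation and root mean square error of predicted
    ice thickness against reference (e.g. GPR) measurements, in metres."""
    n = len(pred)
    mad = sum(abs(p - r) for p, r in zip(pred, ref)) / n
    rmse = math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / n)
    return mad, rmse

pred = [100.0, 150.0, 210.0]  # hypothetical model output (m)
ref  = [110.0, 140.0, 220.0]  # hypothetical GPR reference (m)
mad, rmse = mad_and_rmse(pred, ref)
```

RMSE is always at least as large as MAD and grows faster with outliers, which is why the paper's RMSE (37.95 m) exceeding its MAD (24.28 m) signals a tail of larger errors.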

20 pages, 4198 KB  
Article
HiDRA-DCDNet: Dynamic Hierarchical Attention and Multi-Scale Context Fusion for Real-Time Remote Sensing Small-Target Detection
by Jiale Wang, Zhe Bai, Ximing Zhang, Yuehong Qiu, Fan Bu and Yuancheng Shao
Remote Sens. 2025, 17(13), 2195; https://doi.org/10.3390/rs17132195 - 25 Jun 2025
Viewed by 592
Abstract
Small-target detection in remote sensing presents three fundamental challenges: limited pixel representation of targets, multi-angle imaging-induced appearance variance, and complex background interference. This paper introduces a dual-component neural architecture comprising Hierarchical Dynamic Refinement Attention (HiDRA) and Densely Connected Dilated Block (DCDBlock) to address these challenges systematically. The HiDRA mechanism implements a dual-phase feature enhancement process: channel competition through bottleneck compression for discriminative feature selection, followed by spatial-semantic reweighting for foreground–background decoupling. The DCDBlock architecture synergizes multi-scale dilated convolutions with cross-layer dense connections, establishing persistent feature propagation pathways that preserve critical spatial details across network depths. Extensive experiments on AI-TOD, VisDrone, MAR20, and DOTA-v1.0 datasets demonstrate our method’s consistent superiority, achieving average absolute gains of +1.16% (mAP50), +0.93% (mAP95), and +1.83% (F1-score) over prior state-of-the-art approaches across all benchmarks. With 8.1 GFLOPs computational complexity and 2.6 ms inference speed per image, our framework demonstrates practical efficacy for real-time remote sensing applications, achieving superior accuracy–efficiency trade-off compared to existing approaches. Full article
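The multi-scale dilated convolutions in DCDBlock enlarge the receptive field without extra parameters; a minimal sketch of the standard receptive-field arithmetic for stacked stride-1 dilated convolutions (the 1/2/4 dilation schedule is a common illustrative choice, not necessarily this paper's):

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions:
    each layer with kernel k and dilation d adds (k - 1) * d pixels."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Three 3x3 convolutions with dilations 1, 2, 4:
rf_dilated = receptive_field([3, 3, 3], [1, 2, 4])  # 1 + 2 + 4 + 8 = 15
rf_plain   = receptive_field([3, 3, 3], [1, 1, 1])  # 1 + 2 + 2 + 2 = 7
```

The dilated stack more than doubles the receptive field of a plain stack at identical cost, which is exactly the trade-off that matters for small targets surrounded by wide background context.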

22 pages, 2286 KB  
Article
GPR-Based Leakage Reconstruction of Shallow-Buried Water Supply Pipelines Using an Improved UNet++ Network
by Qingqi Xu, Qinghua Liu and Shan Ouyang
Remote Sens. 2025, 17(13), 2174; https://doi.org/10.3390/rs17132174 - 25 Jun 2025
Viewed by 524
Abstract
Ground-penetrating radar (GPR) plays a critical role in detecting underground targets, particularly locating and characterizing leaks in buried pipelines. However, the complex nature of GPR images related to pipeline leaks, combined with the limitations of existing neural network-based inversion methods, such as insufficient feature extraction and low inversion accuracy, poses significant challenges for effective leakage reconstruction. To address these challenges, this paper proposes an enhanced UNet++-based model: the Multi-Scale Directional Network PlusPlus (MSDNet++). The network employs an encoder–decoder architecture, in which the encoder incorporates multi-scale directional convolutions with coordinate attention to extract and compress features across different scales effectively. The decoder fuses multi-level features through dense skip connections and further enhances the representation of critical information via coordinate attention, enabling the accurate inversion of dielectric constant images. Experimental results on both simulated and real-world data demonstrate that MSDNet++ can accurately invert the location and extent of buried pipeline leaks from GPR B-scan images. Full article
(This article belongs to the Special Issue Advanced Ground-Penetrating Radar (GPR) Technologies and Applications)

19 pages, 1306 KB  
Article
Root Cause Analysis of Cast Product Defects with Two-Branch Reasoning Network Based on Continuous Casting Quality Knowledge Graph
by Xiaojun Wu, Xinyi Wang, Yue She, Mengmeng Sun and Qi Gao
Appl. Sci. 2025, 15(13), 6996; https://doi.org/10.3390/app15136996 - 20 Jun 2025
Viewed by 759
Abstract
A variety of cast product defects may occur in the continuous casting process. By establishing a Continuous Casting Quality Knowledge Graph (C2Q-KG) focusing on the causes of cast product defects, enterprises can systematically sort out and express the relations between various production factors and cast product defects, which makes the reasoning process for the causes of cast product defects more objective and comprehensive. However, reasoning schemes for general KGs often use the same processing method to deal with different types of relations, without considering the difference in the number distribution of the head and tail entities in the relation, leading to a decrease in reasoning accuracy. In order to improve the reasoning accuracy of C2Q-KGs, this paper proposes a model based on a two-branch reasoning network. Our model classifies the continuous casting triples according to the number distribution of the head and tail entities in the relation and connects a two-branch reasoning network consisting of one connection layer and one capsule layer behind the convolutional layer. The connection layer is used to deal with the sparsely distributed entity-side reasoning task in the triple, while the capsule layer is used to deal with the densely distributed entity-side reasoning task in the triple. In addition, the Graph Attention Network (GAT) is introduced to enable our model to better capture the complex information hidden in the neighborhood of each entity and improve the overall reasoning accuracy. The experimental results show that compared with other cutting-edge methods on the continuous casting data set, our model significantly improves performance and infers more accurate root causes of cast product defects, which provides powerful guidance for enterprise production. Full article

17 pages, 956 KB  
Article
Comparative Analysis of Attention Mechanisms in Densely Connected Network for Network Traffic Prediction
by Myeongjun Oh, Sung Oh, Jongkyung Im, Myungho Kim, Joung-Sik Kim, Ji-Yeon Park, Na-Rae Yi and Sung-Ho Bae
Signals 2025, 6(2), 29; https://doi.org/10.3390/signals6020029 - 19 Jun 2025
Viewed by 1028
Abstract
Recently, STDenseNet (SpatioTemporal Densely connected convolutional Network) showed remarkable performance in predicting network traffic by leveraging the inductive bias of convolution layers. However, it is known that such convolution layers can only barely capture long-term spatial and temporal dependencies. To solve this problem, we propose Attention-DenseNet (ADNet), which effectively incorporates an attention module into STDenseNet to learn representations for long-term spatio-temporal patterns. Specifically, we explored the optimal positions and the types of attention modules in combination with STDenseNet. Our key findings are as follows: i) attention modules are very effective when positioned between the last dense module and the final feature fusion module, meaning that the attention module plays a key role in aggregating low-level local features with long-term dependency. Hence, the final feature fusion module can easily exploit both global and local information; ii) the best attention module is different depending on the spatio-temporal characteristics of the dataset. To verify the effectiveness of the proposed ADNet, we performed experiments on the Telecom Italia dataset, a well-known benchmark dataset for network traffic prediction. The experimental results show that, compared to STDenseNet, our ADNet improved RMSE performance by 3.72%, 2.84%, and 5.87% in call service (Call), short message service (SMS), and Internet access (Internet) sub-datasets, respectively. Full article

27 pages, 2547 KB  
Article
Comparative Analysis of Hybrid Deep Learning Models for Electricity Load Forecasting During Extreme Weather
by Altan Unlu and Malaquias Peña
Energies 2025, 18(12), 3068; https://doi.org/10.3390/en18123068 - 10 Jun 2025
Cited by 1 | Viewed by 1143
Abstract
Extreme weather events present some of the most severe natural threats to the electric grid, and accurate load forecasting during those events is essential for grid management and disaster preparedness. In this study, we evaluate the effectiveness of hybrid deep learning (DL) models for electrical load forecasting in the IEEE 118-bus system. Our analysis focuses on the Connecticut region during extreme weather. In addition, we identify multivariate models capable of multi-input and multi-output forecasting while incorporating weather data to improve forecasting accuracy. This research is divided into two case studies that analyze different combined DL model architectures. Case Study 1 evaluates CNN-Recurrent (RNN, LSTM, GRU, BiRNN, BiGRU, and BiLSTM) models with fully connected dense layers, which combine convolutional and recurrent neural networks to capture both spatial and temporal dependencies in the data. Case Study 2 evaluates hybrid CNN-Recurrent models with a fully connected dense layer that incorporate a flattening step before the recurrent layers to strengthen temporal learning. Based on the results obtained from our simulations, the hybrid CNN-GRU-FC (using BiGRU) model in Case Study 2 obtained the best performance, with an RMSE of 9.112 MW and MAPE of 11.68% during the hurricane period. The hybrid CNN-GRU-FC result demonstrates the accuracy advantage of bidirectional recurrent models for load forecasting under extreme weather conditions. Full article
(This article belongs to the Special Issue Machine Learning for Energy Load Forecasting)
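MAPE, the percentage metric quoted alongside RMSE above, is a short computation; the load values below are illustrative, not the study's data:

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error: mean of |actual - forecast| / |actual|,
    expressed in percent. Undefined when any actual value is zero."""
    terms = [abs(a - f) / abs(a) for a, f in zip(actual, forecast)]
    return 100.0 * sum(terms) / len(terms)

load_mw     = [100.0, 80.0, 125.0]   # hypothetical actual load (MW)
forecast_mw = [110.0, 76.0, 120.0]   # hypothetical forecast (MW)
err = mape(load_mw, forecast_mw)
```

Because MAPE normalizes each error by the actual load, it lets the 11.68% hurricane-period figure be compared across buses with very different absolute demand, something RMSE in MW cannot do.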

16 pages, 2979 KB  
Article
CNN-Assisted Effective Radar Active Jamming Suppression in Ultra-Low Signal-to-Jamming Ratio Conditions Using Bandwidth Enhancement
by Linbo Zhang, Xiuting Zou, Shaofu Xu, Mengmeng Chai, Wenbin Lu, Zhenbin Lv and Weiwen Zou
Electronics 2025, 14(11), 2296; https://doi.org/10.3390/electronics14112296 - 5 Jun 2025
Viewed by 714
Abstract
In complex scenarios, radar echoes are contaminated by strong jamming, which significantly degrades target detection. Detection under ultra-low signal-to-jamming ratio (SJR) conditions has thus become a major challenge when confronting active jamming such as smeared spectrum (SMSP) noise. Traditional jamming suppression methods are often limited by model dependency and loss of the useful signal. Convolutional neural networks (CNNs) have gained significant attention as an effective means of jamming suppression; however, in ultra-low SJR environments they struggle to suppress jamming, resulting in poor signal quality. In this study, we use a bandwidth enhancement method that allows CNNs to perform effective radar active jamming suppression in ultra-low SJR environments. Specifically, bandwidth enhancement reduces the correlation between target and jamming signals, yielding higher-quality target range profiles. Consequently, a modified CNN featuring a dense connection module can effectively suppress jamming even in ultra-low SJR scenarios. Experimental results show that with an input SJR of −30 dB and a bandwidth of 1.2 GHz, the output SJR reaches 13.25 dB, while the improvement factor (IF) gradually increases and saturates at ~15 dB. Building on the bandwidth enhancement method, the modified CNN further improves the IF by ~27 dB. This work is expected to offer a new technical pathway for suppressing radar active jamming in ultra-low SJR scenarios. Full article
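The reported SJR figures follow the standard dB definitions; the sketch below (plain NumPy, with toy constant-amplitude signals as an assumption) shows how an input SJR of −30 dB and an output SJR of 13.25 dB correspond to an overall improvement of 43.25 dB:

```python
import numpy as np

def sjr_db(signal, jamming):
    # Signal-to-jamming ratio in dB: 10 * log10(P_signal / P_jamming)
    p_signal = np.mean(np.abs(signal) ** 2)
    p_jamming = np.mean(np.abs(jamming) ** 2)
    return 10.0 * np.log10(p_signal / p_jamming)

def improvement_factor(sjr_in, sjr_out):
    # Jamming-suppression improvement factor in dB
    return sjr_out - sjr_in

# Toy amplitudes chosen so the input SJR is exactly -30 dB
target = np.full(1024, 1.0)               # unit-power stand-in for the echo
jamming = np.full(1024, np.sqrt(1000.0))  # jamming 1000x stronger in power

sjr_in = sjr_db(target, jamming)
print(round(sjr_in, 2))                              # → -30.0
print(round(improvement_factor(sjr_in, 13.25), 2))   # → 43.25
```

The abstract's separate ~15 dB and ~27 dB IF figures are measured quantities from the paper's experiments and are not reproduced by this toy calculation.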
(This article belongs to the Section Microwave and Wireless Communications)
17 pages, 11290 KB  
Article
Learning to Utilize Multi-Scale Feature Information for Crisp Power Line Detection
by Kai Li, Min Liu, Feiran Wang, Xinyang Guo, Geng Han, Xiangnan Bai and Changsong Liu
Electronics 2025, 14(11), 2175; https://doi.org/10.3390/electronics14112175 - 27 May 2025
Viewed by 477
Abstract
Power line detection (PLD) is a crucial task in the electric power industry, where accurate PLD forms the foundation for automated inspection. However, recent top-performing PLD methods tend to generate thick and noisy edge lines, complicating subsequent tasks. In this work, we propose a multi-scale feature-based PLD method named LUM-Net that detects power lines crisply and precisely. The algorithm uses EfficientNetV1 as the backbone network, ensuring effective feature extraction across scales. We developed a Coordinated Convolutional Block Attention Module (CoCBAM) to focus on critical features by emphasizing both channel-wise and spatial information, thereby refining the power lines and reducing noise. Furthermore, we constructed the Bi-Large Kernel Convolutional Block (BiLKB) as the decoder, leveraging large kernel convolutions and spatial selection mechanisms to capture more contextual information, supplemented by auxiliary small kernels that refine the extracted features. By integrating these components into a top-down dense connection mechanism, our method achieves effective multi-scale information interaction, significantly improving overall performance. Experimental results show that our method predicts crisp power line maps and achieves state-of-the-art performance on the PLDU (ODS = 0.969) and PLDM (ODS = 0.943) datasets. Full article
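The general idea behind combining channel-wise and spatial attention, as CoCBAM does, can be illustrated with a minimal NumPy sketch. This is a simplified stand-in, not the paper's exact module: the average-pool-plus-sigmoid gating below is an illustrative assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    # x: feature map of shape (C, H, W).
    # Global average pool over space, then gate each channel.
    pooled = x.mean(axis=(1, 2))            # -> (C,)
    return x * sigmoid(pooled)[:, None, None]

def spatial_attention(x):
    # Pool across channels, then gate each spatial location.
    pooled = x.mean(axis=0)                 # -> (H, W)
    return x * sigmoid(pooled)[None, :, :]

# Hypothetical feature map: 4 channels, 8x8 resolution
feat = np.ones((4, 8, 8))
out = spatial_attention(channel_attention(feat))
print(out.shape)  # → (4, 8, 8)
```

Gating preserves the feature map's shape while re-weighting responses, which is what lets such a module suppress background noise around thin structures like power lines.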