Search Results (3)

Search Parameters:
Keywords = CFENet

28 pages, 2302 KB  
Article
Context-Aware Feature Enhancement Network for Remote Sensing Image Semantic Segmentation
by Shufen Ruan, Quan Wan, Ruijuan Chen, Mengyang Hu, Xiuya Guo and Kunfang Song
Remote Sens. 2026, 18(4), 543; https://doi.org/10.3390/rs18040543 - 8 Feb 2026
Viewed by 394
Abstract
Semantic segmentation of remote sensing images plays a crucial role in accurate land-cover classification and environmental monitoring. However, existing semantic segmentation networks still struggle with multiscale feature extraction and context modeling. To address these challenges, this paper proposes a novel semantic segmentation network, termed Context-aware Feature Enhancement Network (CFENet). Specifically, we design a Wavelet-Based Pyramid Pooling Module (WPPM) based on Haar wavelet downsampling (HWD) to enhance the model’s ability to extract multiscale features. Meanwhile, an Adaptive Context Enhancement (ACE) module is introduced to adaptively focus on semantically significant regions, enabling joint enhancement along both spatial and channel dimensions. In addition, we develop a Multiscale Feature Reconstruction (MFR) module that performs multiscale decoding on the output of ACE in the decoding stage to further improve segmentation accuracy. The effectiveness of CFENet is validated on two benchmark datasets: ISPRS Vaihingen and ISPRS Potsdam. Experimental results show that, compared to baseline models, CFENet improves the mF1 by 3.16% and 2.38%, the OA by 1.54% and 2.80%, and the mIoU by 4.44% and 3.91%, respectively. Moreover, CFENet achieves reliable and satisfactory performance when evaluated against several representative mainstream methods.
(This article belongs to the Section Remote Sensing Image Processing)
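The Haar wavelet downsampling (HWD) that underlies the WPPM can be illustrated with a minimal single-level 2D Haar decomposition in NumPy. This is a generic sketch of the technique, not the authors' implementation; the function name `haar_downsample` is ours:

```python
import numpy as np

def haar_downsample(x):
    """Single-level 2D Haar decomposition of a (H, W) feature map.

    Returns four half-resolution subbands (LL, LH, HL, HH), so spatial
    detail is kept as extra channels rather than discarded -- the usual
    motivation for wavelet-based downsampling over strided pooling.
    """
    a = x[0::2, 0::2]  # top-left of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # low-low: local average (orthonormal scaling)
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh
```

Because the orthonormal Haar transform preserves the input's energy, nothing is lost at downsampling time; a network can concatenate the four subbands channel-wise and let later layers decide what to keep.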

24 pages, 42443 KB  
Article
YOLO-GP: A Multi-Scale Dangerous Behavior Detection Model Based on YOLOv8
by Bushi Liu, Cuiying Yu, Bolun Chen and Yue Zhao
Symmetry 2024, 16(6), 730; https://doi.org/10.3390/sym16060730 - 12 Jun 2024
Cited by 6 | Viewed by 3198
Abstract
In recent years, frequent chemical production safety incidents in China have been primarily attributed to dangerous behaviors by workers. Current monitoring methods predominantly rely on manual supervision, which is not only inefficient but also prone to errors in complex environments and with varying target scales, leading to missed or incorrect detections. To address this issue, we propose a deep learning-based object detection model, YOLO-GP. First, we utilize a grouped pointwise convolutional (GPConv) module of symmetric structure to facilitate information exchange and feature fusion in the channel dimension, thereby extracting more accurate feature representations. Building upon the YOLOv8n model, we integrate the symmetric structure convolutional GPConv module and design the dual-branch aggregation module (DAM) and Efficient Spatial Pyramid Pooling (ESPP) module to enhance the richness of gradient flow information and the capture of multi-scale features, respectively. Finally, we develop a channel feature enhancement network (CFE-Net) to strengthen inter-channel interactions, improving the model’s performance in complex scenarios. Experimental results demonstrate that YOLO-GP achieves a 1.56% and 11.46% improvement in the mAP@.5:.95 metric on a custom dangerous behavior dataset and a public Construction Site Safety Image Dataset, respectively, compared to the baseline model. This highlights its superiority in dangerous behavior object detection tasks. Furthermore, the enhancement in model performance provides an effective solution for improving accuracy and robustness, promising significant practical applications.
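The grouped pointwise (1x1) convolution idea behind GPConv can be sketched as a NumPy reference: each output channel group mixes only the channels of its own input group, which is what makes the operation cheap. This is a generic illustration under our own naming, not the paper's module:

```python
import numpy as np

def grouped_pointwise_conv(x, weight, groups):
    """1x1 (pointwise) convolution with channel groups, NumPy reference.

    x:      (C_in, H, W) feature map
    weight: (C_out, C_in // groups) kernel; C_in and C_out divisible by groups
    Each output group mixes only the channels of its matching input group,
    reducing the parameter count by a factor of `groups` vs. a dense 1x1 conv.
    """
    c_in, h, w = x.shape
    c_out = weight.shape[0]
    gin, gout = c_in // groups, c_out // groups
    out = np.empty((c_out, h, w))
    for g in range(groups):
        xg = x[g * gin:(g + 1) * gin].reshape(gin, -1)  # (gin, H*W)
        wg = weight[g * gout:(g + 1) * gout]            # (gout, gin)
        out[g * gout:(g + 1) * gout] = (wg @ xg).reshape(gout, h, w)
    return out
```

With `groups=1` this reduces to an ordinary 1x1 convolution; in a framework the same operation is typically expressed via a conv layer's `groups` argument rather than an explicit loop.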

19 pages, 9695 KB  
Article
A Context Feature Enhancement Network for Building Extraction from High-Resolution Remote Sensing Imagery
by Jinzhi Chen, Dejun Zhang, Yiqi Wu, Yilin Chen and Xiaohu Yan
Remote Sens. 2022, 14(9), 2276; https://doi.org/10.3390/rs14092276 - 9 May 2022
Cited by 47 | Viewed by 6587
Abstract
The complexity and diversity of buildings make it challenging to extract low-level and high-level features with strong feature representation by using deep neural networks in building extraction tasks. Meanwhile, deep neural network-based methods have many network parameters, which take up a lot of memory and time in training and testing. We propose a novel fully convolutional neural network called the Context Feature Enhancement Network (CFENet) to address these issues. CFENet comprises three modules: the spatial fusion module, the focus enhancement module, and the feature decoder module. First, the spatial fusion module aggregates the spatial information of low-level features to obtain buildings’ outline and edge information. Secondly, the focus enhancement module fully aggregates the semantic information of high-level features to filter the information of building-related attribute categories. Finally, the feature decoder module decodes the output of the above two modules to segment the buildings more accurately. In a series of experiments on the WHU Building Dataset and the Massachusetts Building Dataset, our CFENet balances efficiency and accuracy compared to the other four methods we compared, and achieves optimality on all five evaluation metrics: PA, PC, F1, IoU, and FWIoU. This indicates that CFENet can effectively enhance and fuse buildings’ low-level and high-level features, improving building extraction accuracy.
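Two of the evaluation metrics cited above, PA (pixel accuracy) and IoU, follow the standard confusion-matrix formulation. A minimal NumPy sketch of that formulation (not the authors' evaluation code; the function name is ours):

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Pixel Accuracy (PA) and per-class IoU from flat label arrays.

    pred, gt: integer label arrays of the same shape.
    Builds the num_classes x num_classes confusion matrix, then:
      PA  = correct pixels / all pixels
      IoU = intersection / union, per class
    """
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, g in zip(pred.ravel(), gt.ravel()):
        cm[g, p] += 1                              # rows: ground truth, cols: prediction
    pa = np.trace(cm) / cm.sum()                   # diagonal holds correct pixels
    inter = np.diag(cm).astype(float)
    union = cm.sum(0) + cm.sum(1) - np.diag(cm)    # pred + gt - intersection
    iou = inter / np.maximum(union, 1)             # guard classes absent from both
    return pa, iou
```

FWIoU and F1 are derived from the same confusion matrix (frequency-weighting the per-class IoU, and combining per-class precision/recall, respectively), so one matrix pass suffices for the whole metric suite.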
