
Image Processing and Pattern Recognition Based on Deep Learning for Sensing Applications—3rd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 31 December 2026 | Viewed by 3487

Special Issue Editors


Guest Editor
Department of Automation and Industrial Informatics, Faculty of Automatic Control and Computer Science, University POLITEHNICA of Bucharest, 060042 Bucharest, Romania
Interests: image acquisition; image processing; feature extraction; image classification; image segmentation; artificial neural networks; deep learning; wireless sensor networks; unmanned aerial vehicles; data fusion; data processing in medicine; data processing in agriculture

Guest Editor
Department of Automation and Industrial Informatics, Faculty of Automatic Control and Computer Science, University POLITEHNICA of Bucharest, 060042 Bucharest, Romania
Interests: convolutional neural networks; artificial intelligence; medical image processing; biomedical optical imaging; computer vision; computerised monitoring; data acquisition; image colour analysis; texture analysis; cloud computing

Special Issue Information

Dear Colleagues,

Pattern recognition for analyzing and interpreting images in sensing applications is today closely tied to artificial intelligence and, in particular, to neural networks based on deep learning. Current trends in the use of neural networks include the following: (a) improvements within established families to enhance statistical performance and efficiency; (b) transfer learning; (c) the use of multiple networks in more complex systems; (d) the merging of decisions made by individual networks; (e) the combination of efficient features with neural networks to improve detection and classification performance; and (f) multimodal approaches based on data collected from various sensors. Additionally, combining neural networks with other artificial intelligence classifiers can further improve performance. New deep learning models, for example, Vision-Language Models (VLMs), Long Short-Term Memory (LSTM) networks, Vision Transformers (ViTs), and Large Language Models (LLMs), have also been shown to improve detection, classification, and segmentation performance. Furthermore, sensors integrated with deep learning can improve sensing applications in various fields, such as healthcare diagnostics, anomaly detection, traffic prediction, precision agriculture, and smart home systems. Accurate image collection by sensors is of particular importance in achieving high performance, and special attention will be paid to data collection in various fields, including agriculture, medicine, the environment, and restricted areas.

This Special Issue aims to publish original research contributions concerning new deep neural network-based approaches to image processing and pattern recognition for sensing applications in various domains: remote sensing, crop monitoring, restricted-zone monitoring, support systems for medical diagnosis, emotion detection, and others.

The scope of this Special Issue includes (but is not limited to) the following research areas concerning image processing and pattern recognition with the aid of new artificial intelligence techniques for sensing applications:

  • Image processing;
  • Sensors for generating various image types: RGB, multispectral, and thermal;
  • Data collection and fusion from different sensors;
  • Multimodal approaches;
  • Pattern recognition;
  • Image segmentation;
  • Object classification;
  • Neural networks;
  • Deep learning;
  • Decision fusion;
  • Systems based on multiple neural networks;
  • The detection of regions of interest in remote images;
  • Industrial applications;
  • Sensing domain applications;
  • Precision agriculture applications;
  • Medical applications;
  • The monitoring of protected areas;
  • Disaster monitoring and assessment.

Prof. Dr. Dan Popescu
Prof. Dr. Loretta Ichim
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image processing
  • sensors for generating various image types: RGB, multispectral, thermal
  • data collection and fusion from different sensors
  • multimodal approaches
  • pattern recognition
  • image segmentation
  • object classification
  • neural networks
  • deep learning
  • decision fusion
  • systems based on multiple neural networks
  • the detection of regions of interest in remote images
  • industrial applications
  • sensing domain applications
  • precision agriculture applications
  • medical applications
  • the monitoring of protected areas
  • disaster monitoring and assessment

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (4 papers)


Research

30 pages, 2135 KB  
Article
SBM–Attention U-Net: A Hybrid Transformer Network for Liver Tumor Segmentation in Medical Images
by Yiru Chen, Xuefeng Li, Yang Du, Hui Jiang, Xiaohui Liu, Nan Ma and Xuemei Wang
Sensors 2026, 26(6), 1851; https://doi.org/10.3390/s26061851 - 15 Mar 2026
Viewed by 428
Abstract
This study proposes a novel liver and liver tumor segmentation model. The architecture integrates BiFormer into the bottom two layers of the Attention U-Net encoder to enhance global semantic context modeling and establish long-range pixel-wise dependencies. The proposed spatial–channel dual attention (SCDA) mechanism is incorporated into the first three encoder layers to refine fine-grained feature processing, particularly for the precise delineation of liver and tumor boundaries. Finally, a Mix Structure Block (MSB) is implemented within the decoder to optimize the fusion of deep semantic and shallow spatial features, thereby improving segmentation accuracy. Ablation experiments on three publicly available datasets yielded a mean Dice coefficient of 0.9377 and a mean IoU of 0.8889 on 3Dircadb, 0.9257 and 0.8704 on LITS, and 0.9611 and 0.9259 on CHAOS. These results validate the effectiveness of the proposed network. By enabling precise, automated segmentation directly from raw sensor-acquired medical images, the method enhances the diagnostic value of these imaging sensors, facilitating more accurate clinical decision-making.
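The Dice coefficient and IoU reported above are standard overlap metrics for segmentation masks. A minimal illustrative sketch of how they are computed from binary masks (not the authors' evaluation code):

```python
def dice_and_iou(pred, target):
    """Compute the Dice coefficient and IoU for two binary masks.

    pred, target: flat sequences of 0/1 labels of equal length.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    pred_sum = sum(pred)
    target_sum = sum(target)
    union = pred_sum + target_sum - intersection
    # Dice = 2|A∩B| / (|A|+|B|); IoU = |A∩B| / |A∪B|
    dice = 2 * intersection / (pred_sum + target_sum) if (pred_sum + target_sum) else 1.0
    iou = intersection / union if union else 1.0
    return dice, iou

# Example: 4-pixel masks with 2 overlapping foreground pixels
dice, iou = dice_and_iou([1, 1, 0, 0], [1, 1, 1, 0])  # → 0.8, 0.666...
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is consistent with the score pairs quoted in the abstract.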

29 pages, 2924 KB  
Article
Driven by Deformable Convolution and Multi-Plane Scale Constraint: A Hazy Image Dehazing–Stitching System
by Sheng Hu, Han Xiao, Cong Liu, Haina Song, Min Liu, Liang Li and Hongzhang Liu
Sensors 2026, 26(5), 1551; https://doi.org/10.3390/s26051551 - 1 Mar 2026
Viewed by 441
Abstract
Adverse weather conditions such as fog degrade image quality and impair deep learning-based image processing algorithms, while advanced driver assistance systems (ADASs) urgently demand image clarity and large-field-of-view perception in foggy environments. Existing image dehazing methods rarely consider the non-uniform, dense distribution of particles in fog, leading to severe attenuation of background information. Image stitching, owing to the low-brightness, low-texture characteristics of ADAS scenarios and differences between sensors, faces challenges such as difficult feature point extraction and matching, as well as poor stitching quality. To address these issues, this study proposes a non-uniform dehazing method based on Deformable Convolution v4 (DCNv4), designing a DCNv4-based transform-like network to achieve long-range dependence and adaptive spatial aggregation, combined with a lightweight Retinex-inspired Transformer for color correction and structure refinement. Meanwhile, a multi-plane scale constraint module built on the LightGlue feature matching network improves matching accuracy and homography estimation precision, and an adaptive fusion stitching method eliminates artifacts and transition zones. Experimental results show that the proposed method effectively improves feature matching accuracy and homography estimation precision, achieving Peak Signal-to-Noise Ratios (PSNRs) of 22.78 dB and 24.34 dB on the NH-HAZE and BRAS datasets, respectively, surpassing existing methods. This provides a reliable environmental perception solution for autonomous driving in foggy environments, verifying its effectiveness and practicality.
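PSNR, the quality metric quoted above, relates the mean squared error between a restored image and its reference to the peak intensity. A minimal sketch for 8-bit images (illustrative only, not the authors' evaluation code):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images: infinite PSNR
    return 10 * math.log10(max_val ** 2 / mse)

# Example: two tiny "images" differing by at most 2 intensity levels
value = psnr([100, 120, 140], [101, 119, 142])  # ≈ 45.1 dB
```

Higher values indicate better restoration; the 22–24 dB scores reported above are typical of real-world dehazing benchmarks, where a perfect reconstruction is unattainable.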

23 pages, 4825 KB  
Article
Degradation-Aware Dynamic Kernel Generation Network for Hyperspectral Super-Resolution
by Huadong Liu, Haifeng Liang and Qian Wang
Sensors 2026, 26(4), 1362; https://doi.org/10.3390/s26041362 - 20 Feb 2026
Viewed by 465
Abstract
To address the difficulty of reconstructing high-resolution hyperspectral images under dynamic degradation, the poor adaptability of traditional static degradation models, and oversimplified noise modeling, this paper proposes a degradation-aware dynamic Fourier network (DADFN) for hyperspectral super-resolution. The method employs a dual-channel split module to decouple and encode spectral and spatial degradation information, realizes independent mapping of spectral and spatial features via a multi-layer perceptron module, and integrates a spectral–spatial dynamic cross-attention fusion module to generate 3D dynamic blur kernels tailored to different bands and spatial positions. A multi-scale spectral–spatial collaborative constraint (MSSCC) loss function ensures the coordinated optimization of modeling rationality, spectral continuity, and spatial detail fidelity. Experiments on the CAVE and Harvard benchmark datasets demonstrate that DADFN outperforms the baseline methods on all evaluation metrics, demonstrating strong robustness in real-world complex degradation scenarios. The method offers a solution that balances physical interpretability and performance for hyperspectral image super-resolution and holds significant value for advancing applications in remote sensing monitoring, precision agriculture, and other related fields.
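The core idea behind dynamic blur kernels, generating a different kernel for each spatial position rather than applying one static kernel everywhere, can be illustrated with a toy 1-D sketch (the kernel-generation rule below is an assumption for illustration and is unrelated to the DADFN architecture):

```python
def dynamic_blur_1d(signal, kernel_fn, ksize=3):
    """Apply a position-dependent blur: kernel_fn(i) returns the kernel for index i."""
    half = ksize // 2
    out = []
    for i in range(len(signal)):
        kernel = kernel_fn(i)  # dynamic: a fresh kernel per position
        acc = 0.0
        for k in range(-half, half + 1):
            j = min(max(i + k, 0), len(signal) - 1)  # replicate-pad borders
            acc += kernel[k + half] * signal[j]
        out.append(acc)
    return out

# Hypothetical rule: smoothing strength grows with position
def kernel_fn(i):
    w = 0.1 + 0.05 * i
    return [w, 1 - 2 * w, w]  # normalized 3-tap kernel

smoothed = dynamic_blur_1d([0, 0, 10, 0, 0], kernel_fn)
```

An impulse placed at index 2 spreads asymmetrically, because each output position uses its own kernel; in DADFN the kernels additionally vary per spectral band and are predicted by the network rather than by a fixed rule.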

21 pages, 13872 KB  
Article
An Improved Lightweight Model for Protected Wildlife Detection in Camera Trap Images
by Zengjie Du, Dasheng Wu, Qingqing Wen, Fengya Xu, Zhongbin Liu, Cheng Li and Ruikang Luo
Sensors 2025, 25(23), 7331; https://doi.org/10.3390/s25237331 - 2 Dec 2025
Viewed by 1563
Abstract
Effective monitoring of protected wildlife is crucial for biodiversity conservation. While camera traps provide valuable data for ecological observation, existing deep learning models often suffer from low accuracy in detecting rare species and high computational costs, hindering their deployment on edge devices. To address these challenges, this study proposes YOLO11-APS, an improved lightweight model for protected wildlife detection. It enhances YOLO11n by integrating the self-Attention and Convolution (ACmix) module, the Partial Convolution (PConv) module, and the SlimNeck paradigm. These improvements strengthen feature extraction under complex conditions while reducing computational costs. Experimental results demonstrate that YOLO11-APS achieves superior detection performance compared to the baseline model, attaining a precision of 92.7%, a recall of 87.0%, an mAP@0.5 of 92.6%, and an mAP@0.5:0.95 of 62.2%. In terms of model lightweighting, YOLO11-APS reduces the number of parameters, floating-point operations, and model size by 10.1%, 11.1%, and 9.5%, respectively. It achieves an optimal balance between accuracy and model complexity, outperforming existing mainstream lightweight detection models. Furthermore, tests on unseen wildlife data confirm its strong transferability and robustness. This work provides an efficient deep learning tool for automated wildlife monitoring in protected areas, facilitating the development of intelligent ecological sensing systems.
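The precision and recall figures above follow the standard detection definitions, computed from true-positive, false-positive, and false-negative counts at a chosen IoU threshold. A minimal illustrative sketch (the counts are hypothetical, not taken from the paper):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: 87 correct detections, 7 spurious boxes, 13 missed animals
p, r = precision_recall(87, 7, 13)  # → ≈0.925, 0.87
```

mAP@0.5 extends this by averaging precision over all recall levels per class at IoU ≥ 0.5, then averaging across classes; mAP@0.5:0.95 repeats that over IoU thresholds from 0.5 to 0.95 in steps of 0.05.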
