Artificial Intelligence Innovations in Image Processing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 July 2025 | Viewed by 3382

Special Issue Editors


Guest Editor
Computer Science Department, University of Northern British Columbia, Prince George, BC V2N 4Z9, Canada
Interests: big data; artificial intelligence; machine learning; association rule mining; natural language processing

Guest Editor
Computer Science Department, University of Northern British Columbia, Prince George, BC V2N 4Z9, Canada
Interests: computer science; election; information studies; mathematics; statistics

Special Issue Information

Dear Colleagues,

The rapid evolution of artificial intelligence (AI) technologies has profoundly impacted numerous scientific fields, with image processing standing out as a particularly transformative area. Modern AI-driven image processing techniques, such as deep learning and neural networks, have revolutionized how images are analyzed, enhanced, and interpreted across a wide range of applications, from medical diagnostics to autonomous driving. The integration of AI into image processing not only improves accuracy and efficiency but also opens up new avenues for research and application that were previously thought impossible.

This Special Issue aims to showcase the latest innovations and research in the use of AI in image processing. It seeks to bring together pioneering studies and reviews that explore cutting-edge AI technologies, their application in image processing, and the challenges and solutions associated with these advancements. The Special Issue will serve as a platform for researchers and practitioners to share their insights, findings, and predictions about the future of image processing technologies. As AI continues to evolve, its integration with image processing will likely play a pivotal role in the advancement of many scientific and industrial fields, aligning perfectly with the journal’s scope of highlighting innovative and impactful research.

Here is a list of suggested themes for contributions to this Special Issue. Please note that the themes listed here are not exhaustive, and we encourage submissions that explore other innovative aspects of AI in image processing.

  1. Advancements in Neural Networks for Image Enhancement: Studies focusing on how AI algorithms, particularly deep learning, are being used to improve image clarity, resolution, and overall quality.
  2. AI in Medical Image Processing: Research on the application of AI in diagnosing diseases, enhancing imaging techniques, and surgical planning.
  3. Machine Learning Techniques for Real-Time Image Processing: Papers exploring the use of AI in real-time scenarios such as video surveillance and live data capture.
  4. AI-Powered Image Segmentation and Recognition: Innovative approaches to object detection, scene recognition, and segmentation tasks using AI.
  5. Integrative AI Approaches for Multidisciplinary Applications: Studies showing the application of image processing AI in areas such as environmental monitoring, automated manufacturing, and traffic management.
  6. Computer Vision and Deep Learning: Explorations into how deep learning models are advancing computer vision technologies, enhancing capabilities in areas like pattern recognition, anomaly detection, and automated analysis.
  7. Ethical Considerations and Challenges in AI-Driven Image Processing: Discussions on the implications of AI in privacy, bias, and data security within the realm of image processing.

Through this Special Issue, we aim to provide a comprehensive overview of the current state and future potential of AI in the field of image processing, fostering a deeper understanding and further innovation in this exciting and rapidly evolving field. We welcome diverse perspectives and novel methodologies that push the boundaries of what is possible in AI technologies and image analysis.

We look forward to receiving your contributions.

Dr. Fan Jiang
Prof. Dr. Liang Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • image analysis
  • deep learning
  • computer vision
  • convolutional neural networks
  • image segmentation
  • real-time processing
  • machine learning algorithms
  • automated diagnosis
  • image reconstruction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

21 pages, 4239 KiB  
Article
Real-Time Multi-Scale Barcode Image Deblurring Based on Edge Feature Guidance
by Chenbo Shi, Xin Jiang, Xiangyu Zhang, Changsheng Zhu, Xiaowei Hu, Guodong Zhang, Yuejia Li and Chun Zhang
Electronics 2025, 14(7), 1298; https://doi.org/10.3390/electronics14071298 - 25 Mar 2025
Viewed by 189
Abstract
Barcode technology plays a crucial role in automatic identification and data acquisition systems, with extensive applications in retail, warehousing, healthcare, and industrial automation. However, barcode images often suffer from blurriness due to lighting conditions, camera quality, motion blur, and noise, adversely affecting their readability and system performance. This paper proposes a multi-scale real-time deblurring method based on edge feature guidance. The proposed multi-scale deblurring network integrates an edge feature fusion module (EFFM) to better restore image edges. Additionally, we introduce a feature filtering mechanism (FFM), which effectively suppresses noise interference by precisely filtering and enhancing critical signal features. Moreover, by incorporating wavelet reconstruction loss, the method significantly improves the restoration of details and textures. Extensive experiments on various barcode datasets demonstrate that our method significantly enhances barcode clarity and scanning accuracy, especially in noisy environments. Furthermore, our algorithm ensures robustness and real-time performance. The research results indicate that our method holds significant promise for enhancing barcode image processing, with potential applications in retail, logistics, inventory management, and industrial automation. Full article
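The wavelet reconstruction loss mentioned in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation: it assumes a single-level Haar decomposition and an L1 penalty over the sub-bands, which is one common way such a loss is built. Penalizing the high-frequency bands (LH, HL, HH) pushes the network to recover edges and texture, not just low-frequency content.

```python
import numpy as np

def haar_decompose(img):
    """One-level 2D Haar decomposition of an even-sized grayscale image.
    Returns the (LL, LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0  # low-frequency approximation
    lh = (a - b + c - d) / 4.0  # horizontal detail
    hl = (a + b - c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh

def wavelet_reconstruction_loss(pred, target):
    """L1 distance between the Haar sub-bands of the restored and sharp images."""
    loss = 0.0
    for p, t in zip(haar_decompose(pred), haar_decompose(target)):
        loss += np.mean(np.abs(p - t))
    return loss

sharp = np.random.rand(64, 64)
# identical images give zero loss; any mismatch in the sub-bands is penalized
perfect = wavelet_reconstruction_loss(sharp, sharp)
```

In a training loop this term would be added to a pixel-wise loss, weighted by a hyperparameter.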
(This article belongs to the Special Issue Artificial Intelligence Innovations in Image Processing)

20 pages, 29995 KiB  
Article
Parathyroid Gland Detection Based on Multi-Scale Weighted Fusion Attention Mechanism
by Wanling Liu, Wenhuan Lu, Yijian Li, Fei Chen, Fan Jiang, Jianguo Wei, Bo Wang and Wenxin Zhao
Electronics 2025, 14(6), 1092; https://doi.org/10.3390/electronics14061092 - 10 Mar 2025
Viewed by 396
Abstract
While deep learning techniques, such as convolutional neural networks (CNNs), show significant potential in medical applications, real-time detection of parathyroid glands (PGs) during complex surgeries remains insufficiently explored, posing challenges for surgical accuracy and outcomes. Previous studies highlight the importance of leveraging prior knowledge, such as shape, for feature extraction in detection tasks. However, they fail to address the critical multi-scale variability of PG objects, resulting in suboptimal performance and efficiency. In this paper, we propose an end-to-end framework, MSWF-PGD, for Multi-Scale Weighted Fusion Parathyroid Gland Detection. To improve accuracy and efficiency, our approach extracts feature maps from convolutional layers at multiple scales and re-weights them using cluster-aware multi-scale alignment, considering diverse attributes such as the size, color, and position of PGs. Additionally, we introduce Multi-Scale Aggregation to enhance scale interactions and enable adaptive multi-scale feature fusion, providing precise and informative locality information for detection. Extensive comparative experiments and ablation studies on the parathyroid dataset (PGsdata) demonstrate the proposed framework's superiority in accuracy and real-time efficiency, outperforming state-of-the-art models such as RetinaNet, FCOS, and YOLOv8. Full article
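The weighted multi-scale fusion idea at the core of this abstract can be sketched in a few lines of numpy. This is an illustrative simplification, not the MSWF-PGD architecture: it assumes softmax-normalized per-scale scores (in the paper these would come from the cluster-aware alignment) and nearest-neighbour upsampling to bring coarse feature maps to the finest resolution before blending.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def weighted_multiscale_fusion(feature_maps, scores):
    """Upsample each coarse feature map to the finest resolution and blend
    them with softmax-normalized per-scale weights."""
    h, w = feature_maps[0].shape
    weights = softmax(np.asarray(scores, dtype=float))
    fused = np.zeros((h, w))
    for fm, wt in zip(feature_maps, weights):
        # nearest-neighbour upsampling to the target resolution
        ry, rx = h // fm.shape[0], w // fm.shape[1]
        fused += wt * np.repeat(np.repeat(fm, ry, axis=0), rx, axis=1)
    return fused

# three pyramid levels, coarser maps at half and quarter resolution
maps = [np.ones((32, 32)), 2 * np.ones((16, 16)), 3 * np.ones((8, 8))]
fused = weighted_multiscale_fusion(maps, scores=[1.0, 1.0, 1.0])
```

With equal scores the result is a plain average of the upsampled maps; learned scores would instead emphasize the scale that best matches each object's size.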
(This article belongs to the Special Issue Artificial Intelligence Innovations in Image Processing)

19 pages, 4790 KiB  
Article
Real-Time High Dynamic Equalization Industrial Imaging Enhancement Based on Fully Convolutional Network
by Chenbo Shi, Xiangqun Ren, Yuanzheng Mo, Guodong Zhang, Shaojia Yan, Yu Wang and Changsheng Zhu
Electronics 2025, 14(3), 547; https://doi.org/10.3390/electronics14030547 - 29 Jan 2025
Viewed by 600
Abstract
Severe reflections on the surfaces of smooth objects can result in low dynamic range and uneven illumination in images, which negatively impacts downstream tasks such as defect detection and QR code recognition on images of smooth workpieces. Consequently, this paper proposes a novel approach to real-time high dynamic equalization imaging based on a fully convolutional network, termed Multi-exposure Image Fusion with Multi-dimensional Attention Mechanism and Training Storage Units (MEF-AT). Specifically, this paper innovatively proposes using training storage units, which utilize intermediate results during network training as auxiliary images, to remove uneven illumination and enhance image dynamic range effectively. Furthermore, by integrating a multi-dimensional attention mechanism into the backbone network, the model can more efficiently extract and utilize critical image information. Additionally, this paper introduces a Deep Guided Filter (DGF) with learnable parameters, which upsamples the weight maps generated by the network, thus better adapting to complex industrial scenarios and producing higher quality fused images. An image evaluation metric assessing lighting uniformity is introduced to thoroughly evaluate the proposed method's performance. Given the lack of an MEF dataset for smooth workpieces, this paper collects a new dataset for multi-exposure fusion tasks on metallic workpieces. Our method takes less than 4 ms to process four 2K images on an RTX 3090 GPU. Both qualitative and quantitative experimental results demonstrate our method's superior comprehensive performance on proprietary industrial and public datasets. Full article
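The weight-map-based multi-exposure fusion (MEF) that this paper builds on can be shown in miniature. This sketch is not MEF-AT: it uses a hand-crafted "well-exposedness" weight (a Gaussian around mid-grey, a classic choice in exposure fusion) where the paper's network learns the weight maps, and it omits the guided-filter upsampling entirely.

```python
import numpy as np

def exposure_weights(img, sigma=0.2):
    """Per-pixel well-exposedness weight: high near mid-grey, low at the
    over- and under-exposed extremes. Assumes intensities in [0, 1]."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(stack):
    """Normalize the per-image weight maps across the stack and blend."""
    w = np.stack([exposure_weights(im) for im in stack])
    w /= w.sum(axis=0, keepdims=True) + 1e-12
    return (w * np.stack(stack)).sum(axis=0)

# an under-exposed and an over-exposed capture of the same flat scene
dark = np.full((8, 8), 0.1)
bright = np.full((8, 8), 0.9)
fused = fuse_exposures([dark, bright])
```

Because 0.1 and 0.9 are symmetric about mid-grey they receive equal weight here; a learned network would instead weight whichever exposure preserves local detail.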
(This article belongs to the Special Issue Artificial Intelligence Innovations in Image Processing)

20 pages, 9193 KiB  
Article
Underwater Image Enhancement Based on Difference Convolution and Gaussian Degradation URanker Loss Fine-Tuning
by Jiangzhong Cao, Zekai Zeng, Hanqiang Lao and Huan Zhang
Electronics 2024, 13(24), 5003; https://doi.org/10.3390/electronics13245003 - 19 Dec 2024
Viewed by 799
Abstract
Underwater images often suffer from degradation such as color distortion and blurring due to light absorption and scattering. It is essential to utilize underwater image enhancement (UIE) methods to acquire high-quality images. Convolutional networks are commonly used for UIE tasks, but their learning capacity is still underexplored. In this paper, a UIE network based on difference convolution is proposed. Difference convolution enables the model to better capture image gradients and edge information, thereby enhancing the network's generalization capability. To further improve performance, attention-based fusion and normalization modules are incorporated into the model. Additionally, to mitigate the impact of the absence of authentic reference images in datasets, a URanker loss module based on Gaussian degradation is introduced during fine-tuning. The input images are subjected to Gaussian degradation, and the image quality assessment model URanker is utilized to predict the scores of the enhanced images before and after degradation. The model is further fine-tuned using the score difference between the two. Extensive experimental results validate the outstanding performance of the proposed method in UIE tasks. Full article
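Difference convolution, as used in this paper, can be demonstrated with a central-difference variant in numpy. This is an illustrative sketch under the assumption that the paper's operator follows the common central-difference formulation (vanilla convolution minus a centre-anchored term); theta is a hypothetical blending parameter.

```python
import numpy as np

def central_difference_conv(img, kernel, theta=0.7):
    """Central difference convolution (valid padding):
    out = conv(img, k) - theta * sum(k) * centre_pixel
    which is equivalent to convolving over differences from the patch centre,
    making the response sensitive to gradients and edges."""
    kh, kw = kernel.shape
    h, w = img.shape
    oh, ow = h - kh + 1, w - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = img[i:i + kh, j:j + kw]
            centre = patch[kh // 2, kw // 2]
            out[i, j] = (patch * kernel).sum() - theta * kernel.sum() * centre
    return out

# on a constant image the difference term cancels the vanilla term (theta=1),
# so the operator responds only where intensity actually changes
flat = np.ones((6, 6))
k = np.random.rand(3, 3)
resp = central_difference_conv(flat, k, theta=1.0)
```

Setting theta = 0 recovers ordinary convolution, so the operator interpolates between intensity and gradient sensitivity.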
(This article belongs to the Special Issue Artificial Intelligence Innovations in Image Processing)

30 pages, 13159 KiB  
Article
GLMAFuse: A Dual-Stream Infrared and Visible Image Fusion Framework Integrating Local and Global Features with Multi-Scale Attention
by Fu Li, Yanghai Gu, Ming Zhao, Deji Chen and Quan Wang
Electronics 2024, 13(24), 5002; https://doi.org/10.3390/electronics13245002 - 19 Dec 2024
Viewed by 759
Abstract
Integrating infrared and visible-light images facilitates a more comprehensive understanding of scenes by amalgamating dual-sensor data derived from identical environments. Traditional CNN-based fusion techniques are predominantly confined to local feature emphasis due to their inherently limited receptive fields. Conversely, Transformer-based models tend to prioritize global information, which can lead to a deficiency in feature diversity and detail retention. Furthermore, methods reliant on single-scale feature extraction are inadequate for capturing extensive scene information. To address these limitations, this study presents GLMAFuse, an innovative dual-stream encoder–decoder network, which utilizes a multi-scale attention mechanism to harmoniously integrate global and local features. This framework is designed to maximize the extraction of multi-scale features from source images while effectively synthesizing local and global information across all layers. We introduce the global-aware and local embedding (GALE) module to adeptly capture and merge global structural attributes and localized details from infrared and visible imagery via a parallel dual-branch architecture. Additionally, the multi-scale attention fusion (MSAF) module is engineered to optimize attention weights at the channel level, facilitating an enhanced synergy between high-frequency edge details and global backgrounds. This promotes effective interaction and fusion of dual-modal features. Extensive evaluations using standard datasets demonstrate that GLMAFuse surpasses the existing leading methods in both qualitative and quantitative assessments, highlighting its superior capability in infrared and visible image fusion. On the TNO and MSRS datasets, our method achieves outstanding performance across multiple metrics, including EN (7.15, 6.75), SD (46.72, 47.55), SF (12.79, 12.56), MI (2.21, 3.22), SCD (1.75, 1.80), VIF (0.79, 1.08), Qabf (0.58, 0.71), and SSIM (0.99, 1.00). Full article
(This article belongs to the Special Issue Artificial Intelligence Innovations in Image Processing)
