Deep Learning-Based Image Restoration and Object Identification

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 June 2024 | Viewed by 4486

Special Issue Editors

Guest Editor
Key Laboratory of Manufacturing Industrial Integrated, Shenyang University, Shenyang 110044, China
Interests: image restoration; object tracking and re-identification

Guest Editor
School of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen 518055, China
Interests: object tracking; action recognition; image restoration

Guest Editor
State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
Interests: image restoration; object detection and tracking

Special Issue Information

Dear Colleagues,

Image restoration and object identification are among the most challenging tasks in computer vision. Image restoration is essential to the success of subsequent stages of a vision pipeline, such as detection and segmentation, because it recovers useful textural and structural information and suppresses the influence of irrelevant information. Object identification is a computer vision technology that recognizes instances of semantic objects (such as humans, buildings, or cars) in images and videos; it has attracted increasing attention in recent years due to its wide range of applications, including security monitoring, autonomous driving, transportation surveillance, and robotic vision. This Special Issue aims to explore recent advances and trends in the use of deep learning and computer vision methods for image restoration and object identification, and it seeks original contributions that point out possible ways to deal with image data recovery and identification. Topics include, but are not limited to, deep learning techniques, low-level image processing, image restoration, object recognition/detection, and person/vehicle re-identification.

Dr. Qiang Wang
Dr. Weihong Ren
Dr. Huijie Fan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image restoration
  • object identification
  • person re-identification
  • object detection and tracking
  • autonomous driving
  • scene understanding
  • transfer learning

Published Papers (3 papers)


Research

15 pages, 606 KiB  
Article
Towards Super Compressed Neural Networks for Object Identification: Quantized Low-Rank Tensor Decomposition with Self-Attention
by Baichen Liu, Dongwei Wang, Qi Lv, Zhi Han and Yandong Tang
Electronics 2024, 13(7), 1330; https://doi.org/10.3390/electronics13071330 - 2 Apr 2024
Viewed by 568
Abstract
Deep convolutional neural networks have a large number of parameters and require a significant number of floating-point operations during computation, which limits their deployment in situations where storage space is limited and computational resources are insufficient, such as on mobile phones and small robots. Many network compression methods have been proposed to address these issues, including pruning, low-rank decomposition, and quantization. However, these methods typically fail to achieve a significant compression ratio in terms of parameter count; even when high compression rates are achieved, the network's performance often deteriorates significantly, making it difficult to perform tasks effectively. In this study, we propose a more compact representation for neural networks, named Quantized Low-Rank Tensor Decomposition (QLTD), to super compress deep convolutional neural networks. First, we employed low-rank Tucker decomposition to compress the pre-trained weights. Subsequently, to further exploit redundancies within the core tensor and factor matrices obtained through Tucker decomposition, we employed vector quantization to partition and cluster the weights. Simultaneously, we introduced a self-attention module for each core tensor and factor matrix to enhance the training responsiveness in critical regions. Object identification results on CIFAR10 showed that QLTD achieved a compression ratio of 35.43× with less than a 1% loss in accuracy, and a compression ratio of 90.61× with less than a 2% loss in accuracy. QLTD thus achieves a significant compression ratio in terms of parameter count while maintaining a good balance between parameter compression and identification accuracy.
(This article belongs to the Special Issue Deep Learning-Based Image Restoration and Object Identification)
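For readers unfamiliar with the two operations the abstract pairs together, the sketch below illustrates Tucker decomposition of a single convolution weight followed by vector quantization of the rows of the resulting core and factor matrices. It is a minimal illustration only: the use of tensorly and scikit-learn, the rank choice, and the codebook size are assumptions, not the authors' implementation (which additionally attaches a self-attention module to each component during fine-tuning).

```python
# Minimal sketch: Tucker-decompose one conv weight, then vector-quantize each component.
# Library choices (tensorly, scikit-learn), ranks, and codebook size are illustrative.
import numpy as np
from tensorly.decomposition import tucker
from sklearn.cluster import KMeans

def compress_conv_weight(weight, ranks=(16, 8, 3, 3), codebook_size=16):
    """weight: (C_out, C_in, kH, kW) array. Returns (codes, codebook) pairs per component."""
    core, factors = tucker(weight, rank=list(ranks))       # low-rank Tucker decomposition
    compressed = []
    for mat in [core.reshape(core.shape[0], -1)] + list(factors):
        k = min(codebook_size, mat.shape[0])
        km = KMeans(n_clusters=k, n_init=4).fit(mat)       # vector-quantize the rows
        compressed.append((km.labels_, km.cluster_centers_))
    return compressed                                       # indices + small codebooks replace dense floats

# Example: a 64x32x3x3 convolution stored as per-row codebook indices plus codebooks.
w = np.random.randn(64, 32, 3, 3)
parts = compress_conv_weight(w)
```

Storing only the cluster indices and the shared codebooks is what yields the large parameter-count reductions the abstract reports; the accuracy recovery comes from the subsequent fine-tuning stage.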

17 pages, 6098 KiB  
Article
MIX-Net: Hybrid Attention/Diversity Network for Person Re-Identification
by Minglang Li, Zhiyong Tao, Sen Lin and Kaihao Feng
Electronics 2024, 13(5), 1001; https://doi.org/10.3390/electronics13051001 - 6 Mar 2024
Viewed by 647
Abstract
Person re-identification (Re-ID) networks are often affected by factors such as pose variations, changes in viewpoint, and occlusion, leading to the extraction of features that contain a considerable amount of irrelevant information. Moreover, most research has struggled to endow features with both attentive and diversified information at the same time. To concurrently extract attentive yet diverse pedestrian features, we combined the strengths of convolutional neural network (CNN) attention and self-attention. By integrating the extracted latent features, we introduced a Hybrid Attention/Diversity Network (MIX-Net), which captures attentive but diverse information from person images via a fusion of attention branches and attention-suppression branches. Additionally, to extract latent information from secondary important regions and enrich the diversity of features, we designed a novel Discriminative Part Mask (DPM). Experimental results establish the strong competitiveness of our approach, particularly in distinguishing individuals with similar attributes.
(This article belongs to the Special Issue Deep Learning-Based Image Restoration and Object Identification)
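The attention-suppression idea mentioned above can be illustrated with a short PyTorch sketch: mask out the most salient spatial positions of a feature map so a second branch is forced to learn from secondary regions, then concatenate the two pooled embeddings. The threshold, feature shapes, and pooling scheme are illustrative assumptions, not MIX-Net's actual architecture.

```python
# Illustrative sketch of an attention-suppression branch for Re-ID feature diversity.
# Shapes, keep_ratio, and the pooling/concatenation scheme are assumptions for illustration.
import torch
import torch.nn.functional as F

def suppress_salient_regions(feat, keep_ratio=0.7):
    """feat: (B, C, H, W). Zero out the top (1 - keep_ratio) most activated spatial positions."""
    saliency = feat.abs().mean(dim=1, keepdim=True)            # channel-pooled attention map (B, 1, H, W)
    b, _, h, w = saliency.shape
    flat = saliency.view(b, -1)
    k = max(1, int(keep_ratio * h * w))
    thresh = flat.kthvalue(k, dim=1, keepdim=True).values      # per-image activation threshold
    mask = (flat <= thresh).float().view(b, 1, h, w)           # 1 where activation is "secondary"
    return feat * mask                                         # suppressed features for the diversity branch

# Usage: the attentive branch pools the raw feature map, the diversity branch pools the
# suppressed map, and the two embeddings are concatenated into one descriptor.
feat = torch.randn(8, 512, 16, 8)                              # e.g. backbone output for 8 person crops
attentive = F.adaptive_avg_pool2d(feat, 1).flatten(1)
diverse = F.adaptive_avg_pool2d(suppress_salient_regions(feat), 1).flatten(1)
descriptor = torch.cat([attentive, diverse], dim=1)            # (8, 1024) combined embedding
```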

Review

23 pages, 3374 KiB  
Review
A Review: Remote Sensing Image Object Detection Algorithm Based on Deep Learning
by Chenshuai Bai, Xiaofeng Bai and Kaijun Wu
Electronics 2023, 12(24), 4902; https://doi.org/10.3390/electronics12244902 - 6 Dec 2023
Viewed by 2227
Abstract
Target detection in optical remote sensing images using deep-learning technologies has a wide range of applications in urban building detection, road extraction, crop monitoring, and forest fire monitoring, providing strong support for environmental monitoring, urban planning, and agricultural management. This paper reviews the research progress of the YOLO series, the SSD series, the candidate-region (two-stage) series, and Transformer-based algorithms. It summarizes object detection algorithms built on common improvement strategies such as supervision, attention mechanisms, and multi-scale features, and it compares and analyzes the performance of different algorithms on common remote sensing image data sets. Finally, future research challenges, improvement directions, and open issues are discussed, offering valuable ideas for subsequent research.
(This article belongs to the Special Issue Deep Learning-Based Image Restoration and Object Identification)
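As a concrete reference point for the candidate-region (two-stage) family surveyed above, the snippet below runs a pretrained torchvision Faster R-CNN on a single image tile. It uses generic COCO weights rather than a remote-sensing-specific model, and the file path is a placeholder; it is only meant to show the basic inference interface such detectors share.

```python
# Illustrative only: generic two-stage detector inference with torchvision (COCO weights,
# not a remote-sensing model). "remote_sensing_tile.png" is a placeholder path.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

tile = to_tensor(Image.open("remote_sensing_tile.png").convert("RGB"))  # (C, H, W) in [0, 1]
with torch.no_grad():
    pred = model([tile])[0]                     # dict with "boxes", "labels", "scores"

keep = pred["scores"] > 0.5                     # simple confidence threshold
print(pred["boxes"][keep], pred["labels"][keep])
```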
