Applications and Challenges of Image Processing in Smart Environment

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Electronic Multimedia".

Deadline for manuscript submissions: 15 May 2025 | Viewed by 1830

Special Issue Editors


Guest Editor
School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
Interests: deep learning; image processing; power electronics applications

Special Issue Information

Dear Colleagues,

The purpose of this Special Issue is to explore the latest advancements, applications, and challenges in the field of image processing in the context of smart environments. With the rapid development of digital technology, image processing has become an essential component in various smart systems and applications, revolutionizing industries such as healthcare, transportation, surveillance, and automation.

This Special Issue aims to gather original research papers, reviews, and case studies that address the diverse applications of image processing techniques in smart environments. Topics of interest include, but are not limited to, the following areas:

  1. Smart healthcare systems: the use of image processing in medical imaging, disease diagnosis, remote patient monitoring, and personalized treatment;
  2. Intelligent transportation systems: the application of image processing for traffic monitoring, vehicle detection and classification, object tracking, and driver assistance systems;
  3. Smart surveillance: techniques used in image and video analysis in surveillance applications, including object detection, tracking, behavior recognition, and anomaly detection;
  4. Automation and robotics: image processing for object recognition, localization, manipulation, and navigation in autonomous robots and industrial automation;
  5. Smart home and Internet of Things (IoT): image processing integration in smart home devices and systems, enabling functions such as facial recognition, activity monitoring, and security systems;
  6. Augmented reality and virtual reality: image processing techniques for enhancing the visual experience in AR/VR applications, including object recognition, scene reconstruction, and motion tracking;
  7. Multimedia forensics in smart environments: image processing techniques for information security in smart environments, including information hiding, camera model identification, and manipulation detection and localization;
  8. Deep learning technologies in smart environments: lightweight model development, model pruning, and model deployment technologies for deep learning-based approaches on mobile devices with restricted hardware resources.

This Special Issue will provide a platform for researchers and practitioners to share their innovative work, discuss challenges, and propose future directions in the field of image processing in smart environments. It is expected to contribute to the development of intelligent systems that can perceive and interpret visual information to improve decision-making processes and enhance user experiences in various domains.

Dr. Xinshan Zhu
Dr. Bin Pan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

21 pages, 7973 KiB  
Article
Research on Target Hybrid Recognition and Localization Methods Based on an Industrial Camera and a Depth Camera in Complex Scenes
by Mingxin Yuan, Jie Li, Borui Cao, Shihao Bao, Li Sun and Xiangbin Li
Electronics 2024, 13(22), 4381; https://doi.org/10.3390/electronics13224381 - 8 Nov 2024
Viewed by 23
Abstract
In order to improve the target visual recognition and localization accuracy of robotic arms in complex scenes with similar targets, hybrid recognition and localization methods based on an industrial camera and a depth camera are proposed. First, according to the speed and accuracy requirements of target recognition and localization, YOLOv5s is introduced as the basic algorithm model for target hybrid recognition and localization. Then, in order to improve the accuracy of target recognition and coarse localization based on an industrial camera (eye-to-hand), the AFPN feature fusion module, simple and parameter-free attention module (SimAM), and soft non-maximum suppression (Soft NMS) are introduced. In order to improve the accuracy of target recognition and fine localization based on a depth camera (eye-in-hand), the SENetV2 backbone network structure, dynamic head module, deformable attention mechanism, and chain-of-thought prompted adaptive enhancer network are introduced. After that, on the basis of constructing a dual camera platform for target hybrid recognition and localization, the hand–eye calibration, collection and production of image datasets required for model training are completed. Finally, for the docking of the oil filling port, the hybrid recognition and localization experimental tests are completed in sequence. The test results show that in target recognition and coarse localization based on the industrial camera, the recognition accuracy of the designed model reaches 99%, and the average localization errors in the horizontal and vertical directions are 2.22 mm and 3.66 mm, respectively. In target recognition and fine localization based on the depth camera, the recognition accuracy of the designed model reaches 98%, and the average errors in depth, horizontal, and vertical directions are 0.12 mm, 0.28 mm, and 0.16 mm, respectively. These results not only verify the effectiveness of the target hybrid recognition and localization methods based on dual cameras, but also demonstrate that they meet the high-precision recognition and localization requirements in complex scenes. Full article
(This article belongs to the Special Issue Applications and Challenges of Image Processing in Smart Environment)
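The abstract names soft non-maximum suppression (Soft NMS) among the modules added to the coarse-localization detector. As a point of orientation only, the following is a minimal NumPy sketch of the Gaussian Soft-NMS variant; the box format, sigma, and score threshold are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay the scores of overlapping boxes instead of discarding them.

    boxes  : (N, 4) array of [x1, y1, x2, y2] corners (assumed format)
    scores : (N,)   array of confidence scores
    Returns the indices of kept boxes, in order of selection.
    """
    boxes = boxes.astype(float)
    scores = scores.astype(float).copy()
    idxs = np.arange(len(scores))
    keep = []
    while idxs.size > 0:
        # pick the highest-scoring remaining box
        top = np.argmax(scores[idxs])
        cur = idxs[top]
        keep.append(cur)
        idxs = np.delete(idxs, top)
        if idxs.size == 0:
            break
        # IoU between the picked box and all remaining boxes
        x1 = np.maximum(boxes[cur, 0], boxes[idxs, 0])
        y1 = np.maximum(boxes[cur, 1], boxes[idxs, 1])
        x2 = np.minimum(boxes[cur, 2], boxes[idxs, 2])
        y2 = np.minimum(boxes[cur, 3], boxes[idxs, 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        area_cur = (boxes[cur, 2] - boxes[cur, 0]) * (boxes[cur, 3] - boxes[cur, 1])
        area_rest = (boxes[idxs, 2] - boxes[idxs, 0]) * (boxes[idxs, 3] - boxes[idxs, 1])
        iou = inter / (area_cur + area_rest - inter)
        # Gaussian decay: the more a box overlaps the picked one, the more its score shrinks
        scores[idxs] *= np.exp(-(iou ** 2) / sigma)
        # drop boxes whose decayed score falls below the threshold
        idxs = idxs[scores[idxs] > score_thresh]
    return keep
```

Compared with hard NMS, heavily overlapping detections are down-weighted rather than suppressed outright, which is why the technique is often used in scenes with similar, closely spaced targets such as those considered in this paper.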

37 pages, 5927 KiB  
Article
Object and Pedestrian Detection on Road in Foggy Weather Conditions by Hyperparameterized YOLOv8 Model
by Ahmad Esmaeil Abbasi, Agostino Marcello Mangini and Maria Pia Fanti
Electronics 2024, 13(18), 3661; https://doi.org/10.3390/electronics13183661 - 14 Sep 2024
Viewed by 1156
Abstract
Connected cooperative and automated (CAM) vehicles and self-driving cars need to achieve robust and accurate environment understanding. With this aim, they are usually equipped with sensors and adopt multiple sensing strategies, also fused among them to exploit their complementary properties. In recent years, artificial intelligence such as machine learning- and deep learning-based approaches have been applied for object and pedestrian detection and prediction reliability quantification. This paper proposes a procedure based on the YOLOv8 (You Only Look Once) method to discover objects on the roads such as cars, traffic lights, pedestrians and street signs in foggy weather conditions. In particular, YOLOv8 is a recent release of YOLO, a popular neural network model used for object detection and image classification. The obtained model is applied to a dataset including about 4000 foggy road images and the object detection accuracy is improved by changing hyperparameters such as epochs, batch size and augmentation methods. To achieve good accuracy and few errors in detecting objects in the images, the hyperparameters are optimized by four different methods, and different metrics are considered, namely accuracy factor, precision, recall, precision–recall and loss. Full article
(This article belongs to the Special Issue Applications and Challenges of Image Processing in Smart Environment)
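The abstract describes improving detection accuracy on roughly 4000 foggy road images by tuning YOLOv8 hyperparameters such as epochs, batch size, and augmentation. A minimal sketch of such a sweep using the open-source ultralytics package is shown below; the dataset YAML path, model size, parameter grid, and selection metric are placeholders assumed for illustration, not the authors' settings.

```python
from itertools import product

from ultralytics import YOLO  # pip install ultralytics

# Hypothetical search grid; the paper's actual hyperparameter values are not reproduced here.
grid = {
    "epochs": [50, 100],
    "batch": [8, 16],
    "mosaic": [0.0, 1.0],  # mosaic augmentation off/on
}

best = None
for epochs, batch, mosaic in product(grid["epochs"], grid["batch"], grid["mosaic"]):
    model = YOLO("yolov8n.pt")  # pretrained YOLOv8 nano weights as a starting point
    metrics = model.train(
        data="foggy_roads.yaml",  # placeholder dataset config (cars, pedestrians, signs, traffic lights)
        epochs=epochs,
        batch=batch,
        mosaic=mosaic,
        imgsz=640,
        verbose=False,
    )
    # mAP50-95 on the validation split, as exposed by recent ultralytics versions
    score = metrics.results_dict.get("metrics/mAP50-95(B)", 0.0)
    if best is None or score > best[0]:
        best = (score, {"epochs": epochs, "batch": batch, "mosaic": mosaic})

print("best mAP50-95:", best[0], "with", best[1])
```

A grid search like this is the simplest of the optimization strategies the abstract alludes to; randomized or Bayesian search over the same parameters would follow the same training loop.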
