Editorial

Guest Editorial on Image and Signal Processing

Gwanggil Jeon and Imran Ahmed
1 Department of Embedded Systems Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon 22012, Republic of Korea
2 School of Computing and Information Science, Anglia Ruskin University, Cambridge CB1 1PT, UK
* Author to whom correspondence should be addressed.
Technologies 2024, 12(10), 176; https://doi.org/10.3390/technologies12100176
Submission received: 19 September 2024 / Accepted: 24 September 2024 / Published: 26 September 2024
(This article belongs to the Special Issue Image and Signal Processing)

1. Introduction

In recent years, we have witnessed significant advances in technologies that integrate artificial intelligence into image and signal processing, along with their wide-ranging applications. Since its breakthrough in the 2012 ImageNet challenge, deep learning has influenced modern life perhaps more than any other recent technology. This progress is driven largely by artificial intelligence techniques, including neural networks, fuzzy and rough systems, and evolutionary algorithms. However, deep learning typically requires vast amounts of data to be trained properly for real-world applications. To address this challenge, researchers are increasingly focusing on developing lightweight and explainable deep learning models.
This Special Issue invited authors to submit novel and cutting-edge research on deep learning applications in image and signal processing. Topics of interest included, but were not limited to, new deep learning algorithms, deep learning for data mining, computer vision, forecasting, natural language processing, clustering, image filtering, restoration and enhancement, image and video segmentation, tracking, feature extraction and analysis, motion detection and estimation, pattern recognition, and content-based image retrieval. Additionally, we welcomed contributions on the application of deep learning in domains such as robotics, industrial automation, autonomous systems, and gaming. A total of 14 papers were published in this Special Issue.

2. Overview of Contributions

Tephra fallout during explosive eruptions poses significant risks to air traffic, infrastructure, and human health. The contribution by Dozzo et al., “Exploiting PlanetScope Imagery for Volcanic Deposits Mapping”, introduces a new technique to map tephra-covered areas using PlanetScope satellite imagery [item 1 in Appendix A]. By analyzing pre- and post-eruption reflectance values in visible (RGB) and near-infrared (NIR) bands, the authors developed a “Tephra Fallout Index (TFI)” to identify affected regions. Using the Google Earth Engine, they established TFI thresholds to quantify tephra surface coverage for different eruptions. The method was applied to the 2021 Mt. Etna eruptions, which impacted the volcano’s eastern flank multiple times in rapid succession, complicating field surveys. Comparisons with available field data showed strong agreement with the satellite-derived results. This technique offers valuable potential for real-time volcanic hazard assessment and broader applications in mapping other hazardous events, providing a fast and effective tool for disaster management and risk reduction.
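The general idea of comparing pre- and post-eruption reflectance can be sketched as follows. The actual TFI formula and thresholds are defined in the paper; the normalized difference and the `threshold` value below are illustrative stand-ins, not the authors’ method.

```python
# Sketch of change detection between pre- and post-eruption reflectance.
# The normalized-difference form and the threshold are hypothetical; the
# paper defines the actual Tephra Fallout Index (TFI) and its thresholds.
def change_index(pre, post, eps=1e-9):
    """Normalized difference between post- and pre-eruption reflectance."""
    return (post - pre) / (post + pre + eps)

def tephra_mask(pre_band, post_band, threshold=0.2):
    """Flag pixels whose reflectance changed beyond a per-eruption threshold."""
    return [abs(change_index(p, q)) > threshold
            for p, q in zip(pre_band, post_band)]
```

In practice this per-pixel computation would run over whole PlanetScope scenes (e.g., in Google Earth Engine, as the authors did), with thresholds tuned per eruption.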
The contribution by Hitimana et al., “An Intelligent System-Based Coffee Plant Leaf Disease Recognition Using Deep Learning Techniques on Rwandan Arabica Dataset”, focuses on developing an efficient method for detecting and identifying coffee leaf diseases in Rwanda, a country where coffee is a critical agricultural commodity [item 2 in Appendix A]. Farmers currently rely on manual disease detection, which is prone to errors. With advancements in deep learning, automated detection offers a promising solution to improve crop yields. A dataset of 37,939 coffee leaf images was collected, targeting diseases such as coffee rust, mines, and red spider mites. Five deep learning models—InceptionV3, ResNet50, Xception, VGG16, and DenseNet—were trained, validated, and tested with an 80/10/10 split over 10 epochs. The DenseNet model achieved the best performance with 99.57% accuracy. The proposed method proved efficient, outperforming existing approaches and offering potential for future portable applications in coffee leaf disease detection.
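The 80/10/10 train/validation/test split used to evaluate the five models can be sketched as below; the function name and seeding are illustrative, not taken from the paper.

```python
import random

def split_dataset(items, train=0.8, val=0.1, seed=42):
    """Shuffle and split a dataset into train/validation/test partitions
    (80/10/10 by default, matching the split reported in the paper)."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

For the 37,939-image dataset, each model would then be trained on the first partition, tuned on the second, and reported on the third.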
The contribution by Yao et al., “A Foreign Object Detection Method for Belt Conveyors Based on an Improved YOLOX Model”, focuses on developing a foreign object detection method for belt conveyors used in coal transportation, a critical component in intelligent mining [item 3 in Appendix A]. In complex production environments, non-coal foreign objects frequently come into contact with belts, risking safety issues like scratches, deviation, and breakage. To address this, the authors establish a foreign object image dataset and enhance it using an IAT image enhancement module and CBAM attention mechanism. A rotating decoupling head and MO-YOLOX network structure are introduced to predict the angle of foreign objects with large aspect ratios. Experiments conducted in a mining lab show that the proposed method achieves 93.87% accuracy, 93.69% recall, and 93.68% mAP50, with an average inference time of 25 ms. These results highlight the system’s efficiency in foreign object detection, contributing to safer coal transportation operations.
The contribution by Yoo et al., “A Novel Approach to Quantitative Characterization and Visualization of Color Fading”, focuses on quantifying and analyzing color fading and darkening over time, triggered by light exposure [item 4 in Appendix A]. The researchers used the newly developed PicMan software to compare and map pixel-by-pixel color differences in digital images. Japanese wood-block prints, both with and without color fading, were selected for analysis. The study demonstrated that pixel-by-pixel, line-by-line, and area-by-area comparisons effectively quantified color changes, presenting results in RGB, HSV, and CIE L*a*b* values. The results were displayed in numerical, graphical, and image formats, each with distinct advantages for communication and analysis. Additionally, simulations of color change at past and future points in time were demonstrated using interpolation and extrapolation methods. This work offers valuable insights for practical applications in art conservation, museum displays, and cultural heritage preservation, assisting decision-making in storage, restoration, and public display planning.
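The core operations of pixel-wise color comparison and multi-model reporting can be illustrated with Python’s standard `colorsys` module. These helper functions are hypothetical simplifications, not part of the PicMan software.

```python
import colorsys

def rgb_delta(pix_a, pix_b):
    """Channel-wise RGB difference between two corresponding pixels,
    e.g., a faded print vs. an unfaded reference."""
    return tuple(b - a for a, b in zip(pix_a, pix_b))

def to_hsv(pixel):
    """Report a 0-255 RGB pixel in HSV (all components scaled to 0-1),
    one of the alternative color models used for reporting."""
    return colorsys.rgb_to_hsv(*(c / 255.0 for c in pixel))
```

Mapping `rgb_delta` over every pixel pair of two aligned images yields the kind of difference map the study visualizes; conversion to CIE L*a*b* would follow the same pattern with a colorimetric library.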
The contribution by Wiseman, “Adapting the H.264 Standard to the Internet of Vehicles”, proposes a two-step method to reduce data transmission on Internet of Vehicle networks [item 5 in Appendix A]. The first step reduces image color resolution from full color to just eight colors, which, while noticeable, is sufficient for typical vehicle applications. The second step modifies the quantization tables used by H.264 compression to better suit eight-color images. The first step alone reduces image size by over 30%, and combining both steps results in a size reduction of more than 40%. Together, these steps significantly decrease the amount of data transferred on vehicular networks, optimizing network efficiency without compromising the functionality of common vehicle applications.
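Reducing full color to eight colors amounts to keeping one bit per channel, i.e., snapping each pixel to a corner of the RGB cube. The sketch below shows that first step in its simplest form; the paper’s exact mapping and its H.264 quantization-table changes may differ.

```python
def quantize_to_eight(pixel, threshold=128):
    """Map a 24-bit RGB pixel to one of the 8 corner colors of the RGB cube
    by binarizing each channel. A simplified view of the paper's first step;
    the threshold value is an assumption for illustration."""
    return tuple(255 if c >= threshold else 0 for c in pixel)
```

Since only 8 distinct colors remain, the image becomes far more compressible, which is what enables the reported size reductions once the codec’s quantization tables are retuned for such content.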
The contribution by Svendsen and Kadry, “Comparative Analysis of Image Classification Models for Norwegian Sign Language Recognition”, addresses the communication challenges faced by the Deaf population by exploring image classification models for sign language recognition [item 6 in Appendix A]. Focusing on Norwegian Sign Language (NSL), a relatively under-researched area, the authors created a new dataset with 24,300 images of 27 NSL alphabet signs. A comparative analysis of machine learning models, including a Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and a Convolutional Neural Network (CNN), was conducted to identify the most effective approach for sign language recognition. The SVM and CNN outperformed other models, achieving 99.9% accuracy with high computational efficiency. This research contributes significantly to NSL recognition, offering a foundation for future studies and the development of assistive communication systems for the Deaf community.
The contribution by Duarte-Correa et al., “Identifying Growth Patterns in Arid-Zone Onion Crops (Allium Cepa) Using Digital Image Processing”, focuses on improving onion crop performance by addressing challenges encountered during its phenological cycle [item 7 in Appendix A]. Utilizing unmanned aerial vehicles (UAVs) and digital image processing, the research monitored key factors such as humidity, weed growth, vegetation deficits, and reduced harvest performance. An algorithm was developed to identify patterns that most significantly affected crop growth. Despite an expected local yield of 40.166 tons/ha, only 25.00 tons/ha was achieved due to blight caused by constant humidity and limited sunlight. This resulted in poor leaf health, underdeveloped bulbs, and 50% of the crop being medium-sized. Additionally, approximately 20% of the total production was lost due to blight and unfavorable weather conditions. The study underscores the importance of technical solutions to enhance sustainable farming and crop management.
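Separating vegetation from soil in RGB UAV imagery is commonly done with an excess-green index; the sketch below uses that standard index purely as an illustration of the kind of per-pixel analysis involved, and is not necessarily the algorithm the authors developed.

```python
def excess_green(pixel):
    """ExG = 2G - R - B, a common index for separating green vegetation
    from soil in RGB imagery (illustrative; not the paper's algorithm)."""
    r, g, b = pixel
    return 2 * g - r - b

def vegetation_mask(pixels, threshold=20):
    """Classify pixels as vegetation when ExG exceeds a chosen threshold
    (the threshold here is an assumption)."""
    return [excess_green(p) > threshold for p in pixels]
```

Tracking the vegetation fraction of such masks across successive UAV flights is one way growth patterns and vegetation deficits can be quantified over a phenological cycle.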
The contribution by Yoo et al., “Image-Based Quantification of Color and Its Machine Vision and Offline Applications”, explores image-based colorimetry, leveraging the accessibility of smartphones with image sensors and increasing computational capabilities [item 8 in Appendix A]. Its low cost, portability, and compatibility with data processing make it suitable for a range of interdisciplinary applications, including art, fashion, food science, medicine, agriculture, geology, and more. The work focuses on the image-based quantification of color using specially developed software, demonstrating color extraction from a single pixel to customized regions of interest (ROIs) in digital images. Various color models like RGB, HSV, CIELAB, and Munsell are used to quantify colors from images of dyed T-shirts, tongues, and assays. The study also demonstrates histograms and statistical analyses of colors, proposing this method as a reliable and objective tool for color-based diagnostics and decision-making across diverse fields. The validity is verified through multiple examples in practical applications.
In the contribution by Shi et al., “Mobilenetv2_CA Lightweight Object Detection Network in Autonomous Driving”, a lightweight object detection algorithm, based on MobileNetv2_CA, was proposed to address issues of high complexity, excessive parameters, and missed small targets in autonomous driving [item 9 in Appendix A]. Mosaic image enhancement is applied in pre-processing to improve feature extraction for small and complex targets. The Coordinate Attention (CA) mechanism is embedded into the MobileNetv2 backbone, alongside PANet and Yolo detection heads, enabling multi-scale feature fusion. This results in a lightweight object detection network. Tests show that the network achieved an 81.43% average detection accuracy on the VOC2007 + 2012 dataset, and 85.07% accuracy with a 31.84 FPS speed on the KITTI dataset. The network’s parameter count is just 39.5 M, making it suitable for autonomous driving applications.
The contribution by Lemenkova and Debeir, “GDAL and PROJ Libraries Integrated with GRASS GIS for Terrain Modelling of the Georeferenced Raster Image”, addresses limitations in traditional Geographic Information System (GIS) approaches by applying scripting methods and libraries such as Geospatial Data Abstraction Library (GDAL), PROJ, and GRASS GIS for geospatial data processing and morphometric analysis [item 10 in Appendix A]. The workflow involves converting Earth Global Relief Model (ETOPO1) data using GDAL, transforming it into various cartographic projections with PROJ, and analyzing topographic data with GRASS GIS modules. The study reveals patterns in topographic data, including elevation and depth distributions, and demonstrates the effectiveness of scripting techniques for topographic modeling and raster data processing. The integration of GDAL, PROJ, and GRASS GIS provides a more efficient and flexible approach to cartographic analysis, enhancing the informativeness and spatial data processing capabilities of traditional GIS software.
The contribution by Tychola et al., “Identifying Historic Buildings over Time through Image Matching”, focuses on the identification of historic buildings over time using feature correspondence techniques [item 11 in Appendix A]. Photographs of landmarks in Drama, Greece, taken under varying conditions (e.g., lighting, weather, rotation, scale), were analyzed using traditional feature detection and description algorithms such as SIFT, ORB, and BRISK. These algorithms help identify homologous points in images to study changes in buildings over time. The research evaluates the performance of these algorithms in terms of accuracy, efficiency, and robustness, with SIFT and BRISK being the most accurate, while ORB and BRISK are the most efficient. The study highlights the role of computer vision in preserving historical architecture through accurate and efficient image matching techniques.
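Finding homologous points between two photographs with SIFT/ORB/BRISK descriptors typically ends with a nearest-neighbour search plus Lowe’s ratio test. The brute-force sketch below illustrates that matching step on toy descriptors; real pipelines use OpenCV matchers on 128-dimensional (SIFT) or binary (ORB/BRISK) descriptors.

```python
def match_ratio_test(desc_a, desc_b, ratio=0.75):
    """Match descriptors from image A to image B, keeping a match only when
    the nearest neighbour is clearly closer than the second nearest
    (Lowe's ratio test). Returns (index_in_a, index_in_b) pairs."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

    matches = []
    for i, da in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(da, desc_b[j]))
        if (len(ranked) >= 2 and
                dist(da, desc_b[ranked[0]]) < ratio * dist(da, desc_b[ranked[1]])):
            matches.append((i, ranked[0]))
    return matches
```

The surviving matches are the homologous points from which changes between historic and present-day photographs of a building can be assessed.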
The contribution by Yoo et al., “Development of Static and Dynamic Colorimetric Analysis Techniques Using Image Sensors and Novel Image Processing Software for Chemical, Biological and Medical Applications”, presents the development of colorimetric sensing techniques using image sensors and novel image processing software for chemical, biological, and medical applications [item 12 in Appendix A]. The system enables real-time monitoring and recording of colorimetric data from point(s), line(s), and area(s) of interest in images and videos, supporting manual and automatic data collection. These techniques can be applied in process control, optimization, and machine learning. Video clips of chromatographic experiments with colored inks and blinking LED lights were analyzed, extracting colorimetric data as a function of time. The results were visualized through time-lapse images and RGB intensity graphs. The analysis was demonstrated using RGB, HSV, and CIE L*a*b* values for both static and dynamic colorimetric information, showcasing the effectiveness of the novel image processing software for color-based analysis.
The contribution by Kim and Koo, “Embedded System Performance Analysis for Implementing a Portable Drowsiness Detection System for Drivers”, proposes an efficient platform for running Ghoddoosian’s drowsiness detection algorithm, which utilizes temporal blinking patterns [item 13 in Appendix A]. Unlike previous implementations on powerful desktop computers, the study tested embedded systems suitable for vehicles. After comparing the Jetson Nano and Beelink (Mini PC), the Beelink was determined to be more efficient, with a processing time of 22.73 ms compared to the Jetson Nano’s 94.27 ms. The Beelink’s portability and power efficiency made it ideal for in-vehicle use. Additionally, a threshold optimization algorithm was developed to balance sensitivity and specificity in detecting drowsiness. This research advances drowsiness detection by identifying a real-time, practical platform for implementation in vehicles, bridging the gap between theoretical algorithms and practical application in road safety.
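Balancing sensitivity against specificity when choosing a detection threshold is commonly done by maximizing Youden’s J statistic (sensitivity + specificity − 1). The sketch below uses that generic criterion as a stand-in; the paper’s own optimization algorithm may differ.

```python
def best_threshold(scores, labels, candidates):
    """Pick the decision threshold that maximizes Youden's J statistic
    (sensitivity + specificity - 1) over candidate thresholds.
    `scores` are detector outputs; `labels` are True for drowsy samples."""
    def j_stat(t):
        tp = sum(s >= t and y for s, y in zip(scores, labels))
        fn = sum(s < t and y for s, y in zip(scores, labels))
        tn = sum(s < t and not y for s, y in zip(scores, labels))
        fp = sum(s >= t and not y for s, y in zip(scores, labels))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        return sens + spec - 1.0
    return max(candidates, key=j_stat)
```

On an embedded platform such as the Beelink, the threshold would be tuned offline on labeled blinking-pattern data and then fixed for real-time in-vehicle use.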
The contribution by Ramírez-Arias et al., “Evaluation of Machine Learning Algorithms for Classification of EEG Signals”, focuses on improving the classification of motor movements using brain–computer interfaces (BCIs) by analyzing electroencephalographic (EEG) signals and training machine learning (ML) algorithms [item 14 in Appendix A]. EEG signals from 30 Physionet subjects, related to left hand, right hand, fist, foot, and relaxation movements, were processed using electrodes C3, C1, CZ, C2, and C4. Feature extraction techniques were applied, and nine ML algorithms were trained and tested. LabVIEW™ 2015 was used for signal processing, and MATLAB 2021a for algorithm training and evaluation. Among the algorithms, Medium-ANN achieved the highest performance with an AUC of 0.9998, Cohen’s Kappa of 0.9552, and a loss of 0.0147. This approach is promising for applications like robotic prostheses, especially where resources are limited, such as embedded systems or edge computing devices.
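Feature extraction from windowed EEG samples typically starts with simple time-domain statistics per channel. The sketch below shows that idea with Python’s standard `statistics` module; the feature names are illustrative and the paper’s actual feature set may differ.

```python
import statistics

def window_features(samples):
    """Illustrative time-domain features for one EEG channel window:
    mean, population variance, and mean absolute amplitude."""
    return {
        "mean": statistics.fmean(samples),
        "variance": statistics.pvariance(samples),
        "mean_abs": statistics.fmean(abs(x) for x in samples),
    }
```

Concatenating such feature vectors across electrodes C3, C1, CZ, C2, and C4 would form the input rows on which classifiers like the Medium-ANN are trained.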

3. Conclusions

This Special Issue presents 14 groundbreaking research findings on advanced image and signal processing. The insights shared herein are anticipated to foster further advancements and research in the domain of image and signal processing in the future.

Funding

This research received no external funding.

Acknowledgments

I thank the authors who published their research results in this Special Issue and the reviewers who reviewed their papers. I also thank the editors for their hard work and perseverance in making this Special Issue a success.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

  • Dozzo, M.; Ganci, G.; Lucchi, F.; Scollo, S. Exploiting PlanetScope Imagery for Volcanic Deposits Mapping. Technologies 2024, 12, 25. https://doi.org/10.3390/technologies12020025.
  • Hitimana, E.; Sinayobye, O.J.; Ufitinema, J.C.; Mukamugema, J.; Rwibasira, P.; Murangira, T.; Masabo, E.; Chepkwony, L.C.; Kamikazi, M.C.A.; Uwera, J.A.U.; et al. An Intelligent System-Based Coffee Plant Leaf Disease Recognition Using Deep Learning Techniques on Rwandan Arabica Dataset. Technologies 2023, 11, 116. https://doi.org/10.3390/technologies11050116.
  • Yao, R.; Qi, P.; Hua, D.; Zhang, X.; Lu, H.; Liu, X. A Foreign Object Detection Method for Belt Conveyors Based on an Improved YOLOX Model. Technologies 2023, 11, 114. https://doi.org/10.3390/technologies11050114.
  • Yoo, W.S.; Kang, K.; Kim, J.G.; Yoo, Y. A Novel Approach to Quantitative Characterization and Visualization of Color Fading. Technologies 2023, 11, 108. https://doi.org/10.3390/technologies11040108.
  • Wiseman, Y. Adapting the H.264 Standard to the Internet of Vehicles. Technologies 2023, 11, 103. https://doi.org/10.3390/technologies11040103.
  • Svendsen, B.; Kadry, S. Comparative Analysis of Image Classification Models for Norwegian Sign Language Recognition. Technologies 2023, 11, 99. https://doi.org/10.3390/technologies11040099.
  • Duarte-Correa, D.; Rodríguez-Reséndiz, J.; Díaz-Flórez, G.; Olvera-Olvera, C.A.; Álvarez-Alvarado, J.M. Identifying Growth Patterns in Arid-Zone Onion Crops (Allium Cepa) Using Digital Image Processing. Technologies 2023, 11, 67. https://doi.org/10.3390/technologies11030067.
  • Yoo, W.S.; Kang, K.; Kim, J.G.; Yoo, Y. Image-Based Quantification of Color and Its Machine Vision and Offline Applications. Technologies 2023, 11, 49. https://doi.org/10.3390/technologies11020049.
  • Shi, P.; Li, L.; Qi, H.; Yang, A. Mobilenetv2_CA Lightweight Object Detection Network in Autonomous Driving. Technologies 2023, 11, 47. https://doi.org/10.3390/technologies11020047.
  • Lemenkova, P.; Debeir, O. GDAL and PROJ Libraries Integrated with GRASS GIS for Terrain Modelling of the Georeferenced Raster Image. Technologies 2023, 11, 46. https://doi.org/10.3390/technologies11020046.
  • Tychola, K.A.; Chatzistamatis, S.; Vrochidou, E.; Tsekouras, G.E.; Papakostas, G.A. Identifying Historic Buildings over Time through Image Matching. Technologies 2023, 11, 32. https://doi.org/10.3390/technologies11010032.
  • Yoo, W.S.; Kim, J.G.; Kang, K.; Yoo, Y. Development of Static and Dynamic Colorimetric Analysis Techniques Using Image Sensors and Novel Image Processing Software for Chemical, Biological and Medical Applications. Technologies 2023, 11, 23. https://doi.org/10.3390/technologies11010023.
  • Kim, M.; Koo, J. Embedded System Performance Analysis for Implementing a Portable Drowsiness Detection System for Drivers. Technologies 2023, 11, 8. https://doi.org/10.3390/technologies11010008.
  • Ramírez-Arias, F.J.; García-Guerrero, E.E.; Tlelo-Cuautle, E.; Colores-Vargas, J.M.; García-Canseco, E.; López-Bonilla, O.R.; Galindo-Aldana, G.M.; Inzunza-González, E. Evaluation of Machine Learning Algorithms for Classification of EEG Signals. Technologies 2022, 10, 79. https://doi.org/10.3390/technologies10040079.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Jeon, G.; Ahmed, I. Guest Editorial on Image and Signal Processing. Technologies 2024, 12, 176. https://doi.org/10.3390/technologies12100176
