
Innovative Technologies and Applications in Engineering Sensing Through Deep and Machine Learning

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Industrial Sensors".

Deadline for manuscript submissions: closed (20 March 2026) | Viewed by 24943

Special Issue Editor

Dr. Haoran Fu
Department of Civil Engineering, Zhejiang University, 866 Yuhangtang Road, Hangzhou 310058, China
Interests: flexible sensing technology; railway dynamics

Special Issue Information

Dear Colleagues,

The pervasive integration of deep learning (DL) and machine learning (ML) in engineering sensing technologies marks a significant paradigm shift, providing unparalleled advancements in sensor fabrication, deployment, and calibration. Machine learning algorithms now play a crucial role in optimizing sensor configurations for various applications, ensuring precision and efficiency from the outset of data collection. Deep learning contributes extensively to the development of intelligent calibration techniques, adapting to diverse environmental conditions and counteracting sensor degradation to maintain long-term reliability and accuracy. These advanced computational methods extend their influence to data refinement, offering solutions for enhancing data accuracy, interpolating missing values, and facilitating intelligent data classification and recognition. The predictive modelling capabilities of these techniques introduce possibilities for proactive maintenance and anomaly detection within engineering infrastructures, guaranteeing safety and enhancing the lifespan of vital equipment. Furthermore, these technologies are central to innovating new sensing modalities and methodologies, addressing the growing complexity and evolving demands of contemporary engineering challenges.

The aim of this Special Issue is to highlight and promote recent advancements in algorithm-assisted sensing and its applications within the engineering field, covering aspects from sensor pre-processing, data collection, and analysis to practical engineering applications utilizing machine learning and deep learning. We are interested in a variety of topics, including but not limited to:

  • Enhancing sensing accuracy with ML/DL;
  • ML/DL-supported sensor fabrication, deployment, and calibration;
  • Advanced data classification and prediction through ML/DL;
  • Applying ML/DL-assisted sensing in diverse engineering scenarios;
  • ML/DL algorithm optimization in engineering sensing;
  • Targeted optimization strategies for applying ML/DL in engineering scenarios.

Dr. Haoran Fu
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • engineering applications
  • data analysis
  • sensor optimization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the journal website.

Published Papers (7 papers)


Research


43 pages, 12675 KB  
Article
Intelligent Water Quality Assessment and Prediction System for Public Networks: A Comparative Analysis of ML Algorithms and Rule-Based Recommender Techniques
by Camelia Paliuc, Paul Banu-Taran, Sebastian-Ioan Petruc, Razvan Bogdan and Mircea Popa
Sensors 2026, 26(4), 1392; https://doi.org/10.3390/s26041392 - 23 Feb 2026
Viewed by 546
Abstract
An assessment and prediction system for the quality of public water networks was developed, using Timișoara, Romania, as a case study. The system was implemented on a Google Firebase cloud storage platform and comprised twelve ML algorithms applied to test samples for drinkability and used to predict upcoming samples. It compares 17 water quality parameters against World Health Organization standards and public reports on Timișoara drinking water for 804 samples. The system provides real-time data storage, drinkability prediction for the reservoir water system, and rule-based recommendations for elementary treatment of critical water samples. Among the ML models, the decision tree algorithm proved the most accurate and best calibrated, outperforming the random forest, gradient boosting, and logistic regression algorithms. The experimental findings also identify the regions of worst and best water quality and propose corresponding treatments. In contrast to previous research and systems, the paper demonstrates a validated, stable solution for smart water monitoring, linking practical deployment with sophisticated data-driven conclusions. The results contribute to enhancing public health, improving water management measures, and upscaling the system for larger-scale applications. Full article
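The rule-based recommendation step can be illustrated with a minimal sketch: compare each measured parameter of a sample to a drinkability limit and flag out-of-range values. The parameter names and limits below are illustrative placeholders, not the paper's actual rules or the WHO's published values.

```python
# Illustrative WHO-style acceptable ranges (min, max); placeholder values only.
WHO_LIMITS = {
    "ph": (6.5, 8.5),
    "turbidity_ntu": (0.0, 5.0),
    "nitrate_mg_l": (0.0, 50.0),
}

def assess_sample(sample: dict) -> dict:
    """Return each out-of-range parameter with its measured value and limits."""
    violations = {}
    for param, (lo, hi) in WHO_LIMITS.items():
        value = sample.get(param)
        if value is not None and not (lo <= value <= hi):
            violations[param] = {"value": value, "limits": (lo, hi)}
    return violations

# A sample with high pH and excess nitrate triggers two flags,
# which a recommender could map to treatment suggestions.
report = assess_sample({"ph": 9.1, "turbidity_ntu": 1.2, "nitrate_mg_l": 60.0})
```

In a deployed system, each flagged parameter would then be mapped by the rule base to a concrete treatment recommendation.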

28 pages, 1641 KB  
Article
SeADL: Self-Adaptive Deep Learning for Real-Time Marine Visibility Forecasting Using Multi-Source Sensor Data
by William Girard, Haiping Xu and Donghui Yan
Sensors 2026, 26(2), 676; https://doi.org/10.3390/s26020676 - 20 Jan 2026
Viewed by 552
Abstract
Accurate prediction of marine visibility is critical for ensuring safe and efficient maritime operations, particularly in dynamic and data-sparse ocean environments. Although visibility reduction is a natural and unavoidable atmospheric phenomenon, improved short-term prediction can substantially enhance navigational safety and operational planning. While deep learning methods have demonstrated strong performance in land-based visibility prediction, their effectiveness in marine environments remains constrained by the lack of fixed observation stations, rapidly changing meteorological conditions, and pronounced spatiotemporal variability. This paper introduces SeADL, a self-adaptive deep learning framework for real-time marine visibility forecasting using multi-source time-series data from onboard sensors and drone-borne atmospheric measurements. SeADL incorporates a continuous online learning mechanism that updates model parameters in real time, enabling robust adaptation to both short-term weather fluctuations and long-term environmental trends. Case studies, including a realistic storm simulation, demonstrate that SeADL achieves high prediction accuracy and maintains robust performance under diverse and extreme conditions. These results highlight the potential of combining self-adaptive deep learning with real-time sensor streams to enhance marine situational awareness and improve operational safety in dynamic ocean environments. Full article
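The continuous online-learning mechanism can be sketched minimally as a linear model whose weights are nudged by one stochastic-gradient step on every incoming sensor reading. The features, data, and learning rate here are hypothetical stand-ins, not SeADL's actual architecture.

```python
def sgd_update(weights, features, target, lr=0.1):
    """One online update: predict, measure the error, nudge the weights."""
    prediction = sum(w * x for w, x in zip(weights, features))
    error = prediction - target
    return [w - lr * error * x for w, x in zip(weights, features)]

# Simulated stream of (sensor features, observed visibility) pairs;
# the trailing 1.0 acts as a bias term. Values are made up.
stream = [([1.0, 0.0, 1.0], 3.0),
          ([0.0, 1.0, 1.0], 5.0),
          ([1.0, 1.0, 1.0], 6.0)] * 2000
weights = [0.0, 0.0, 0.0]
for features, target in stream:
    weights = sgd_update(weights, features, target)  # adapt on every new sample
```

Because the model is updated sample by sample rather than retrained in batch, it can track both short-term fluctuations and slow drifts in the input distribution, which is the core idea behind the self-adaptive framing above.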

20 pages, 3137 KB  
Article
Development and Implementation of an IoT-Enabled Smart Poultry Slaughtering System Using Dynamic Object Tracking and Recognition
by Hao-Ting Lin and Suhendra
Sensors 2025, 25(16), 5028; https://doi.org/10.3390/s25165028 - 13 Aug 2025
Cited by 1 | Viewed by 2736
Abstract
With growing global attention on animal welfare and food safety, humane and efficient slaughtering methods in the poultry industry are in increasing demand. Traditional manual inspection methods for stunning broilers require significant expertise. Additionally, most studies on electrical stunning focus on white broilers, whose optimal stunning conditions are not suitable for red-feathered Taiwan chickens. This study aimed to implement a smart, safe, and humane slaughtering system designed to enhance animal welfare and integrate an IoT-enabled vision system into slaughter operations for red-feathered Taiwan chickens. The system enables real-time monitoring and smart management of the poultry stunning process using image technologies for dynamic object tracking and recognition. Focusing on red-feathered Taiwan chickens, the system applies dynamic object tracking with chicken morphology feature extraction based on the YOLO-v4 model to accurately identify stunned and unstunned chickens, ensuring compliance with animal welfare principles and improving the overall efficiency and hygiene of poultry processing. In this study, the dynamic object tracking and recognition system comprises object morphology feature detection and motion prediction for red-feathered Taiwan chickens during the slaughtering process. The images are firsthand data collected at the slaughterhouse. To enhance model performance, image amplification techniques are integrated into the model training process. In parallel, the system architecture integrates IoT-enabled modules to support real-time monitoring, sensor-based classification, and cloud-compatible decision-making based on the collected visual data. Prior to image amplification, the YOLO-v4 model achieved an average precision (AP) of 83% for identifying unstunned chickens and 96% for identifying stunned chickens. After image amplification, AP improved significantly to 89% and 99%, respectively. The deployed model achieved a mean average precision (mAP) of 94% at an IoU threshold of 0.75 and processed images at 39 frames per second, demonstrating its suitability for IoT-enabled real-time dynamic object tracking and recognition in a real slaughterhouse environment. Furthermore, in terms of transient stability, as measured by training loss and validation loss, the YOLO-v4 model for poultry slaughtering recognition outperforms the YOLO-X model in this study. Overall, this smart slaughtering system represents a practical and scalable application of AI in the poultry industry. Full article
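The IoU threshold behind the mAP figure above is the standard intersection-over-union overlap criterion: a detection counts as correct only if its box overlaps the ground-truth box tightly enough. A minimal sketch for axis-aligned boxes in (x1, y1, x2, y2) format:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Only tight overlaps survive an IoU >= 0.75 criterion:
score = iou((0, 0, 10, 10), (0, 0, 10, 8))  # 0.8, so this detection would count
```

At IoU 0.75 the criterion is considerably stricter than the common 0.5 threshold, which makes the reported 94% mAP a localization-sensitive result.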

17 pages, 3768 KB  
Article
A Novel Multistep Wavelet Convolutional Transfer Diagnostic Framework for Cross-Machine Bearing Fault Diagnosis
by Lujia Zhao, Yuling He, Hai Zheng and Derui Dai
Sensors 2025, 25(10), 3141; https://doi.org/10.3390/s25103141 - 15 May 2025
Viewed by 1407
Abstract
Transfer learning has emerged as a potent technique for diagnosing bearing faults in environments with fluctuating operational parameters. Nevertheless, the majority of current transfer-learning-based fault diagnosis approaches focus primarily on adapting to varying conditions within the same machine. In real-world applications, there is a frequent need to extend these diagnostic techniques to machines that differ significantly in both function and structural design. Because different machines have different mechanical structures, their signal transmission paths differ greatly and the distribution of collected data varies widely, making it difficult for existing transfer fault diagnosis methods to meet diagnostic needs. Therefore, a multistep wavelet convolutional transfer diagnostic framework (MSWCTD) is proposed to realize cross-machine bearing fault diagnosis. Firstly, a multistep time-shift wavelet convolutional network (MTSWCN) based on the multiscale technique and the wavelet transform is proposed to exploit the diverse information in the original vibration data and enhance feature expression ability. Secondly, a confusion transfer method based on multi-view learning is designed to extract transferable diagnostic knowledge, reducing the discrepancy between machines. Three bearing datasets are utilized to evaluate the MSWCTD, which shows excellent performance on the cross-machine bearing fault diagnosis task. Full article
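The multiscale wavelet idea underlying the MTSWCN can be illustrated with a one-level Haar decomposition, which splits a vibration signal into a coarse approximation and a fine detail component. This is a generic sketch of wavelet decomposition, not the network's actual transform.

```python
def haar_step(signal):
    """One level of a (scaled) Haar wavelet transform: pairwise averages give
    the coarse approximation, pairwise half-differences give the detail.
    Assumes an even-length signal."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

# Applying haar_step repeatedly to the approximation yields features at
# progressively coarser scales, the essence of multiscale analysis.
approx, detail = haar_step([4.0, 2.0, 6.0, 8.0])  # approx=[3.0, 7.0], detail=[1.0, -1.0]
```

Fault signatures in bearings often live in the detail coefficients at particular scales, which is why combining wavelet decomposition with learned convolutions is attractive for vibration data.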

19 pages, 1385 KB  
Article
Optimizing Sensor Placement for Event Detection: A Case Study in Gaseous Chemical Detection
by Priscile Fogou Suawa and Christian Herglotz
Sensors 2025, 25(8), 2397; https://doi.org/10.3390/s25082397 - 10 Apr 2025
Cited by 1 | Viewed by 2710
Abstract
In dynamic industrial environments, strategic sensor placement is key to accurately monitoring equipment and detecting critical events. Despite progress in Industry 4.0 and the Internet of Things, research on optimal sensor placement remains limited. This study addresses this gap by analyzing how sensor placement impacts event detection, using chemical detection as a case study with an open dataset. Detecting gases is challenging due to their dispersion. Effective algorithms and well-planned sensor locations are required for reliable results. Using deep convolutional neural networks (DCNNs) and decision tree (DT) methods, we implemented and tested detection models on a public dataset of chemical substances collected at five locations. In addition, we also implemented a multi-objective optimization approach based on the non-dominated sorting genetic algorithm II (NSGA-II) to identify optimal sensor configurations that balance high detection accuracy with cost efficiency in sensor deployment. Using the refined sensor placement, the DCNN model achieved 100% accuracy using only 30% of the available sensors. Full article
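The multi-objective trade-off NSGA-II searches for — detection accuracy versus deployment cost — can be made concrete with a tiny sketch. With only five candidate locations, the Pareto front over sensor subsets can even be enumerated exhaustively; the per-subset accuracies below are made-up illustrative numbers, not the paper's results.

```python
from itertools import combinations

SENSORS = ["s1", "s2", "s3", "s4", "s5"]

def subset_accuracy(subset):
    """Illustrative detection accuracy of a sensor subset (toy additive model);
    in the paper this would come from evaluating the DCNN/DT detectors."""
    base = {"s1": 0.40, "s2": 0.35, "s3": 0.30, "s4": 0.20, "s5": 0.15}
    return min(1.0, sum(base[s] for s in subset))

candidates = [c for r in range(1, len(SENSORS) + 1)
              for c in combinations(SENSORS, r)]

def dominated(a):
    """a is dominated if some other subset is at least as accurate AND at
    least as cheap, and strictly better on one of the two objectives."""
    return any(
        subset_accuracy(b) >= subset_accuracy(a) and len(b) <= len(a)
        and (subset_accuracy(b) > subset_accuracy(a) or len(b) < len(a))
        for b in candidates if b != a
    )

pareto = [c for c in candidates if not dominated(c)]  # the accuracy/cost front
```

NSGA-II becomes necessary when the subset space is too large to enumerate; the dominance test above is exactly the relation its non-dominated sorting is built on.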

18 pages, 5510 KB  
Article
A 3D Point Cloud Classification Method Based on Adaptive Graph Convolution and Global Attention
by Yaowei Yue, Xiaonan Li and Yun Peng
Sensors 2024, 24(2), 617; https://doi.org/10.3390/s24020617 - 18 Jan 2024
Cited by 10 | Viewed by 4280
Abstract
In recent years, there has been significant growth in the ubiquity and popularity of three-dimensional (3D) point clouds, with an increasing focus on their classification. To extract richer features from point clouds, many researchers have turned their attention to various point set regions and channels within irregular point clouds. However, this approach has limited capability in attending to crucial regions of interest in 3D point clouds and may overlook valuable information from neighboring features during feature aggregation. Therefore, this paper proposes a novel 3D point cloud classification method based on global attention and adaptive graph convolution (Att-AdaptNet). The method consists of two main branches: the first branch computes attention masks for each point, while the second branch employs adaptive graph convolution to extract global features from the point set. It dynamically learns features based on point interactions, generating adaptive kernels to effectively and precisely capture diverse relationships among points from different semantic parts. Experimental results demonstrate that the proposed model achieves 93.8% overall accuracy and 90.8% average accuracy on the ModelNet40 dataset. Full article
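The per-point attention-mask branch can be sketched generically as a softmax over per-point importance scores, turning raw scores into normalized weights that emphasize crucial regions. This is a minimal illustration of the mechanism, not Att-AdaptNet's actual architecture.

```python
import math

def attention_mask(scores):
    """Numerically stable softmax over per-point scores -> attention weights."""
    m = max(scores)                          # subtract max to avoid overflow
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Points with higher scores receive proportionally more attention; the
# weights sum to 1, so they can directly reweight per-point features.
weights = attention_mask([2.0, 0.5, 0.5])
```

In a full network, the scores themselves would be learned from point features, and the resulting mask would modulate the features aggregated by the graph-convolution branch.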

Review


48 pages, 556 KB  
Review
Machine Learning-Based Security Solutions for IoT Networks: A Comprehensive Survey
by Abdullah Alfahaid, Easa Alalwany, Abdulqader M. Almars, Fatemah Alharbi, Elsayed Atlam and Imad Mahgoub
Sensors 2025, 25(11), 3341; https://doi.org/10.3390/s25113341 - 26 May 2025
Cited by 32 | Viewed by 11408
Abstract
The Internet of Things (IoT) is revolutionizing industries by enabling seamless interconnectivity across domains such as healthcare, smart cities, the Industrial Internet of Things (IIoT), and the Internet of Vehicles (IoV). However, IoT security remains a significant challenge due to vulnerabilities related to data breaches, privacy concerns, cyber threats, and trust management issues. Addressing these risks requires advanced security mechanisms, with machine learning (ML) emerging as a powerful tool for anomaly detection, intrusion detection, and threat mitigation. This survey provides a comprehensive review of ML-driven IoT security solutions from 2020 to 2024, examining the effectiveness of supervised, unsupervised, and reinforcement learning approaches, as well as advanced techniques such as deep learning (DL), ensemble learning (EL), federated learning (FL), and transfer learning (TL). A systematic classification of ML techniques is presented based on their IoT security applications, along with a taxonomy of security threats and a critical evaluation of existing solutions in terms of scalability, computational efficiency, and privacy preservation. Additionally, this study identifies key limitations of current ML approaches, including high computational costs, adversarial vulnerabilities, and interpretability challenges, while outlining future research opportunities such as privacy-preserving ML, explainable AI, and edge-based security frameworks. By synthesizing insights from recent advancements, this paper provides a structured framework for developing robust, intelligent, and adaptive IoT security solutions. The findings aim to guide researchers and practitioners in designing next-generation cybersecurity models capable of effectively countering emerging threats in IoT ecosystems. Full article
