Journal Description
Sensors
Sensors is an international, peer-reviewed, open access journal on the science and technology of sensors, published semimonthly online by MDPI. The Polish Society of Applied Electromagnetics (PTZE), the Japan Society of Photogrammetry and Remote Sensing (JSPRS), the Spanish Society of Biomedical Engineering (SEIB), and the International Society for the Measurement of Physical Behaviour (ISMPB) are affiliated with Sensors, and their members receive a discount on the article processing charges.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, SCIE (Web of Science), PubMed, MEDLINE, PMC, Ei Compendex, Inspec, Astrophysics Data System, and other databases.
- Journal Rank: JCR - Q2 (Instruments & Instrumentation) / CiteScore - Q1 (Instrumentation)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17 days after submission, and accepted papers are published 2.8 days after acceptance (median values for papers published in this journal in the second half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Testimonials: See what our editors and authors say about Sensors.
- Companion journals for Sensors include: Chips, Automation, JCP and Targets.
Impact Factor: 3.9 (2022); 5-Year Impact Factor: 4.1 (2022)
Latest Articles
Embedded Sensors with 3D Printing Technology: Review
Sensors 2024, 24(6), 1955; https://doi.org/10.3390/s24061955 - 19 Mar 2024
Abstract
Embedded sensors (ESs) are used in smart materials to enable continuous and permanent measurement of their structural integrity, while sensing technology involves developing sensors, sensory systems, or smart materials that monitor a wide range of material properties. Incorporating 3D-printed sensors into host structures has grown in popularity because of improved assembly processes, reduced system complexity, and lower fabrication costs. 3D-printed sensors can be integrated with a structure in two ways: attached to its surface or embedded within it. In this review, we discuss various additive manufacturing techniques for fabricating sensors, the strategies for manufacturing sensors with additive manufacturing, and how sensors are integrated into the manufacturing process. The review also explains the fundamental sensing mechanisms involved and their applications. The study demonstrates that embedded 3D-printed sensors facilitate the development of additive sensor materials for smart goods and the Internet of Things.
(This article belongs to the Special Issue Structural Health Monitoring (SHM) and Nondestructive Evaluation (NDE) for Infrastructure and Manufacturing)
Open Access Review
Machine Learning for the Design and the Simulation of Radiofrequency Magnetic Resonance Coils: Literature Review, Challenges, and Perspectives
by
Giulio Giovannetti, Nunzia Fontana, Alessandra Flori, Maria Filomena Santarelli, Mauro Tucci, Vincenzo Positano, Sami Barmada and Francesca Frijia
Sensors 2024, 24(6), 1954; https://doi.org/10.3390/s24061954 - 19 Mar 2024
Abstract
Radiofrequency (RF) coils for magnetic resonance imaging (MRI) applications serve to generate RF fields to excite the nuclei in the sample (transmit coil) and to pick up the RF signals emitted by the nuclei (receive coil). To optimize image quality, the performance of the RF coils has to be maximized. In particular, the transmit coil has to provide a homogeneous RF magnetic field, while the receive coil has to provide the highest signal-to-noise ratio (SNR). Thus, particular attention must be paid to the coil simulation and design phases, which can be performed with different computer simulation techniques. Machine learning (ML), already widely used across many sectors of engineering and the sciences, is a promising method among the emerging strategies for coil simulation and design. Starting from the applications of ML algorithms in MRI and a short description of the RF coil's performance parameters, this narrative review describes the applications of such techniques for the simulation and design of RF coils for MRI, including deep learning (DL) and ML-based algorithms for solving electromagnetic problems.
(This article belongs to the Section Sensing and Imaging)
Open Access Article
Assessment of Noise of MEMS IMU Sensors of Different Grades for GNSS/IMU Navigation
by
Vladimir Suvorkin, Miquel Garcia-Fernandez, Guillermo González-Casado, Mowen Li and Adria Rovira-Garcia
Sensors 2024, 24(6), 1953; https://doi.org/10.3390/s24061953 - 19 Mar 2024
Abstract
Inertial measurement units (IMUs) are key components of various applications, including navigation, robotics, aerospace, and automotive systems. IMU sensor characteristics have a significant impact on the accuracy and reliability of these applications. In particular, noise characteristics and bias stability are critical for proper filter settings in a combined GNSS/IMU solution. This paper presents an analysis, based on the Allan deviation, of IMU sensors that correspond to different grades of micro-electromechanical systems (MEMS)-type IMUs, in order to evaluate their accuracy and stability over time. The study covers three IMU sensors of different grades (in ascending order): the Rokubun Argonaut navigator sensor (InvenSense TDK MPU9250), the Samsung Galaxy Note10 phone sensor (STMicroelectronics LSM6DSR), and the NovAtel PwrPak7 sensor (Epson EG320N). The noise components of the sensors are computed using overlapped Allan deviation analysis on data collected over the course of a week in a static position. The focus of the analysis is to characterize the random walk noise and bias stability, which are the most critical for combined GNSS/IMU navigation and may differ from, or be missing from, manufacturers' specifications. Noise characteristics are calculated for the studied sensors, and examples of their use in loosely coupled GNSS/IMU processing are assessed. This work proposes a structured and reproducible approach to characterizing sensors for navigation tasks in combination with GNSS, applicable to sensors of different grades to supplement missing or incorrect manufacturer data.
(This article belongs to the Special Issue GNSS and Integrated Navigation and Positioning)
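The overlapped Allan deviation used in this analysis is straightforward to compute from a static recording. Below is a minimal NumPy sketch of the standard overlapped estimator (a generic implementation, not the authors' pipeline); for white noise the curve falls with slope −1/2 on a log–log plot, and the random walk coefficient can be read off near τ = 1 s.

```python
import numpy as np

def overlapped_allan_deviation(rate, fs, m_list):
    """Overlapped Allan deviation of a rate signal (e.g. gyro output).

    rate   : 1-D array of sensor samples
    fs     : sampling frequency in Hz
    m_list : averaging factors; the averaging time is tau = m / fs
    """
    tau0 = 1.0 / fs
    theta = np.cumsum(rate) * tau0          # integrated signal ("phase")
    n = theta.size
    taus, adevs = [], []
    for m in m_list:
        m = int(m)
        if 2 * m >= n:
            break
        # second difference of the integrated signal over all overlapping windows
        d = theta[2 * m:] - 2 * theta[m:n - m] + theta[:n - 2 * m]
        avar = np.sum(d ** 2) / (2.0 * m ** 2 * tau0 ** 2 * (n - 2 * m))
        taus.append(m * tau0)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)
```

For pure white noise of standard deviation σ, the estimator returns σ/√m at averaging factor m, which is the slope −1/2 region used to identify random walk noise.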
Open Access Article
Three-Dimensional Single Random Phase Encryption
by
Byungwoo Cho and Myungjin Cho
Sensors 2024, 24(6), 1952; https://doi.org/10.3390/s24061952 - 19 Mar 2024
Abstract
In this paper, we propose a new optical encryption technique that uses a single random phase mask. Conventional optical encryption schemes such as double random phase encryption (DRPE) require two different random phase masks to encrypt the primary data. For decryption, DRPE requires taking the absolute value of the decrypted data because the data are complex-valued. In addition, when the key information is revealed, attackers may reconstruct the primary data. To reduce the number of random phase masks and enhance the security level, we propose single random phase encryption (SRPE) with additive white Gaussian noise (AWGN) and the volumetric computational reconstruction (VCR) of integral imaging. In our method, even if the key information is known, the primary data cannot be easily reconstructed. To enhance the visual quality of the data decrypted by SRPE, multiple observations are utilized. To reconstruct the primary data, we use the VCR of integral imaging because its averaging effect removes AWGN. Since the reconstruction depth then acts as an additional key for SRPE, the security level is enhanced. In addition, the method does not require taking the absolute value of the decrypted data. To verify the validity of our method, we implement simulations and calculate performance metrics such as the peak sidelobe ratio (PSR) and structural similarity (SSIM). As the number of observations increases, the SSIM of the decrypted data improves dramatically. Moreover, even if the number of observations is limited, three-dimensional (3D) data can be decrypted by SRPE at the correct reconstruction depth.
(This article belongs to the Special Issue Imaging and Sensing in Optics and Photonics)
Open Access Article
Assessing Motor Variability during Squat: The Reliability of Inertial Devices in Resistance Training
by
Fernando García-Aguilar, Miguel López-Fernández, David Barbado, Francisco J. Moreno and Rafael Sabido
Sensors 2024, 24(6), 1951; https://doi.org/10.3390/s24061951 - 19 Mar 2024
Abstract
Movement control can be an indicator of how challenging a task is for the athlete, and can provide useful information to improve training efficiency and prevent injuries. This study was carried out to determine whether inertial measurement units (IMU) can provide reliable information on motion variability during strength exercises, focusing on the squat. Sixty-six healthy, strength-trained young adults completed a two-day protocol, where the variability in the squat movement was analyzed at two different loads (30% and 70% of one repetition maximum) using inertial measurement units and a force platform. The time series from IMUs and force platforms were analyzed using linear (standard deviation) and non-linear (detrended fluctuation analysis, sample entropy and fuzzy entropy) measures. Reliability was analyzed for both IMU and force platform using the intraclass correlation coefficient and the standard error of measurement. Standard deviation, detrended fluctuation analysis, sample entropy, and fuzzy entropy from the IMUs time series showed moderate to good reliability values (ICC: 0.50–0.85) and an acceptable error. The study concludes that IMUs are reliable tools for analyzing movement variability in strength exercises, providing accessible options for performance monitoring and training optimization. These findings have implications for the design of more effective strength training programs, emphasizing the importance of movement control in enhancing athletic performance and reducing injury risks.
(This article belongs to the Special Issue IMU Sensors for Human Activity Monitoring)
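Among the non-linear measures used here, sample entropy has a compact definition: count template matches of length m and m + 1 within a tolerance r, then take the negative log of their ratio. The sketch below is a plain, unoptimized NumPy version, not the authors' code; fuzzy entropy differs mainly in replacing the hard tolerance threshold with a smooth membership function.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy with tolerance r given as a fraction of the std."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    n = len(x)

    def count_matches(mm):
        # all overlapping templates of length mm, self-matches excluded
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to every later template
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= tol)
        return count

    b = count_matches(m)       # matches of length m
    a = count_matches(m + 1)   # matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

A perfectly regular signal yields values near zero, while white noise yields much larger values, which is why these measures discriminate movement-control regularity.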
Open Access Article
Image Processing Techniques for Improving Quality of 3D Profile in Digital Holographic Microscopy Using Deep Learning Algorithm
by
Hyun-Woo Kim, Myungjin Cho and Min-Chul Lee
Sensors 2024, 24(6), 1950; https://doi.org/10.3390/s24061950 - 19 Mar 2024
Abstract
Digital Holographic Microscopy (DHM) is a 3D imaging technology widely applied in biology, microelectronics, and medical research. However, the noise generated during the 3D imaging process can affect the accuracy of medical diagnoses. To solve this problem, we previously proposed several frequency-domain filtering algorithms. These algorithms, however, share a limitation: they can only be applied when the distance between the direct current (DC) spectrum and the sidebands is sufficiently large. To address this limitation, we use the HiVA algorithm and a deep learning algorithm, which filter effectively by distinguishing noise from the detailed information of the object, enabling filtering regardless of the distance between the DC spectrum and the sidebands. In this paper, a combination of deep learning technology and traditional image processing methods is proposed, aiming to reduce noise in 3D profile imaging using the Improved Denoising Diffusion Probabilistic Models (IDDPM) algorithm.
(This article belongs to the Special Issue Digital Holography Imaging Techniques and Applications Using Sensors)
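The frequency-domain filtering discussed in this abstract typically amounts to isolating one sideband of the off-axis hologram spectrum. A minimal sketch of this standard step is shown below (a circular mask plus recentering; the HiVA and IDDPM stages of the paper are beyond this illustration):

```python
import numpy as np

def sideband_filter(hologram, center, radius):
    """Isolate one sideband of an off-axis hologram in the Fourier
    domain with a circular mask, then shift it to DC to remove the
    carrier frequency. Returns the complex object field."""
    H = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = hologram.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    Hf = np.where(mask, H, 0)
    # recenter the selected sideband on the DC bin
    Hf = np.roll(Hf, (ny // 2 - center[0], nx // 2 - center[1]), axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(Hf))
```

Filtering a pure carrier fringe pattern this way returns a constant complex field, since the sideband collapses to a single Fourier bin.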
Open Access Article
5G Indoor Positioning Error Correction Based on 5G-PECNN
by
Shan Yang, Qiyuan Zhang, Longxing Hu, Haina Ye, Xiaobo Wang, Ti Wang and Syuan Liu
Sensors 2024, 24(6), 1949; https://doi.org/10.3390/s24061949 - 19 Mar 2024
Abstract
With the development of the mobile network communication industry, 5G has been widely adopted in the consumer market, and the application of 5G technology to indoor positioning has emerged. As with most indoor positioning techniques, the propagation of 5G signals in indoor spaces is affected by noise, multipath propagation interference, installation errors, and other factors, leading to errors in 5G indoor positioning. This paper aims to address these issues by first constructing a 5G indoor positioning dataset and analyzing the characteristics of 5G positioning errors. Subsequently, we propose a 5G Positioning Error Correction Neural Network (5G-PECNN), which employs a multi-level fusion network structure designed to adapt to the error characteristics of 5G through adaptive gradient descent. Experimental validation demonstrates that the proposed algorithm achieves superior error correction within the error region, significantly outperforming traditional neural networks.
(This article belongs to the Topic Artificial Intelligence in Navigation)
Open Access Article
An Algorithm for Soft Sensor Development for a Class of Processes with Distinct Operating Conditions
by
Darko Stanišić, Luka Mejić, Bojan Jorgovanović, Vojin Ilić and Nikola Jorgovanović
Sensors 2024, 24(6), 1948; https://doi.org/10.3390/s24061948 - 19 Mar 2024
Abstract
Soft sensors are increasingly being used to provide important information about production processes that is otherwise only available through off-line laboratory analysis. Usually, however, they are developed for a specific application, for which a thorough process analysis is performed to inform the selection of the model type and model structure. Wide industrial application of soft sensors therefore requires a development method that is highly automated and applicable to a significant number of industrial processes. One class of processes that is very common in industry is processes with distinct operating conditions. In this paper, an algorithm suitable for developing soft sensors for this class of processes is presented. The algorithm requires minimal user engagement regarding the structure of the model, which makes it suitable for implementation as a customary industrial solution. It is based on a radial basis function artificial neural network, and it enables the automatic selection of the model structure and the determination of the model parameters based only on the training data set. The presented algorithm is tested on the cement production process, since it represents a process with distinct operating conditions. The test results show that, besides providing a high level of automatism in model development, the presented algorithm generates a soft sensor with high estimation performance.
(This article belongs to the Section Sensors and Robotics)
Open Access Article
Iterative Reconstruction of Micro Computed Tomography Scans Using Multiple Heterogeneous GPUs
by
Wen-Hsiang Chou, Cheng-Han Wu, Shih-Chun Jin and Jyh-Cheng Chen
Sensors 2024, 24(6), 1947; https://doi.org/10.3390/s24061947 - 18 Mar 2024
Abstract
Graphics processing units (GPUs) facilitate massive parallelism and high-capacity storage, and are thus suitable for the iterative reconstruction of ultrahigh-resolution micro computed tomography (CT) scans by on-the-fly system matrix (OTFSM) calculation using ordered-subsets expectation maximization (OSEM). We propose a finite state automaton (FSA) method that facilitates iterative reconstruction on a heterogeneous multi-GPU platform by parallelizing the matrix calculations derived from a ray-tracing system of ordered subsets. The FSAs perform flow control for the parallel threading of the heterogeneous GPUs, which minimizes the latency of launching ordered-subsets tasks, reduces data transfer between the main system memory and local GPU memory, and overcomes the memory bound of a single GPU. In the experiments, we compared the operating efficiency of OS-MLTR in three reconstruction environments. The heterogeneous multi-GPU environment with job queues for high-throughput calculation is up to five times faster than the single-GPU environment, and nine times faster than the heterogeneous multi-GPU environment with FIFO queues for device scheduling control. In summary, we propose an event-triggered FSA method for iterative reconstruction using multiple heterogeneous GPUs that solves the memory-bound issue of a single GPU at ultrahigh resolutions; the routines of the proposed method were successfully executed on all GPUs simultaneously.
(This article belongs to the Section Biomedical Sensors)
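At the heart of the OSEM family is the multiplicative ML-EM update, which each subset iteration applies to its own share of the projections. A single-subset NumPy sketch is shown below, using a dense toy system matrix rather than the paper's on-the-fly GPU ray tracing:

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Maximum-likelihood EM reconstruction: the single-subset special
    case of OSEM. A is the system matrix, y the measured projections."""
    x = np.ones(A.shape[1])                   # non-negative start image
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = y / np.maximum(proj, 1e-12)   # measured / estimated
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)  # back-project and scale
    return x
```

The update is purely multiplicative, so non-negativity is preserved and each iteration is dominated by one forward and one back projection — the operations the paper distributes across GPUs.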
Open Access Article
Subspace Identification of Bridge Frequencies Based on the Dimensionless Response of a Two-Axle Vehicle
by
Yixin Quan, Qing Zeng, Nan Jin, Yipeng Zhu and Chengyin Liu
Sensors 2024, 24(6), 1946; https://doi.org/10.3390/s24061946 - 18 Mar 2024
Abstract
As an essential reference for bridge dynamic characteristics, the identification of bridge frequencies has far-reaching consequences for the health monitoring and damage evaluation of bridges. This study proposes a uniform scheme to identify bridge frequencies with two different subspace-based methodologies, i.e., an improved Short-Time Stochastic Subspace Identification (ST-SSI) method and an improved Multivariable Output Error State Space (MOESP) method, by simply adjusting the signal inputs. A key feature of the proposed scheme is the dimensionless description of the vehicle–bridge interaction system and the use of the dimensionless response of a two-axle vehicle as the state input, which enhances robustness to variations in vehicle properties and speed. Additionally, the scheme establishes the equation of the vehicle biaxial response difference, considering the time shift between the front and rear wheels, thereby theoretically eliminating the road roughness information from the state equation and output signal. The numerical examples discuss the effects of vehicle speed, road roughness conditions, and ongoing traffic on the bridge identification. According to the dimensionless speed parameter Sv1 of the vehicle, the ST-SSI (Sv1 < 0.1) or MOESP (Sv1 ≥ 0.1) algorithm is applied to extract the frequencies of a simply supported bridge from the dimensionless response of a two-axle vehicle on a single passage. In addition, the proposed methodology is applied to two types of long-span complex bridges. The results show that the proposed approaches exhibit good performance in identifying multi-order frequencies of the bridges, even considering high vehicle speeds, high levels of road surface roughness, and random traffic flows.
(This article belongs to the Special Issue Advanced Sensing Systems for Structural Monitoring and Damage Identification of Buildings and Bridges)
Open Access Article
Key Contributors to Signal Generation in Frequency Mixing Magnetic Detection (FMMD): An In Silico Study
by
Ulrich M. Engelmann, Beril Simsek, Ahmed Shalaby and Hans-Joachim Krause
Sensors 2024, 24(6), 1945; https://doi.org/10.3390/s24061945 - 18 Mar 2024
Abstract
Frequency mixing magnetic detection (FMMD) is a sensitive and selective technique for detecting magnetic nanoparticles (MNPs) serving as probes for binding biological targets. Its principle relies on the nonlinear magnetic relaxation dynamics of a particle ensemble interacting with a dual-frequency external magnetic field. To increase its sensitivity, lower its limit of detection, and improve its overall applicability in biosensing, matching combinations of external field parameters and internal particle properties are being sought to advance FMMD. In this study, we systematically probe this interaction with coupled Néel–Brownian dynamic relaxation simulations to examine how key MNP properties as well as applied field parameters affect the frequency mixing signal generation. It is found that the core size of the MNPs dominates their nonlinear magnetic response, with the strongest contributions from the largest particles. The drive field amplitude dominates the shape of the field-dependent response, whereas the effective anisotropy and hydrodynamic size of the particles only weakly influence the signal generation in FMMD. For tailoring the MNP properties and setup parameters towards optimal FMMD signal generation, our findings suggest choosing large particles with narrow size distributions to minimize the required drive field amplitude. This allows potential improvements of FMMD as a stand-alone application, as well as advances in magnetic particle imaging, hyperthermia, and magnetic immunoassays.
(This article belongs to the Special Issue Advances in Magnetic Sensors and Their Applications)
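The origin of the frequency-mixing signal can be demonstrated with a zero-dimensional toy model: pass a two-tone excitation through a static Langevin magnetization curve and look for intermodulation components at f1 ± 2·f2. This ignores the Néel–Brownian relaxation dynamics that the study actually simulates; it is only meant to show where the mixing terms come from.

```python
import numpy as np

def langevin(x):
    """Static Langevin magnetization curve L(x) = coth(x) - 1/x."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-8
    safe = np.where(small, 1.0, x)            # avoid division by zero
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

def mixing_spectrum(f1, f2, h1, h2, fs=20000, duration=1.0):
    """Amplitude spectrum of the magnetization response to a two-tone
    drive field (dimensionless field units); the odd nonlinearity of
    L(x) creates mixing components at f1 +/- 2*k*f2."""
    t = np.arange(0.0, duration, 1.0 / fs)
    h = h1 * np.cos(2 * np.pi * f1 * t) + h2 * np.cos(2 * np.pi * f2 * t)
    m = langevin(h)
    spec = np.abs(np.fft.rfft(m)) / len(t)
    freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
    return freqs, spec
```

A linear magnetization curve would produce no component at f1 + 2·f2; its appearance is purely a signature of the nonlinearity, which is what FMMD detects.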
Open Access Article
A Low-Cost Wearable Device to Estimate Body Temperature Based on Wrist Temperature
by
Marcela E. Mata-Romero, Omar A. Simental-Martínez, Héctor A. Guerrero-Osuna, Luis F. Luque-Vega, Emmanuel Lopez-Neri, Gerardo Ornelas-Vargas, Rodrigo Castañeda-Miranda, Ma. del Rosario Martínez-Blanco, Jesús Antonio Nava-Pintor and Fabián García-Vázquez
Sensors 2024, 24(6), 1944; https://doi.org/10.3390/s24061944 - 18 Mar 2024
Abstract
The remote monitoring of vital signs and healthcare provision has become an urgent necessity due to the impact of the COVID-19 pandemic on the world. Blood oxygen level, heart rate, and body temperature data are crucial for managing the disease and ensuring timely medical care. This study proposes a low-cost wearable device employing non-contact sensors to monitor, process, and visualize critical variables, focusing on body temperature measurement as a key health indicator. The wearable device developed offers a non-invasive and continuous method to gather wrist and forehead temperature data. However, since there is a discrepancy between wrist and actual forehead temperature, this study incorporates statistical methods and machine learning to estimate the core forehead temperature from the wrist. This research collects 2130 samples from 30 volunteers, and both the statistical least squares method and machine learning via linear regression are applied to analyze these data. It is observed that all models achieve a significant fit, but the third-degree polynomial model stands out in both approaches. It achieves an R2 value of 0.9769 in the statistical analysis and 0.9791 in machine learning.
(This article belongs to the Special Issue Advanced Low-Cost Sensing Technology for Exposure and Health Assessments)
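The third-degree polynomial fit singled out in the abstract is a one-liner with NumPy. The sketch below shows the least-squares fit and the R² computation on synthetic data with invented coefficients, not the study's 2130 samples:

```python
import numpy as np

def fit_cubic(wrist, forehead):
    """Least-squares cubic fit forehead ~ poly3(wrist).
    Returns the polynomial coefficients and the R^2 score."""
    coeffs = np.polyfit(wrist, forehead, deg=3)
    pred = np.polyval(coeffs, wrist)
    ss_res = np.sum((forehead - pred) ** 2)       # residual sum of squares
    ss_tot = np.sum((forehead - np.mean(forehead)) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot
```

A polynomial model is still linear in its coefficients, which is why the "statistical least squares" and "machine learning via linear regression" approaches in the abstract can both fit it and arrive at nearly identical R² values.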
Open Access Article
Evaluating Technicians’ Workload and Performance in Diagnosis for Corrective Maintenance
by
Hyunjong Shin, Ling Rothrock and Vittaldas Prabhu
Sensors 2024, 24(6), 1943; https://doi.org/10.3390/s24061943 - 18 Mar 2024
Abstract
The advancement in digital technology is transforming the world. It enables smart product–service systems that improve productivity by changing tasks, processes, and the ways we work. There are great opportunities in maintenance because many tasks require physical and cognitive work, but are still carried out manually. However, the interaction between a human and a smart system is inevitable, since not all tasks in maintenance can be fully automated. Therefore, we conducted a controlled laboratory experiment to investigate the impact on technicians' workload and performance due to the introduction of smart technology. In particular, we focused on the effects of different diagnosis support systems on technicians during maintenance activity. We experimented with a model that replicates the key components of a computer numerical control (CNC) machine with a proximity sensor, a component that requires frequent maintenance. Forty-five participants were evenly assigned to three groups: a group that used a Fault-Tree diagnosis support system (FTd-system), a group that used an artificial intelligence diagnosis support system (AId-system), and a group that used neither of the diagnosis support systems. The results show that the group that used the FTd-system completed the task 15% faster than the group that used the AId-system. There was no significant difference in the workload between groups. Further analysis using the NGOMSL model implied that the difference in time to complete was probably due to the difference in system interfaces. In summary, the experimental results and further analysis imply that adopting the new diagnosis support system may improve maintenance productivity by reducing the number of diagnosis attempts without burdening technicians with new workloads. Estimates indicate that the maintenance time and the cognitive load can be reduced by 8.4 s and 15% if only two options are shown in the user interface.
(This article belongs to the Section Industrial Sensors)
Open Access Article
Stereo Vision for Plant Detection in Dense Scenes
by
Thijs Ruigrok, Eldert J. van Henten and Gert Kootstra
Sensors 2024, 24(6), 1942; https://doi.org/10.3390/s24061942 - 18 Mar 2024
Abstract
Automated precision weed control requires visual methods to discriminate between crops and weeds. State-of-the-art plant detection methods fail to reliably detect weeds, especially in dense and occluded scenes. In the past, using hand-crafted detection models, both color (RGB) and depth (D) data were used for plant detection in dense scenes. Remarkably, the combination of color and depth data is not widely used in current deep learning-based vision systems in agriculture. Therefore, we collected an RGB-D dataset using a stereo vision camera. The dataset contains sugar beet crops in multiple growth stages with varying weed densities. This dataset was made publicly available and was used to evaluate two novel plant detection models, the D-model, using the depth data as the input, and the CD-model, using both the color and depth data as inputs. For ease of use with existing 2D deep learning architectures, the depth data were transformed into a 2D image using color encoding. As a reference model, the C-model, which uses only color data as the input, was included. The limited availability of suitable training data for depth images demands the use of data augmentation and transfer learning. Using our three detection models, we studied the effectiveness of data augmentation and transfer learning for depth data transformed to 2D images. It was found that geometric data augmentation and transfer learning were equally effective for both the reference model and the novel models using the depth data. This demonstrates that combining color-encoded depth data with geometric data augmentation and transfer learning can improve the RGB-D detection model. However, when testing our detection models on the use case of volunteer potato detection in sugar beet farming, it was found that the addition of depth data did not improve plant detection at high vegetation densities.
Full article
(This article belongs to the Special Issue Intelligent Sensing and Machine Vision in Precision Agriculture)
Open Access Article
Estimating Compressive and Shear Forces at L5-S1: Exploring the Effects of Load Weight, Asymmetry, and Height Using Optical and Inertial Motion Capture Systems
by
Iván Nail-Ulloa, Michael Zabala, Richard Sesek, Howard Chen, Mark C. Schall, Jr. and Sean Gallagher
Sensors 2024, 24(6), 1941; https://doi.org/10.3390/s24061941 - 18 Mar 2024
Abstract
This study assesses the agreement of compressive and shear force estimates at the L5-S1 joint using inertial motion capture (IMC) within a musculoskeletal simulation model during manual lifting tasks, compared against a top-down optical motion capture (OMC)-based model. Thirty-six participants completed lifting and lowering tasks while wearing a modified Plug-in Gait marker set for the OMC and a full-body IMC set-up consisting of 17 sensors. The study focused on tasks with variable load weights, lifting heights, and trunk rotation angles. It was found that the IMC system consistently underestimated the compressive forces by an average of 34% (975.16 N) and the shear forces by 30% (291.77 N) compared with the OMC system. A critical observation was the discrepancy in joint angle measurements, particularly in trunk flexion, where the IMC-based model underestimated the angles by 10.92–11.19 degrees on average, with the extremes reaching up to 28 degrees. This underestimation was more pronounced in tasks involving greater flexion, notably impacting the force estimates. Additionally, this study highlights significant differences in the distance from the spine to the box during these tasks. On average, the IMC system showed an 8 cm shorter distance on the X axis and a 12–13 cm shorter distance on the Z axis during lifting and lowering, respectively, indicating a consistent underestimation of the segment length compared with the OMC system. These discrepancies in the joint angles and distances suggest potential limitations in the IMC system’s sensor placement and model scaling. The load weight emerged as the most significant factor affecting force estimates, particularly at lower lifting heights, which involved more pronounced flexion movements. This study concludes that while the IMC system offers utility in ergonomic assessments, enhancements to sensor placement and anthropometric modeling accuracy are imperative for more reliable force and kinematic estimations in occupational settings.
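The agreement statistics the abstract reports (mean underestimation in newtons and as a percentage of the reference) follow from a simple signed-bias computation. A minimal sketch, assuming paired per-trial force estimates from the two systems (the function name is illustrative, not from the paper):

```python
import numpy as np

def mean_bias(reference, estimate):
    """Mean signed error and mean percent underestimation of `estimate`
    relative to `reference` (e.g., IMC-based vs. OMC-based L5-S1 forces).

    A positive result means `estimate` underestimates `reference`.
    """
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    diff = reference - estimate
    return diff.mean(), 100.0 * diff.mean() / reference.mean()
```

With the abstract's averages, a 975.16 N compressive underestimation at 34% implies an OMC reference mean of roughly 2868 N, which is how the two reported numbers relate.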
Full article
(This article belongs to the Special Issue Wearable Sensors for Gait, Human Motion Analysis and Health Monitoring)
Open Access Article
From Movements to Metrics: Evaluating Explainable AI Methods in Skeleton-Based Human Activity Recognition
by
Kimji N. Pellano, Inga Strümke and Espen A. F. Ihlen
Sensors 2024, 24(6), 1940; https://doi.org/10.3390/s24061940 - 18 Mar 2024
Abstract
The advancement of deep learning in human activity recognition (HAR) using 3D skeleton data is critical for applications in healthcare, security, sports, and human–computer interaction. This paper tackles a well-known gap in the field: the lack of testing of the applicability and reliability of XAI evaluation metrics in the skeleton-based HAR domain. To address this problem, we tested established XAI metrics, namely faithfulness and stability, on Class Activation Mapping (CAM) and Gradient-weighted Class Activation Mapping (Grad-CAM). This study introduces a perturbation method that produces variations within the error tolerance of motion sensor tracking, ensuring the resultant skeletal data points remain within the plausible output range of human movement as captured by the tracking device. We used the NTU RGB+D 60 dataset and the EfficientGCN architecture for HAR model training and testing. The evaluation involved systematically perturbing the 3D skeleton data by applying controlled displacements at different magnitudes to assess the impact on XAI metric performance across multiple action classes. Our findings reveal that faithfulness may not consistently serve as a reliable metric across all classes for the EfficientGCN model, indicating its limited applicability in certain contexts. In contrast, stability proves to be a more robust metric, showing dependability across different perturbation magnitudes. Additionally, CAM and Grad-CAM yielded almost identical explanations, leading to closely similar metric outcomes. This suggests a need for the exploration of additional metrics and the application of more diverse XAI methods to broaden the understanding and effectiveness of XAI in skeleton-based HAR.
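The perturbation idea described above — controlled displacements bounded by the tracker's error tolerance, so perturbed poses remain plausible sensor output — can be sketched as follows (a hypothetical illustration; the function name and the uniform-noise choice are assumptions, not the paper's exact procedure):

```python
import numpy as np

def perturb_skeleton(joints, magnitude, rng=None):
    """Apply controlled random displacements to 3D skeleton data.

    `joints` has shape (n_frames, n_joints, 3); `magnitude` caps the
    per-coordinate displacement (e.g., the motion tracker's error
    tolerance), so perturbed poses stay within plausible device output.
    """
    rng = np.random.default_rng(rng)
    noise = rng.uniform(-magnitude, magnitude, size=joints.shape)
    return joints + noise
```

Sweeping `magnitude` over several values and re-evaluating faithfulness and stability at each level reproduces the kind of controlled-displacement experiment the abstract describes.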
Full article
(This article belongs to the Special Issue AI-Enabled Sensing Technology and Data Analysis Techniques for Intelligent Human-Computer Interaction)
Open Access Article
Research on a Cross-Domain Few-Shot Adaptive Classification Algorithm Based on Knowledge Distillation Technology
by
Jiuyang Gao, Siyu Li, Wenfeng Xia, Jiuyang Yu and Yaonan Dai
Sensors 2024, 24(6), 1939; https://doi.org/10.3390/s24061939 - 18 Mar 2024
Abstract
With the development of deep learning, sensors, and sensor collection methods, computer vision inspection technology has developed rapidly. Deep-learning-based classification algorithms require a substantial quantity of training samples to obtain a model with superior generalization capabilities. However, due to issues such as privacy, annotation costs, and the nature of sensor-captured images, making full use of limited samples has become a major challenge for practical training and deployment. Furthermore, when models are transferred to actual image scenarios, discrepancies often arise between the common training sets and the target domain (domain offset). Currently, meta-learning offers a promising solution for few-shot learning problems. However, the quantity of supporting set data on the target domain remains limited, leading to limited cross-domain learning effectiveness. To address this challenge, we have developed a self-distillation and mixing (SDM) method utilizing a Teacher–Student framework. This method effectively transfers knowledge from the source domain to the target domain by applying self-distillation techniques and mixed data augmentation, learning better image representations from relatively abundant datasets, and achieving fine-tuning in the target domain. In comparison with nine classical models, the experimental results demonstrate that the SDM method excels in terms of training time and accuracy. Furthermore, SDM effectively transfers knowledge from the source domain to the target domain, even with a limited number of target domain samples.
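The two ingredients the abstract names — mixed data augmentation and a soft-target (self-)distillation loss between teacher and student — can be sketched in a few lines. This is a generic illustration of those standard techniques, not the paper's SDM implementation; all names are hypothetical:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixed-sample augmentation: convex combination of two samples
    and their labels, with the mixing weight drawn from a Beta prior."""
    rng = np.random.default_rng(rng)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target distillation loss: KL divergence between the
    teacher's and student's temperature-softened class distributions."""
    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)
    p = softmax(teacher_logits / T)
    q = softmax(student_logits / T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)
```

In a Teacher–Student setup, the teacher's softened predictions on (mixed) source-domain images supervise the student, after which the student is fine-tuned on the few target-domain samples.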
Full article
(This article belongs to the Section Intelligent Sensors)
Open Access Article
Design of a Multi-Position Alignment Scheme
by
Bofan Guan, Zhongping Liu, Dong Wei and Qiangwen Fu
Sensors 2024, 24(6), 1938; https://doi.org/10.3390/s24061938 - 18 Mar 2024
Abstract
New types of inertial navigation systems, including rotating inertial navigation systems and three-autonomy inertial navigation systems, are increasingly widely applied. Benefiting from the rotating mechanisms of these systems, alignment accuracy can be significantly enhanced by rotating the IMU (Inertial Measurement Unit) during the alignment process. The principle of suppressing initial alignment errors using rotational modulation technology was investigated, and the impact of various component error terms on the alignment accuracy of the IMU during rotation was analyzed. A corresponding error suppression scheme was designed to overcome the significant scale factor error of fiber optic gyroscopes, and the scheme was validated through corresponding simulations and experiments. The results indicate that the designed alignment scheme can effectively suppress the gyro scale factor error introduced by angular motion and improve alignment accuracy.
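The core idea of rotational modulation is that a constant sensor-frame bias, seen from the navigation frame while the IMU rotates about the vertical axis, becomes a sinusoid that averages to zero over whole revolutions. A minimal numerical sketch of this cancellation (illustrative only; names and the 2D simplification are assumptions):

```python
import numpy as np

def modulated_bias_mean(bias_xy, n_steps=3600, revolutions=1.0):
    """Average navigation-frame effect of a constant horizontal sensor
    bias while the IMU rotates about the vertical axis.

    A fixed sensor-frame bias b appears in the navigation frame as
    R(theta) @ b; over whole revolutions the sinusoidal components cancel.
    """
    thetas = np.linspace(0.0, 2 * np.pi * revolutions, n_steps, endpoint=False)
    b = np.asarray(bias_xy, dtype=float)
    acc = np.zeros(2)
    for th in thetas:
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        acc += R @ b
    return acc / n_steps
```

Scale factor error breaks this picture because its effect is proportional to the rotation rate itself rather than constant in the sensor frame, which is why the paper designs a dedicated suppression scheme for it.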
Full article
(This article belongs to the Special Issue Challenges and Future Trends of Inertial Sensors)
Open Access Article
RU-SLAM: A Robust Deep-Learning Visual Simultaneous Localization and Mapping (SLAM) System for Weakly Textured Underwater Environments
by
Zhuo Wang, Qin Cheng and Xiaokai Mu
Sensors 2024, 24(6), 1937; https://doi.org/10.3390/s24061937 - 18 Mar 2024
Abstract
Accurate and robust simultaneous localization and mapping (SLAM) systems are crucial for autonomous underwater vehicles (AUVs) to perform missions in unknown environments. However, directly applying deep learning-based SLAM methods to underwater environments poses challenges due to weak textures, image degradation, and the inability to accurately annotate keypoints. In this paper, a robust deep-learning visual SLAM system is proposed. First, a feature generator named UWNet is designed to address weak texture and image degradation problems and extract more accurate keypoint features and their descriptors. Further, the idea of knowledge distillation is introduced based on an improved underwater imaging physical model to train the network in a self-supervised manner. Finally, UWNet is integrated into ORB-SLAM3 to replace the traditional feature extractor. The extracted local and global features are respectively utilized in the feature tracking and closed-loop detection modules. Experimental results on public datasets and self-collected pool datasets verify that the proposed system maintains high accuracy and robustness in complex scenarios.
Full article
(This article belongs to the Special Issue Sensors, Modeling and Control for Intelligent Marine Robots)
Open Access Article
A Deep Learning Approach for Surface Crack Classification and Segmentation in Unmanned Aerial Vehicle Assisted Infrastructure Inspections
by
Shamendra Egodawela, Amirali Khodadadian Gostar, H. A. D. Samith Buddika, A. J. Dammika, Nalin Harischandra, Satheeskumar Navaratnam and Mojtaba Mahmoodian
Sensors 2024, 24(6), 1936; https://doi.org/10.3390/s24061936 - 18 Mar 2024
Abstract
Surface crack detection is an integral part of infrastructure health surveys. This work presents a transformative shift towards rapid and reliable data collection capabilities, dramatically reducing the time spent on inspecting infrastructures. Two unmanned aerial vehicles (UAVs) were deployed, enabling simultaneous image capture for efficient coverage of the structure. The suggested drone hardware is especially suitable for the inspection of infrastructure with confined spaces that UAVs with a broader footprint cannot access due to a lack of safe access or positioning data. The collected image data were analyzed using a binary classification convolutional neural network (CNN), effectively filtering out images containing cracks. State-of-the-art CNN architectures were compared against a novel CNN layout, “CrackClassCNN”, to obtain the optimal layout for classification. A Segment Anything Model (SAM) was employed to segment defect areas, and its performance was benchmarked against manually annotated images. The suggested “CrackClassCNN” achieved an accuracy rate of 95.02%, and the SAM segmentation process yielded a mean Intersection over Union (IoU) score of 0.778 and an F1 score of 0.735. It was concluded that the selected UAV platform, the communication network, and the suggested processing techniques were highly effective in surface crack detection.
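The IoU and F1 scores used to benchmark the SAM segmentation against manual annotations are standard overlap metrics on binary masks. A minimal sketch (the function name is illustrative; the paper's exact evaluation pipeline is not specified here):

```python
import numpy as np

def iou_and_f1(pred, truth):
    """Intersection over Union and F1 (Dice) score for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    tp = inter                                   # true positive pixels
    fp = np.logical_and(pred, ~truth).sum()      # false positives
    fn = np.logical_and(~pred, truth).sum()      # false negatives
    iou = inter / union if union else 1.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return float(iou), float(f1)
```

Averaging these per-image scores over the annotated test set yields summary values like the 0.778 mean IoU and 0.735 F1 reported above.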
Full article
(This article belongs to the Topic AI Enhanced Civil Infrastructure Safety)
Topics
Topic in
Aerospace, Applied Sciences, Remote Sensing, Sensors, Universe, Data
Techniques and Science Exploitations for Earth Observation and Planetary Exploration
Topic Editors: Yu Tao, Siting Xiong, Rui Song
Deadline: 31 March 2024
Topic in
Agriculture, Forests, Sensors
Metrology-Assisted Production in Agriculture and Forestry
Topic Editors: Heye Bogena, Cosimo Brogi, Christof Huebner, Andreas Panagopoulos
Deadline: 30 April 2024
Topic in
Materials, Nanomaterials, Photonics, Polymers, Applied Sciences, Sensors
Optical and Optoelectronic Properties of Materials and Their Applications
Topic Editors: Zhiping Luo, Gibin George, Navadeep Shrivastava
Deadline: 20 May 2024
Topic in
Remote Sensing, Sensors, Smart Cities, Vehicles, Geomatics
Information Sensing Technology for Intelligent/Driverless Vehicle, 2nd Volume
Topic Editors: Yan Huang, Yi Ren, Penghui Huang, Jun Wan, Zhanye Chen, Shiyang Tang
Deadline: 31 May 2024
Special Issues
Special Issue in
Sensors
Advanced Management of Fog/Edge Networks and IoT Sensors Devices
Guest Editor: Rocío Pérez de Prado
Deadline: 25 March 2024
Special Issue in
Sensors
Advanced Wireless Sensor Network Deployment in Smart Cities, Industry 4.0, and Agriculture 4.0
Guest Editors: Rafael Asorey-Cacheda, Antonio-Javier Garcia-Sanchez, Joan García-Haro, Claudia Liliana Zuniga
Deadline: 31 March 2024
Special Issue in
Sensors
Implanted and Wearable Body Sensors Network
Guest Editors: Somdip Dey, Delaram Jarchi, Xiaojun Zhai
Deadline: 25 April 2024
Special Issue in
Sensors
Smart Sensors for Remotely Operated Robots
Guest Editors: Liviu C. Miclea, Ovidiu P. Stan, Vlad Muresan, Florin Pop
Deadline: 30 April 2024
Topical Collections
Topical Collection in
Sensors
Robotic and Sensor Technologies in Environmental Exploration and Monitoring
Collection Editors: Jacopo Aguzzi, Corrado Costa, Sergio Stefanni, Valerio Funari
Topical Collection in
Sensors
Microfluidic Sensors
Collection Editors: Sabina Merlo, Klaus Stefan Drese