Journal Description
Sensors
Sensors is an international, peer-reviewed, open access journal on the science and technology of sensors. Sensors is published semimonthly online by MDPI. The Polish Society of Applied Electromagnetics (PTZE), Japan Society of Photogrammetry and Remote Sensing (JSPRS), Spanish Society of Biomedical Engineering (SEIB) and International Society for the Measurement of Physical Behaviour (ISMPB) are affiliated with Sensors and their members receive a discount on the article processing charges.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, SCIE (Web of Science), PubMed, MEDLINE, PMC, Ei Compendex, Inspec, Astrophysics Data System, and other databases.
- Journal Rank: JCR - Q2 (Instruments & Instrumentation) / CiteScore - Q1 (Instrumentation)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17 days after submission; acceptance to publication takes 2.8 days (median values for papers published in this journal in the second half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Testimonials: See what our editors and authors say about Sensors.
- Companion journals for Sensors include: Chips, Automation, JCP and Targets.
Impact Factor: 3.9 (2022)
5-Year Impact Factor: 4.1 (2022)
Latest Articles
Exploring Biosensors’ Scientific Production and Research Patterns: A Bibliometric Analysis
Sensors 2024, 24(10), 3082; https://doi.org/10.3390/s24103082 - 12 May 2024
Abstract
More sustainable biosensor production is growing in importance, allowing for the development of technological solutions for several industries, such as those in the health, chemical, and food sectors. Tracking the latest advancements in biosensors’ scientific production is fundamental to determining opportunities for the future of the biosensing field. This article maps scientific production in the biosensors field by running a bibliometric analysis of journal articles registered in the Web of Science database under key biosensor-related concepts. The key concepts were selected by researchers and biosensor technology developers working on the BioAssembler Horizon project. The findings identify the scientific and technological knowledge base on biosensing devices and track the main scientific organisations developing this technology throughout the COVID-19 period (2019–2023). The institutional origin of the publications characterised the global distribution of related knowledge competencies and research partnerships. These results are discussed, shedding light on the scientific, economic, political, and structural factors that contribute to the formation of a scientific knowledge base focused on the performance and design of these sensors. Moreover, the lack of scientific ties between the three main axes of organisations producing expertise in this area (China, the USA, and Russia) points towards the need to find synergies through new mechanisms of co-authorship and collaboration.
(This article belongs to the Section Biosensors)
Open Access Article
Crystallization-Inspired Design and Modeling of Self-Assembly Lattice-Formation Swarm Robotics
by
Zebang Pan, Guilin Wen, Hanfeng Yin, Shan Yin and Zhao Tan
Sensors 2024, 24(10), 3081; https://doi.org/10.3390/s24103081 - 12 May 2024
Abstract
Self-assembly formation is a key research topic for realizing practical applications in swarm robotics. Due to its inherent complexity, designing high-performance self-assembly formation strategies and proposing corresponding macroscopic models remain formidable challenges and present an open research frontier. Taking inspiration from crystallization, this paper introduces a distributed self-assembly formation strategy by defining free, moving, growing, and solid states for robots. Robots in these states can spontaneously organize into user-specified two-dimensional shape formations with lattice structures through local interactions and communications. To address the challenges that complex spatial structures pose for macroscopic modeling, this work introduces a structural features estimation method. Subsequently, a corresponding non-spatial macroscopic model is developed to predict and analyze the self-assembly behavior, employing the proposed estimation method and a stock and flow diagram. Real-robot experiments and simulations validate the flexibility, scalability, and high efficiency of the proposed self-assembly formation strategy. Moreover, extensive experimental and simulation results demonstrate the model’s accuracy in predicting the self-assembly process under different conditions. Model-based analysis indicates that the proposed self-assembly formation strategy can fully utilize the performance of individual robots and exhibits strong self-stability.
(This article belongs to the Special Issue Advances in Mobile Robot Perceptions, Planning, Control and Learning: 2nd Edition)
Open Access Article
Broken Rotor Bar Detection Based on Steady-State Stray Flux Signals Using Triaxial Sensor with Random Positioning
by
Marko Zubčić, Ivan Pavić, Petar Matić and Adam Polak
Sensors 2024, 24(10), 3080; https://doi.org/10.3390/s24103080 - 12 May 2024
Abstract
This paper investigates the detection of a broken rotor bar in squirrel-cage induction motors using a novel approach of randomly positioning a triaxial sensor over the motor surface. This study is conducted on two motors under laboratory conditions, where one motor is kept in a healthy state, and the other is subjected to a broken rotor bar (BRB) fault. The induced electromotive force of the triaxial coils, recorded over ten days with 100 measurements per day, is statistically analyzed. Normality tests and graphical interpretation methods are used to evaluate the data distribution. Parametric and non-parametric approaches are used to analyze the data. Both approaches show that the measurement method is valid and consistent over time and statistically distinguishes healthy motors from those with BRB defects when a reference or threshold value is specified. While the comparison between healthy motors shows a discrepancy, the quantitative analysis shows a smaller estimated difference in mean values between healthy motors than between healthy and BRB motors.
(This article belongs to the Section Physical Sensors)
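The statistical workflow the abstract describes, with repeated measurements, a parametric comparison of means, and a threshold rule against a healthy reference, can be sketched as follows. The sample values, the choice of Welch's t-statistic, and the 0.15 threshold are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Hypothetical EMF amplitude samples for a healthy motor and a motor with
# a broken rotor bar (BRB); the distributions are invented for illustration.
rng = np.random.default_rng(0)
healthy = rng.normal(loc=1.00, scale=0.05, size=100)
brb = rng.normal(loc=1.35, scale=0.08, size=100)

def welch_t(a, b):
    """Welch's t-statistic for two independent samples (parametric path)."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

def exceeds_threshold(sample, reference_mean, threshold):
    """Threshold rule: flag a BRB fault when the sample median deviates
    from the healthy reference by more than a specified threshold."""
    return abs(np.median(sample) - reference_mean) > threshold

t = welch_t(healthy, brb)
print(f"Welch t = {t:.2f}")
print("healthy flagged:", exceeds_threshold(healthy, 1.00, 0.15))
print("BRB flagged:", exceeds_threshold(brb, 1.00, 0.15))
```

A strongly negative t-statistic and a flag only on the faulty motor correspond to the paper's finding that the two conditions are statistically distinguishable once a reference value is specified.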
Open Access Article
A Vision/Inertial Navigation/Global Navigation Satellite Integrated System for Relative and Absolute Localization in Land Vehicles
by
Yao Zhang, Liang Chu, Yabin Mao, Xintong Yu, Jiawei Wang and Chong Guo
Sensors 2024, 24(10), 3079; https://doi.org/10.3390/s24103079 - 12 May 2024
Abstract
This paper presents an enhanced ground vehicle localization method designed to address the challenges associated with state estimation for autonomous vehicles operating in diverse environments. The focus is specifically on the precise localization of position and orientation in both local and global coordinate systems. The proposed approach integrates local estimates generated by existing visual–inertial odometry (VIO) methods into global position information obtained from the Global Navigation Satellite System (GNSS). This integration is achieved through optimizing fusion in a pose graph, ensuring precise local estimation and drift-free global position estimation. Considering the inherent complexities in autonomous driving scenarios, such as the potential failures of a visual–inertial navigation system (VINS) and restrictions on GNSS signals in urban canyons, leading to disruptions in localization outcomes, we introduce an adaptive fusion mechanism. This mechanism allows seamless switching between three modes: utilizing only VINS, using only GNSS, and normal fusion. The effectiveness of the proposed algorithm is demonstrated through rigorous testing in the Carla simulation environment and challenging UrbanNav scenarios. The evaluation includes both qualitative and quantitative analyses, revealing that the method exhibits robustness and accuracy.
(This article belongs to the Section Navigation and Positioning)
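The adaptive switching between the three modes (VINS only, GNSS only, and normal fusion) can be sketched as a simple health-check dispatcher. The function name, the satellite-count criterion, and the `min_sats` threshold are illustrative assumptions, not the paper's actual switching logic:

```python
from enum import Enum

class FusionMode(Enum):
    VINS_ONLY = "vins_only"  # GNSS degraded, e.g. in an urban canyon
    GNSS_ONLY = "gnss_only"  # visual-inertial tracking has failed
    FUSION = "fusion"        # both sources healthy: pose-graph fusion

def select_mode(vins_healthy: bool, gnss_satellites: int,
                min_sats: int = 6) -> FusionMode:
    """Pick the localization mode from simple health checks.

    `min_sats` is an illustrative GNSS visibility threshold, not a value
    taken from the paper.
    """
    gnss_healthy = gnss_satellites >= min_sats
    if vins_healthy and gnss_healthy:
        return FusionMode.FUSION
    if vins_healthy:
        return FusionMode.VINS_ONLY
    return FusionMode.GNSS_ONLY

print(select_mode(True, 10))   # both healthy: fuse in the pose graph
print(select_mode(True, 3))    # restricted sky view: VINS only
print(select_mode(False, 10))  # VINS failure: GNSS only
```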
Open Access Article
Multi-Task Scenario Encrypted Traffic Classification and Parameter Analysis
by
Guanyu Wang and Yijun Gu
Sensors 2024, 24(10), 3078; https://doi.org/10.3390/s24103078 - 12 May 2024
Abstract
The widespread use of encrypted traffic poses challenges to network management and network security. Traditional machine learning-based methods for encrypted traffic classification no longer meet the demands of management and security. The application of deep learning technology in encrypted traffic classification significantly improves the accuracy of models. This study focuses primarily on encrypted traffic classification in the fields of network analysis and network security. To address the shortcomings of existing deep learning-based encrypted traffic classification methods in terms of computational memory consumption and interpretability, we introduce a Parameter-Efficient Fine-Tuning method for efficiently tuning the parameters of an encrypted traffic classification model. Experimentation is conducted on various classification scenarios, including Tor traffic service classification and malicious traffic classification, using multiple public datasets. Fair comparisons are made with state-of-the-art deep learning model architectures. The results indicate that the proposed method significantly reduces the scale of fine-tuning parameters and computational resource usage while achieving performance comparable to that of the existing best models. Furthermore, we interpret the learning mechanism of encrypted traffic representation in the pre-training model by analyzing the parameters and structure of the model. This comparison validates the hypothesis that the model exhibits hierarchical structure, clear organization, and distinct features.
(This article belongs to the Section Sensor Networks)
Open Access Article
Image Fusion Method Based on Snake Visual Imaging Mechanism and PCNN
by
Qiang Wang, Xuezhi Yan, Wenjie Xie and Yong Wang
Sensors 2024, 24(10), 3077; https://doi.org/10.3390/s24103077 - 12 May 2024
Abstract
Image fusion enriches an image and improves its quality, facilitating subsequent image processing and analysis. With the increasing importance of image fusion technology, the fusion of infrared and visible images has received extensive attention. Deep learning is now widely used in the field of image fusion; however, in some applications it is not possible to obtain a large amount of training data. Because certain specialised organs of snakes can receive and process both infrared and visible information, fusion methods for infrared and visible light that simulate the snake's visual mechanism have emerged. This paper therefore approaches image fusion from the perspective of visual bionics; such methods do not require a significant amount of training data. However, most fusion methods that simulate snake vision suffer from unclear details, so this paper combines this approach with a pulse-coupled neural network (PCNN). By studying two receptive field models of retinal nerve cells, six dual-mode cell imaging mechanisms of rattlesnakes and their mathematical models, and the PCNN model, an improved fusion method for infrared and visible images is proposed. The proposed fusion method was evaluated on eleven groups of source images, and three no-reference image quality evaluation indexes were compared with those of seven other fusion methods. The experimental results show that the improved algorithm proposed in this paper is better overall than the comparison methods on the three evaluation indexes.
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)
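For readers unfamiliar with the PCNN component mentioned above, a minimal sketch of a standard simplified PCNN (feeding input, neighbor linking, and an exponentially decaying dynamic threshold) is given below; the parameter values and linking kernel are generic illustrations, not the paper's settings:

```python
import numpy as np

def pcnn_fire_map(img, beta=0.2, alpha_theta=0.2, v_theta=20.0, steps=10):
    """Simplified pulse-coupled neural network: returns, per pixel, the
    iteration at which the neuron first fires (brighter pixels fire
    earlier). Pixels that never fire keep the value `steps`."""
    F = np.asarray(img, dtype=float)        # feeding input = pixel intensity
    Y = np.zeros_like(F)                    # pulse (firing) output
    theta = np.full_like(F, F.max() + 1.0)  # dynamic threshold
    first_fire = np.full(F.shape, steps, dtype=int)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])    # linking weights, self excluded
    for n in range(steps):
        pad = np.pad(Y, 1)                  # linking input: weighted neighbor pulses
        L = sum(kernel[i, j] * pad[i:i + F.shape[0], j:j + F.shape[1]]
                for i in range(3) for j in range(3))
        U = F * (1.0 + beta * L)            # internal activity
        Y = (U > theta).astype(float)       # fire where activity beats threshold
        first_fire = np.where((Y > 0) & (first_fire == steps), n, first_fire)
        theta = np.exp(-alpha_theta) * theta + v_theta * Y  # decay, reset on fire
    return first_fire

ff = pcnn_fire_map(np.array([[10.0, 100.0], [50.0, 200.0]]))
print(ff)
```

Fusion methods typically use such firing times or firing counts as an activity measure to decide, per pixel, which source image contributes to the fused result.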
Open Access Article
RBS and ABS Coordinated Control Strategy Based on Explicit Model Predictive Control
by
Liang Chu, Jinwei Li, Zhiqi Guo, Zewei Jiang, Shibo Li, Weiming Du, Yilin Wang and Chong Guo
Sensors 2024, 24(10), 3076; https://doi.org/10.3390/s24103076 - 12 May 2024
Abstract
During the braking process of electric vehicles, both the regenerative braking system (RBS) and anti-lock braking system (ABS) modulate the hydraulic braking force, leading to control conflict that impacts the effectiveness and real-time capability of coordinated control. Aiming to enhance the coordinated control effectiveness of RBS and ABS within the electro-hydraulic composite braking system, this paper proposes a coordinated control strategy based on explicit model predictive control (eMPC-CCS). Initially, a comprehensive braking control framework is established, combining offline adaptive control law generation, online optimized control law application, and state compensation to effectively coordinate braking force through the electro-hydraulic system. During offline processing, eMPC generates a real-time-oriented state feedback control law based on real-world micro trip segments, improving the adaptiveness of the braking strategy across different driving conditions. In the online implementation, the developed three-dimensional eMPC control laws, corresponding to current driving conditions, are invoked, thereby enhancing the potential for real-time braking strategy implementation. Moreover, the state error compensator is integrated into eMPC-CCS, yielding a state gain matrix that optimizes the vehicle braking status and ensures robustness across diverse braking conditions. Lastly, simulation evaluation and hardware-in-the-loop (HIL) testing demonstrate that the proposed eMPC-CCS effectively coordinates the regenerative and hydraulic braking systems, outperforming other CCSs in terms of braking energy recovery and real-time capability.
(This article belongs to the Special Issue Integrated Control and Sensing Technology for Electric Vehicles)
Open Access Article
Real-Time Recognition Algorithm of Small Target for UAV Infrared Detection
by
Qianqian Zhang, Li Zhou and Junshe An
Sensors 2024, 24(10), 3075; https://doi.org/10.3390/s24103075 - 12 May 2024
Abstract
Unmanned Aerial Vehicle (UAV) infrared detection has problems such as weak and small targets, complex backgrounds, and poor real-time detection performance. It is difficult for general target detection algorithms to achieve the requirements of a high detection rate, low missed detection rate, and high real-time performance. In order to solve these problems, this paper proposes an improved small target detection method based on Picodet. First, to address the problem of poor real-time performance, an improved lightweight LCNet network was introduced as the backbone network for feature extraction. Second, in order to solve the problems of high false detection rate and missed detection rate due to weak targets, the Squeeze-and-Excitation module was added and the feature pyramid structure was improved. Experimental results obtained on the HIT-UAV public dataset show that the improved detection model’s real-time frame rate increased by 31 fps and the mean average precision (mAP) increased by 7%, which proves the effectiveness of this method for UAV infrared small target detection.
(This article belongs to the Topic Innovation and Inventions in Aerospace and UAV Applications)
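The Squeeze-and-Excitation module mentioned above can be sketched in its standard form (global average pooling, two fully connected layers, and sigmoid channel gating). The NumPy implementation and random weights below are purely illustrative; in the detector the two weight matrices are learned:

```python
import numpy as np

def squeeze_excitation(feature_maps, w1, w2):
    """Squeeze-and-Excitation reweighting of a (C, H, W) feature tensor.

    w1 has shape (C//r, C) and w2 has shape (C, C//r): the two FC layers
    of the SE block, with reduction ratio r.
    """
    z = feature_maps.mean(axis=(1, 2))       # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)              # excitation FC1 + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))      # excitation FC2 + sigmoid -> (C,)
    return feature_maps * s[:, None, None]   # reweight each channel

# Illustrative shapes only: 8 channels, 4x4 spatial maps, reduction r = 2.
rng = np.random.default_rng(1)
x = rng.normal(size=(8, 4, 4))
w1 = rng.normal(size=(4, 8)) * 0.1
w2 = rng.normal(size=(8, 4)) * 0.1
y = squeeze_excitation(x, w1, w2)
print(y.shape)
```

Because the sigmoid gate lies in (0, 1), the block can only attenuate channels, letting the network emphasise informative channels (here, those carrying weak target responses) relative to the rest.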
Open Access Article
Automated Vibroacoustic Monitoring of Trees for Borer Infestation
by
Ilyas Potamitis and Iraklis Rigakis
Sensors 2024, 24(10), 3074; https://doi.org/10.3390/s24103074 - 12 May 2024
Abstract
In previous research, we presented an apparatus designed for comprehensive and systematic surveillance of trees against borers. This apparatus entailed the insertion of an uncoated waveguide into the tree trunk, enabling the transmission of micro-vibrations generated by moving or digging larvae to a piezoelectric probe. Subsequent recordings were then transmitted at predetermined intervals to a server, where analysis was conducted manually to assess the infestation status of the tree. However, this method is hampered by significant limitations when scaling to monitor thousands of trees across extensive spatial domains. In this study, we address this challenge by integrating signal processing techniques capable of distinguishing vibrations attributable to borers from those originating externally to the tree. Our primary innovation involves quantifying the impulses resulting from the fracturing of wood fibers due to borer activity. The device employs criteria such as impulse duration and a strategy of waiting for periods of relative quietness before commencing the counting of impulses. Additionally, we provide an annotated large-scale database comprising laboratory and field vibrational recordings, which will facilitate further advancements in this research domain.
(This article belongs to the Special Issue AI, IoT and Smart Sensors for Precision Agriculture)
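The counting criteria described above, impulse duration plus waiting for a period of relative quietness, might look roughly like this; the threshold, duration limit, and quiet-gap values are illustrative assumptions, not the device's actual parameters:

```python
def count_borer_impulses(samples, threshold, max_impulse_len, quiet_gap):
    """Count short impulses in a vibration trace, requiring `quiet_gap`
    consecutive sub-threshold samples before an impulse is counted."""
    count = 0
    quiet = 0  # consecutive quiet samples seen so far
    i = 0
    n = len(samples)
    while i < n:
        if abs(samples[i]) < threshold:
            quiet += 1
            i += 1
            continue
        start = i  # measure the impulse duration
        while i < n and abs(samples[i]) >= threshold:
            i += 1
        duration = i - start
        if quiet >= quiet_gap and duration <= max_impulse_len:
            count += 1  # short impulse after quietness: candidate fiber fracture
        quiet = 0
    return count

# Toy trace: two short impulses separated by one long (rejected) burst.
trace = [0, 0, 0, 0, 5, 0, 0, 0, 0, 5, 5, 5, 5, 5, 5, 0, 0, 0, 0, 5]
print(count_borer_impulses(trace, threshold=1, max_impulse_len=3, quiet_gap=3))
```

Long bursts (for example, external disturbances) fail the duration criterion, while impulses arriving without a preceding quiet period are treated as part of ongoing external noise and skipped.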
Open Access Article
Engineering the Signal Resolution of a Paper-Based Cell-Free Glutamine Biosensor with Genetic Engineering, Metabolic Engineering, and Process Optimization
by
Tyler J. Free, Joseph P. Talley, Chad D. Hyer, Catherine J. Miller, Joel S. Griffitts and Bradley C. Bundy
Sensors 2024, 24(10), 3073; https://doi.org/10.3390/s24103073 - 12 May 2024
Abstract
Specialized cancer treatments have the potential to exploit glutamine dependence to increase patient survival rates. Glutamine diagnostics capable of tracking a patient’s response to treatment would enable a personalized treatment dosage to optimize the tradeoff between treatment success and dangerous side effects. Current clinical glutamine testing requires sophisticated and expensive lab-based tests, which are not broadly available on a frequent, individualized basis. To address the need for a low-cost, portable glutamine diagnostic, this work engineers a cell-free glutamine biosensor to overcome assay background and signal-to-noise limitations evident in previously reported studies. The findings from this work culminate in the development of a shelf-stable, paper-based, colorimetric glutamine test with a high signal strength and a high signal-to-background ratio for dramatically improved signal resolution. While the engineered glutamine test is important progress towards improving the management of cancer and other health conditions, this work also expands the assay development field of the promising cell-free biosensing platform, which can facilitate the low-cost detection of a broad variety of target molecules with high clinical value.
(This article belongs to the Special Issue Recent Innovations in Electrochemical Biosensors)
Open Access Article
Extended-Aperture Shape Measurements Using Spatially Partially Coherent Illumination (ExASPICE)
by
Mostafa Agour, Claas Falldorf and Ralf B. Bergmann
Sensors 2024, 24(10), 3072; https://doi.org/10.3390/s24103072 - 12 May 2024
Abstract
We have recently demonstrated that the 3D shape of micro-parts can be measured using LED illumination based on speckle contrast evaluation in the recently developed SPICE profilometry (shape measurements based on imaging with spatially partially coherent illumination). The main advantage of SPICE is its improved robustness and measurement speed compared to confocal or white light interferometry. The limited spatial coherence of the LED illumination is used for depth discrimination. An electrically tunable lens in a 4f-configuration is used for fast depth scanning without mechanically moving parts. The approach is efficient, takes less than a second to capture the required images, is eye-safe and offers a depth of focus of a few millimeters. However, SPICE’s main limitation is its assumption of a small illumination aperture. Such a small illumination aperture affects the axial scan resolution, which dominates the measurement uncertainty. In this paper, we propose a novel method to overcome the aperture angle limitation of SPICE by illuminating the object from different directions with several independent LED sources. This approach reduces the full width at half maximum of the contrast envelope to one-eighth, resulting in a twofold improvement in measurement accuracy. As a proof of concept, shape measurements of various metal objects are presented.
(This article belongs to the Section Sensing and Imaging)
Open Access Article
Full-Scale Modeling and FBGs Experimental Measurements for Thermal Analysis of Converter Transformer
by
Fan Yang, Sance Gao, Gepeng Wang, Hanxue Hao and Pengbo Wang
Sensors 2024, 24(10), 3071; https://doi.org/10.3390/s24103071 - 12 May 2024
Abstract
As the imbalance between power demand and load capacity in electrical systems becomes increasingly severe, investigating the temperature variations in transformers under different load stresses is crucial for ensuring their safe operation. The thermal analysis of converter transformers poses challenges due to the complexity of model construction. This paper develops a full-scale model of a converter transformer using a multi-core high-performance computer and explores its thermal state at 80%, 100%, and 120% loading ratios using the COUPLED iteration method. Additionally, to validate the simulation model, 24 fiber Bragg gratings (FBGs) are installed in the experimental transformer to record the temperature data. The results indicate a general upward trend in the winding temperature from bottom to top. However, an internal temperature rise followed by a decrease is observed within certain sections. Moreover, as the loading ratio increases, both the peak temperature and temperature differential of the transformer windings rise, reaching a peak temperature of 107.9 °C at a 120% loading ratio. The maximum discrepancy between the simulation and experimental results does not exceed 3.5%, providing effective guidance for the transformer design and operational maintenance.
(This article belongs to the Special Issue Fiber Grating Sensors and Applications)
Open Access Article
A Novel Lightweight Model for Underwater Image Enhancement
by
Botao Liu, Yimin Yang, Ming Zhao and Min Hu
Sensors 2024, 24(10), 3070; https://doi.org/10.3390/s24103070 - 11 May 2024
Abstract
Underwater images suffer from low contrast and color distortion. In order to improve the quality of underwater images and reduce storage and computational resources, this paper proposes a lightweight model, Rep-UWnet, to enhance underwater images. The model consists of a fully connected convolutional network and three densely connected RepConv blocks in series, with the input image connected to the output of each block by a skip connection. First, the original underwater image undergoes feature extraction by the SimSPPF module and is summed with the original image to produce the network input. Then, the first convolutional layer, with a kernel size of 3 × 3, generates 64 feature maps, and the multi-scale hybrid convolutional attention module enhances the useful features by reweighting the features of different channels. Second, three RepConv blocks are connected to reduce the number of parameters in extracting features and increase the test speed. Finally, a convolutional layer with 3 kernels generates the enhanced underwater image. Our method reduces the number of parameters from 2.7 M to 0.45 M (around an 83% reduction) yet outperforms state-of-the-art algorithms in extensive experiments. Furthermore, we demonstrate that Rep-UWnet effectively improves high-level vision tasks like edge detection and single-image depth estimation. This method not only surpasses the comparison methods in objective quality, but also significantly improves the contrast, colorimetry, and clarity of underwater images in subjective quality.
(This article belongs to the Collection Advances in Deep-Learning-Based Sensing, Imaging, and Video Processing)
Open Access Article
Self-Calibration Method for Circular Encoders Based on Inertia and a Single Read-Head
by
Xiaoyi Wang, Longyuan Xiao, Kunlei Zheng, Chengxiang Zhao, Mingkang Liu, Tianyang Yao, Dongjie Zhu, Gaojie Liang and Zhaoyao Shi
Sensors 2024, 24(10), 3069; https://doi.org/10.3390/s24103069 - 11 May 2024
Abstract
This article proposes a new self-calibration method for circular encoders based on inertia and a single read-head. The velocity curves of the circular encoder are fitted with polynomials and, based on the principle of circle closure and the periodicity of the distribution for angle intervals, the proportionality between the theoretical value and the actual value of each angle interval is obtained. In the experimental system constructed, the feasibility of the proposed method was verified through self-calibration experiments, repeatability experiments, and comparative experiments with the time-measurement dynamic reversal (TDR) method. In addition, this article also proposes an iterative method to improve the self-calibration accuracy. Experimental verification was carried out, and the results show that the new method can effectively compensate for the error of angle measurement in the circular encoder. The peak-to-peak value of the error of angle measurement was reduced from 239.343” to 11.867”, and the repeatability of the calibration results of the new method was less than 2.77”.
(This article belongs to the Section Optical Sensors)
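The circle-closure principle behind the self-calibration can be illustrated with a toy example: during free inertial rotation at near-constant speed, the time spent in each graduation interval is proportional to its actual angular width, and closure forces the widths to sum to exactly 360°. The encoder line count, speed, and interval widths below are invented for illustration and ignore the velocity-polynomial fitting step of the paper:

```python
import numpy as np

def calibrate_intervals(edge_times):
    """Estimate the actual angular width of each encoder interval from
    edge time stamps recorded during free (inertial) rotation, assuming
    approximately constant speed over one revolution. Circle closure
    forces the estimated widths to sum to exactly 360 degrees."""
    dt = np.diff(edge_times)       # time spent in each interval
    return 360.0 * dt / dt.sum()   # closure: proportional share of 360 deg

# Hypothetical 8-line encoder with one wide and one narrow graduation.
true_widths = np.array([45, 45, 46, 45, 44, 45, 45, 45], dtype=float)
omega = 90.0  # deg/s, assumed constant over the revolution
edges = np.concatenate([[0.0], np.cumsum(true_widths / omega)])
est = calibrate_intervals(edges)
print(np.round(est, 3))
```

In the actual method, the constant-speed assumption is replaced by the polynomial velocity fit, and the per-interval proportions are refined iteratively; the closure constraint is what makes the calibration self-contained, needing only a single read-head and no external reference.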
Open Access Article
Electrical Sensor Calibration by Fuzzy Clustering with Mandatory Constraint
by
Shihong Yue, Keyi Fu, Liping Liu and Yuwei Zhao
Sensors 2024, 24(10), 3068; https://doi.org/10.3390/s24103068 - 11 May 2024
Abstract
Electrical tomography sensors have been widely used for pipeline parameter detection and estimation. Before they can be used in formal applications, the sensors must be calibrated using enough labeled data. However, due to the high complexity of actual measuring environments, the calibrated sensors are inaccurate since the labeling data may be uncertain, inconsistent, incomplete, or even invalid. Alternatively, it is always possible to obtain partial data with accurate labels, which can form mandatory constraints to correct errors in other labeling data. In this paper, a semi-supervised fuzzy clustering algorithm is proposed, and the fuzzy membership degree in the algorithm leads to a set of mandatory constraints to correct these inaccurate labels. Experiments in a dredger validate the proposed algorithm in terms of its accuracy and stability. This new fuzzy clustering algorithm can generally decrease the error of labeling data in any sensor calibration process.
Full article
(This article belongs to the Section Electronic Sensors)
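A minimal sketch of the general technique named above, semi-supervised fuzzy c-means with mandatory constraints (function names and details are illustrative, not the paper's code): membership rows of reliably labeled samples are clamped to their known labels, while unlabeled rows update as in standard fuzzy c-means.

```python
import numpy as np

# Hedged sketch of semi-supervised fuzzy c-means: membership rows for
# reliably labeled samples are clamped to their labels (the "mandatory
# constraint"); unlabeled rows follow the standard FCM update.
def constrained_fcm(X, c, labeled_idx, labels, m=2.0, iters=50):
    n = len(X)
    rng = np.random.default_rng(0)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for i, lab in zip(labeled_idx, labels):      # clamp constrained rows
        U[i] = np.eye(c)[lab]
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))           # standard FCM membership
        U /= U.sum(axis=1, keepdims=True)
        for i, lab in zip(labeled_idx, labels):  # re-apply constraints
            U[i] = np.eye(c)[lab]
    return U, centers

# Two well-separated 1-D clusters; one accurately labeled anchor per cluster.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
U, centers = constrained_fcm(X, c=2, labeled_idx=[0, 3], labels=[0, 1])
print(U.argmax(axis=1))  # expected: first three samples in cluster 0, rest in cluster 1
```

The clamped rows anchor each cluster to its class, so the final hard assignments of the unlabeled samples inherit consistent labels.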
Open Access Article
High-Voltage Cable Buffer Layer Ablation Fault Identification Based on Artificial Intelligence and Frequency Domain Impedance Spectroscopy
by Jiajun Liu, Mingchao Ma, Xin Liu and Haokun Xu
Sensors 2024, 24(10), 3067; https://doi.org/10.3390/s24103067 (registering DOI) - 11 May 2024
Abstract
In recent years, buffer layer ablation faults in high-voltage cables have become frequent, posing a serious threat to their safe and stable operation. Failure to detect and address such faults promptly may lead to cable breakdown, disrupting normal operation of the power system. To overcome the limitations of existing identification methods, a buffer-layer-ablation fault identification method based on frequency domain impedance spectroscopy and artificial intelligence is proposed. First, from the distributed-parameter cable model and frequency domain impedance spectroscopy, a mathematical model of the input impedance of a cable containing buffer layer ablation faults is derived. In simulation, the input impedance spectra at the near end of the cable are obtained under normal conditions, buffer layer ablation, local aging, and inductive faults, enabling inductive and capacitive faults to be distinguished through comparative analysis. Second, the frequency domain amplitude spectra of buffer layer ablation and local aging faults are used as datasets and fed into a neural network model for training and validation to identify the two fault types. Finally, assessment with multiple evaluation metrics confirms the superiority of the MLP neural network among cable fault identification models, and experiments validate the effectiveness of the proposed method.
Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
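To make the classification step concrete, here is a hedged sketch (not the paper's model or data): a one-hidden-layer MLP trained on synthetic frequency-domain amplitude "spectra" to separate two fault classes. Real inputs would be measured input-impedance spectra.

```python
import numpy as np

# Illustrative MLP fault classifier; the two synthetic spectrum shapes and
# all hyperparameters are invented for this sketch.
rng = np.random.default_rng(1)
freqs = np.linspace(0, 1, 32)

def make_spectra(fault, n):
    base = np.sin(2 * np.pi * (3 if fault else 2) * freqs)  # class-specific shape
    return base + 0.1 * rng.standard_normal((n, 32))

X = np.vstack([make_spectra(0, 100), make_spectra(1, 100)])
y = np.array([0] * 100 + [1] * 100)

W1 = 0.1 * rng.standard_normal((32, 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.standard_normal(8); b2 = 0.0
for _ in range(500):  # plain batch gradient descent on cross-entropy loss
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    g = (p - y) / len(y)                       # dLoss/dlogit
    gh = np.outer(g, W2) * (1.0 - h ** 2)      # backprop to hidden layer
    W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum()
    W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(axis=0)

h = np.tanh(X @ W1 + b1)
p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
acc = float(((p > 0.5) == y).mean())
print(f"training accuracy: {acc:.2f}")
```

The paper additionally compares multiple evaluation metrics across models; this sketch shows only the basic train-and-score loop.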
Open Access Article
Automatic Localization of Soybean Seedlings Based on Crop Signaling and Multi-View Imaging
by Bo Jiang, He-Yi Zhang and Wen-Hao Su
Sensors 2024, 24(10), 3066; https://doi.org/10.3390/s24103066 (registering DOI) - 11 May 2024
Abstract
Soybean is grown worldwide for its high protein and oil content. Weeds compete fiercely for resources, reducing soybean yields. Because weeds are becoming progressively more resistant to herbicides and the cost of manual weeding is rising quickly, mechanical weed control is becoming the preferred method. However, mechanical weeding struggles to remove intra-row weeds because rapid and precise weed/soybean detection and localization technology is lacking. Rhodamine B (Rh-B) is a systemic crop compound that can be absorbed by soybeans and fluoresces under a specific excitation light. The purpose of this study is to combine systemic crop compounds and computer vision technology for the identification and localization of soybeans in the field. The fluorescence distribution of systemic crop compounds in soybeans and their effects on plant growth were explored. In soybeans treated with Rh-B, the fluorescence was concentrated mainly in the cotyledons. After comparing soybean seedlings treated with nine Rh-B solutions at concentrations ranging from 0 to 1440 ppm, treatment with 180 ppm Rh-B for 24 h was identified as the recommended dosage, producing significant fluorescence without affecting crop growth. Higher Rh-B concentrations reduced crop biomass, while prolonged treatment times reduced seed germination. The fluorescence lasted for 20 days, ensuring a stable signal in the early stages of growth. Additionally, a precise inter-row soybean plant location system based on a fluorescence imaging system, with a 96.7% identification accuracy determined on 300 datasets, was proposed. This article further confirms the potential of crop signaling technology to assist machines in crop identification and localization in the field.
Full article
(This article belongs to the Section Smart Agriculture)
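A toy sketch of the crop-signaling idea (image, threshold, and all parameters are invented for illustration): fluorescent cotyledons appear as bright blobs against a dark background, so thresholding the fluorescence channel and taking blob centroids localizes the crop plants.

```python
import numpy as np

# Illustrative fluorescence-based localization: threshold the image and
# compute the centroid of the bright pixels. A real pipeline would segment
# individual blobs and handle multiple plants per frame.
def locate_fluorescent_plants(img, thresh=0.5):
    """Return the (row, col) centroid of all above-threshold pixels, or None."""
    ys, xs = np.nonzero(img > thresh)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

img = np.zeros((40, 40))
img[10:14, 20:24] = 0.9                # one simulated glowing cotyledon
print(locate_fluorescent_plants(img))  # → (11.5, 21.5)
```

Because Rh-B fluorescence marks the crop rather than the weeds, any bright detection can be treated as "soybean" and everything else as a weeding target.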
Open Access Article
Temperature-Switch-Controlled Second Harmonic Mode Sensor for Brain-Tissue Detection
by Xiang Li, Cheng Yang, Chuming Guo, Qijuan Li, Chuan Peng and Haifeng Zhang
Sensors 2024, 24(10), 3065; https://doi.org/10.3390/s24103065 (registering DOI) - 11 May 2024
Abstract
Identifying brain-tissue types holds significant research value for non-contact brain-tissue measurement in the biomedical field. In this paper, a layered metastructure is proposed, and the second harmonic generation (SHG) in a multilayer metastructure is derived using the transfer matrix method. With the SHG conversion efficiency (CE) as the measurement signal, the distinguishable refractive index ranges are 1.23~1.31 refractive index units (RIU) and 1.38~1.44 RIU, with sensitivities of 0.8597 RIU−1 and 1.2967 RIU−1, respectively. The sensor can distinguish various brain tissues, including gray matter, white matter, and low-grade glioma, thereby functioning as a second harmonic mode sensor (SHMS). Furthermore, temperature has a significant impact on the SHG CE, which can be used to define a switch signal indicating whether the SHMS is functioning properly. When the temperature is in the range 291.4~307.9 Kelvin (K), the temperature switch is in the “on” state and the optimal SHG CE exceeds 0.298%, indicating that the SHMS is working. Outside this range, the SHG CE decreases significantly, indicating that the temperature switch is in the “off” state and the SHMS is not working. By varying the temperature and using the response of the SHG CE, the temperature-switch function is achieved, providing a new approach for temperature-controlled second harmonic detection.
Full article
(This article belongs to the Section Physical Sensors)
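A hedged note on the figure of merit, since the abstract does not state the formula: a sensitivity quoted in RIU−1 is conventionally the slope of the sensor response, here the SHG conversion efficiency, with respect to the refractive index of the analyte:

```latex
S = \frac{\Delta \mathrm{CE}}{\Delta n} \qquad \left[\mathrm{RIU}^{-1}\right]
```

Under this reading, the two quoted sensitivities (0.8597 RIU−1 and 1.2967 RIU−1) are the slopes over the 1.23~1.31 RIU and 1.38~1.44 RIU ranges, respectively.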
Open Access Review
Comprehensive Investigation of Unmanned Aerial Vehicles (UAVs): An In-Depth Analysis of Avionics Systems
by Khaled Osmani and Detlef Schulz
Sensors 2024, 24(10), 3064; https://doi.org/10.3390/s24103064 (registering DOI) - 11 May 2024
Abstract
The evolving technologies of Unmanned Aerial Vehicles (UAVs) have extended their applicability to diverse domains, including surveillance, commerce, military operations, and smart electric grid monitoring. Modern UAV avionics enable precise aircraft operation through autonomous navigation, obstacle identification, and collision prevention. Avionics architectures are generally complex, with deep hierarchies and intricate interconnections. To support a comprehensive understanding of UAV design, this paper assesses and critically reviews the purpose-classified electronics hardware inside UAVs, thoroughly analyzing the corresponding performance metrics of each. The review also explores the different algorithms used for data processing, flight control, surveillance, navigation, protection, and communication. Consequently, this paper enriches the knowledge base of UAVs, offering an informative background on various UAV design processes, particularly those related to electric smart grid applications. Finally, a relevant real-world project is discussed as a recommendation for future work.
Full article
(This article belongs to the Section Intelligent Sensors)
Open Access Article
DSOMF: A Dynamic Environment Simultaneous Localization and Mapping Technique Based on Machine Learning
by Shengzhe Yue, Zhengjie Wang and Xiaoning Zhang
Sensors 2024, 24(10), 3063; https://doi.org/10.3390/s24103063 (registering DOI) - 11 May 2024
Abstract
To address the reduced localization accuracy and incomplete map construction exhibited by classical semantic simultaneous localization and mapping (SLAM) algorithms in dynamic environments, this study introduces a dynamic-scene SLAM technique that builds upon direct sparse odometry (DSO) and incorporates instance segmentation and video completion algorithms. While prioritizing real-time performance, we leverage the rapid matching capability of DSO to link identical dynamic objects in consecutive frames. This association is achieved by merging semantic and geometric data, and including semantic probability enhances the matching accuracy during image tracking. Furthermore, we incorporate a loop closure module based on video inpainting algorithms into the mapping thread, allowing the algorithm to rely on the completed static background for loop closure detection and further enhancing localization accuracy. The efficacy of this approach is validated using the TUM and KITTI public datasets and an unmanned-platform experiment. Experimental results show that, in various dynamic scenes, our method improves localization accuracy by more than 85% compared with the DSO system.
Full article
(This article belongs to the Topic 3D Computer Vision and Smart Building and City, 2nd Volume)
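One ingredient described above, associating the same dynamic object across consecutive frames by combining geometric overlap with semantic agreement, can be sketched as follows (thresholds and the greedy matching scheme are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Hedged sketch: match instance masks across frames when the class labels
# agree and the mask intersection-over-union (IoU) is high enough.
def mask_iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def associate(prev_masks, prev_classes, cur_masks, cur_classes, iou_min=0.3):
    """Greedy frame-to-frame matching: same class and sufficient IoU."""
    matches = []
    for i, (pm, pc) in enumerate(zip(prev_masks, prev_classes)):
        best, best_iou = None, iou_min
        for j, (cm, cc) in enumerate(zip(cur_masks, cur_classes)):
            iou = mask_iou(pm, cm)
            if cc == pc and iou > best_iou:
                best, best_iou = j, iou
        if best is not None:
            matches.append((i, best))
    return matches

# A moving "person" mask shifted by two pixels between frames.
m0 = np.zeros((20, 20), bool); m0[5:10, 5:10] = True
m1 = np.zeros((20, 20), bool); m1[5:10, 7:12] = True
print(associate([m0], ["person"], [m1], ["person"]))  # → [(0, 0)]
```

In the full system, such associations would feed the semantic probabilities used during image tracking, while the inpainted static background supports loop closure.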
Topics
Topic in Materials, Nanomaterials, Photonics, Polymers, Applied Sciences, Sensors
Optical and Optoelectronic Properties of Materials and Their Applications
Topic Editors: Zhiping Luo, Gibin George, Navadeep Shrivastava
Deadline: 20 May 2024
Topic in Remote Sensing, Sensors, Smart Cities, Vehicles, Geomatics
Information Sensing Technology for Intelligent/Driverless Vehicle, 2nd Volume
Topic Editors: Yan Huang, Yi Ren, Penghui Huang, Jun Wan, Zhanye Chen, Shiyang Tang
Deadline: 31 May 2024
Topic in Applied Sciences, Electricity, Electronics, Energies, Sensors
Power System Protection
Topic Editors: Seyed Morteza Alizadeh, Akhtar Kalam
Deadline: 20 June 2024
Topic in Applied Sciences, Energies, Machines, Sensors, Vehicles
Vehicle Dynamics and Control
Topic Editors: Peter Gaspar, Junnian Wang
Deadline: 30 June 2024
Special Issues
Special Issue in Sensors
Meta-User Interfaces for Ambient Environments
Guest Editors: Marco Romano, Phillip C-Y. Sheu, Giuliana Vitiello
Deadline: 20 May 2024
Special Issue in Sensors
Selected Papers from 20th World Conference on Non-Destructive Testing (WCNDT 2022)
Guest Editor: Seunghee Park
Deadline: 31 May 2024
Special Issue in Sensors
Novel Sensors and Algorithms for Outdoor Mobile Robot
Guest Editors: Levente Tamás, Andras Majdik
Deadline: 20 June 2024
Special Issue in Sensors
Deep Learning Methods for Human Activity Recognition and Emotion Detection
Guest Editor: Mario Munoz-Organero
Deadline: 30 June 2024
Topical Collections
Topical Collection in Sensors
Robotic and Sensor Technologies in Environmental Exploration and Monitoring
Collection Editors: Jacopo Aguzzi, Corrado Costa, Sergio Stefanni, Valerio Funari
Topical Collection in Sensors
Microfluidic Sensors
Collection Editors: Sabina Merlo, Klaus Stefan Drese