Sensors, Volume 25, Issue 16 (August-2 2025) – 120 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
11 pages, 459 KiB  
Article
Direct Experimental Demonstration of Bend-Induced Transformation of Magnetic Structure in Amorphous Microwires
by Alexander Chizhik, Valentina Zhukova and Arcady Zhukov
Sensors 2025, 25(16), 5000; https://doi.org/10.3390/s25165000 - 12 Aug 2025
Abstract
In the pursuit of active elements for bending and curvature sensors, magneto-optical investigations were performed on bent microwires. For the first time, local surface magnetization reversal curves were obtained from various sides of bent Co-rich and Fe-rich microwires. The observed differences in surface magnetization reversal behavior are directly attributed to the transverse distribution of internal mechanical stresses, which range from maximum tensile stress on the outer side of the bent sample to maximum compressive stress on the inner side. Depending on the sample composition and the nature of local stress, distinct magnetic structures—axial, elliptical, and spiral—were identified in different locations on the surface of the microwire. These findings provide valuable insights into the operational mechanisms of bending-sensitive magnetic sensors. Full article
17 pages, 1230 KiB  
Article
Biomechanical Effects of a Passive Lower-Limb Exoskeleton Designed for Half-Sitting Work Support on Walking
by Qian Li, Naoto Haraguchi, Bian Yoshimura, Sentong Wang, Makoto Yoshida and Kazunori Hase
Sensors 2025, 25(16), 4999; https://doi.org/10.3390/s25164999 - 12 Aug 2025
Abstract
The half-sitting posture is essential for many functional tasks performed by industrial workers. Thus, passive lower-limb exoskeletons, known as wearable chairs, are increasingly used to relieve lower-limb loading in such scenarios. However, although these devices lighten muscle effort during half-sitting tasks, they can disrupt walking mechanics and balance. Moreover, rigorous biomechanical data on joint moments and contact forces during walking with such a device remain scarce. Therefore, this study conducted a biomechanical evaluation of level walking with a wearable chair to quantify its effects on gait and joint loading. Participants performed walking experiments with and without the wearable chair. An optical motion capture system and force plates collected kinematic and ground reaction data. Six-axis force sensors measured contact forces and moments. These measurements were fed into a Newton–Euler inverse dynamics model to estimate lower-limb joint moments and assess joint loading. The contact measurements showed that nearly all rotational load was absorbed at the thigh attachment, while the ankle attachment served mainly as a positional guide with minimal moment transfer. The inverse dynamics analysis revealed that the wearable chair introduced unintended rotational stresses at lower-limb joints, potentially elevating musculoskeletal risk. This detailed biomechanical evidence underpins targeted design refinements to redistribute loads and better protect lower-limb joints. Full article
17 pages, 2347 KiB  
Article
Fuzzy Logic-Based Adaptive Filtering for Transfer Alignment
by Zhaohui Gao, Jiahui Yang, Chengfan Gu and Yongmin Zhong
Sensors 2025, 25(16), 4998; https://doi.org/10.3390/s25164998 - 12 Aug 2025
Abstract
The transfer alignment of strapdown inertial navigation systems (SINSs) is of great significance for improving the strike accuracy of airborne tactical vehicles. This study designed a new fuzzy logic-based adaptive filtering method to address the influence of system model error on the state estimation of the Kalman filter for SINS transfer alignment. It established the state error model and measurement error model, which were embedded with the state prediction residual and measurement residual, respectively, for SINS transfer alignment. The fuzzy rules were designed and introduced into the Kalman filtering framework to estimate the covariances of the system measurement and predicted state by minimizing their residuals to improve filtering accuracy for SINS transfer alignment. Simulations and experiments, together with comparative analyses, were conducted, demonstrating that the proposed method can effectively handle the influence of system model error on SINS transfer alignment, and its accuracy is at least 18.83% higher than that of benchmark methods for transfer alignment. Full article
(This article belongs to the Special Issue New Challenges and Sensor Techniques in Robot Positioning)
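To make the residual-driven adaptation idea concrete, here is a minimal Python sketch of a Kalman filter whose measurement covariance is rescaled by a crude fuzzy-style rule driven by the innovation statistics. The constant-velocity model, membership breakpoints, and scaling values are illustrative assumptions, not the paper's SINS error models or fuzzy rules.

```python
import numpy as np

def fuzzy_scale(ratio):
    """Crude stand-in for a fuzzy rule base: map the ratio of observed to
    expected innovation power onto a covariance scaling factor."""
    if ratio < 0.8:
        return 0.9          # residuals smaller than expected -> shrink R slightly
    if ratio < 1.5:
        return 1.0          # residuals consistent with R -> keep it
    return min(ratio, 5.0)  # residuals too large -> inflate R (capped)

# Placeholder constant-velocity model standing in for the SINS error model.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-3 * np.eye(2)
R = np.array([[0.5]])

x = np.zeros((2, 1))
P = np.eye(2)
rng = np.random.default_rng(0)

for k in range(200):
    # Prediction step.
    x = F @ x
    P = F @ P @ F.T + Q
    # Synthetic measurement: true position k*dt corrupted by noise.
    z = np.array([[k * dt + rng.normal(0.0, 0.7)]])
    # Innovation and its nominal covariance.
    y = z - H @ x
    S = H @ P @ H.T + R
    # Fuzzy-style adaptation of the measurement covariance from the residual ratio.
    R = fuzzy_scale(((y.T @ y) / S).item()) * R
    S = H @ P @ H.T + R
    # Update step.
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
```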
20 pages, 6223 KiB  
Article
A Deep Learning-Based Machine Vision System for Online Monitoring and Quality Evaluation During Multi-Layer Multi-Pass Welding
by Van Doi Truong, Yunfeng Wang, Chanhee Won and Jonghun Yoon
Sensors 2025, 25(16), 4997; https://doi.org/10.3390/s25164997 - 12 Aug 2025
Abstract
Multi-layer multi-pass welding plays an important role in manufacturing industries such as nuclear power plants, pressure vessel manufacturing, and shipbuilding. However, distortion and welding defects are still challenges; therefore, welding monitoring and quality control are essential tasks for the dynamic adjustment of execution during welding. This study proposed a machine vision system for monitoring and surface quality evaluation during multi-pass welding using a line scanner and infrared camera sensors. The cross-section modelling based on the line scanner data enabled the measurement of distortion and dynamic control of the welding plan. Lack of fusion, porosity, and burn-through defects were intentionally generated by controlling welding parameters to construct a defect inspection dataset. To reduce the influence of material surface colour, the proposed normal map approach combined with a deep learning approach was applied for inspecting the surface defects on each layer, achieving a mean average precision of 0.88. In addition to monitoring the temperature of the weld pool, a burn-through defect detection algorithm was introduced to track welding status. The whole system was integrated into a graphical user interface to visualize the welding progress. This work provides a solid foundation for welding monitoring and for the further development of automatic adaptive welding systems in multi-layer multi-pass welding. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
24 pages, 1333 KiB  
Article
Energy-Efficient Resource Allocation Scheme Based on Reinforcement Learning in Distributed LoRa Networks
by Ryota Ariyoshi, Aohan Li, Mikio Hasegawa and Tomoaki Ohtsuki
Sensors 2025, 25(16), 4996; https://doi.org/10.3390/s25164996 - 12 Aug 2025
Abstract
The rapid growth of Long Range (LoRa) devices has led to network congestion, reducing spectrum and energy efficiency. To address this problem, we propose an energy-efficient reinforcement learning method for distributed LoRa networks, enabling each device to independently select appropriate transmission parameters, i.e., channel, transmission power (TP), and bandwidth (BW), based on acknowledgment (ACK) feedback and energy consumption. Our method employs the Upper Confidence Bound (UCB)1-tuned algorithm and incorporates energy metrics into the reward function, achieving lower power consumption and high transmission success rates. Designed to be lightweight for resource-constrained IoT devices, it was implemented on real LoRa hardware and tested in dense network scenarios. Experimental results show that the proposed method outperforms fixed allocation, adaptive data rate low-complexity (ADR-Lite), and ϵ-greedy methods in both transmission success rate and energy efficiency. Full article
(This article belongs to the Section Internet of Things)
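As a rough illustration of the decision loop described above, the following is a minimal UCB1-tuned bandit over a small grid of transmission-parameter arms with an energy-penalised reward. The arm grid, the reward shaping (ACK success minus a normalised energy term), and the simulated feedback are assumptions for the sketch, not the paper's exact settings.

```python
import math
import random

# Hypothetical arm grid: (channel, TP in dBm, BW in kHz) combinations.
ARMS = [(ch, tp, bw) for ch in range(2) for tp in (2, 8, 14) for bw in (125, 250)]

counts = [0] * len(ARMS)      # number of times each arm was played
mean_r = [0.0] * len(ARMS)    # running mean reward per arm
mean_r2 = [0.0] * len(ARMS)   # running mean of squared reward (for the variance term)

def ucb1_tuned_select(t):
    """Return the arm with the largest UCB1-tuned score; play untried arms first."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    scores = []
    for i, n in enumerate(counts):
        var_bound = mean_r2[i] - mean_r[i] ** 2 + math.sqrt(2.0 * math.log(t) / n)
        bonus = math.sqrt(math.log(t) / n * min(0.25, var_bound))
        scores.append(mean_r[i] + bonus)
    return max(range(len(ARMS)), key=scores.__getitem__)

def update(i, reward):
    counts[i] += 1
    n = counts[i]
    mean_r[i] += (reward - mean_r[i]) / n
    mean_r2[i] += (reward ** 2 - mean_r2[i]) / n

def reward_from_feedback(ack, energy_mj, energy_max_mj=50.0, alpha=0.5):
    """Illustrative energy-aware reward: ACK success penalised by normalised energy use."""
    return (1.0 if ack else 0.0) - alpha * (energy_mj / energy_max_mj)

# Toy loop standing in for real LoRa transmissions and ACK feedback.
for t in range(1, 501):
    arm = ucb1_tuned_select(t)
    ack = random.random() < 0.8           # placeholder for ACK reception
    energy = random.uniform(5.0, 40.0)    # placeholder for measured energy (mJ)
    update(arm, reward_from_feedback(ack, energy))

print("best arm so far:", ARMS[max(range(len(ARMS)), key=mean_r.__getitem__)])
```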
10 pages, 511 KiB  
Article
Asymmetries of Force and Power During Single-Leg Counter Movement Jump in Young Adult Females and Males
by Jarosław Kabaciński, Joanna Gorwa, Waldemar Krakowiak and Michał Murawa
Sensors 2025, 25(16), 4995; https://doi.org/10.3390/s25164995 - 12 Aug 2025
Abstract
Background/Objectives: Inter-limb asymmetry of a given variable for vertical jumps is commonly assessed in both healthy individuals and those undergoing rehabilitation post-injury. The aim of this study was to compare the asymmetry index between the take-off and landing of a single-leg counter movement jump (CMJ), as well as between females and males. Methods: Twenty-three healthy females (age: 21.5 ± 1.6 years) and twenty-three healthy males (age: 21.1 ± 1.8 years) participated in this study. The assessment of two asymmetry indices (AI1 and AI2) was conducted for the peak vertical ground reaction force (PVGRF) and maximum power (MP) during single-leg CMJ take-offs and landings performed on the force platform. Results: The analysis showed significant main effects (p < 0.001) for the phase factor (only AI2) and for the gender factor (only AI1). Moreover, there was a non-significant interaction effect between the phase factor and gender factor (p = 0.476). Pairwise comparisons revealed significant differences in the values of (1) AI2 between the take-off and landing (p < 0.001) and (2) AI1 between females and males (p < 0.001). Conclusions: Findings showed significant effects of the phase factor (only for AI2) and gender factor (only for AI1) on the magnitude of inter-limb asymmetry during single-leg CMJs. Furthermore, this study found significantly higher asymmetry of the PVGRF and MP for landing than for take-off, which may result from difficulties in controlling the jumper’s landing technique on one foot at higher velocity. In addition, the assessment of asymmetry for single-leg CMJs using AI1 should be performed separately for females and males, as opposed to AI2. Participants of both genders generally demonstrated a higher AI level for power than for force. Full article
(This article belongs to the Special Issue Sensors and Data Analysis for Biomechanics and Physical Activity)
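The abstract does not spell out the AI1 and AI2 formulas, so the worked example below uses one commonly reported inter-limb asymmetry index (percent difference relative to the larger value) purely as an illustration; the paper's own definitions may differ, and the force values are invented.

```python
def asymmetry_index(left, right):
    """One common inter-limb asymmetry index (percent difference relative to
    the larger value); the paper's AI1/AI2 definitions may differ."""
    return abs(left - right) / max(left, right) * 100.0

# Worked example with illustrative peak vertical GRF values (N) from a single-leg CMJ.
pvgrf_left, pvgrf_right = 1820.0, 1955.0
print(f"PVGRF asymmetry: {asymmetry_index(pvgrf_left, pvgrf_right):.1f}%")  # about 6.9%
```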
21 pages, 806 KiB  
Review
A Frontier Review of Semantic SLAM Technologies Applied to the Open World
by Le Miao, Wen Liu and Zhongliang Deng
Sensors 2025, 25(16), 4994; https://doi.org/10.3390/s25164994 - 12 Aug 2025
Abstract
With the growing demand for autonomous robotic operations in complex and unstructured environments, traditional semantic SLAM systems—which rely on closed-set semantic vocabularies—are increasingly limited in their ability to robustly perceive and understand diverse and dynamic scenes. This paper focuses on the paradigm shift toward open-world semantic scene understanding in SLAM and provides a comprehensive review of the technological evolution from closed-world assumptions to open-world frameworks. We survey the current state of research in open-world semantic SLAM, highlighting key challenges and frontiers. In particular, we conduct an in-depth analysis of three critical areas: zero-shot open-vocabulary understanding, dynamic semantic expansion, and multimodal semantic fusion. These capabilities are examined for their crucial roles in unknown class identification, incremental semantic updates, and multisensor perceptual integration. Our main contribution is presenting the first systematic algorithmic benchmarking and performance comparison of representative open-world semantic SLAM systems, revealing the potential of these core techniques to enhance semantic understanding in complex environments. Finally, we propose several promising directions for future research, including lightweight model deployment, real-time performance optimization, and collaborative multimodal perception, offering a systematic reference and methodological guidance for continued advancements in this emerging field. Full article
(This article belongs to the Section Sensors and Robotics)
26 pages, 6731 KiB  
Article
Deep Ensemble Learning Based on Multi-Form Fusion in Gearbox Fault Recognition
by Xianghui Meng, Qingfeng Wang, Chunbao Shi, Qiang Zeng, Yongxiang Zhang, Wanhao Zhang and Yinjun Wang
Sensors 2025, 25(16), 4993; https://doi.org/10.3390/s25164993 - 12 Aug 2025
Abstract
In actual industrial environments, single information sources provide insufficient fault identification, multi-source data differ in information sensitivity, and hand-crafted feature extraction varies in sensitivity; these issues hinder the effective fusion of equipment information and result in weak state representation, low fault identification accuracy, and poor robustness. To address them, a multi-information fusion fault identification network model based on deep ensemble learning is proposed. The network is composed of multiple sub-feature extraction units and feature fusion units. Firstly, the fault feature mapping information of each information source is extracted and stored in different sub-models, and then the features of each sub-model are fused by the feature fusion unit. Finally, the fault recognition results are obtained. The effectiveness of the proposed method is evaluated using two gearbox datasets. Compared with simple stacking fusion and single-measuring-point methods without fusion, the accuracy of the proposed method for each fault type is close to 100%. The results show that the proposed method is feasible and effective for gearbox fault recognition. Full article
(This article belongs to the Special Issue Applications of Sensors in Condition Monitoring and Fault Diagnosis)
25 pages, 5194 KiB  
Article
A Graph-Based Superpixel Segmentation Approach Applied to Pansharpening
by Hind Hallabia
Sensors 2025, 25(16), 4992; https://doi.org/10.3390/s25164992 - 12 Aug 2025
Abstract
In this paper, an image-driven regional pansharpening technique based on simplex optimization analysis with a graph-based superpixel segmentation strategy is proposed. This fusion approach optimally combines spatial information derived from a high-resolution panchromatic (PAN) image and spectral information captured from a low-resolution multispectral (MS) image to generate a unique comprehensive high-resolution MS image. As the performance of such a fusion method relies on the choice of the fusion strategy, and in particular, on the way the algorithm is used for estimating gain coefficients, our proposal is dedicated to computing the injection gains over a graph-driven segmentation map. The graph-based segments are obtained by applying simple linear iterative clustering (SLIC) on the MS image followed by a region adjacency graph (RAG) merging stage. This graphical representation of the segmentation map is used as guidance for spatial information to be injected during fusion processing. The high-resolution MS image is achieved by inferring locally the details in accordance with the local simplex injection fusion rule. The quality improvements achievable by our proposal are evaluated and validated at reduced and at full scales using two high resolution datasets collected by GeoEye-1 and WorldView-3 sensors. Full article
(This article belongs to the Section Sensing and Imaging)
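For readers unfamiliar with the segmentation stage described above, here is a minimal sketch of producing a graph-driven segmentation map with SLIC superpixels followed by region adjacency graph (RAG) merging in scikit-image. The test image, segment count, and merge threshold are placeholders, and the simplex-based injection-gain estimation itself is not shown.

```python
import numpy as np
from skimage import data, segmentation, graph  # on older scikit-image: skimage.future.graph

# Stand-in for the low-resolution MS image (a bundled RGB test image here).
ms = data.astronaut()

# SLIC superpixel over-segmentation.
labels = segmentation.slic(ms, n_segments=400, compactness=10, start_label=1)

# Region adjacency graph on mean colour, merged by cutting weak edges.
rag = graph.rag_mean_color(ms, labels)
merged = graph.cut_threshold(labels, rag, thresh=29)

# `merged` is the graph-driven segmentation map that would guide the region-wise
# estimation of injection gains during pansharpening.
print(np.unique(merged).size, "regions after RAG merging")
```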
35 pages, 1234 KiB  
Review
Deep Learning-Based Fusion of Optical, Radar, and LiDAR Data for Advancing Land Monitoring
by Yizhe Li and Xinqing Xiao
Sensors 2025, 25(16), 4991; https://doi.org/10.3390/s25164991 - 12 Aug 2025
Abstract
Accurate and timely land monitoring is crucial for addressing global environmental, economic, and societal challenges, including climate change, sustainable development, and disaster mitigation. While single-source remote sensing data offers significant capabilities, inherent limitations such as cloud cover interference (optical), speckle noise (radar), or limited spectral information (LiDAR) often hinder comprehensive and robust characterization of land surfaces. Recent advancements in synergistic harmonization technology for land monitoring, along with enhanced signal processing techniques and the integration of machine learning algorithms, have significantly broadened the scope and depth of geosciences. Therefore, it is essential to summarize the comprehensive applications of synergistic harmonization technology for geosciences, with a particular focus on recent advancements. Most of the existing review papers focus on the application of a single technology in a specific area, highlighting the need for a comprehensive review that integrates synergistic harmonization technology. This article provides such a review, covering advancements in land monitoring achieved through the synergistic harmonization of optical, radar, and LiDAR satellite technologies. It details the unique strengths and weaknesses of each sensor type, highlighting how their integration overcomes individual limitations by leveraging complementary information. This review analyzes current data harmonization and preprocessing techniques, various data fusion levels, and the transformative role of machine learning and deep learning algorithms, including emerging foundation models. Key applications across diverse domains such as land cover/land use mapping, change detection, forest monitoring, urban monitoring, agricultural monitoring, and natural hazard assessment are discussed, demonstrating enhanced accuracy and scope. Finally, this review identifies persistent challenges such as technical complexities in data integration, issues with data availability and accessibility, validation hurdles, and the need for standardization. It proposes future research directions focusing on advanced AI, novel fusion techniques, improved data infrastructure, integrated “space–air–ground” systems, and interdisciplinary collaboration to realize the full potential of multi-sensor satellite data for robust and timely land surface monitoring. Supported by deep learning, this synergy will improve our ability to monitor land surface conditions more accurately and reliably. Full article
17 pages, 6208 KiB  
Article
Sweet—An Open Source Modular Platform for Contactless Hand Vascular Biometric Experiments
by David Geissbühler, Sushil Bhattacharjee, Ketan Kotwal, Guillaume Clivaz and Sébastien Marcel
Sensors 2025, 25(16), 4990; https://doi.org/10.3390/s25164990 - 12 Aug 2025
Abstract
Current finger-vein or palm-vein recognition systems usually require direct contact of the subject with the apparatus. This can be problematic in environments where hygiene is of primary importance. In this work, we present a contactless vascular biometrics sensor platform named sweet, which can be used for hand vascular biometrics studies (wrist, palm, and finger-vein) as well as surface features such as palmprint. It supports several acquisition modalities such as multi-spectral Near-Infrared (NIR), RGB-color, Stereo Vision (SV) and Photometric Stereo (PS). Using this platform, we collected a dataset consisting of the fingers, palm and wrist vascular data of 120 subjects. We present biometric experimental results, focusing on Finger-Vein Recognition (FVR). Finally, we discuss fusion of multiple modalities. The acquisition software, parts of the hardware design, the new FV dataset, as well as source-code for our experiments are publicly available for research purposes. Full article
(This article belongs to the Special Issue Novel Optical Sensors for Biomedical Applications—2nd Edition)
22 pages, 3920 KiB  
Article
Integrating Cortical Source Reconstruction and Adversarial Learning for EEG Classification
by Yue Guo, Yan Pei, Rong Yao, Yueming Yan, Meirong Song and Haifang Li
Sensors 2025, 25(16), 4989; https://doi.org/10.3390/s25164989 - 12 Aug 2025
Abstract
Existing methods for diagnosing depression rely heavily on subjective evaluations, whereas electroencephalography (EEG) emerges as a promising approach for objective diagnosis due to its non-invasiveness, low cost, and high temporal resolution. However, current EEG analysis methods are constrained by volume conduction effect and class imbalance, both of which adversely affect classification performance. To address these issues, this paper proposes a multi-stage deep learning model for EEG-based depression classification, integrating a cortical feature extraction strategy (CFE), a feature attention module (FA), a graph convolutional network (GCN), and a focal adversarial domain adaptation module (FADA). Specifically, the CFE strategy reconstructs brain cortical signals using the standardized low-resolution brain electromagnetic tomography (sLORETA) algorithm and extracts both linear and nonlinear features that capture cortical activity variations. The FA module enhances feature representation through a multi-head self-attention mechanism, effectively capturing spatiotemporal relationships across distinct brain regions. Subsequently, the GCN further extracts spatiotemporal EEG features by modeling functional connectivity between brain regions. The FADA module employs Focal Loss and Gradient Reversal Layer (GRL) mechanisms to suppress domain-specific information, alleviate class imbalance, and enhance intra-class sample aggregation. Experimental validation on the publicly available PRED+CT dataset demonstrates that the proposed model achieves a classification accuracy of 85.33%, outperforming current state-of-the-art methods by 2.16%. These results suggest that the proposed model holds strong potential for improving the accuracy and reliability of EEG-based depression classification. Full article
(This article belongs to the Section Electronic Sensors)
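Of the modules listed above, the gradient reversal mechanism is the most generic, so the sketch below shows a minimal PyTorch gradient reversal layer of the kind typically used in adversarial domain adaptation. The sLORETA reconstruction, GCN, and focal loss are not reproduced, and the tiny linear layers are placeholders rather than the authors' architecture.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; multiplies gradients by -lambda on the way back,
    the standard trick behind adversarial domain adaptation."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Tiny usage example: a feature extractor followed by a domain classifier whose
# gradient is reversed before reaching the shared features.
feat = torch.nn.Linear(16, 8)
domain_head = torch.nn.Linear(8, 2)
x = torch.randn(4, 16)
domain_logits = domain_head(grad_reverse(feat(x), lambd=0.5))
domain_logits.sum().backward()   # gradients flowing into `feat` are sign-flipped and scaled
```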
26 pages, 10272 KiB  
Article
Research on Disaster Environment Map Fusion Construction and Reinforcement Learning Navigation Technology Based on Air–Ground Collaborative Multi-Heterogeneous Robot Systems
by Hongtao Tao, Wen Zhao, Li Zhao and Junlong Wang
Sensors 2025, 25(16), 4988; https://doi.org/10.3390/s25164988 - 12 Aug 2025
Abstract
The primary challenge that robots face in disaster rescue is to precisely and efficiently construct disaster maps and achieve autonomous navigation. This paper proposes a method for air–ground collaborative map construction. It utilizes the flight capability of an unmanned aerial vehicle (UAV) to achieve rapid three-dimensional space coverage and complex terrain crossing for rapid and efficient map construction. Meanwhile, it utilizes the stable operation capability of an unmanned ground vehicle (UGV) and the ground detail survey capability to achieve precise map construction. The maps constructed by the two are accurately integrated to obtain precise disaster environment maps. Among them, the map construction and positioning technology is based on the FAST LiDAR–inertial odometry 2 (FAST-LIO2) framework, enabling the robot to achieve precise positioning even in complex environments, thereby obtaining more accurate point cloud maps. Before conducting map fusion, the point cloud is preprocessed first to reduce the density of the point cloud and also minimize the interference of noise and outliers. Subsequently, the coarse and fine registrations of the point clouds are carried out in sequence. The coarse registration is used to reduce the initial pose difference of the two point clouds, which is conducive to the subsequent rapid and efficient fine registration. The coarse registration uses the improved sample consensus initial alignment (SAC-IA) algorithm, which significantly reduces the registration time compared with the traditional SAC-IA algorithm. The precise registration uses the voxelized generalized iterative closest point (VGICP) algorithm. It has a faster registration speed compared with the generalized iterative closest point (GICP) algorithm while ensuring accuracy. In reinforcement learning navigation, we adopted the deep deterministic policy gradient (DDPG) path planning algorithm. Compared with the deep Q-network (DQN) algorithm and the A* algorithm, the DDPG algorithm is more conducive to the robot choosing a better route in a complex and unknown environment, and at the same time, the motion trajectory is smoother. This paper adopts Gazebo simulation. Compared with physical robot operation, it provides a safe, controllable, and cost-effective environment, supports efficient large-scale experiments and algorithm debugging, and also supports flexible sensor simulation and automated verification, thereby optimizing the overall testing process. Full article
(This article belongs to the Section Navigation and Positioning)
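As a rough analogue of the coarse-to-fine registration pipeline described above, the sketch below uses Open3D: RANSAC over FPFH correspondences stands in for the improved SAC-IA coarse step, and point-to-plane ICP stands in for the VGICP refinement (Open3D does not ship VGICP). The synthetic cubes, voxel size, and thresholds are placeholders, not the paper's settings or data.

```python
import copy
import numpy as np
import open3d as o3d

def preprocess(pcd, voxel):
    """Downsample, estimate normals, and compute FPFH features."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

# Synthetic "UAV" and "UGV" clouds: the same random cloud offset by a known transform.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, (2000, 3))
source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
target = copy.deepcopy(source)
T_true = np.eye(4)
T_true[:3, 3] = [0.3, -0.2, 0.1]
target.transform(T_true)

voxel = 0.05
src_d, src_f = preprocess(source, voxel)
tgt_d, tgt_f = preprocess(target, voxel)

# Coarse registration: RANSAC over FPFH correspondences (SAC-IA analogue).
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src_d, tgt_d, src_f, tgt_f, True, 1.5 * voxel,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Fine registration: point-to-plane ICP seeded with the coarse pose.
fine = o3d.pipelines.registration.registration_icp(
    src_d, tgt_d, voxel, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print(fine.transformation)
```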
19 pages, 2197 KiB  
Article
In-Field Performance Evaluation of an IoT Monitoring System for Fine Particulate Matter in Livestock Buildings
by Provvidenza Rita D’Urso, Alice Finocchiaro, Grazia Cinardi and Claudia Arcidiacono
Sensors 2025, 25(16), 4987; https://doi.org/10.3390/s25164987 - 12 Aug 2025
Abstract
The livestock sector significantly contributes to atmospheric emissions of various pollutants, such as ammonia (NH3) and particulate matter of diameter under 2.5 µm (PM2.5) from activity and barn management. The objective of this study was to evaluate the reliability of low-cost sensors integrated with an IoT system for monitoring PM2.5 concentrations in a dairy barn. To this end, data acquired by a low-cost PM2.5 measurement device were validated against a high-precision reference instrument. Results demonstrated that the performance of the low-cost sensors was highly correlated with the temperature and humidity recorded on the same IoT platform. Therefore, a parameter-based adjustment methodology is proposed. The statistical assessments conducted on these data demonstrated that the analysed sensor, when corrected using the proposed correction model, is an effective device for monitoring the mean daily levels of PM2.5 within the barn. Although the model was developed and validated using data collected from a dairy barn, the proposed methodology can be applied to these sensors in similar environments. Implementing reliable and affordable monitoring systems for key pollutants is crucial to enable effective mitigation strategies. Due to their low cost, ease of transport, and straightforward installation, these sensors can be used in multiple locations within a barn or moved between different barns for flexible and widespread air quality monitoring applications in livestock barns. Full article
(This article belongs to the Section Internet of Things)
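To illustrate what a parameter-based adjustment of this kind can look like, the sketch below fits a multiple linear regression of reference PM2.5 on the raw low-cost reading, temperature, and relative humidity, then applies it as a correction. The data are synthetic and the model form is an assumption; the paper's correction model and coefficients are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative data: low-cost PM2.5 reading, temperature (deg C), relative humidity (%),
# and co-located reference PM2.5 (ug/m3). Values are synthetic, not from the paper.
rng = np.random.default_rng(42)
n = 500
temp = rng.uniform(5, 35, n)
rh = rng.uniform(30, 95, n)
ref = rng.uniform(5, 60, n)
low_cost = ref * (1 + 0.004 * (rh - 50)) + 0.1 * (temp - 20) + rng.normal(0, 1.5, n)

# Parameter-based adjustment: regress the reference value on the raw reading,
# temperature and humidity, then use the fit to correct new readings.
X = np.column_stack([low_cost, temp, rh])
model = LinearRegression().fit(X, ref)
corrected = model.predict(X)

print("RMSE before:", np.sqrt(np.mean((low_cost - ref) ** 2)).round(2),
      "after:", np.sqrt(np.mean((corrected - ref) ** 2)).round(2))
```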
29 pages, 1563 KiB  
Review
3D Printing in the Design of Potentiometric Sensors: A Review of Techniques, Materials, and Applications
by Aleksandra Zalewska, Nikola Lenar and Beata Paczosa-Bator
Sensors 2025, 25(16), 4986; https://doi.org/10.3390/s25164986 - 12 Aug 2025
Abstract
The integration of 3D printing into the development of potentiometric sensors has revolutionized sensor fabrication by enabling customizable, low-cost, and rapid prototyping of analytical devices. Techniques like fused deposition modeling (FDM) and stereolithography (SLA) allow researchers to produce different sensor parts, such as electrode housings, solid contacts, reference electrodes, and even microfluidic systems. This review explains the basic principles of potentiometric sensors and shows how 3D printing helps solve problems faced in traditional sensor manufacturing. Benefits include smaller size, flexible shapes, the use of different materials in one print, and quick production of working prototypes. However, some challenges still exist—like differences between prints, limited chemical resistance of some materials, and the long-term stability of sensors in real-world conditions. This paper overviews recent examples of 3D-printed ion-selective electrodes and related components and discusses new ideas to improve their performance. It also points to future directions, such as better materials and combining different manufacturing methods. Overall, 3D printing is a powerful and growing tool for developing the next generation of potentiometric sensors for use in healthcare, environmental monitoring, and industry. Full article
(This article belongs to the Special Issue 3D Printed Sensors: Innovations and Applications)
27 pages, 1946 KiB  
Article
Retrieving Proton Beam Information Using Stitching-Based Detector Technique and Intelligent Reconstruction Algorithms
by Chi-Wen Hsieh, Hong-Liang Chang, Yi-Hsiang Huang, Ming-Che Lee and Yu-Jen Wang
Sensors 2025, 25(16), 4985; https://doi.org/10.3390/s25164985 - 12 Aug 2025
Abstract
In view of the great need for quality assurance in radiotherapy, this paper proposes a stitching-based detector (SBD) technique and a set of intelligent algorithms that can reconstruct the information of projected particle beams. The reconstructed information includes the intensity, sigma value, and location of the maximum intensity of the beam under test. To verify the effectiveness of the proposed technique and algorithms, this research study adopts the pencil beam scanning (PBS) form of proton beam therapy (PBT) as an example. Through the SBD technique, it is possible to utilize 128 × 128 ionization chambers, which constitute an ionization plate of 25.6 cm2, with an acceptable number of 4096 analog-to-digital converters (ADCs) and a resolution of 0.25 mm. Through simulation, the proposed SBD technique and intelligent algorithms are proven to exhibit satisfactory and practical performance. By using two kinds of maximum intensity definitions, sigma values ranging from 10 to 120, and two definitions in an erroneous case, the maximum error rate is found to be 3.95%, which is satisfactorily low. Through analysis, this research study discovers that most errors occur near the symmetrical and peripheral boundaries. Furthermore, lower sigma values tend to aggravate the error rate because the beam becomes more like an ideal particle, which leads to greater imprecision caused by symmetrical sensor structures as its sigma is reduced. However, because proton beams are normally not projected onto the border region of the sensed area, the error rate in practice can be expected to be even lower. Although this research study adopts PBS PBT as an example, the proposed SBD technique and intelligent algorithms are applicable to any type of particle beam reconstruction in the field of radiotherapy, as long as the particles under analysis follow a Gaussian distribution. Full article
(This article belongs to the Section Biomedical Sensors)
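Since the reconstructed quantities are the peak intensity, sigma, and peak location of a Gaussian beam, a generic sketch of recovering them from a sensed intensity map is given below using a least-squares fit. The 128 × 128 synthetic reading and noise level are placeholders; the stitched-readout geometry and the paper's intelligent reconstruction algorithms are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sigma):
    """Isotropic 2D Gaussian evaluated on flattened coordinate grids."""
    x, y = xy
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

# Synthetic 128 x 128 "ionization plate" reading with a Gaussian beam plus noise.
n = 128
x, y = np.meshgrid(np.arange(n), np.arange(n))
rng = np.random.default_rng(3)
true = gauss2d((x, y), amp=1000.0, x0=70.3, y0=55.8, sigma=12.0)
reading = true + rng.normal(0, 5, true.shape)

# Least-squares recovery of intensity, location of maximum intensity, and sigma.
popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), reading.ravel(),
                    p0=(reading.max(), n / 2, n / 2, 10.0))
amp, x0, y0, sigma = popt
print(f"peak={amp:.0f}, location=({x0:.1f}, {y0:.1f}), sigma={sigma:.1f}")
```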
25 pages, 11706 KiB  
Article
Optimization of Sparse Sensor Layouts and Data-Driven Reconstruction Methods for Steady-State and Transient Thermal Field Inverse Problems
by Qingyang Yuan, Peijun Yao, Wenjun Zhao and Bo Zhang
Sensors 2025, 25(16), 4984; https://doi.org/10.3390/s25164984 - 12 Aug 2025
Abstract
This paper investigates the inverse reconstruction of temperature fields under both steady-state and transient heat conduction scenarios. The central contribution lies in the structured development and validation of the Gappy Clustering-based Proper Orthogonal Decomposition (Gappy C-POD) method—an approach that, despite its conceptual origin alongside the clustering-based dimensionality reduction method guided by POD structures (C-POD), had previously lacked an explicit algorithmic framework or experimental validation. To this end, the study constructs a comprehensive solution framework that integrates sparse sensor layout optimization with data-driven field reconstruction techniques. Numerical models incorporating multiple internal heat sources and heterogeneous boundary conditions are solved using the finite difference method. Multiple sensor layout strategies—including random selection, S-OPT, the Correlation Coefficient Filtering Method (CCFM), and uniform sampling—are evaluated in conjunction with database generation techniques such as Latin Hypercube sampling, Sobol sequences, and maximum–minimum distance sampling. The experimental results demonstrate that both Gappy POD and Gappy C-POD exhibit strong robustness in low-modal scenarios (1–5 modes), with Gappy C-POD—when combined with the CCFM and maximum distance sampling—achieving the best reconstruction stability. In contrast, while POD-MLP and POD-RBF perform well at higher modal numbers (>10), they show increased sensitivity to sensor configuration and sample size. This research not only introduces the first complete implementation of the Gappy C-POD methodology but also provides a systematic evaluation of reconstruction performance across diverse sensor placement strategies and reconstruction algorithms. The results offer novel methodological insights into the integration of data-driven modeling and sensor network design for solving inverse temperature field problems in complex thermal environments. Full article
(This article belongs to the Section Physical Sensors)
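For orientation, the core Gappy POD step is a masked least-squares fit of modal coefficients to sparse sensor readings, as in the minimal sketch below. The snapshot data, random sensor mask, and mode count are synthetic stand-ins; the clustering-based C-POD variant and the sensor-placement strategies (S-OPT, CCFM, sampling schemes) are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot matrix standing in for solved temperature fields (n_points x n_snapshots).
n_pts, n_snap = 2000, 60
basis = rng.normal(size=(n_pts, 5))
coeffs = rng.normal(size=(5, n_snap))
snapshots = basis @ coeffs

# POD modes via SVD of the mean-centred snapshots.
mean = snapshots.mean(axis=1, keepdims=True)
U, _, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
modes = U[:, :5]                       # retain 5 modes

# Sparse sensors: a random mask standing in for an optimised layout.
sensors = rng.choice(n_pts, size=30, replace=False)

# A new field to reconstruct, observed only at the sensor locations.
truth = (basis @ rng.normal(size=(5, 1))).ravel()
measured = truth[sensors]

# Gappy POD: least-squares fit of modal coefficients to the masked measurements.
a, *_ = np.linalg.lstsq(modes[sensors, :], measured - mean.ravel()[sensors], rcond=None)
reconstructed = mean.ravel() + modes @ a
print("relative error:", np.linalg.norm(reconstructed - truth) / np.linalg.norm(truth))
```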
21 pages, 8520 KiB  
Article
MythPose: Enhanced Detection of Complex Poses in Thangka Figures
by Yukai Xian, Te Shen, Yurui Lee, Ping Lan, Qijun Zhao and Liang Yan
Sensors 2025, 25(16), 4983; https://doi.org/10.3390/s25164983 - 12 Aug 2025
Abstract
Thangka is a unique form of painting in Tibet, which holds rich cultural significance and artistic value. In Thangkas, in addition to the standard human form, there are also figures with multiple limbs. Existing human pose estimation methods are not well suited for keypoint detection of figures in Thangka paintings. This paper builds upon YOLOv11-Pose and introduces the Mamba structure to enhance the model’s ability to capture global features. A feature fusion module is employed to integrate both shallow and deep features, and a KAL loss function is proposed to alleviate the interference between keypoints of different body parts. In this study, a dataset of 6208 Thangka images is collected and annotated for Thangka keypoint detection, and data augmentation techniques are used to enhance the generalization of the dataset. Experimental results show that MythPose achieves 89.13% mAP@0.5, 92.51% PCK, and 87.22% OKS in human pose estimation tasks on Thangka images, outperforming the baseline model. This research not only provides a reference for the digital preservation of Thangka art but also offers insights for pose estimation tasks in other similar artworks. Full article
(This article belongs to the Section Sensing and Imaging)
21 pages, 6057 KiB  
Article
PFSKANs: A Novel Pixel-Level Feature Selection Model Based on Kolmogorov–Arnold Networks
by Rui Yang, Michael V. Basin, Guangzhe Yao and Hongzheng Zeng
Sensors 2025, 25(16), 4982; https://doi.org/10.3390/s25164982 - 12 Aug 2025
Abstract
Inspired by the interpretability of Kolmogorov–Arnold Networks (KANs), a novel Pixel-level Feature Selection (PFS) model based on KANs (PFSKANs) is proposed as a fundamentally distinct alternative from trainable Convolutional Neural Networks (CNNs) and transformers in the computer vision tasks. We modify the simplification techniques of KANs to detect key pixels with high contribution scores directly at the input image. Specifically, a trainable selection procedure is intuitively visualized and performed only once, since the obtained interpretable pixels can subsequently be identified and dimensionally standardized using the proposed mathematical approach. Experiments on the image classification tasks using the MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets demonstrate that PFSKANs achieve comparable performance to CNNs in terms of accuracy, parameter efficiency, and training time. Full article
20 pages, 5777 KiB  
Review
Particle Imaging Velocimetry with Color-Encoded Illumination: A Review
by Yizhu Wang, Xiaoming He, Yuan Tian, Chang Liu and Depeng Wang
Sensors 2025, 25(16), 4981; https://doi.org/10.3390/s25164981 - 12 Aug 2025
Abstract
High-resolution and three-dimensional measurements at large scales represent a crucial frontier in flow diagnostics. Color-encoded illumination particle imaging velocimetry has emerged as a promising non-contact volumetric measurement technique in recent years. By employing chromatic gradient illumination to excite tracer particles, this method encodes depth information into color signatures, which are then correlated with two-dimensional positional data in images to reconstruct three-dimensional flow fields using a single camera. This review first introduces the fundamental principles of particle image velocimetry/particle tracking velocimetry and chromatic-depth encoding. Subsequently, we categorize color-depth-encoded particle velocimetry methods based on different illumination strategies, including LED-based, projector-based, and laser-based systems, discussing their respective configurations and representative applications. Finally, we summarize the current research progress in color-encoded particle image velocimetry techniques, provide a comparative analysis of their advantages and limitations, and discuss existing challenges along with future development prospects. Full article
(This article belongs to the Section Sensing and Imaging)
19 pages, 12156 KiB  
Article
Dual-Port Butterfly Slot Antenna for Biosensing Applications
by Marija Milijic, Branka Jokanovic, Miodrag Tasic, Sinisa Jovanovic, Olga Boric-Lubecke and Victor Lubecke
Sensors 2025, 25(16), 4980; https://doi.org/10.3390/s25164980 - 12 Aug 2025
Abstract
This paper presents the novel design of a printed, low-cost, dual-port, and dual-polarized slot antenna for microwave biomedical radars. The butterfly shape of the radiating element, with orthogonally positioned arms, enables simultaneous radiation of both vertically and horizontally polarized waves. The antenna is intended for full-duplex in-band applications using two mutually isolated antenna ports, with the CPW port on the same side of the substrate as the slot antenna and the microstrip port positioned orthogonally on the other side of the substrate. Those two ports can be used as transmit and receive ports in a radar transceiver, with a port isolation of 25 dB. Thanks to the bow-tie shape of the slots and an additional coupling region between the butterfly arms, there is more flexibility in simultaneous optimization of the resonant frequency and input impedance at both ports, avoiding the need for a complicated matching network that introduces attenuation and increases antenna dimensions. The advantage of this design is demonstrated through the modeling of an eight-element dual-port linear array with an extremely simple feed network for high-gain biosensing applications. To validate the simulation results, prototypes of the proposed antenna were fabricated and tested. The measured operating band of the antennas spans from 2.35 GHz to 2.55 GHz, with reflection coefficients of less than −10 dB, a maximum gain of 8.5 dBi, and a front-to-back gain ratio greater than 15 dB, which is comparable with other published single dual-port slot antennas. This is the simplest proposed dual-port, dual-polarization antenna that enables straightforward scaling to other frequency bands. Full article
(This article belongs to the Special Issue Design and Application of Millimeter-Wave/Microwave Antenna Array)
20 pages, 2513 KiB  
Article
Using Wearable Sensors to Identify Home and Community-Based Movement Using Continuous and Straight Line Stepping Time
by Lauren Gracey-McMinn, David Loudon, Alix Chadwell, Samantha Curtin, Chantel Ostler and Malcolm Granat
Sensors 2025, 25(16), 4979; https://doi.org/10.3390/s25164979 - 12 Aug 2025
Abstract
Objective measurement of community participation is essential for evaluating functional recovery and intervention outcomes in clinical populations, yet current methods rely heavily on subjective self-report measures. This study developed and validated a classification model to distinguish between home- and community-based activities using stepping and lying data from activPAL devices. Twenty-four healthy participants wore activPAL 4+ monitors continuously while completing activity diaries over 7 days. A grid search optimisation approach tested threshold combinations for two stepping parameters: straight-line stepping time (SLS) and continuous stepping duration (CSD). The optimal model achieved 93.7% accuracy across 24-h periods using an SLS threshold of 26 s. The model demonstrated high precision with a median difference of just 7 min between the predicted and reported community participation time. Individual variation in model performance highlights the need for validation in diverse clinical cohorts. This represents a methodological advance in objective physical behaviour monitoring, enabling accurate classification of home and community activity from posture data. By identifying not just how much people move but where they move, the model supports more meaningful assessment of functional mobility and community participation. This can enhance clinical decision making, rehabilitation planning, and intervention evaluation. With potential for adoption in clinical pathways and public health policy, this approach addresses a key gap in measuring real-world recovery and independence. Full article
(This article belongs to the Section Wearables)
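The 26 s straight-line stepping (SLS) threshold reported in the abstract lends itself to a simple rule-based classifier, sketched below. The bout records and the single-threshold rule are illustrative simplifications; the published model's handling of continuous stepping duration, lying data, and activPAL event parsing is not reproduced.

```python
# Hypothetical stepping bouts: continuous stepping duration (csd, s) and the longest
# straight-line stepping time within the bout (sls, s).
bouts = [
    {"start": "09:12", "csd": 45.0, "sls": 31.0},
    {"start": "11:03", "csd": 18.0, "sls": 9.0},
    {"start": "14:40", "csd": 120.0, "sls": 52.0},
]

SLS_THRESHOLD_S = 26.0  # threshold reported in the abstract

def classify_bout(bout):
    """Label a stepping bout as community- or home-based from its SLS time."""
    return "community" if bout["sls"] >= SLS_THRESHOLD_S else "home"

for b in bouts:
    print(b["start"], classify_bout(b))
```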
20 pages, 1864 KiB  
Article
An ML-Based Resource Allocation Scheme for Energy Optimization in 5G NR
by Xiao Yao and Antonio Pérez Yuste
Sensors 2025, 25(16), 4978; https://doi.org/10.3390/s25164978 - 12 Aug 2025
Abstract
This paper proposes a machine learning (ML)-based energy optimization framework for 5G New Radio (5G NR) utilizing a Classification and Regression Tree (CART) algorithm. The methodology implements dynamic cell resource reconfiguration through predictive load forecasting, achieving a 42.3% reduction in energy consumption, while maintaining QoS parameters within 3GPP-specified thresholds. A case study with a network layout made up of an inter-band NR-NR Dual Connectivity (DC) was simulated to quantitatively validate our model. Full article
(This article belongs to the Special Issue AI-Based 5G/6G Communications)
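To picture how CART-based load forecasting can drive resource reconfiguration, the sketch below trains scikit-learn's DecisionTreeRegressor on synthetic hourly load features and uses the forecast in a toy switch-off rule. The features, load model, threshold, and policy are illustrative assumptions, not the paper's framework or results.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)

# Synthetic training data: hour of day and day of week -> normalised cell load,
# with a daytime bump and a small weekend offset plus noise.
hours = rng.integers(0, 24, 2000)
days = rng.integers(0, 7, 2000)
load = 0.2 + 0.6 * np.exp(-((hours - 14) ** 2) / 40) + 0.05 * (days >= 5) + rng.normal(0, 0.05, 2000)

X = np.column_stack([hours, days])
cart = DecisionTreeRegressor(max_depth=6).fit(X, load)

# Predicted load drives a simple reconfiguration decision: switch a secondary
# NR cell off when the forecast stays below a threshold (illustrative policy).
forecast = cart.predict([[3, 2]])[0]   # 03:00 on a Wednesday
print("forecast load:", round(forecast, 2),
      "-> secondary cell", "off" if forecast < 0.4 else "on")
```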
9 pages, 1443 KiB  
Article
Imaging Through Scattering Tissue Based on NIR Multispectral Image Fusion Technique
by Nisan Atiya, Amir Shemer, Ariel Schwarz, Yevgeny Beiderman and Yossef Danan
Sensors 2025, 25(16), 4977; https://doi.org/10.3390/s25164977 - 12 Aug 2025
Abstract
Non-invasive diagnostics play a crucial role in medicine, ensuring both contamination safety and patient comfort. The proposed study integrates hyperspectral imaging with advanced image fusion, enabling non-invasive diagnostic procedures within tissue. It utilizes near-infrared (NIR) imaging, which is suitable for capturing reflections from objects within a dispersive layer, enabling the reconstruction of images of internal tissue layers. It can detect objects, including cancerous tumors (presented as phantoms), inside human tissue. This involves processing data from multiple images taken in different NIR bands and merging them through image fusion techniques. Our research reveals information about objects within the diffusive media that is visible only in the reconstructed images. The experimental results demonstrate a significant correlation with the samples employed in the study’s experimental design. Full article
(This article belongs to the Special Issue Multi-sensor Fusion in Medical Imaging, Diagnosis and Therapy)
1 page, 115 KiB  
Retraction
RETRACTED: He et al. The Multi-Station Fusion-Based Radiation Source Localization Method Based on Spectrum Energy. Sensors 2025, 25, 1339
by Guojin He, Yulong Hao and Yaocong Xie
Sensors 2025, 25(16), 4976; https://doi.org/10.3390/s25164976 - 12 Aug 2025
Abstract
The journal Sensors retracts the article titled “The Multi-Station Fusion-Based Radiation Source Localization Method Based on Spectrum Energy” [...] Full article
19 pages, 1287 KiB  
Article
Extremum-Seeking Control for a Robotic Leg Prosthesis with Sensory Feedback
by Ming Pi
Sensors 2025, 25(16), 4975; https://doi.org/10.3390/s25164975 - 12 Aug 2025
Abstract
By sensing changes in the contact force between the leg and level ground, humans can perceive their walking speed and adjust leg stiffness to accommodate walking terrains. To realize this natural regulation mechanism for lower-limb amputees, noninvasive functional electrical stimulation (nFES) was used to assist the subject in sensing the change in contact force between the leg and level ground, allowing for the adjustment of control parameters in the prosthetic leg. The cost function was designed to combine the tracking errors of the joints and changes in the stimulating current. For different walking terrains, an extremum-seeking control (ESC) method was employed to search for suitable control parameters in real time by monitoring the changes in the cost function. The stability of the proposed controller with extremum-seeking dynamics was demonstrated. The experimental results demonstrated that the extremum-seeking method effectively adjusted the control parameters of the prosthetic leg in response to changes in the cost function. Full article
(This article belongs to the Section Sensors and Robotics)
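For readers new to extremum-seeking control, the sketch below shows the classic perturbation-based loop: a sinusoidal dither on the parameter, demodulation of the measured cost, and integration of the resulting gradient estimate. The quadratic cost, dither settings, and gains are placeholders; the paper's prosthesis cost function and nFES feedback loop are not reproduced.

```python
import numpy as np

# Unknown cost as a function of a single control parameter (e.g., a stiffness gain);
# ESC should drive theta toward the minimiser, here theta* = 2.0.
def cost(theta):
    return (theta - 2.0) ** 2 + 0.5

dt = 0.01
a = 0.1        # dither amplitude
omega = 5.0    # dither frequency (rad/s)
k = 0.8        # adaptation gain
theta_hat = 0.0
lp = 0.0       # low-pass filter state used to remove the DC part of the cost

for i in range(20000):
    t = i * dt
    theta = theta_hat + a * np.sin(omega * t)       # perturbed parameter
    J = cost(theta)
    lp += dt * 1.0 * (J - lp)                       # low-passed cost; (J - lp) is high-passed
    grad_est = (J - lp) * np.sin(omega * t)         # demodulation -> gradient estimate
    theta_hat -= dt * k * grad_est                  # integrate against the estimated gradient

print("converged parameter:", round(theta_hat, 2))  # approaches the minimiser 2.0
```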
27 pages, 2733 KiB  
Article
A Cost-Effective 3D-Printed Conductive Phantom for EEG Sensing System Validation: Development, Performance Evaluation, and Comparison with State-of-the-Art Technologies
by Peter Akor, Godwin Enemali, Usman Muhammad, Jane Crowley, Marc Desmulliez and Hadi Larijani
Sensors 2025, 25(16), 4974; https://doi.org/10.3390/s25164974 - 11 Aug 2025
Abstract
This paper presents the development and validation of a cost-effective 3D-printed conductive phantom for EEG sensing system validation that achieves 85% cost reduction (£48.10 vs. £300–£500) and 48-hour fabrication time while providing consistent electrical properties suitable for standardized electrode testing. The phantom was fabricated using conductive PLA filament in a two-component design with a conductive upper section and a non-conductive base for structural support. Comprehensive validation employed three complementary approaches: DC resistance measurements (821–1502 Ω), complex impedance spectroscopy at 100 Hz across anatomical regions (3.01–6.4 kΩ with capacitive behavior), and 8-channel EEG system testing (5–11 kΩ impedance range). The electrical characterization revealed spatial heterogeneity and consistent electrical properties suitable for comparative electrode evaluation and EEG sensing system validation applications. To establish context, we analyzed six existing phantom technologies including commercial injection-molded phantoms, saline solutions, hydrogels, silicone models, textile-based alternatives, and multi-material implementations. This analysis identifies critical accessibility barriers in current technologies, particularly cost constraints (£5000–20,000 tooling) and extended production timelines that limit widespread adoption. The validated 3D-printed phantom addresses these limitations while providing appropriate electrical properties for standardized EEG electrode testing. The demonstrated compatibility with clinical EEG acquisition systems establishes the phantom’s suitability for electrode performance evaluation and multi-channel system validation as a standardized testing platform, ultimately contributing to democratized access to EEG sensing system validation capabilities for broader research communities. Full article
17 pages, 6774 KiB  
Article
Optical Fiber Performance for High Solar Flux Measurements in Concentrating Solar Power Applications
by Manuel Jerez, Alejandro Carballar, Ricardo Conceição and Jose González-Aguilar
Sensors 2025, 25(16), 4973; https://doi.org/10.3390/s25164973 - 11 Aug 2025
Abstract
Extreme operating conditions in solar receivers of concentrated solar thermal power plants, such as high temperatures, intense irradiance, and thermal cycling, pose significant challenges for conventional sensors. Optical fibers offer a promising alternative for flux measurement in such environments, but their long-term performance and degradation mechanisms require detailed investigation and characterization. This work presents a proof of concept for high solar flux measurement by using optical fibers as photon-capturing elements and showcases the behavior and damage that these optical fibers undergo when exposed to relevant conditions, including temperatures over 600 °C and flux levels exceeding 400 kW/m2. Three fiber configurations, including polyimide and gold-coated fibers, were tested at a high-flux solar simulator and analyzed via scanning electron microscopy to assess structural integrity and material degradation. Results reveal significant coating deterioration, fiber retraction, and thermal-induced stress effects, which impact measurement reliability. These findings provide essential insights for improving the durability and accuracy of optical fiber-based sensing technologies in concentrating solar energy. Full article
(This article belongs to the Special Issue Optical Fiber Sensors in Radiation Environments: 2nd Edition)
29 pages, 12645 KiB  
Article
The IoRT-in-Hand: Tele-Robotic Echography and Digital Twins on Mobile Devices
by Juan Bravo-Arrabal, Zhuoqi Cheng, J. J. Fernández-Lozano, Jose Antonio Gomez-Ruiz, Christian Schlette, Thiusius Rajeeth Savarimuthu, Anthony Mandow and Alfonso García-Cerezo
Sensors 2025, 25(16), 4972; https://doi.org/10.3390/s25164972 - 11 Aug 2025
Abstract
The integration of robotics and mobile networks (5G/6G) through the Internet of Robotic Things (IoRT) is revolutionizing telemedicine, enabling remote physician participation in scenarios where specialists are scarce, where there is a high risk to them, such as in conflicts or natural disasters, or where access to a medical facility is not possible. Nevertheless, touching a human safely with a robotic arm in non-engineered or even out-of-hospital environments presents substantial challenges. This article presents a novel IoRT approach for healthcare in or from remote areas, enabling interaction between a specialist’s hand and a robotic hand. We introduce the IoRT-in-hand: a smart, lightweight end-effector that extends the specialist’s hand, integrating a medical instrument, an RGB camera with servos, a force/torque sensor, and a mini-PC with Internet connectivity. Additionally, we propose an open-source Android app combining MQTT and ROS for real-time remote manipulation, alongside an Edge–Cloud architecture that links the physical robot with its Digital Twin (DT), enabling precise control and 3D visual feedback of the robot’s environment. A proof of concept is presented for the proposed tele-robotic system, using a 6-DOF manipulator with the IoRT-in-hand to perform an ultrasound scan. Teleoperation was conducted over 2300 km via a 5G NSA network on the operator side and a wired network in a laboratory on the robot side. Performance was assessed through human subject feedback, sensory data, and latency measurements, demonstrating the system’s potential for remote healthcare and emergency applications. The source code and CAD models of the IoRT-in-hand prototype are publicly available in an open-access repository to encourage reproducibility and facilitate further developments in robotic telemedicine. Full article
17 pages, 9841 KiB  
Article
Texture and Friction Classification: Optical TacTip vs. Vibrational Piezoeletric and Accelerometer Tactile Sensors
by Dexter R. Shepherd, Phil Husbands, Andrew Philippides and Chris Johnson
Sensors 2025, 25(16), 4971; https://doi.org/10.3390/s25164971 - 11 Aug 2025
Abstract
Tactile sensing is increasingly vital in robotics, especially for tasks like object manipulation and texture classification. Among tactile technologies, optical and electrical sensors are widely used, yet no rigorous direct comparison of their performance has been conducted. This paper addresses that gap by presenting a comparative study between a high-resolution optical tactile sensor (a modified TacTip) and a low-resolution electrical sensor combining accelerometers and piezoelectric elements. We evaluate both sensor types on two tasks: texture classification and coefficient of dynamic friction prediction. Various configurations and resolutions were explored, along with multiple machine learning classifiers to determine optimal performance. The optical sensor achieved 99.9% accuracy on a challenging texture dataset, significantly outperforming the electrical sensor, which reached 82%. However, for dynamic friction prediction, both sensors performed comparably, with only a 5% accuracy difference. We also found that the optical sensor retained high classification accuracy even when image resolution was reduced to 25% of its original size, suggesting that ultra-high resolution is not essential. In conclusion, the optical sensor is the better choice when high accuracy is required. However, for low-cost or computationally efficient systems, the electrical sensor provides a practical alternative with competitive performance in some tasks. Full article
(This article belongs to the Collection Tactile Sensors, Sensing and Systems)