Search Results (1,059)

Search Parameters:
Keywords = joint classification

10 pages, 1554 KB  
Article
Real-Time Joint Fault Detection and Diagnosis of Hexapod Robot Based on Improved Random Forest
by Qilei Fang, Yifan Men, Kai Zhang, Man Yu and Yin Liu
Processes 2025, 13(9), 2762; https://doi.org/10.3390/pr13092762 - 28 Aug 2025
Abstract
In the field of robotic fault detection, the random forest (RF) algorithm is widely adopted, but its limited accuracy remains a critical constraint in practical engineering applications. To address this challenge, this study proposes a Two-Stage Random Forest (TSRF) algorithm. The approach constructs a hierarchical architecture with a dynamic adaptive weighting strategy, in which the class probability vectors generated by the first-stage forest serve as meta-features for the second-stage classifier. This hierarchical optimization enables the model to precisely identify fault-sensitive features, effectively overcoming the performance limitations of conventional single-model frameworks. To validate the proposed approach, we conducted comparative experiments on a multidimensional kinematic feature dataset for hexapod robot joint fault detection, using geometry-feature-based RF and physics-informed RF as established baselines. Experimental results demonstrate that TSRF achieves a classification accuracy of 99.7% on the test set, an 18.8% improvement over standard RF. This advancement provides a novel methodological framework for intelligent fault diagnosis in complex electromechanical systems. Full article
(This article belongs to the Section Process Control and Monitoring)
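The two-stage idea described in this abstract maps naturally onto a stacking setup. Below is a minimal scikit-learn sketch of that pattern, assuming out-of-fold class probabilities from a first random forest are appended to the raw features for a second one; the synthetic data and hyperparameters are placeholders, not the authors' dataset or exact TSRF weighting scheme.

```python
# Two-stage random-forest sketch: stage-1 class probabilities become
# meta-features for stage 2 (illustrative, not the published TSRF).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.metrics import accuracy_score

# Stand-in for the hexapod kinematic feature dataset (hypothetical shapes).
X, y = make_classification(n_samples=2000, n_features=24, n_informative=12,
                           n_classes=4, n_clusters_per_class=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: out-of-fold class probabilities act as meta-features (avoids leakage).
stage1 = RandomForestClassifier(n_estimators=200, random_state=0)
meta_tr = cross_val_predict(stage1, X_tr, y_tr, cv=5, method="predict_proba")
stage1.fit(X_tr, y_tr)
meta_te = stage1.predict_proba(X_te)

# Stage 2: classifier trained on raw features concatenated with meta-features.
stage2 = RandomForestClassifier(n_estimators=200, random_state=1)
stage2.fit(np.hstack([X_tr, meta_tr]), y_tr)
pred = stage2.predict(np.hstack([X_te, meta_te]))
print(f"two-stage accuracy: {accuracy_score(y_te, pred):.3f}")
```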
27 pages, 2379 KB  
Article
Dual-Branch EfficientNet Model with Hybrid Triplet Loss for Architectural Era Classification of Traditional Dwellings in Longzhong Region, Gansu Province
by Shangbo Miao, Yalin Miao, Chenxi Zhang and Yushun Piao
Buildings 2025, 15(17), 3086; https://doi.org/10.3390/buildings15173086 - 28 Aug 2025
Abstract
Traditional vernacular architecture is an important component of historical and cultural heritage, and the accurate identification of its construction period is of great significance for architectural heritage conservation, historical research, and urban–rural planning. However, traditional methods for period identification are labor-intensive, potentially damaging to buildings, and lack sufficient accuracy. To address these issues, this study proposes a deep learning-based method for classifying the construction periods of traditional vernacular architecture. A dataset of traditional vernacular architecture images from the Longzhong region of Gansu Province was constructed, covering four periods: before 1911, 1912–1949, 1950–1980, and from 1981 to the present, with a total of 1181 images. Through comparative analysis of three mainstream models—ResNet50, EfficientNet-b4, and Vision Transformer—we found that EfficientNet demonstrated optimal performance in the classification task, achieving Accuracy, Precision, Recall, and F1-scores of 85.1%, 81.6%, 81.0%, and 81.1%, respectively. These metrics surpassed ResNet50 by 1.4%, 1.3%, 0.5%, and 1.2%, and outperformed Vision Transformer by 8.1%, 9.1%, 9.5%, and 9.1%, respectively. To further improve feature extraction and classification accuracy, we propose the “local–global feature joint learning network architecture” (DualBranchEfficientNet). This dual-branch design, comprising a global feature branch and a local feature branch, effectively integrates global structure with local details and significantly enhances classification performance. The proposed architecture achieved Accuracy, Precision, Recall, and F1-scores of 89.6%, 87.7%, 86.0%, and 86.7%, respectively, with DualBranchEfficientNet exhibiting a 2.0% higher Accuracy than DualBranchResNet. To address sample imbalance, a hybrid triplet loss function (Focal Loss + Triplet Loss) was introduced, and its effectiveness in identifying minority class samples was validated through ablation experiments. Experimental results show that the DualBranchEfficientNet model with the hybrid triplet loss outperforms traditional models across all evaluation metrics, particularly in the data-scarce 1950–1980 period, where Recall increased by 7.3% and F1-score by 4.1%. Finally, interpretability analysis via Grad-CAM heat maps demonstrates that the DualBranchEfficientNet model incorporating hybrid triplet loss accurately pinpoints the key discriminative regions of traditional dwellings across different eras, and its focus closely aligns with those identified by conventional methods. This study provides an efficient, accurate, and scalable deep learning solution for the period identification of traditional vernacular architecture. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
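The hybrid triplet loss (Focal Loss + Triplet Loss) mentioned above can be sketched in a few lines of PyTorch. The weighting factor lam, the margin, and the way anchor/positive/negative indices are chosen below are illustrative assumptions, not the paper's configuration.

```python
# Hybrid loss sketch: focal classification loss plus a triplet embedding loss.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Focal loss: down-weights easy examples to focus on hard/minority ones."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                       # probability of the true class
    return (alpha * (1.0 - pt) ** gamma * ce).mean()

def hybrid_loss(logits, embeddings, targets, anchor_idx, pos_idx, neg_idx, lam=0.5):
    """Weighted sum of focal classification loss and a triplet embedding loss."""
    triplet = F.triplet_margin_loss(embeddings[anchor_idx],
                                    embeddings[pos_idx],
                                    embeddings[neg_idx], margin=1.0)
    return focal_loss(logits, targets) + lam * triplet

# Toy usage with random tensors standing in for model outputs.
logits = torch.randn(16, 4)          # 4 construction-period classes
embeddings = torch.randn(16, 128)    # feature embeddings from the backbone
targets = torch.randint(0, 4, (16,))
idx = torch.arange(4)                # hypothetical anchor/positive/negative picks
loss = hybrid_loss(logits, embeddings, targets, idx, idx + 4, idx + 8)
print(f"hybrid loss: {loss.item():.3f}")
```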
16 pages, 1386 KB  
Article
Balancing Energy Consumption and Detection Accuracy in Cardiovascular Disease Diagnosis: A Spiking Neural Network-Based Approach with ECG and PCG Signals
by Guihao Ran, Yijing Wang, Han Zhang, Jiahui Cheng and Dakun Lai
Sensors 2025, 25(17), 5263; https://doi.org/10.3390/s25175263 - 24 Aug 2025
Abstract
Electrocardiogram (ECG) and phonocardiogram (PCG) signals are widely used in the early prevention and diagnosis of cardiovascular diseases (CVDs) due to their ability to accurately reflect cardiac conditions from different physiological perspectives and their ease of acquisition. Some studies have explored the joint use of ECG and PCG signals for disease screening, but few have considered the trade-off between classification performance and energy consumption in model design. In this study, we propose a multimodal CVD detection framework based on Spiking Neural Networks (SNNs), which integrates ECG and PCG signals. A signal-level differential fusion strategy is employed to generate a fused EPCG signal, from which time–frequency features are extracted using the Adaptive Superlets Transform (ASLT). Two separate Spiking Convolutional Neural Network (SCNN) models are then trained on the ECG and EPCG signals, respectively. A confidence-based dynamic decision-level (CDD) fusion strategy is subsequently employed to perform the final classification. The proposed method is validated on the PhysioNet/CinC Challenge 2016 dataset, achieving an accuracy of 89.74%, an AUC of 89.08%, and an energy consumption of 209.6 μJ. The method not only outperforms unimodal counterparts in classification but also strikes an effective balance between energy consumption and accuracy, offering a practical route toward low-power, multimodal medical diagnostic systems. Full article
(This article belongs to the Special Issue Sensors for Heart Rate Monitoring and Cardiovascular Disease)
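One plausible reading of the confidence-based dynamic decision-level (CDD) fusion is a per-sample rule that trusts the more confident branch and averages otherwise. The sketch below illustrates that pattern; the 0.8 threshold and the averaging fallback are assumptions for illustration only.

```python
# Confidence-based decision-level fusion of two class-probability outputs.
import numpy as np

def cdd_fuse(p_ecg, p_epcg, threshold=0.8):
    """p_ecg, p_epcg: (N, C) class-probability arrays from the two SCNN branches."""
    fused = np.empty_like(p_ecg)
    for i, (a, b) in enumerate(zip(p_ecg, p_epcg)):
        if max(a.max(), b.max()) >= threshold:
            fused[i] = a if a.max() >= b.max() else b   # pick the confident branch
        else:
            fused[i] = (a + b) / 2.0                    # low confidence: average
    return fused.argmax(axis=1)

p1 = np.array([[0.9, 0.1], [0.55, 0.45]])
p2 = np.array([[0.6, 0.4], [0.3, 0.7]])
print(cdd_fuse(p1, p2))   # -> [0 1] under these toy probabilities
```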
28 pages, 7371 KB  
Article
Deep Fuzzy Fusion Network for Joint Hyperspectral and LiDAR Data Classification
by Guangen Liu, Jiale Song, Yonghe Chu, Lianchong Zhang, Peng Li and Junshi Xia
Remote Sens. 2025, 17(17), 2923; https://doi.org/10.3390/rs17172923 - 22 Aug 2025
Abstract
Recently, Transformers have made significant progress in the joint classification task of HSI and LiDAR due to their efficient modeling of long-range dependencies and adaptive feature learning mechanisms. However, existing methods face two key challenges: first, the feature extraction stage does not explicitly model category ambiguity; second, the feature fusion stage lacks a dynamic perception mechanism for inter-modal differences and uncertainties. To this end, this paper proposes a Deep Fuzzy Fusion Network (DFNet) for the joint classification of hyperspectral and LiDAR data. DFNet adopts a dual-branch architecture, integrating CNN and Transformer structures, respectively, to extract multi-scale spatial–spectral features from hyperspectral and LiDAR data. To enhance the model’s discriminative robustness in ambiguous regions, both branches incorporate fuzzy learning modules that model class uncertainty through learnable Gaussian membership functions. In the modality fusion stage, a Fuzzy-Enhanced Cross-Modal Fusion (FECF) module is designed, which combines membership-aware attention mechanisms with fuzzy inference operators to achieve dynamic adjustment of modality feature weights and efficient integration of complementary information. DFNet, through a hierarchical design, realizes uncertainty representation within and fusion control between modalities. The proposed DFNet is evaluated on three public datasets, and the extensive experimental results indicate that the proposed DFNet considerably outperforms other state-of-the-art methods. Full article
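A fuzzy learning module built on learnable Gaussian membership functions, as described for DFNet's branches, can be sketched as follows; the module structure, dimensions, and softmax normalization are illustrative choices rather than the published architecture.

```python
# Learnable Gaussian membership module: features -> soft class memberships.
import torch
import torch.nn as nn

class GaussianMembership(nn.Module):
    """Maps features to per-class membership degrees via learnable Gaussians."""
    def __init__(self, feat_dim, n_classes):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(n_classes, feat_dim))         # centers
        self.log_sigma = nn.Parameter(torch.zeros(n_classes, feat_dim))  # spreads

    def forward(self, x):                     # x: (B, feat_dim)
        diff = x.unsqueeze(1) - self.mu       # (B, n_classes, feat_dim)
        sigma = self.log_sigma.exp()
        # Product of per-dimension Gaussian memberships, in log space for stability.
        log_m = -0.5 * ((diff / sigma) ** 2).sum(dim=-1)
        return log_m.softmax(dim=1)           # soft class membership per sample

membership = GaussianMembership(feat_dim=64, n_classes=6)
feats = torch.randn(8, 64)
print(membership(feats).shape)                # torch.Size([8, 6])
```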
22 pages, 4036 KB  
Article
An Online Modular Framework for Anomaly Detection and Multiclass Classification in Video Surveillance
by Jonathan Flores-Monroy, Gibran Benitez-Garcia, Mariko Nakano-Miyatake and Hiroki Takahashi
Appl. Sci. 2025, 15(17), 9249; https://doi.org/10.3390/app15179249 - 22 Aug 2025
Abstract
Video surveillance systems are a key tool for the identification of anomalous events, but they still rely heavily on human analysis, which limits their efficiency. Current video anomaly detection models aim to automatically detect such events. However, most of them provide only a binary classification (normal or anomalous) and do not identify the specific type of anomaly. Although recent proposals address anomaly classification, they typically require full video analysis, making them unsuitable for online applications. In this work, we propose a modular framework for the joint detection and classification of anomalies, designed to operate on individual clips within continuous video streams. The architecture integrates interchangeable modules (feature extractor, detector, and classifier) and is adaptable to both offline and online scenarios. Specifically, we introduce a multi-category classifier that processes only anomalous clips, enabling efficient clip-level classification. Experiments conducted on the UCF-Crime dataset validate the effectiveness of the framework, achieving 74.77% clip-level accuracy and 58.96% video-level accuracy, surpassing prior approaches and confirming its applicability in real-world surveillance environments. Full article
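The clip-level, modular control flow (extractor, then detector, then classifier, with only anomalous clips reaching the classifier) can be expressed schematically as below; all three components are placeholder callables standing in for the actual models.

```python
# Schematic clip-level pipeline: interchangeable extract/detect/classify modules.
from typing import Callable, Sequence

def process_stream(clips: Sequence,
                   extract: Callable, detect: Callable, classify: Callable,
                   threshold: float = 0.5):
    results = []
    for clip in clips:
        feats = extract(clip)                         # e.g. 3D-CNN clip features
        score = detect(feats)                         # anomaly score in [0, 1]
        if score < threshold:
            results.append(("normal", score))
        else:
            results.append((classify(feats), score))  # anomaly category label
    return results

# Toy stand-ins to show the control flow only.
fake = process_stream(range(3),
                      extract=lambda c: [float(c)],
                      detect=lambda f: f[0] / 2.0,
                      classify=lambda f: "anomaly_type")
print(fake)
```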
32 pages, 2264 KB  
Systematic Review
Intention Prediction for Active Upper-Limb Exoskeletons in Industrial Applications: A Systematic Literature Review
by Dominik Hochreiter, Katharina Schmermbeck, Miguel Vazquez-Pufleau and Alois Ferscha
Sensors 2025, 25(17), 5225; https://doi.org/10.3390/s25175225 - 22 Aug 2025
Abstract
Intention prediction is essential for enabling intuitive and adaptive control in upper-limb exoskeletons, especially in dynamic industrial environments. However, the suitability of different cues, sensors, and computational models for real-world industrial applications remains unclear. This systematic review, conducted according to PRISMA guidelines, analyzes 29 studies published between 2007 and 2024 that investigate intention prediction in active exoskeletons. Most studies rely on motion capture (14) and electromyography (14) to estimate joint torque or trajectories, predicting from 450 ms before to 660 ms after motion onset. Approaches include model-based and model-free regression, as well as classification methods, but vary significantly in complexity, sensor setups, and evaluation procedures. Only a subset evaluates usability or support effectiveness, often under laboratory conditions with small, non-representative participant groups. Based on these insights, we outline recommendations for robust and adaptable intention prediction tailored to industrial task requirements. We propose four generalized support modes to guide sensor selection and control strategies in practical applications. Future research should leverage wearable sensors, integrate cognitive and contextual cues, and adopt transfer learning, federated learning, or LLM-based feedback mechanisms. Additionally, studies should prioritize real-world validation, diverse participant samples, and comprehensive evaluation metrics to support scalable, acceptable deployment of exoskeletons in industrial settings. Full article
(This article belongs to the Section Sensors and Robotics)
19 pages, 2289 KB  
Article
Class-Incremental Learning-Based Few-Shot Underwater-Acoustic Target Recognition
by Wenbo Wang, Ye Li, Tongsheng Shen and Dexin Zhao
J. Mar. Sci. Eng. 2025, 13(9), 1606; https://doi.org/10.3390/jmse13091606 - 22 Aug 2025
Abstract
This paper proposes an underwater-acoustic class-incremental few-shot learning (UACIL) method for streaming data processing in practical underwater-acoustic target recognition scenarios. The core objective is to expand classification capabilities for new classes while mitigating catastrophic forgetting of existing knowledge. UACIL’s contributions encompass three key components: First, to enhance feature discriminability and generalization, an enhanced frequency-domain attention module is introduced to capture both spatial and temporal variation features. Second, it introduces a prototype classification mechanism with two operating modes corresponding to the base-training phase and the incremental training phase. In the base phase, sufficient pre-training is performed on the feature extraction network and the classification heads of inherent categories. In the incremental phase, for streaming data processing, only the classification heads of new categories are expanded and updated, while the parameters of the feature extractor remain stable through prototype classification. Third, a joint optimization strategy using multiple loss functions is designed to refine feature distribution. This method enables rapid deployment without complex cross-domain retraining when handling new data classes, effectively addressing overfitting and catastrophic forgetting in hydroacoustic signal classification. Experimental results with public datasets validate its superior incremental learning performance. The proposed method achieves 92.89% base recognition accuracy and maintains 68.44% overall accuracy after six increments. Compared with baseline methods, it improves base accuracy by 11.14% and reduces the incremental performance-dropping rate by 50.09%. These results demonstrate that UACIL enhances recognition accuracy while alleviating catastrophic forgetting, confirming its feasibility for practical applications. Full article
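The prototype classification mechanism, in which new classes are added by appending prototypes while the feature extractor stays frozen, can be sketched as follows; the cosine-similarity scoring and synthetic embeddings are illustrative assumptions.

```python
# Prototype-based incremental classification: class means over frozen embeddings.
import numpy as np

class PrototypeClassifier:
    def __init__(self):
        self.prototypes, self.labels = [], []

    def add_classes(self, feats, y):
        """feats: (N, D) embeddings from the (frozen) extractor; y: class ids."""
        for c in np.unique(y):
            self.prototypes.append(feats[y == c].mean(axis=0))   # class prototype
            self.labels.append(c)

    def predict(self, feats):
        P = np.stack(self.prototypes)                            # (C, D)
        sims = feats @ P.T / (np.linalg.norm(feats, axis=1, keepdims=True)
                              * np.linalg.norm(P, axis=1) + 1e-8)
        return np.array(self.labels)[sims.argmax(axis=1)]

rng = np.random.default_rng(0)
clf = PrototypeClassifier()
clf.add_classes(rng.normal(size=(40, 16)), np.repeat([0, 1], 20))   # base classes
clf.add_classes(rng.normal(loc=3.0, size=(10, 16)), np.full(10, 2)) # few-shot class
print(clf.predict(rng.normal(loc=3.0, size=(5, 16))))               # mostly class 2
```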
17 pages, 2868 KB  
Article
Research on Acoustic Properties of Artificial Inhomogeneities in Calibration Samples for Ultrasonic Testing of Polyethylene Pipe Welds
by Aleksandr Shikhov, Kirill Gogolinskii, Darya Kopytina, Anna Vinogradova and Aleksei Zubarev
Metrology 2025, 5(3), 51; https://doi.org/10.3390/metrology5030051 - 20 Aug 2025
Abstract
This article investigates the acoustic properties of artificial discontinuities in reference specimens for the ultrasonic testing of welded joints in polyethylene pipes. An analysis is conducted on the reflectivity of various materials (air, sand, heat-resistant silicate-based sealant, and aluminum foil) and their correspondence to real defects occurring in weld seams. A theoretical analysis of reflection coefficients is performed, along with laboratory studies using digital radiography and ultrasonic testing. The results demonstrate that heat-resistant silicate sealant is the most suitable material for simulating defects, as its acoustic properties closely match those of real inclusions, and its geometric parameters remain stable during the welding process. The use of such specimens enhances the reliability of ultrasonic testing and reduces the likelihood of errors in defect classification. Full article
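The reflectivity comparison rests on the normal-incidence pressure reflection coefficient R = (Z2 - Z1)/(Z2 + Z1). A small worked example follows, using approximate textbook impedance values rather than the paper's measured data.

```python
# Worked example of the normal-incidence pressure reflection coefficient.
# Impedance values (in MRayl) are approximate textbook figures for illustration.
def reflection_coefficient(z1, z2):
    return (z2 - z1) / (z2 + z1)

z_pe = 2.3          # polyethylene (host material)
for name, z in [("air", 0.0004), ("aluminum foil", 17.0)]:
    r = reflection_coefficient(z_pe, z)
    print(f"PE -> {name}: R = {r:+.2f}")
# Air gives |R| close to 1 (near-total reflection); metal inclusions also reflect
# strongly. A sealant with impedance closer to real inclusions reflects less,
# which is why it better mimics weld defects.
```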
28 pages, 8325 KB  
Article
Tunnel Rapid AI Classification (TRaiC): An Open-Source Code for 360° Tunnel Face Mapping, Discontinuity Analysis, and RAG-LLM-Powered Geo-Engineering Reporting
by Seyedahmad Mehrishal, Junsu Leem, Jineon Kim, Yulong Shao, Il-Seok Kang and Jae-Joon Song
Remote Sens. 2025, 17(16), 2891; https://doi.org/10.3390/rs17162891 - 20 Aug 2025
Abstract
Accurate and efficient rock mass characterization is essential in geotechnical engineering, yet traditional tunnel face mapping remains time consuming, subjective, and potentially hazardous. Recent advances in digital technologies and AI offer automation opportunities, but many existing solutions are hindered by slow 3D scanning, computationally intensive processing, and limited integration flexibility. This paper presents Tunnel Rapid AI Classification (TRaiC), an open-source MATLAB-based platform for rapid and automated tunnel face mapping. TRaiC integrates single-shot 360° panoramic photography, AI-powered discontinuity detection, 3D textured digital twin generation, rock mass discontinuity characterization, and Retrieval-Augmented Generation with Large Language Models (RAG-LLM) for automated geological interpretation and standardized reporting. The modular eight-stage workflow includes simplified 3D modeling, trace segmentation, 3D joint network analysis, and rock mass classification using RMR, with outputs optimized for Geo-BIM integration. Initial evaluations indicate substantial reductions in processing time and expert assessment workload. Producing a lightweight yet high-fidelity digital twin, TRaiC enables computational efficiency, transparency, and reproducibility, serving as a foundation for future AI-assisted geotechnical engineering research. Its graphical user interface and well-structured open-source code make it accessible to users ranging from beginners to advanced researchers. Full article
20 pages, 2173 KB  
Article
Pain State Classification of Stiff Knee Joint Using Electromyogram for Robot-Based Post-Fracture Rehabilitation Training
by Yang Zheng, Dimao He, Yuan He, Xiangrui Kong, Xiaochen Fan, Min Li, Guanghua Xu and Jichao Yin
Sensors 2025, 25(16), 5142; https://doi.org/10.3390/s25165142 - 19 Aug 2025
Abstract
Knee joint stiffness occurs after fracture around the knee and severely limits the joint's range of motion (ROM). During mobility training, knee joints need to be flexed to the maximum angle position (maxAP) that induces pain at an appropriate level in order to pull apart intra-articular adhesive structures while avoiding secondary injuries. However, the maxAP varies with training and is mostly determined by the pain level of patients. In this study, the feasibility of using electromyogram (EMG) activity to detect the maxAP was investigated. Specifically, maxAP detection was converted into a binary classification between pain level three of the numerical rating scale (pain) and levels below it (painless), according to clinical requirements. First, 12 post-fracture patients with knee joint stiffness participated in Experiment I, in which a therapist performed routine mobility training and EMG signals were recorded from knee flexors and extensors. The results showed that the extracted EMG features were significantly different between the pain and painless states. The maxAP estimation performance was then tested on a knee rehabilitation robot in Experiment II with another seven patients. Support vector machine and random forest models were used to classify between pain and painless states and obtained mean accuracies of 87.90% ± 4.55% and 89.10% ± 4.39%, respectively, leading to average estimation biases of 6.5° ± 5.1° and 4.5° ± 3.5°. These results indicate that pain-induced EMG can be used to accurately classify pain states for maxAP estimation in post-fracture mobility training, potentially facilitating the application of robotic techniques in fracture rehabilitation. Full article
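The pain/painless classification step amounts to feeding window-level EMG features to an SVM and a random forest. A minimal scikit-learn sketch of that comparison follows; the feature set and synthetic data are placeholders, not the clinical recordings.

```python
# Binary pain/painless classification from EMG features with SVM and RF.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical feature matrix: e.g. RMS, mean absolute value, and median
# frequency for knee flexor/extensor channels (6 features per window).
X = np.vstack([rng.normal(0.0, 1.0, (200, 6)),      # painless windows
               rng.normal(1.0, 1.0, (200, 6))])     # pain windows (shifted)
y = np.repeat([0, 1], 200)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
rf = RandomForestClassifier(n_estimators=200, random_state=0)
for name, model in [("SVM", svm), ("RF", rf)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name} cross-validated accuracy: {acc:.2f}")
```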
26 pages, 36602 KB  
Article
FE-MCFN: Fuzzy-Enhanced Multi-Scale Cross-Modal Fusion Network for Hyperspectral and LiDAR Joint Data Classification
by Shuting Wei, Mian Jia and Junyi Duan
Algorithms 2025, 18(8), 524; https://doi.org/10.3390/a18080524 - 18 Aug 2025
Abstract
With the rapid advancement of remote sensing technologies, the joint classification of hyperspectral image (HSI) and LiDAR data has become a key research focus in the field. Inherent uncertainties in hyperspectral images, such as the "same spectrum, different materials" and "same material, different spectra" phenomena, together with the complexity of spectral features, hamper accurate classification. Furthermore, existing multimodal fusion approaches often fail to fully leverage the complementary advantages of hyperspectral and LiDAR data. To address these issues, we propose a fuzzy-enhanced multi-scale cross-modal fusion network (FE-MCFN) for the joint classification of hyperspectral and LiDAR data. FE-MCFN enhances convolutional neural networks through the application of fuzzy theory and effectively integrates global contextual information via a cross-modal attention mechanism. The fuzzy learning module utilizes a Gaussian membership function to assign weights to features, thereby adeptly capturing uncertainties and subtle distinctions within the data. To maximize the complementary advantages of multimodal data, a fuzzy fusion module is designed, which is grounded in fuzzy rules and integrates multimodal features across various scales while taking into account both local features and global information, ultimately enhancing the model's classification performance. Experimental results obtained on the Houston2013, Trento, and MUUFL datasets demonstrate that the proposed method outperforms current state-of-the-art classification techniques, validating its effectiveness and applicability across diverse scenarios. Full article
(This article belongs to the Section Databases and Data Structures)
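One way to picture the fuzzy fusion of the two modalities is a membership-derived weighting of HSI and LiDAR features, sketched below at a single scale; the weighting rule and tensor shapes are assumptions, whereas the published FE-MCFN fuses multi-scale features with cross-modal attention.

```python
# Fuzzy-weighted cross-modal fusion: Gaussian memberships set per-sample weights.
import torch
import torch.nn as nn

class FuzzyFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(2, dim))        # one center per modality
        self.log_sigma = nn.Parameter(torch.zeros(2, dim))

    def forward(self, hsi_feat, lidar_feat):               # each (B, dim)
        feats = torch.stack([hsi_feat, lidar_feat], dim=1) # (B, 2, dim)
        sigma = self.log_sigma.exp()
        # Gaussian membership of each modality's features, reduced to a scalar weight.
        member = torch.exp(-0.5 * ((feats - self.mu) / sigma) ** 2).mean(dim=-1)
        w = member.softmax(dim=1).unsqueeze(-1)            # (B, 2, 1)
        return (w * feats).sum(dim=1)                      # fuzzy-weighted fusion

fusion = FuzzyFusion(dim=32)
fused = fusion(torch.randn(4, 32), torch.randn(4, 32))
print(fused.shape)                                         # torch.Size([4, 32])
```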
26 pages, 3497 KB  
Article
A Multi-Branch Network for Integrating Spatial, Spectral, and Temporal Features in Motor Imagery EEG Classification
by Xiaoqin Lian, Chunquan Liu, Chao Gao, Ziqian Deng, Wenyang Guan and Yonggang Gong
Brain Sci. 2025, 15(8), 877; https://doi.org/10.3390/brainsci15080877 - 18 Aug 2025
Abstract
Background: Efficient decoding of motor imagery (MI) electroencephalogram (EEG) signals is essential for the precise control and practical deployment of brain-computer interface (BCI) systems. Owing to the complex nonlinear characteristics of EEG signals across spatial, spectral, and temporal dimensions, efficiently extracting multidimensional discriminative features remains a key challenge to improving MI-EEG decoding performance. Methods: To address the challenge of capturing complex spatial, spectral, and temporal features in MI-EEG signals, this study proposes a multi-branch deep neural network, which jointly models these dimensions to enhance classification performance. The network takes as inputs both a three-dimensional power spectral density tensor and two-dimensional time-domain EEG signals and incorporates four complementary feature extraction branches to capture spatial, spectral, spatial-spectral joint, and temporal dynamic features, thereby enabling unified multidimensional modeling. The model was comprehensively evaluated on two widely used public MI-EEG datasets: EEG Motor Movement/Imagery Database (EEGMMIDB) and BCI Competition IV Dataset 2a (BCIIV2A). To further assess interpretability, gradient-weighted class activation mapping (Grad-CAM) was employed to visualize the spatial and spectral features prioritized by the model. Results: On the EEGMMIDB dataset, it achieved an average classification accuracy of 86.34% and a kappa coefficient of 0.829 in the five-class task. On the BCIIV2A dataset, it reached an accuracy of 83.43% and a kappa coefficient of 0.779 in the four-class task. Conclusions: These results demonstrate that the network outperforms existing state-of-the-art methods in classification performance. Furthermore, Grad-CAM visualizations identified the key spatial channels and frequency bands attended to by the model, supporting its neurophysiological interpretability. Full article
(This article belongs to the Section Neurotechnology and Neuroimaging)
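The multi-branch design, with a 3D power-spectral-density tensor and 2D time-domain signals as separate inputs, can be sketched compactly in PyTorch; the two-branch reduction (the paper describes four branches), layer sizes, and the 22-channel/four-class setup are illustrative assumptions.

```python
# Two-input, multi-branch sketch: spectral (3D PSD) and temporal (raw EEG) paths.
import torch
import torch.nn as nn

class MultiBranchEEG(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        # Spectral branch: (B, 1, channels, freqs, time) PSD tensor.
        self.spectral = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        # Temporal branch: (B, channels, samples) raw EEG.
        self.temporal = nn.Sequential(
            nn.Conv1d(22, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.head = nn.Linear(8 + 16, n_classes)

    def forward(self, psd, eeg):
        return self.head(torch.cat([self.spectral(psd), self.temporal(eeg)], dim=1))

model = MultiBranchEEG()
logits = model(torch.randn(2, 1, 22, 32, 16), torch.randn(2, 22, 500))
print(logits.shape)   # torch.Size([2, 4])
```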
27 pages, 23044 KB  
Review
Sensor-Based Monitoring of Bolted Joint Reliability in Agricultural Machinery: Performance and Environmental Challenges
by Xinyang Gu, Bangzhui Wang, Zhong Tang and Haiyang Wang
Sensors 2025, 25(16), 5098; https://doi.org/10.3390/s25165098 - 16 Aug 2025
Abstract
The structural reliability of agricultural machinery is critically dependent on bolted joints, with loosening being a significant and prevalent failure mode. Harsh operational environments (intense vibration, impact, corrosion) severely exacerbate loosening risks, compromising machinery performance and safety. Traditional periodic inspections are inadequate for preventing sudden failures induced by loosening, leading to impaired efficiency and safety hazards. This review comprehensively analyzes the unique challenges and opportunities in monitoring bolted joint reliability within agricultural machinery. It covers the following: (1) the status of bolted joint reliability issues (failure modes, impacts, maintenance inadequacies); (2) environmental challenges to joint integrity; (3) evaluation of conventional detection methods; (4) principles and classifications of modern detection technologies (e.g., vibration-based, acoustic, direct measurement, vision-based); and (5) their application status, limitations, and techno-economic hurdles in agriculture. This review identifies significant deficiencies in current technologies for agricultural machinery bolt loosening surveillance, underscoring the pressing need for specialized, dependable, and cost-effective online monitoring systems tailored for agriculture’s demanding conditions. Finally, forward-looking research directions are outlined to enhance the reliability and intelligence of structural monitoring for agricultural machinery. Full article
(This article belongs to the Section Smart Agriculture)
20 pages, 7578 KB  
Article
Cross Attention Based Dual-Modality Collaboration for Hyperspectral Image and LiDAR Data Classification
by Khanzada Muzammil Hussain, Keyun Zhao, Yang Zhou, Aamir Ali and Ying Li
Remote Sens. 2025, 17(16), 2836; https://doi.org/10.3390/rs17162836 - 15 Aug 2025
Abstract
Advancements in satellite sensor technology have enabled access to diverse remote sensing (RS) data from multiple platforms. Hyperspectral Image (HSI) data offers rich spectral detail for material identification, while LiDAR captures high-resolution 3D structural information, making the two modalities naturally complementary. By fusing HSI and LiDAR, we can mitigate the limitations of each and improve tasks like land cover classification, vegetation analysis, and terrain mapping through more robust spectral–spatial feature representation. However, traditional multi-scale feature fusion models often struggle with aligning features effectively, which can lead to redundant outputs and diminished spatial clarity. To address these issues, we propose the Cross Attention Bridge for HSI and LiDAR (CAB-HL), a novel dual-path framework that employs a multi-stage cross-attention mechanism to guide the interaction between spectral and spatial features. In CAB-HL, features from each modality are refined across three progressive stages using cross-attention modules, which enhance contextual alignment while preserving the distinctive characteristics of each modality. These fused representations are subsequently integrated and passed through a lightweight classification head. Extensive experiments on three benchmark RS datasets demonstrate that CAB-HL consistently outperforms existing state-of-the-art models, confirming its effectiveness in learning deep joint representations for multimodal classification tasks. Full article
(This article belongs to the Special Issue Artificial Intelligence Remote Sensing for Earth Observation)
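A single cross-attention stage in which each modality queries the other, repeated over progressive stages, can be sketched as below; the dimensions, head count, shared stage weights, and residual-plus-norm wiring are illustrative assumptions.

```python
# Symmetric cross-attention stage between HSI and LiDAR token features.
import torch
import torch.nn as nn

class CrossAttentionStage(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.hsi_from_lidar = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.lidar_from_hsi = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_h, self.norm_l = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, hsi, lidar):             # each (B, tokens, dim)
        # Each modality queries the other; residual + norm preserves its identity.
        h, _ = self.hsi_from_lidar(hsi, lidar, lidar)
        l, _ = self.lidar_from_hsi(lidar, hsi, hsi)
        return self.norm_h(hsi + h), self.norm_l(lidar + l)

stage = CrossAttentionStage()
hsi, lidar = torch.randn(2, 49, 64), torch.randn(2, 49, 64)
for _ in range(3):                             # three progressive stages
    hsi, lidar = stage(hsi, lidar)
print(hsi.shape, lidar.shape)                  # torch.Size([2, 49, 64]) each
```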
24 pages, 19609 KB  
Article
An Attention-Enhanced Bivariate AI Model for Joint Prediction of Urban Flood Susceptibility and Inundation Depth
by Thuan Thanh Le, Tuong Quang Vo and Jongho Kim
Mathematics 2025, 13(16), 2617; https://doi.org/10.3390/math13162617 - 15 Aug 2025
Abstract
This study presents a novel bivariate-output deep learning framework based on LeNet-5 for the simultaneous prediction of urban flood susceptibility and inundation depth in Seoul, South Korea. Unlike previous studies that relied on single-output models, the proposed approach jointly learns classification and regression targets through a shared feature extraction structure, enhancing consistency and generalization. Among six tested architectures, the Le5SD_CBAM model—integrating a Convolutional Block Attention Module (CBAM)—achieved the best performance, with 83% accuracy, an Area Under the ROC Curve (AUC) of 0.91 for flood susceptibility classification, and a mean absolute error (MAE) of 0.12 m and root mean squared error (RMSE) of 0.18 m for depth estimation. The model’s spatial predictions aligned well with hydrological principles and past flood records, accurately identifying low-lying flood-prone zones and capturing localized inundation patterns influenced by infrastructure and micro-topography. Importantly, it detected spatial mismatches between susceptibility and depth, demonstrating the benefit of joint modeling. Variable importance analysis highlighted elevation as the dominant predictor, while distances to roads, rivers, and drainage systems were also key contributors. In contrast, secondary terrain attributes had limited influence, indicating that urban infrastructure has significantly altered natural flood flow dynamics. Although the model lacks dynamic forcings such as rainfall and upstream inflows, it remains a valuable tool for flood risk mapping in data-scarce settings. The bivariate-output framework improves computational efficiency and internal coherence compared to separate single-task models, supporting its integration into urban flood management and planning systems. Full article
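The bivariate-output idea, a shared feature extractor feeding a susceptibility classification head and a depth regression head trained with a joint loss, can be sketched as follows; the trunk, loss weighting, and input layout are illustrative assumptions rather than the Le5SD_CBAM architecture.

```python
# Shared trunk with classification (susceptibility) and regression (depth) heads.
import torch
import torch.nn as nn

class BivariateFloodNet(nn.Module):
    def __init__(self, in_channels=10):
        super().__init__()
        self.trunk = nn.Sequential(                       # shared feature extractor
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.cls_head = nn.Linear(16, 1)                  # flood susceptibility logit
        self.reg_head = nn.Linear(16, 1)                  # inundation depth (m)

    def forward(self, x):
        z = self.trunk(x)
        return self.cls_head(z).squeeze(1), self.reg_head(z).squeeze(1)

model = BivariateFloodNet()
x = torch.randn(8, 10, 32, 32)                            # 10 conditioning factors
y_cls, y_depth = torch.randint(0, 2, (8,)).float(), torch.rand(8)
logit, depth = model(x)
loss = nn.functional.binary_cross_entropy_with_logits(logit, y_cls) \
       + nn.functional.l1_loss(depth, y_depth)            # joint objective (MAE term)
loss.backward()
print(f"joint loss: {loss.item():.3f}")
```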