Search Results (349)

Search Parameters:
Keywords = sliding-window approach

16 pages, 2129 KB  
Article
A Multimodal Convolutional Neural Network Framework for Intelligent Real-Time Monitoring of Etchant Levels in PCB Etching Processes
by Chuen-Sheng Cheng, Pei-Wen Chen, Hen-Yi Jen and Yu-Tang Wu
Mathematics 2025, 13(17), 2804; https://doi.org/10.3390/math13172804 - 1 Sep 2025
Abstract
In recent years, machine learning (ML) techniques have gained significant attention in time series classification tasks, particularly in industrial applications where early detection of abnormal conditions is crucial. This study proposes an intelligent monitoring framework based on a multimodal convolutional neural network (CNN) to classify normal and abnormal copper ion (Cu2+) concentration states in the etching process in the printed circuit board (PCB) industry. Maintaining precise control of Cu2+ concentration is critical to ensuring the quality and reliability of the etching process. A sliding window approach is employed to segment the data into fixed-length intervals, enabling localized temporal feature extraction. The model fuses two input modalities—raw one-dimensional (1D) time series data and two-dimensional (2D) recurrence plots—allowing it to capture both temporal dynamics and spatial recurrence patterns. Comparative experiments with traditional machine learning classifiers and single-modality CNNs demonstrate that the proposed multimodal CNN significantly outperforms baseline models in terms of accuracy, precision, recall, F1-score, and G-measure. The results highlight the potential of multimodal deep learning in enhancing process monitoring and early fault detection in chemical-based manufacturing. This work contributes to the development of intelligent, adaptive quality control systems in the PCB industry.
(This article belongs to the Special Issue Mathematics Methods of Robotics and Intelligent Systems)
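The abstract does not give implementation details; as a rough sketch of the two input modalities it describes, the snippet below segments a 1D concentration series with a sliding window and turns each window into a thresholded recurrence plot. The window length, stride, and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sliding_windows(series, length=64, stride=8):
    """Segment a 1D series into fixed-length, possibly overlapping windows."""
    return np.stack([series[i:i + length]
                     for i in range(0, len(series) - length + 1, stride)])

def recurrence_plot(window, eps=0.1):
    """Binary recurrence plot: R[i, j] = 1 where |x_i - x_j| < eps."""
    d = np.abs(window[:, None] - window[None, :])   # pairwise distances
    return (d < eps).astype(np.float32)

# Toy Cu2+ concentration trace -> (n_windows, L) and (n_windows, L, L) modalities
cu = np.sin(np.linspace(0, 20, 1000)) + 0.05 * np.random.randn(1000)
w1d = sliding_windows(cu)                           # 1D branch input
w2d = np.stack([recurrence_plot(w) for w in w1d])   # 2D branch input
print(w1d.shape, w2d.shape)
```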

21 pages, 5171 KB  
Article
FDBRP: A Data–Model Co-Optimization Framework Towards Higher-Accuracy Bearing RUL Prediction
by Muyu Lin, Qing Ye, Shiyue Na, Dongmei Qin, Xiaoyu Gao and Qiang Liu
Sensors 2025, 25(17), 5347; https://doi.org/10.3390/s25175347 - 28 Aug 2025
Viewed by 296
Abstract
This paper proposes the Feature fusion and Dilated causal convolution model for Bearing Remaining useful life Prediction (FDBRP), an integrated framework for accurate Remaining Useful Life (RUL) prediction of rolling bearings that combines three key innovations: (1) a data augmentation module employing sliding-window processing and two-dimensional feature concatenation with label normalization to enhance signal representation and improve model generalizability, (2) a feature fusion module incorporating an enhanced graph convolutional network for spatial modeling, an improved multi-scale temporal convolution for dynamic pattern extraction, and an efficient multi-scale attention mechanism to optimize spatiotemporal feature consistency, and (3) an optimized dilated convolution module that uses interval sampling to expand the receptive field and combines residual connections to regularize the network and enhance its ability to capture long-range dependencies. Experimental validation showcases the effectiveness of the proposed approach, achieving a high average score of 0.756564 and a lower average error of 10.903656 in RUL prediction for test bearings compared to state-of-the-art benchmarks. This highlights the superior RUL prediction capability of the proposed methodology.
(This article belongs to the Section Industrial Sensors)
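As a hedged sketch of the kind of data preparation the abstract outlines (sliding-window segmentation plus label normalization for RUL), the snippet below builds overlapping windows from a run-to-failure vibration record and attaches a remaining-useful-life label normalized to [0, 1]. Window size and stride are assumptions for illustration only.

```python
import numpy as np

def windows_with_rul(signal, window=2560, stride=256):
    """Cut a run-to-failure signal into windows and attach normalized RUL labels."""
    starts = np.arange(0, len(signal) - window + 1, stride)
    X = np.stack([signal[s:s + window] for s in starts])
    ends = starts + window
    rul = (len(signal) - ends).astype(np.float64)   # samples left until failure
    y = rul / rul.max()                              # normalize labels to [0, 1]
    return X, y

# Toy run-to-failure vibration record
sig = np.random.randn(100_000)
X, y = windows_with_rul(sig)
print(X.shape, y.max(), y.min())   # labels run from 1.0 (early life) toward ~0 near failure
```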

24 pages, 7604 KB  
Article
Ginseng-YOLO: Integrating Local Attention, Efficient Downsampling, and Slide Loss for Robust Ginseng Grading
by Yue Yu, Dongming Li, Shaozhong Song, Haohai You, Lijuan Zhang and Jian Li
Horticulturae 2025, 11(9), 1010; https://doi.org/10.3390/horticulturae11091010 - 25 Aug 2025
Viewed by 463
Abstract
Understory-cultivated Panax ginseng possesses high pharmacological and economic value; however, its visual quality grading predominantly relies on subjective manual assessment, constraining industrial scalability. To address challenges including fine-grained morphological variations, boundary ambiguity, and complex natural backgrounds, this study proposes Ginseng-YOLO, a lightweight and deployment-friendly object detection model for automated ginseng grade classification. The model is built on the YOLOv11n (You Only Look Once 11n) framework and integrates three complementary components: (1) C2-LWA, a cross-stage local window attention module that enhances discrimination of key visual features, such as primary root contours and fibrous textures; (2) ADown, a non-parametric downsampling mechanism that substitutes convolution operations with parallel pooling, markedly reducing computational complexity; and (3) Slide Loss, a piecewise IoU-weighted loss function designed to emphasize learning from samples with ambiguous or irregular boundaries. Experimental results on a curated multi-grade ginseng dataset indicate that Ginseng-YOLO achieves a Precision of 84.9%, a Recall of 83.9%, and an mAP@50 of 88.7%, outperforming YOLOv11n and other state-of-the-art variants. The model maintains a compact footprint, with 2.0 M parameters, 5.3 GFLOPs, and a 4.6 MB model size, supporting real-time deployment on edge devices. Ablation studies further confirm the synergistic contributions of the proposed modules in enhancing feature representation, architectural efficiency, and training robustness. Successful deployment on the NVIDIA Jetson Nano demonstrates practical real-time inference capability under limited computational resources. This work provides a scalable approach for intelligent grading of forest-grown ginseng and offers methodological insights for the design of lightweight models in medicinal plant and agricultural applications.
(This article belongs to the Section Medicinals, Herbs, and Specialty Crops)
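The abstract describes the downsampling module only as "parallel pooling instead of convolution"; the PyTorch sketch below is one plausible reading of such a parameter-free downsampler, splitting the channels and reducing one half with average pooling and the other with max pooling before concatenating. The channel split and kernel sizes are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ParallelPoolDown(nn.Module):
    """Parameter-free 2x downsampling: avg-pool one channel half, max-pool the other."""
    def __init__(self):
        super().__init__()
        self.avg = nn.AvgPool2d(kernel_size=2, stride=2)
        self.max = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        a, b = x.chunk(2, dim=1)                  # split channels into two halves
        return torch.cat([self.avg(a), self.max(b)], dim=1)

x = torch.randn(1, 64, 160, 160)
print(ParallelPoolDown()(x).shape)                # torch.Size([1, 64, 80, 80])
```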

29 pages, 7847 KB  
Article
Depthwise-Separable U-Net for Wearable Sensor-Based Human Activity Recognition
by Yoo-Kyung Lee, Chang-Sik Son and Won-Seok Kang
Appl. Sci. 2025, 15(16), 9134; https://doi.org/10.3390/app15169134 - 19 Aug 2025
Viewed by 332
Abstract
In wearable sensor-based human activity recognition (HAR), the traditional sliding window method encounters the challenge of multiclass windows in which multiple actions are combined within a single window. To address this problem, an approach that predicts activities at each point in time within a sequence has been proposed, and U-Net-based models have proven to be effective owing to their excellent space-time feature restoration capabilities. However, these models have limitations in that they are prone to overfitting owing to their large number of parameters and are not suitable for deployment. In this study, a lightweight U-Net was designed by replacing all standard U-Net convolutions with depthwise separable convolutions to implement dense prediction. Compared with existing U-Net-based models, the proposed model reduces the number of parameters by 57–89%. When evaluated on three benchmark datasets (MHEALTH, PAMAP2, and WISDM) using subject-independent splits, the performance of the proposed model was equal to or superior to that of all comparison models. Notably, on the MHEALTH dataset, which was collected in an uncontrolled environment, the proposed model improved accuracy by 7.89%, demonstrating its applicability to real-world wearable HAR systems.
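The core building block named here, a depthwise separable convolution, is a per-channel depthwise convolution followed by a 1x1 pointwise convolution. A minimal 1D sketch follows, with kernel size and channel counts chosen only for illustration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise conv (groups=in_ch) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=5):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU()

    def forward(self, x):                                   # x: (batch, channels, time)
        return self.act(self.pointwise(self.depthwise(x)))

block = DepthwiseSeparableConv1d(in_ch=9, out_ch=64)        # e.g., 9 sensor channels
print(block(torch.randn(2, 9, 128)).shape)                  # (2, 64, 128)
print(sum(p.numel() for p in block.parameters()))           # far fewer than a standard Conv1d
```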

21 pages, 7834 KB  
Article
Robust and Adaptive Ambiguity Resolution Strategy in Continuous Time and Frequency Transfer
by Kun Wu, Weijin Qin, Daqian Lv, Wenjun Wu, Pei Wei and Xuhai Yang
Remote Sens. 2025, 17(16), 2878; https://doi.org/10.3390/rs17162878 - 18 Aug 2025
Viewed by 445
Abstract
The integer precise point positioning (IPPP) technique significantly improves the accuracy of positioning and time and frequency transfer by restoring the integer nature of carrier-phase ambiguities. However, in practical applications, IPPP performance is often degraded by day-boundary discontinuities and instances of incorrect ambiguity resolution, which can compromise the reliability of time transfer. To address these challenges and enable continuous, robust, and stable IPPP time transfer, this study proposes an effective approach that utilizes narrow-lane ambiguities to absorb receiver clock jumps, combined with a robust sliding-window weighting strategy that fully exploits multi-epoch information. This method effectively mitigates day-boundary discontinuities and employs adaptive thresholding to enhance error detection and mitigate the impact of incorrect ambiguity resolution. Experimental results show that at an averaging time of 76,800 s, the frequency stabilities of GPS, Galileo, and BDS IPPP reach 4.838 × 10⁻¹⁶, 4.707 × 10⁻¹⁶, and 5.403 × 10⁻¹⁶, respectively. In the simulation scenario, the carrier-phase residual under the IGIII scheme is 6.7 cm, whereas the robust sliding-window weighting method yields a lower residual of 5.2 cm, demonstrating improved performance. In the zero-baseline time link, GPS IPPP achieves stability at the 10⁻¹⁷ level. Compared to optical fiber time transfer, the GPS IPPP solution demonstrates superior long-term performance in differential analysis. For both short- and long-baseline links, IPPP consistently outperforms the PPP float solution and IGS final products. Specifically, at an averaging time of 307,200 s, IPPP improves average frequency stability by approximately 29.3% over PPP and 32.6% over the IGS final products.
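The paper's robust sliding-window weighting is not spelled out in the abstract. Purely as an illustration of the general idea, down-weighting epochs whose residuals exceed an adaptive threshold estimated over a trailing window, a toy sketch might look like the following; the window length and the MAD-based threshold are assumptions, not the authors' scheme.

```python
import numpy as np

def robust_window_weights(residuals, window=20, k=3.0):
    """Weight each epoch by comparing its residual to a robust scale (MAD)
    estimated over a trailing window; outliers are down-weighted."""
    res = np.asarray(residuals, dtype=float)
    w = np.ones_like(res)
    for i in range(len(res)):
        seg = res[max(0, i - window + 1):i + 1]
        scale = 1.4826 * np.median(np.abs(seg - np.median(seg))) + 1e-9
        z = abs(res[i] - np.median(seg)) / scale
        if z > k:                    # adaptive threshold exceeded
            w[i] = k / z             # shrink the weight instead of hard rejection
    return w

r = np.random.randn(200) * 0.01
r[50] = 0.2                          # simulate one wrongly fixed ambiguity
print(robust_window_weights(r)[48:53])
```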

18 pages, 2659 KB  
Article
Bidirectional Gated Recurrent Unit (BiGRU)-Based Model for Concrete Gravity Dam Displacement Prediction
by Jianxin Ma, Xiaobing Huang, Haoran Wu, Kang Yan and Yong Liu
Sustainability 2025, 17(16), 7401; https://doi.org/10.3390/su17167401 - 15 Aug 2025
Viewed by 391
Abstract
Dam displacement serves as a critical visual indicator for assessing structural integrity and stability in dam engineering. Data-driven displacement forecasting has become essential for modern dam safety monitoring systems, though conventional approaches—including statistical models and basic machine learning techniques—often fail to capture comprehensive feature representations from multivariate environmental influences. To address these challenges, a bidirectional gated recurrent unit (BiGRU)-enhanced neural network is developed, incorporating sliding window mechanisms to model time-dependent hysteresis characteristics. The BiGRU architecture systematically integrates historical temporal patterns through overlapping window segmentation, enabling dual-directional sequence processing via forward–backward gate structures. Validated with four instrumented measurement points from a major concrete gravity dam, the proposed model performs significantly better than three widely used recurrent neural network benchmarks in displacement prediction tasks. These results confirm the model’s capability to deliver high-fidelity displacement forecasts with operational stability, establishing a robust framework for infrastructure health monitoring applications.
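As a hedged sketch of the model family described (a bidirectional GRU fed with sliding windows of environmental inputs and regressing displacement), with input dimensionality, window length, and hidden size assumed for illustration:

```python
import torch
import torch.nn as nn

class BiGRURegressor(nn.Module):
    """Bidirectional GRU over a window of monitoring inputs -> one displacement value."""
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)     # forward + backward states

    def forward(self, x):                         # x: (batch, window, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])           # predict from the last time step

model = BiGRURegressor()
window = torch.randn(8, 30, 4)                    # e.g., 30-step windows of environmental terms
print(model(window).shape)                        # (8, 1)
```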

11 pages, 697 KB  
Data Descriptor
A Multi-Sensor Dataset for Human Activity Recognition Using Inertial and Orientation Data
by Jhonathan L. Rivas-Caicedo, Laura Saldaña-Aristizabal, Kevin Niño-Tejada and Juan F. Patarroyo-Montenegro
Data 2025, 10(8), 129; https://doi.org/10.3390/data10080129 - 14 Aug 2025
Viewed by 458
Abstract
Human Activity Recognition (HAR) using wearable sensors is an increasingly relevant area for applications in healthcare, rehabilitation, and human–computer interaction. However, publicly available datasets that provide multi-sensor, synchronized data combining inertial and orientation measurements are still limited. This work introduces a publicly available dataset for Human Activity Recognition, captured using wearable sensors placed on the chest, hands, and knees. Each device recorded inertial and orientation data during controlled activity sessions involving participants aged 20 to 70. A standardized acquisition protocol ensured consistent temporal alignment across all signals. The dataset was preprocessed and segmented using a sliding window approach. An initial baseline classification experiment, employing a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) model, demonstrated an average accuracy of 93.5% in classifying activities. The dataset is publicly available in CSV format and includes raw sensor signals, activity labels, and metadata. This dataset offers a valuable resource for evaluating machine learning models, studying distributed HAR approaches, and developing robust activity recognition pipelines utilizing wearable technologies.
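The abstract says the data were segmented with a sliding window; a common way to attach one activity label per window is majority voting over the per-sample labels. The sketch below illustrates this with assumed window and overlap settings, not the dataset's actual parameters.

```python
import numpy as np

def segment_with_labels(samples, labels, window=100, overlap=0.5):
    """Slide a window over synchronized sensor samples and per-sample labels;
    each segment gets the majority label inside it."""
    stride = int(window * (1 - overlap))
    X, y = [], []
    for s in range(0, len(samples) - window + 1, stride):
        X.append(samples[s:s + window])
        y.append(np.bincount(labels[s:s + window]).argmax())   # majority vote
    return np.stack(X), np.array(y)

data = np.random.randn(5000, 27)                 # e.g., several devices x channels
lab = np.random.randint(0, 6, size=5000)         # 6 activity classes
X, y = segment_with_labels(data, lab)
print(X.shape, y.shape)
```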

17 pages, 2380 KB  
Article
Robust Visual-Inertial Odometry with Learning-Based Line Features in an Illumination-Changing Environment
by Xinkai Li, Cong Liu and Xu Yan
Sensors 2025, 25(16), 5029; https://doi.org/10.3390/s25165029 - 13 Aug 2025
Viewed by 470
Abstract
Visual-Inertial Odometry (VIO) systems often suffer from degraded performance in environments with low texture. Although some previous works have combined line features with point features to mitigate this problem, the line features still degrade under more challenging conditions, such as varying illumination. To tackle this, we propose DeepLine-VIO, a robust VIO framework that integrates learned line features extracted via an attraction-field-based deep network. These features are geometrically consistent and illumination-invariant, offering improved visual robustness in challenging conditions. Our system tightly couples these learned line features with point observations and inertial data within a sliding-window optimization framework. We further introduce a geometry-aware filtering and parameterization strategy to ensure the reliability of extracted line segments. Extensive experiments on the EuRoC dataset under synthetic illumination perturbations show that DeepLine-VIO consistently outperforms existing point- and line-based methods. On the most challenging sequences under illumination-changing conditions, our approach reduces Absolute Trajectory Error (ATE) by up to 15.87% and improves Relative Pose Error (RPE) in translation by up to 58.45% compared to PL-VINS. These results highlight the robustness and accuracy of DeepLine-VIO in visually degraded environments.
(This article belongs to the Section Sensors and Robotics)
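The reported ATE gains refer to a standard trajectory metric: the RMSE of translational differences between estimated and ground-truth positions. A minimal sketch, assuming the two trajectories are already time-aligned and expressed in the same frame (full evaluations also perform an SE(3) alignment first):

```python
import numpy as np

def absolute_trajectory_error(est_xyz, gt_xyz):
    """RMSE of translational differences between aligned estimated and
    ground-truth positions (both N x 3 arrays)."""
    diff = est_xyz - gt_xyz
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))

gt = np.cumsum(np.random.randn(500, 3) * 0.01, axis=0)   # toy ground-truth path
est = gt + np.random.randn(500, 3) * 0.02                 # noisy estimate
print(f"ATE: {absolute_trajectory_error(est, gt):.3f} m")
```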

20 pages, 2072 KB  
Article
Advancing Cognitive Load Detection in Simulated Driving Scenarios Through Deep Learning and fNIRS Data
by Mehshan Ahmed Khan, Houshyar Asadi, Mohammad Reza Chalak Qazani, Ghazal Bargshady, Sam Oladazimi, Thuong Hoang, Ghazal Rahimzadeh, Zoran Najdovski, Lei Wei, Hirash Moradi and Saeid Nahavandi
Sensors 2025, 25(16), 4921; https://doi.org/10.3390/s25164921 - 9 Aug 2025
Viewed by 391
Abstract
The shift from manual to conditionally automated driving, supported by Advanced Driving Assistance Systems (ADASs), introduces challenges, particularly increased crash risks due to human factors like cognitive overload. Driving simulators provide a safe and controlled setting to study these human factors under complex conditions. This study leverages Functional Near-Infrared Spectroscopy (fNIRS) to dynamically assess cognitive load in a realistic driving simulator during a challenging night-time rain scenario. Thirty-eight participants performed an auditory n-back task (0-, 1-, and 2-back) while driving, simulating multitasking demands. A sliding window approach was applied to the time-series fNIRS data to capture short-term fluctuations in brain activation. The data were analyzed using EEGNet, a deep learning model, with both overlapping and non-overlapping temporal segmentation strategies. Results revealed that classification performance is significantly influenced by the learning rate and windowing method. Notably, a learning rate of 0.001 yielded the highest performance, with 100% accuracy using overlapping windows and 97% accuracy with non-overlapping windows. These findings highlight the potential of combining fNIRS and deep learning for real-time cognitive load monitoring in simulated driving scenarios and demonstrate the importance of temporal modeling in physiological signal analysis.
(This article belongs to the Special Issue Sensors and Sensor Fusion for Decision Making for Autonomous Driving)
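The contrast between overlapping and non-overlapping segmentation comes down to the stride used when windowing the fNIRS time series, sketched below; the window length and overlap ratio are illustrative, not the study's settings.

```python
import numpy as np

def make_windows(x, window, stride):
    """Return (n_windows, window, channels) segments of a (time, channels) array."""
    return np.stack([x[s:s + window]
                     for s in range(0, len(x) - window + 1, stride)])

fnirs = np.random.randn(3000, 16)                               # (time samples, channels)
overlapping     = make_windows(fnirs, window=200, stride=50)    # 75% overlap
non_overlapping = make_windows(fnirs, window=200, stride=200)   # no overlap
print(overlapping.shape, non_overlapping.shape)                 # more segments vs. fewer
```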

18 pages, 4452 KB  
Article
Upper Limb Joint Angle Estimation Using a Reduced Number of IMU Sensors and Recurrent Neural Networks
by Kevin Niño-Tejada, Laura Saldaña-Aristizábal, Jhonathan L. Rivas-Caicedo and Juan F. Patarroyo-Montenegro
Electronics 2025, 14(15), 3039; https://doi.org/10.3390/electronics14153039 - 30 Jul 2025
Viewed by 565
Abstract
Accurate estimation of upper-limb joint angles is essential in biomechanics, rehabilitation, and wearable robotics. While inertial measurement units (IMUs) offer portability and flexibility, systems requiring multiple inertial sensors can be intrusive and complex to deploy. In contrast, optical motion capture (MoCap) systems provide precise tracking but are constrained to controlled laboratory environments. This study presents a deep learning-based approach for estimating shoulder and elbow joint angles using only three IMU sensors positioned on the chest and both wrists, validated against reference angles obtained from a MoCap system. The input data include Euler angles, accelerometer data, and gyroscope data, synchronized and segmented into sliding windows. Two recurrent neural network architectures, Convolutional Neural Network with Long Short-Term Memory (CNN-LSTM) and Bidirectional LSTM (BLSTM), were trained and evaluated using identical conditions. The CNN component enabled the LSTM to extract spatial features that enhance sequential pattern learning, improving angle reconstruction. Both models achieved accurate estimation performance: CNN-LSTM yielded lower Mean Absolute Error (MAE) in smooth trajectories, while BLSTM provided smoother predictions but underestimated some peak movements, especially in the primary axes of rotation. These findings support the development of scalable, deep learning-based wearable systems and contribute to future applications in clinical assessment, sports performance analysis, and human motion research.
(This article belongs to the Special Issue Wearable Sensors for Human Position, Attitude and Motion Tracking)
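A hedged sketch of the CNN-LSTM variant described (1D convolutions extracting features from each IMU window before an LSTM models the sequence and a linear head regresses joint angles); channel counts, window length, and the number of output angles are assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMAngles(nn.Module):
    """Conv1d feature extractor -> LSTM -> regression of joint angles."""
    def __init__(self, n_channels=27, n_angles=4, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_angles)

    def forward(self, x):                      # x: (batch, window, channels)
        f = self.cnn(x.transpose(1, 2))        # -> (batch, 64, window/2)
        out, _ = self.lstm(f.transpose(1, 2))
        return self.head(out[:, -1, :])        # angles at the end of the window

model = CNNLSTMAngles()
print(model(torch.randn(8, 100, 27)).shape)    # (8, 4), e.g., shoulder/elbow angles
```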

22 pages, 1781 KB  
Article
Analyzing Heart Rate Variability for COVID-19 ICU Mortality Prediction Using Continuous Signal Processing Techniques
by Guilherme David, André Lourenço, Cristiana P. Von Rekowski, Iola Pinto, Cecília R. C. Calado and Luís Bento
J. Clin. Med. 2025, 14(15), 5312; https://doi.org/10.3390/jcm14155312 - 28 Jul 2025
Viewed by 429
Abstract
Background/Objectives: Heart rate variability (HRV) has been widely investigated as a predictor of disease and mortality across diverse patient populations; however, there remains no consensus on the optimal set or combination of time-domain, frequency-domain, and nonlinear features for reliable prediction across clinical contexts. Given the relevance of the COVID-19 pandemic and the unique clinical profiles of these patients, this retrospective observational study explored the potential of HRV analysis for early prediction of in-hospital mortality using ECG signals recorded during the initial moments of ICU admission in COVID-19 patients. Methods: HRV indices were extracted from four ECG leads (I, II, III, and aVF) using sliding windows of 2, 5, and 7 min across observation intervals of 15, 30, and 60 min. The raw data posed significant challenges in terms of structure, synchronization, and signal quality; thus, from an original set of 381 records from 321 patients, after data pre-processing steps, a final dataset of 82 patients was selected for analysis. To manage data complexity and evaluate predictive performance, two feature selection methods, four feature reduction techniques, and five classification models were applied to identify the optimal approach. Results: Among the feature aggregation methods, compiling feature means across patient windows (Method D) yielded the best results, particularly for longer observation intervals (e.g., using LDA, the best AUC of 0.82 ± 0.13 was obtained with Method D versus 0.63 ± 0.09 with Method C using 5 min windows). Linear Discriminant Analysis (LDA) was the most consistent classification algorithm, demonstrating robust performance across various time windows and further improvement with dimensionality reduction. Although Gradient Boosting and Random Forest also achieved high AUCs and F1-scores, their performance outcomes varied across time intervals. Conclusions: These findings support the feasibility and clinical relevance of using short-term HRV as a noninvasive, data-driven tool for early risk stratification in critical care, potentially guiding timely therapeutic decisions in high-risk ICU patients and thereby reducing in-hospital mortality.
(This article belongs to the Section Cardiology)
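Two time-domain HRV indices commonly extracted in such studies, SDNN and RMSSD, have precise definitions; the sketch below computes them over windows of RR intervals. The 2-minute window matches one of the durations mentioned in the abstract; the rest (consecutive, non-overlapping windows, synthetic data) is illustrative.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """SDNN and RMSSD (ms) from a sequence of RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd

def windowed_hrv(rr_ms, window_s=120):
    """Slice RR intervals into consecutive windows of ~window_s seconds."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                 # beat times in seconds
    feats = []
    for start in np.arange(0, t[-1] - window_s, window_s):
        idx = (t >= start) & (t < start + window_s)
        if idx.sum() > 10:
            feats.append(hrv_time_domain(rr[idx]))
    return np.array(feats)                     # (n_windows, 2): SDNN, RMSSD

rr = 800 + 50 * np.random.randn(1000)          # synthetic RR series (~13 min)
print(windowed_hrv(rr).shape)
```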

31 pages, 4576 KB  
Article
Detection, Isolation, and Identification of Multiplicative Faults in a DC Motor and Amplifier Using Parameter Estimation Techniques
by Sanja Antić, Marko Rosić, Branko Koprivica, Alenka Milovanović and Milentije Luković
Appl. Sci. 2025, 15(15), 8322; https://doi.org/10.3390/app15158322 - 26 Jul 2025
Viewed by 402
Abstract
The increasing complexity of modern control systems highlights the need for reliable and robust fault detection, isolation, and identification (FDII) methods, particularly in safety-critical and industrial applications. The study focuses on the FDII of multiplicative faults in a DC motor and its electronic amplifier. To simulate such scenarios, a complete laboratory platform was developed for real-time FDII, using relay-based switching and custom LabVIEW 2009 software. This platform enables real-time experimentation and represents an important component of the study. Two estimation-based fault detection (FD) algorithms were implemented: the Sliding Window Algorithm (SWA) for discrete-time models and a modified Sliding Integral Algorithm (SIA) for continuous-time models. The modification introduced to the SIA limits the data length used in least squares estimation, thereby reducing the impact of transient effects on parameter accuracy. Both algorithms achieved high model output-to-measured signal agreement, up to 98.6% under nominal conditions and above 95% during almost all fault scenarios. Moreover, the proposed fault isolation and identification methods, including a decision algorithm and an indirect estimation approach, successfully isolated and identified faults in key components such as amplifier resistors (R1, R9, R12), capacitor (C8), and motor parameters, including armature resistance (Ra), inertia (J), and friction coefficient (B). The decision algorithm, based on continuous-time model coefficients, demonstrated reliable fault isolation and identification, while the reduced Jacobian-based approach in the discrete model enhanced fault magnitude estimation, with deviations typically below 10%. Additionally, the platform supports remote experimentation, offering a valuable resource for advancing model-based FDII research and engineering education.
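The Sliding Window Algorithm is described as least-squares parameter estimation over a window of the most recent input/output samples. A generic hedged sketch for an ARX-type discrete-time model follows; the model order, window length, and toy system coefficients are assumptions, not the paper's values.

```python
import numpy as np

def sliding_window_ls(u, y, window=200, na=2, nb=2):
    """Estimate ARX parameters y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
    by least squares over the most recent `window` samples."""
    rows, targets = [], []
    for k in range(len(y) - window, len(y)):
        rows.append(np.r_[y[k - na:k][::-1], u[k - nb:k][::-1]])
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta                                   # [a1, a2, b1, b2]

# Toy stable second-order system driven by a random input
u = np.random.randn(1000)
y = np.zeros(1000)
for k in range(2, 1000):
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.2 * u[k-2]
print(sliding_window_ls(u, y))                     # ~[1.5, -0.7, 0.5, 0.2]
```

A drift of these estimated parameters away from their nominal values is the kind of signature such estimation-based FD schemes monitor for fault detection.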

29 pages, 6397 KB  
Article
A Hybrid GAS-ATT-LSTM Architecture for Predicting Non-Stationary Financial Time Series
by Kevin Astudillo, Miguel Flores, Mateo Soliz, Guillermo Ferreira and José Varela-Aldás
Mathematics 2025, 13(14), 2300; https://doi.org/10.3390/math13142300 - 18 Jul 2025
Viewed by 540
Abstract
This study proposes a hybrid approach to analyze and forecast non-stationary financial time series by combining statistical models with deep neural networks. A model is introduced that integrates three key components: the Generalized Autoregressive Score (GAS) model, which captures volatility dynamics; an attention mechanism (ATT), which identifies the most relevant features within the sequence; and a Long Short-Term Memory (LSTM) neural network, which receives the outputs of the previous modules to generate price forecasts. This architecture is referred to as GAS-ATT-LSTM. Both unidirectional and bidirectional variants were evaluated using real financial data from the Nasdaq Composite Index, Invesco QQQ Trust, ProShares UltraPro QQQ, Bitcoin, and gold and silver futures. The proposed model’s performance was compared against five benchmark architectures: LSTM Bidirectional, GARCH-LSTM Bidirectional, ATT-LSTM, GAS-LSTM, and GAS-LSTM Bidirectional, under sliding windows of 3, 5, and 7 days. The results show that GAS-ATT-LSTM, particularly in its bidirectional form, consistently outperforms the benchmark models across most assets and forecasting horizons. It stands out for its adaptability to varying volatility levels and temporal structures, achieving significant improvements in both accuracy and stability. These findings confirm the effectiveness of the proposed hybrid model as a robust tool for forecasting complex financial time series.
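The ATT component is described only as attention over the sequence; one common, hedged reading is additive attention pooling over the LSTM hidden states before the forecast head, sketched below with assumed input features (e.g., returns plus a GAS-derived volatility series) and sizes.

```python
import torch
import torch.nn as nn

class AttentionLSTMForecaster(nn.Module):
    """LSTM over a sliding window of features, attention pooling over hidden
    states, then a single-value price forecast head."""
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)          # attention score per time step
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, window, n_features)
        h, _ = self.lstm(x)                        # (batch, window, hidden)
        alpha = torch.softmax(self.score(h), dim=1)
        context = (alpha * h).sum(dim=1)           # attention-weighted summary
        return self.head(context)

model = AttentionLSTMForecaster()
print(model(torch.randn(16, 5, 2)).shape)          # (16, 1), e.g., 5-day windows
```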

20 pages, 3710 KB  
Article
An Accurate LiDAR-Inertial SLAM Based on Multi-Category Feature Extraction and Matching
by Nuo Li, Yiqing Yao, Xiaosu Xu, Shuai Zhou and Taihong Yang
Remote Sens. 2025, 17(14), 2425; https://doi.org/10.3390/rs17142425 - 12 Jul 2025
Viewed by 694
Abstract
Light Detection and Ranging (LiDAR)-inertial simultaneous localization and mapping (SLAM) is a critical component in multi-sensor autonomous navigation systems, providing both accurate pose estimation and detailed environmental understanding. Despite its importance, existing optimization-based LiDAR-inertial SLAM methods often face key limitations: unreliable feature extraction, sensitivity to noise and sparsity, and the inclusion of redundant or low-quality feature correspondences. These weaknesses hinder their performance in complex or dynamic environments and prevent such methods from meeting the reliability requirements of autonomous systems. To overcome these challenges, we propose a novel and accurate LiDAR-inertial SLAM framework with three major contributions. First, we employ a robust multi-category feature extraction method based on principal component analysis (PCA), which effectively filters out noisy and weakly structured points, ensuring stable feature representation. Second, to suppress outlier correspondences and enhance pose estimation reliability, we introduce a coarse-to-fine two-stage feature correspondence selection strategy that evaluates geometric consistency and structural contribution. Third, we develop an adaptive weighted pose estimation scheme that considers both distance and directional consistency, improving the robustness of feature matching under varying scene conditions. These components are jointly optimized within a sliding-window-based factor graph, integrating LiDAR feature factors, IMU pre-integration, and loop closure constraints. Extensive experiments on public datasets (KITTI, M2DGR) and a custom-collected dataset validate the proposed method’s effectiveness. Results show that our system consistently outperforms state-of-the-art approaches in accuracy and robustness, particularly in scenes with sparse structure, motion distortion, and dynamic interference, demonstrating its suitability for reliable real-world deployment.
(This article belongs to the Special Issue LiDAR Technology for Autonomous Navigation and Mapping)
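PCA-based multi-category feature extraction is typically realized by eigen-decomposing each point's neighborhood covariance and classifying points by linearity and planarity. The hedged sketch below illustrates that idea; the thresholds and neighborhood construction are assumptions, not the paper's parameters.

```python
import numpy as np

def classify_point(neighbors, edge_thr=0.7, plane_thr=0.7):
    """Label a point 'edge', 'plane', or 'weak' from the eigenvalues of its
    neighborhood covariance (lambda1 >= lambda2 >= lambda3)."""
    cov = np.cov(neighbors.T)
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]
    lam = lam / (lam.sum() + 1e-12)
    linearity = (lam[0] - lam[1]) / lam[0]
    planarity = (lam[1] - lam[2]) / lam[0]
    if linearity > edge_thr:
        return "edge"
    if planarity > plane_thr:
        return "plane"
    return "weak"          # noisy / weakly structured -> filtered out

line_pts  = np.c_[np.linspace(0, 1, 30), np.zeros(30), np.zeros(30)] + 1e-3 * np.random.randn(30, 3)
plane_pts = np.c_[np.random.rand(30, 2), np.zeros(30)] + 1e-3 * np.random.randn(30, 3)
print(classify_point(line_pts), classify_point(plane_pts))
```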

18 pages, 3913 KB  
Article
A Fracture Extraction Method for Full-Diameter Core CT Images Based on Semantic Segmentation
by Ruiqi Huang, Dexin Qiao, Gang Hui, Xi Liu, Qianxiao Su, Wenjie Wang, Jianzhong Bi and Yili Ren
Processes 2025, 13(7), 2221; https://doi.org/10.3390/pr13072221 - 11 Jul 2025
Viewed by 473
Abstract
Fractures play a critical role in the storage and migration of hydrocarbons within subsurface reservoirs, and their characteristics can be effectively studied through core sample analysis. This study proposes an automated fracture extraction method for full-diameter core Computed Tomography (CT) images based on a deep learning framework. A semantic segmentation network called SCTNet is employed to perform high-precision semantic segmentation, while a sliding window strategy is introduced to address the challenges associated with large-scale image processing during training and inference. The proposed method achieves a mean Intersection over Union (mIoU) of 72.14% and a pixel-level segmentation accuracy of 97% on the test dataset, outperforming traditional thresholding techniques and several state-of-the-art deep learning models. Beyond fracture detection, the method enables quantitative characterization of fracture-related parameters, including fracture proportion, dip angle, strike, and aperture. Experimental results indicate that the proposed approach provides a reliable and efficient solution for the interpretation of large-volume CT data. Compared to manual evaluation, the method significantly accelerates the analysis process, reducing the time required from hours to minutes, and demonstrates strong potential to enhance intelligent workflows for geological core fracture analysis.
(This article belongs to the Topic Exploitation and Underground Storage of Oil and Gas)
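A sliding-window strategy for large CT slices amounts to tiling the image, running the segmentation network on each tile, and stitching the predictions back together. The hedged sketch below uses a stand-in `predict_tile` function in place of the actual network, and the tile size and overlap are assumptions.

```python
import numpy as np

def predict_tile(tile):
    """Stand-in for the segmentation network (e.g., SCTNet); returns a per-pixel score."""
    return (tile > tile.mean() + 2 * tile.std()).astype(np.float32)

def _starts(size, tile, stride):
    s = list(range(0, size - tile + 1, stride))
    if s[-1] != size - tile:               # make sure the image border is covered
        s.append(size - tile)
    return s

def sliding_window_segment(image, tile=512, stride=384):
    """Tile a large CT slice, segment each tile, and average overlapping predictions."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float32)
    cnt = np.zeros((h, w), dtype=np.float32)
    for y in _starts(h, tile, stride):
        for x in _starts(w, tile, stride):
            out[y:y + tile, x:x + tile] += predict_tile(image[y:y + tile, x:x + tile])
            cnt[y:y + tile, x:x + tile] += 1.0
    return (out / cnt) > 0.5               # stitched binary fracture mask

ct_slice = np.random.randn(2048, 1536).astype(np.float32)
print(sliding_window_segment(ct_slice).shape)
```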
