
Search Results (1,605)

Search Parameters:
Keywords = digital filters

21 pages, 3536 KB  
Article
Batch Cyclic Posterior Selection Particle Filter and Its Application in TRN
by Zhiqiang Lyu, Xingzi Qiang, Wenwu Shi, Yingkui Gong and Longxing Wu
Electronics 2025, 14(21), 4257; https://doi.org/10.3390/electronics14214257 - 30 Oct 2025
Abstract
Terrain referenced navigation (TRN) determines position by comparing terrain height measurements with digital elevation maps (DEMs). However, terrain fluctuations create multimodal observation distributions, introducing significant nonlinearity that challenges fusion positioning algorithms. To address this, we propose a novel data fusion approach: batch cyclic posterior selection particle filter (BCPS-PF), applied to TRN. Our algorithm consists of two primary mechanisms. First, the batch cycle particle generation mechanism continuously generates particles conforming to the prior distribution. This is achieved by decomposing the state transition function and the state noise model during the prediction step. Particles from the previous time step are transformed via the state transition function, and noise sequences generated by the state noise model are added, forming batch cycle particles. Second, a particle selection mechanism filters particles to match the posterior distribution. This involves an update step in the fusion process, utilizing a rejection sampling technique. The batch cycle mechanism can be terminated by limiting the number of particles, and state estimation is derived by calculating the mean of these particles. Simulations demonstrate that our method improves positioning accuracy by over 10% compared with existing methods. Full article
(This article belongs to the Special Issue Recent Advance of Auto Navigation in Indoor Scenarios)
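The two mechanisms the abstract describes — batch-cycle particle generation and rejection-sampling selection — can be illustrated with a minimal 1D sketch. The transition, noise, and likelihood models below are hypothetical stand-ins, not the authors' TRN implementation, and the batch-local bound in the rejection step is a simplification:

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_cyclic_select(particles, transition, noise_std, likelihood,
                        n_keep=500, batch=256, max_batches=200):
    """Batch-cycle generation + rejection-sampling selection (1D toy).

    Each cycle propagates a random batch of last-step particles through
    the state transition and adds state noise (prior samples), then keeps
    each sample with probability proportional to its likelihood.  The
    cycle terminates once n_keep posterior particles are collected."""
    kept = []
    for _ in range(max_batches):
        idx = rng.integers(0, len(particles), batch)
        prior = transition(particles[idx]) + rng.normal(0.0, noise_std, batch)
        w = likelihood(prior)
        # rejection step: batch-local maximum stands in for the true bound
        accept = rng.random(batch) < w / w.max()
        kept.extend(prior[accept])
        if len(kept) >= n_keep:
            break
    kept = np.asarray(kept[:n_keep])
    return kept, float(kept.mean())   # state estimate = mean of selected particles
```

With a Gaussian prior around 0 and a likelihood peaked at 1, the selected particles cluster near the posterior mean, as the abstract's mean-of-particles estimator requires.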

34 pages, 7669 KB  
Article
JSPSR: Joint Spatial Propagation Super-Resolution Networks for Enhancement of Bare-Earth Digital Elevation Models from Global Data
by Xiandong Cai and Matthew D. Wilson
Remote Sens. 2025, 17(21), 3591; https://doi.org/10.3390/rs17213591 - 30 Oct 2025
Abstract
(1) Background: Digital Elevation Models (DEMs) encompass digital bare earth surface representations that are essential for spatial data analysis, such as hydrological and geological modelling, as well as for other applications, such as agriculture and environmental management. However, available bare-earth DEMs can have limited coverage or accessibility. Moreover, the majority of available global DEMs have lower spatial resolutions (∼30–90 m) and contain errors introduced by surface features such as buildings and vegetation. (2) Methods: This research presents an innovative method to convert global DEMs to bare-earth DEMs while enhancing their spatial resolution as measured by the improved vertical accuracy of each pixel, combined with reduced pixel size. We propose the Joint Spatial Propagation Super-Resolution network (JSPSR), which integrates Guided Image Filtering (GIF) and Spatial Propagation Network (SPN). By leveraging guidance features extracted from remote sensing images with or without auxiliary spatial data, our method can correct elevation errors and enhance the spatial resolution of DEMs. We developed a dataset for real-world bare-earth DEM Super-Resolution (SR) problems in low-relief areas utilising open-access data. Experiments were conducted on the dataset using JSPSR and other methods to predict 3 m and 8 m spatial resolution DEMs from 30 m spatial resolution Copernicus GLO-30 DEMs. (3) Results: JSPSR improved prediction accuracy by 71.74% on Root Mean Squared Error (RMSE) and reconstruction quality by 22.9% on Peak Signal-to-Noise Ratio (PSNR) compared to bicubic interpolated GLO-30 DEMs, and achieves 56.03% and 13.8% improvement on the same items against a baseline Single Image Super Resolution (SISR) method. Overall RMSE was 1.06 m at 8 m spatial resolution and 1.1 m at 3 m, compared to 3.8 m for GLO-30, 1.8 m for FABDEM and 1.3 m for FathomDEM, at either resolution. 
(4) Conclusions: JSPSR outperforms other methods in bare-earth DEM super-resolution tasks, with improved elevation accuracy compared to other state-of-the-art globally available datasets. Full article
(This article belongs to the Special Issue Artificial Intelligence Remote Sensing for Earth Observation)
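The two accuracy metrics quoted above (RMSE and PSNR) can be computed as below — a generic sketch, in which `data_range` defaults to the reference DEM's elevation span (an assumption; the paper may use a different peak value):

```python
import numpy as np

def rmse(pred, ref):
    """Root mean squared error between a predicted and a reference DEM."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def psnr(pred, ref, data_range=None):
    """Peak signal-to-noise ratio in dB; higher means better reconstruction."""
    if data_range is None:
        data_range = float(ref.max() - ref.min())
    mse = float(np.mean((pred - ref) ** 2))
    return float(10.0 * np.log10(data_range ** 2 / mse))
```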
17 pages, 1610 KB  
Systematic Review
Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth
by Mukhtar Ahmmad, Khurram Shahzad, Abid Iqbal and Mujahid Latif
Societies 2025, 15(11), 301; https://doi.org/10.3390/soc15110301 - 30 Oct 2025
Abstract
This systematic review synthesizes a decade of peer-reviewed research (2015–2025) examining the interplay of filter bubbles, echo chambers, and algorithmic bias in shaping youth engagement within social media. A total of 30 studies were analyzed, using the PRISMA 2020 framework, encompassing computational audits, simulation modeling, surveys, ethnographic accounts, and mixed-methods designs across diverse platforms, including Facebook, YouTube, Twitter/X, Instagram, TikTok, and Weibo. Results reveal three consistent patterns: (i) algorithmic systems structurally amplify ideological homogeneity, reinforcing selective exposure and limiting viewpoint diversity; (ii) youth demonstrate partial awareness and adaptive strategies to navigate algorithmic feeds, though their agency is constrained by opaque recommender systems and uneven digital literacy; and (iii) echo chambers not only foster ideological polarization but also serve as spaces for identity reinforcement and cultural belonging. Despite these insights, the evidence base suffers from geographic bias toward Western contexts, limited longitudinal research, methodological fragmentation, and conceptual ambiguity in key definitions. This review highlights the need for integrative, cross-cultural, and youth-centered approaches that bridge empirical evidence with lived experiences. Full article
(This article belongs to the Special Issue Algorithm Awareness: Opportunities, Challenges and Impacts on Society)

20 pages, 14554 KB  
Article
High-Resolution Flood Risk Assessment in Small Streams Using DSM–DEM Integration and Airborne LiDAR Data
by Seung-Jun Lee, Yong-Sik Han, Ji-Sung Kim and Hong-Sik Yun
Sustainability 2025, 17(21), 9616; https://doi.org/10.3390/su17219616 - 29 Oct 2025
Abstract
Flood risk in small streams is rising under climate change, as small catchments are highly vulnerable to short, intense storms. We develop a high-resolution assessment that integrates a Digital Surface Model (DSM), a Digital Elevation Model (DEM), and airborne LiDAR within a MATLAB (2025b) hydraulic workflow. A hybrid elevation model uses the DEM as baseline and selectively retains DSM-derived structures (levees, bridges, embankments), while filtering vegetation via DSM–DEM differencing with a 1.0 m threshold and a 2-pixel kernel. We simulate 10-, 30-, 50-, 100-, and 200-year return periods and calibrate the 200-year case to the July 2025 Sancheong event (793.5 mm over 105 h; peak 100 mm h−1). The hybrid approach improves predictions over DEM-only runs, capturing localized depth increases of 1.5–2.0 m behind embankments and reducing false positives in vegetated areas by 12–18% relative to raw DSM use. Multi-frequency maps show progressive expansion of inundation; in the 100-year scenario, 68% of the inundated area exceeds 2.0 m depth, while 0–1.0 m zones comprise only 13% of the footprint. Unlike previous DSM–DEM studies, this work introduces a selective integration approach that distinguishes structural and vegetative features to improve the physical realism of small-stream flood modeling. This transferable framework supports climate adaptation, emergency response planning, and sustainable watershed management in small-stream basins. Full article
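The DSM–DEM differencing step (1.0 m threshold, 2-pixel kernel) might be sketched as follows. The abstract does not specify the exact morphological operation, so the neighborhood-minimum test below is one plausible reading: broad structures (levees, embankments) exceed the threshold over a whole neighborhood, while patchy vegetation does not.

```python
import numpy as np

def hybrid_elevation(dem, dsm, diff_threshold=1.0, kernel=2):
    """DEM as baseline; retain DSM heights only where the DSM-DEM
    difference persists over a (2*kernel+1)-pixel neighborhood minimum."""
    diff = dsm - dem
    pad = np.pad(diff, kernel, mode="edge")
    h, w = diff.shape
    nb_min = np.full_like(diff, np.inf)
    # neighborhood minimum, computed by shifting the padded difference grid
    for dy in range(2 * kernel + 1):
        for dx in range(2 * kernel + 1):
            nb_min = np.minimum(nb_min, pad[dy:dy + h, dx:dx + w])
    structural = nb_min > diff_threshold
    return np.where(structural, dsm, dem)
```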

31 pages, 1382 KB  
Review
Towards Sustainable Buildings and Energy Communities: AI-Driven Transactive Energy, Smart Local Microgrids, and Life Cycle Integration
by Andrzej Ożadowicz
Energies 2025, 18(21), 5668; https://doi.org/10.3390/en18215668 - 29 Oct 2025
Abstract
The transition towards sustainable and low-carbon energy systems highlights the crucial role of buildings, microgrids, and local communities as key actors in enhancing resilience and achieving decarbonization targets. The application of artificial intelligence (AI) is of paramount importance as it enables accurate prediction, adaptive control, and optimization of distributed resources. This paper reviews recent advances in AI applications for transactive energy (TE) and dynamic energy management (DEM), focusing on their integration with building automation, microgrid coordination, and community energy exchanges. It also considers the emerging role of life cycle-based methods, such as life cycle assessment (LCA) and life cycle cost (LCC), in extending operational intelligence to long-term environmental and economic objectives. The analysis is based on a curated set of 97 publications identified through structured queries and thematic filtering. The findings indicate substantial advancement in methodological approaches, notably reinforcement learning (RL), hybrid model predictive control, federated and edge AI, and digital twin applications. However, this study also uncovers shortcomings in interoperability and in the integration of sustainability objectives. This paper contributes by consolidating fragmented research and proposing a multi-layered AI framework that aligns short-term performance with long-term resilience and sustainability. Full article

19 pages, 2107 KB  
Article
Multi-Feature Fusion and Cloud Restoration-Based Approach for Remote Sensing Extraction of Lake and Reservoir Water Bodies in Bijie City
by Bai Xue, Yiying Wang, Yanru Song, Changru Liu and Pi Ai
Appl. Sci. 2025, 15(21), 11490; https://doi.org/10.3390/app152111490 - 28 Oct 2025
Abstract
Current lake and reservoir water body extraction algorithms are confronted with two critical challenges: (1) design dependency on specific geographical features, leading to constrained cross-regional adaptability (e.g., the JRC Global Water Body Dataset achieves ~90% overall accuracy globally, while the ESA WorldCover 2020 reaches ~92% for water body classification, both showing degraded performance in complex karst terrains); (2) information loss due to cloud occlusion, compromising dynamic monitoring accuracy. To address these limitations, this study presents a multi-feature fusion and multi-level hierarchical extraction algorithm for lake and reservoir water bodies, leveraging the Google Earth Engine (GEE) cloud platform and Sentinel-2 multispectral imagery in the karst landscape of Bijie City. The proposed method integrates the Automated Water Extraction Index (AWEIsh) and Modified Normalized Difference Water Index (MNDWI) for initial water body extraction, followed by a comprehensive fusion of multi-source data—including Normalized Difference Vegetation Index (NDVI), Normalized Difference Built-up Index (NDBI), Normalized Difference Red-Edge Index (NDREI), Sentinel-2 B8/B9 spectral bands, and Digital Elevation Model (DEM). This strategy hierarchically mitigates vegetation shadows, topographic shadows, and artificial feature non-water targets. A temporal flood frequency algorithm is employed to restore cloud-occluded water bodies, complemented by morphological filtering to exclude non-target water features (e.g., rivers and canals). Experimental validation using high-resolution reference data demonstrates that the algorithm achieves an overall extraction accuracy exceeding 96% in Bijie City, effectively suppressing dark object interference (e.g., false positives due to topographic and anthropogenic features) while preserving water body boundary integrity. 
Compared with single-index methods (e.g., MNDWI), this method reduces false positive rates caused by building shadows and terrain shadows by 15–20%, and improves the IoU (Intersection over Union) by 6–13% in typical karst sub-regions. This research provides a universal technical framework for large-scale dynamic monitoring of lakes and reservoirs, particularly addressing the challenges of regional adaptability and cloud compositing in karst environments. Full article
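The two water indices used for the initial extraction have standard band formulations, sketched below. The formulas follow the published definitions of MNDWI (Xu, 2006) and AWEIsh (Feyisa et al., 2014); the reflectance values in the usage test are illustrative, and the paper's thresholds and hierarchical fusion logic are not reproduced here.

```python
import numpy as np

def mndwi(green, swir1):
    """Modified Normalized Difference Water Index: positive over open water."""
    return (green - swir1) / (green + swir1 + 1e-12)

def awei_sh(blue, green, nir, swir1, swir2):
    """Automated Water Extraction Index, shadow-suppressing variant."""
    return blue + 2.5 * green - 1.5 * (nir + swir1) - 0.25 * swir2
```

Water pixels (high green, very low SWIR/NIR reflectance) score positive on both indices, while vegetation (high NIR) scores negative — which is why the paper fuses them with NDVI, NDBI, NDREI, and DEM layers to reject shadows and dark non-water targets.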

26 pages, 18639 KB  
Article
Comparison of Two Miniaturized, Rectifiable Aerosol Photometers for Personal PM2.5 Monitoring in a Dusty Occupational Environment
by James D. Johnston, Scott C. Collingwood, James D. LeCheminant, Neil E. Peterson, Andrew J. South, Clifton B. Farnsworth, Ryan T. Chartier, Mary E. Thiel, Tanner P. Brown, Elisabeth S. Goss, Porter K. Jones, Seshananda Sanjel, Jayson R. Gifford and John D. Beard
Atmosphere 2025, 16(11), 1233; https://doi.org/10.3390/atmos16111233 - 25 Oct 2025
Abstract
Wearable, rectifiable aerosol photometers (WRAPs), instruments with combined nephelometer and on-board filter-based sampling capabilities, generally show strong correlations with reference instruments across a range of ambient and household PM2.5 concentrations. However, limited data exist on their performance when challenged by mixed aerosol exposures, such as those found in dusty occupational environments. Understanding how these instruments perform across a spectrum of environments is critical, as they are increasingly used in human health studies, including those involving concurrent PM2.5 and coarse dust exposures. The authors collected co-located, ~24 h breathing zone gravimetric and nephelometer PM2.5 measures using the MicroPEM v3.2A (RTI International) and the UPAS v2.1 PLUS (Access Sensor Technologies). Samples were collected from adult brick workers (n = 93) in Nepal during work and non-work activities. Median gravimetric/arithmetic mean (AM) PM2.5 concentrations for the MicroPEM and UPAS were 207.06 (interquartile range [IQR]: 216.24) and 737.74 (IQR: 1399.98) µg/m3, respectively (p < 0.0001), with a concordance correlation coefficient (CCC) of 0.26. The median stabilized inverse probability-weighted nephelometer PM2.5 concentrations, after gravimetric correction, for the MicroPEM and UPAS were 169.16 (IQR: 204.98) and 594.08 (IQR: 1001.00) µg/m3, respectively (p < 0.0001), with a CCC of 0.31. Digital microscope photos and electron micrographs of filters confirmed large particle breakthrough for both instruments. A possible explanation is that the miniaturized pre-separators were overwhelmed by high dust exposures. This study was unique in that it evaluated personal PM2.5 monitors in a high dust occupational environment using both gravimetric and nephelometer-based measures.
Our findings suggest that WRAPs may substantially overestimate personal PM2.5 exposures in environments with concurrently high PM2.5 and coarse dust levels, likely due to large particle breakthrough. This overestimation may obscure associations between exposures and health outcomes. For personal PM2.5 monitoring in dusty environments, the authors recommend traditional pump and cyclone or impaction-based sampling methods in the interim while miniaturized pre-separators for WRAPs are designed and validated for use in high dust environments. Full article
(This article belongs to the Section Air Quality and Health)
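The gravimetric correction applied to the nephelometer series works roughly as below — a generic sketch of rescaling an optical time series to a co-located filter mass, not the instrument vendors' firmware algorithm:

```python
import numpy as np

def gravimetric_correct(neph_series, gravimetric_mean):
    """Rescale a nephelometer PM2.5 time series so its time-averaged
    concentration matches the co-located filter (gravimetric) mean."""
    factor = gravimetric_mean / float(np.mean(neph_series))
    return neph_series * factor
```

Note that a single multiplicative factor cannot undo large-particle breakthrough: if coarse dust reached the filter, the gravimetric mean itself is inflated, and the "corrected" series inherits the bias — the mechanism behind the overestimation the authors report.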

25 pages, 1426 KB  
Article
A Novel DST-IV Efficient Parallel Implementation with Low Arithmetic Complexity
by Doru Florin Chiper and Dan Marius Dobrea
Electronics 2025, 14(21), 4137; https://doi.org/10.3390/electronics14214137 - 22 Oct 2025
Abstract
Discrete sine transform (DST) has numerous applications across various fields, including signal processing, image compression and coding, adaptive digital filtering, mathematics (such as partial differential equations or numerical solutions of differential equations), image reconstruction, and classification, among others. The primary disadvantage of DST class algorithms (DST-I, DST-II, DST-III, and DST-IV) is their substantial computational complexity (O(N log N)) during implementation. This paper proposes an innovative decomposition and real-time implementation for the DST-IV. This decomposition facilitates the execution of the algorithm in four or eight sections operating concurrently. These algorithms, comprising four- and eight-section variants, are primarily developed using a matrix factorization technique to decompose the DST-IV matrices. Consequently, the computational complexity and execution time of the developed algorithms are markedly reduced compared to the traditional implementation of DST-IV, resulting in significant time efficiency. The performance analysis conducted on three distinct Graphics Processing Unit (GPU) architectures indicates that a substantial speedup can be achieved. An average speedup ranging from 22.42 to 65.25 was observed, depending on the GPU architecture employed and the DST-IV implementation (with four or eight sections). Full article
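For reference, the transform being decomposed has a simple direct (O(N^2)) definition; the paper's contribution is factoring this matrix so that four or eight sections run concurrently, which this baseline sketch does not attempt. A useful sanity check is that the DST-IV matrix is involutory up to a factor of N/2:

```python
import numpy as np

def dst4(x):
    """Direct DST-IV: X[k] = sum_n x[n] * sin(pi*(2n+1)*(2k+1)/(4N))."""
    N = len(x)
    n = np.arange(N)
    S = np.sin(np.pi * np.outer(2 * n + 1, 2 * n + 1) / (4 * N))
    return S @ x
```

Because sqrt(2/N)·S is symmetric and orthogonal, applying `dst4` twice and scaling by 2/N recovers the input signal.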

16 pages, 663 KB  
Article
SAIL-Y: A Socioeconomic and Gender-Aware Career Recommender System
by Enrique J. Delahoz-Domínguez and Raquel Hijón-Neira
Electronics 2025, 14(20), 4121; https://doi.org/10.3390/electronics14204121 - 21 Oct 2025
Abstract
This study presents SAIL-Y (Sailing Artificial Intelligence for Learning in Youth), a novel gender-focused recommender system designed to promote female participation in STEM careers through data-driven guidance. Drawing inspiration from the metaphor of an academic journey as a voyage, SAIL-Y functions as a digital compass—leveraging socioeconomic profiles and standardised test results (Saber 11, Colombia) to help students navigate career decisions in high-impact academic fields. SAIL-Y integrates multiple machine learning strategies, including collaborative filtering, bootstrapped data augmentation to rebalance gender representation, and socioeconomic-aware conditioning, to generate personalised and bias-controlled career recommendations. The system is explicitly designed to skew recommendations toward STEM disciplines for female students, countering systemic underrepresentation in these fields. Using a dataset of 332,933 Colombian students (2010–2021), we evaluate the performance of different recommendation architectures under the SAIL-Y framework. The results show that a gender-oriented recommender design increases the proportion of STEM career recommendations for female students by up to 25% compared to reference models. Beyond technical contributions, this work proposes an ethically aligned paradigm for educational recommender systems—one that empowers rather than merely predicts. SAIL-Y is thus envisioned as both a methodological tool and a socio-educational intervention, supporting more equitable academic journeys for future generations. Full article
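The bootstrapped data augmentation used to rebalance gender representation can be sketched as minority oversampling with replacement. The group encoding and target share below are illustrative assumptions, not the SAIL-Y pipeline itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def rebalance(features, group, target_share=0.5):
    """Bootstrap-augment the underrepresented group (group == 1) by
    resampling its rows with replacement until it reaches target_share."""
    minority = np.flatnonzero(group == 1)
    majority = np.flatnonzero(group == 0)
    # rows of group 1 needed so that ones / total == target_share
    need = int(target_share / (1 - target_share) * len(majority))
    extra = rng.choice(minority, size=max(need - len(minority), 0), replace=True)
    idx = np.concatenate([majority, minority, extra])
    return features[idx], group[idx]
```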

21 pages, 4777 KB  
Article
Processing the Sensor Signal in a PI Control System Using an Adaptive Filter Based on Fuzzy Logic
by Jarosław Joostberens, Aurelia Rybak and Aleksandra Rybak
Symmetry 2025, 17(10), 1774; https://doi.org/10.3390/sym17101774 - 21 Oct 2025
Abstract
This paper presents an adaptive fuzzy filter applied to processing a signal from a voltage sensor fed to the input of an object in an automatic temperature control system with a PI controller. (1) The research goal was to develop an algorithm for processing the signal from an RMS voltage sensor, measured at the terminals of a heating element in a temperature control system with a PI controller, in a way that ensures good dynamic properties while maintaining an appropriate level of accuracy. (2) The paper presents a method for designing an adaptive fuzzy filter by synthesizing a first-order low-pass infinite impulse response (IIR) filter and a fuzzy model of the dependence of this filter parameter value on the modulus of the derivative of the measured quantity. The application of a model with a symmetric input and output structure and a modified fuzzy model with asymmetry resulting from the uneven distribution of modal values of singleton fuzzy sets at the output was shown. The innovation in the proposed solution is the use of a signal from a PI controller to determine the derivative module of the measured quantity and, using a fuzzy model, linking its instantaneous value with a digital filter parameter in the measurement chain with a sensor monitoring the signal at the input of the controlled object. It is demonstrated that the signal generated by the PI controller can be used in a control system to continuously determine the modulus of the time derivative of the signal measured at the input of the controlled object, also indicating the limitations of this method. The signal from the PI controller can also be used to select filter parameters. In such a situation, it can be treated as a reference signal representing the useful signal. The mean square error (MSE) was adopted as the criterion for matching the signal at the filter output to the reference signal. 
(3) Based on a comparative analysis of the results of using an adaptive fuzzy filter with a classic first-order IIR filter with an optimal parameter in the MSE sense, it was found that using a fuzzy filter yields better results, regardless of the structure of the fuzzy model used (symmetric or asymmetric). (4) The paper demonstrates that in the tested temperature control system, introducing a simple fuzzy model with one input characterized by three fuzzy sets, relating the modulus of the derivative of the signal developed by the PI controller to the value of the first-order IIR filter parameter, into the voltage sensor signal-processing algorithm gave significantly better results than using a first-order IIR filter with a constant optimal parameter in terms of MSE. The best results were obtained using a fuzzy model in which an intentional asymmetry in the modal values of the output fuzzy sets was introduced. Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Fuzzy Control)
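The core idea — a first-order low-pass IIR filter whose parameter is driven by the modulus of the signal derivative through a fuzzy model — can be sketched as below. The fuzzy inference is reduced here to piecewise-linear interpolation over three illustrative breakpoints (standing in for three fuzzy sets); the paper's membership functions, modal values, and use of the PI controller signal as the derivative source are not reproduced:

```python
import numpy as np

def fuzzy_alpha(deriv_mod, points=((0.0, 0.05), (0.5, 0.3), (2.0, 0.9))):
    """Map |dx/dt| to the filter parameter: small derivative -> heavy
    smoothing (small alpha), large derivative -> fast tracking."""
    xs, ys = zip(*points)
    return float(np.interp(deriv_mod, xs, ys))

def adaptive_iir(x, dt=1.0):
    """First-order low-pass y[n] = a*x[n] + (1-a)*y[n-1], with a chosen
    per sample from the derivative modulus of the measured signal."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for n in range(1, len(x)):
        a = fuzzy_alpha(abs(x[n] - x[n - 1]) / dt)
        y[n] = a * x[n] + (1 - a) * y[n - 1]
    return y
```

On a step input the filter tracks the transition quickly (large derivative, alpha near 1) yet smooths heavily once the signal settles — the dynamics/accuracy trade-off the abstract describes.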

18 pages, 3666 KB  
Article
Reinforcement Learning Enabled Intelligent Process Monitoring and Control of Wire Arc Additive Manufacturing
by Allen Love, Saeed Behseresht and Young Ho Park
J. Manuf. Mater. Process. 2025, 9(10), 340; https://doi.org/10.3390/jmmp9100340 - 18 Oct 2025
Abstract
Wire Arc Additive Manufacturing (WAAM) has been recognized as an efficient and cost-effective metal additive manufacturing technique due to its high deposition rate and scalability for large components. However, the quality and repeatability of WAAM parts are highly sensitive to process parameters such as arc voltage, current, wire feed rate, and torch travel speed, requiring advanced monitoring and adaptive control strategies. In this study, a vision-based monitoring system integrated with a reinforcement learning framework was developed to enable intelligent in situ control of WAAM. A custom optical assembly employing mirrors and a bandpass filter allowed simultaneous top and side views of the melt pool, enabling real-time measurement of layer height and width. These geometric features provide feedback to a tabular Q-learning algorithm, which adaptively adjusts voltage and wire feed rate through direct hardware-level control of stepper motors. Experimental validation across multiple builds with varying initial conditions demonstrated that the RL controller stabilized layer geometry, autonomously recovered from process disturbances, and maintained bounded oscillations around target values. While systematic offsets between digital measurements and physical dimensions highlight calibration challenges inherent to vision-based systems, the controller consistently prevented uncontrolled drift and corrected large deviations in deposition quality. The computational efficiency of tabular Q-learning enabled real-time operation on standard hardware without specialized equipment, demonstrating an accessible approach to intelligent process control. These results establish the feasibility of reinforcement learning as a robust, data-efficient control technique for WAAM, capable of real-time adaptation with minimal prior process knowledge. 
With improved calibration methods and expanded multi-physics sensing, this framework can advance toward precise geometric accuracy and support broader adoption of machine learning-based process monitoring and control in metal additive manufacturing. Full article
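The tabular Q-learning update at the heart of the controller is compact; the state/action encoding and reward below are illustrative, not the authors' melt-pool discretization:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, lr=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q[s, a] += lr * (r + gamma * max_a' Q[s_next, a'] - Q[s, a])."""
    Q[s, a] += lr * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q
```

Repeated updates with reward tied to layer-geometry error make the table prefer the corrective action (e.g., raising wire feed rate when the bead is low), which is what lets the controller run in real time on standard hardware.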

32 pages, 1067 KB  
Article
BMIT: A Blockchain-Based Medical Insurance Transaction System
by Jun Fei and Li Ling
Appl. Sci. 2025, 15(20), 11143; https://doi.org/10.3390/app152011143 - 17 Oct 2025
Abstract
The Blockchain-Based Medical Insurance Transaction System (BMIT) developed in this study addresses key issues in traditional medical insurance—information silos, data tampering, and privacy breaches—through innovative blockchain architectural design and technical infrastructure reconstruction. Built on a consortium blockchain architecture with FISCO BCOS (Financial Blockchain Shenzhen Consortium Blockchain Open Source Platform) as the underlying platform, the system leverages FISCO BCOS’s distributed ledger, granular access control, and efficient consensus algorithms to enable multi-stakeholder on-chain collaboration. Four node roles and data protocols are defined: hospitals (on-chain data providers) generate 3D coordinate hashes of medical data via an algorithmically enhanced Bloom Filter for on-chain certification; patients control data access via blockchain private keys and unique parameters; insurance companies verify eligibility/claims using on-chain Bloom filters; the blockchain network stores encrypted key data (public keys, Bloom filter coordinates, and timestamps) to ensure immutability and traceability. A 3D-enhanced Bloom filter—tailored for on-chain use with user-specific hash functions and key control—stores only 3D coordinates (not raw data), cutting storage costs for 100 records to 1.27 KB and reducing the error rate to near zero (1.77% lower than traditional schemes for 10,000 entries). Three core smart contracts (identity registration, medical information certification, and automated verification) enable the automation of on-chain processes. Performance tests conducted on a 4-node consortium chain indicate a transaction throughput of 736 TPS (Transactions Per Second) and a per-operation latency of 181.7 ms, which meets the requirements of large-scale commercial applications. 
BMIT’s three-layer design (“underlying blockchain + enhanced Bloom filter + smart contracts”) delivers a balanced, efficient blockchain medical insurance prototype, offering a reusable technical framework for industry digital transformation. Full article
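The "3D coordinate" idea — publishing only hash-derived coordinates on-chain, never the raw medical record — can be sketched as a toy membership structure. The hash construction, key handling, and dimensions below are illustrative assumptions, not BMIT's algorithmically enhanced Bloom filter:

```python
import hashlib

class Bloom3D:
    """Toy 3D-coordinate Bloom filter: each record maps to one (x, y, z)
    cell via three keyed hashes; only the coordinates are stored, so the
    structure certifies membership without revealing record contents."""

    def __init__(self, dim=1024, user_key=b"patient-key"):
        self.dim = dim
        self.key = user_key        # user-specific key, per the paper's design
        self.cells = set()

    def _coord(self, record: bytes):
        # one independent keyed hash per axis, reduced modulo the grid size
        return tuple(
            int.from_bytes(hashlib.sha256(self.key + axis + record).digest()[:4], "big") % self.dim
            for axis in (b"x", b"y", b"z")
        )

    def add(self, record: bytes):
        self.cells.add(self._coord(record))

    def __contains__(self, record: bytes):
        return self._coord(record) in self.cells
```

Storing whole (x, y, z) tuples rather than setting independent bits is one way to read the paper's near-zero error-rate claim: a false positive requires a simultaneous collision on all three axes.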

19 pages, 1765 KB  
Article
Reference High-Voltage Sensing Chain for the Assessment of Class 0.1-WB3 Instrument Transformers in the Frequency Range up to 150 kHz According to IEC 61869
by Mohamed Agazar, Claudio Iodice and Mario Luiso
Sensors 2025, 25(20), 6416; https://doi.org/10.3390/s25206416 - 17 Oct 2025
Viewed by 205
Abstract
This paper presents the development and characterization of a reference high-voltage sensing chain for the calibration and conformity assessment of Class 0.1-WB3 instrument transformers, in the extended frequency range up to 150 kHz, according to IEC 61869. The sensing chain, composed of a high-voltage divider, precision attenuators and high-pass filters, has been specifically developed and characterized. The chain features two parallel measurement paths: the first path, comprising the high-voltage divider and attenuator, is optimized for measuring the fundamental frequency superimposed with high-amplitude harmonics; the second path, consisting of the high-voltage divider followed by a high-pass filter, is dedicated to measuring very-low-level superimposed harmonic components by enhancing the signal-to-noise ratio. These two paths are integrated with a digitizer to form a complete and modular measurement chain. The expanded uncertainty of measurement has been thoroughly evaluated and confirms the chain’s suitability for assessing the Class 0.1-WB3 compliance of instrument transformers. Additionally, the chain architecture enables a future extension up to 500 kHz, addressing the growing need to evaluate instrument transformers under high-frequency power quality disturbances and improving the sensing capability in this field. Full article
(This article belongs to the Section Electronic Sensors)
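The role of the high-pass path, suppressing the dominant fundamental so that low-level harmonic components use more of the digitizer's dynamic range, can be illustrated with a first-order filter response. This is purely illustrative: the corner frequency below is an arbitrary assumption, not a parameter of the paper's characterized precision network.

```python
import math

def highpass_gain(f_hz: float, fc_hz: float) -> float:
    """Magnitude response of an ideal first-order high-pass filter.

    Illustrative only: the paper's filter is a characterized hardware
    network, not this textbook transfer function.
    """
    ratio = f_hz / fc_hz
    return ratio / math.sqrt(1.0 + ratio * ratio)

# With a (hypothetical) 2 kHz corner, a 50 Hz fundamental is attenuated
# by roughly a factor of 40, while a 10 kHz harmonic passes nearly
# unchanged, which is what improves the SNR for small harmonics.
fundamental = highpass_gain(50.0, 2000.0)
harmonic = highpass_gain(10000.0, 2000.0)
```

After attenuation of the fundamental, the digitizer's input range can be scaled to the residual harmonic content instead of the full line voltage.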

17 pages, 414 KB  
Article
DQMAF—Data Quality Modeling and Assessment Framework
by Razan Al-Toq and Abdulaziz Almaslukh
Information 2025, 16(10), 911; https://doi.org/10.3390/info16100911 - 17 Oct 2025
Viewed by 442
Abstract
In today’s digital ecosystem, where millions of users interact with diverse online services and generate vast amounts of textual, transactional, and behavioral data, ensuring the trustworthiness of this information has become a critical challenge. Low-quality data—manifesting as incompleteness, inconsistency, duplication, or noise—not only undermines analytics and machine learning models but also exposes unsuspecting users to unreliable services, compromised authentication mechanisms, and biased decision-making processes. Traditional data quality assessment methods, largely based on manual inspection or rigid rule-based validation, cannot cope with the scale, heterogeneity, and velocity of modern data streams. To address this gap, we propose DQMAF (Data Quality Modeling and Assessment Framework), a generalized machine learning–driven approach that systematically profiles, evaluates, and classifies data quality to protect end-users and enhance the reliability of Internet services. DQMAF introduces an automated profiling mechanism that measures multiple dimensions of data quality—completeness, consistency, accuracy, and structural conformity—and aggregates them into interpretable quality scores. Records are then categorized into high, medium, and low quality, enabling downstream systems to filter or adapt their behavior accordingly. A distinctive strength of DQMAF lies in integrating profiling with supervised machine learning models, producing scalable and reusable quality assessments applicable across domains such as social media, healthcare, IoT, and e-commerce. The framework incorporates modular preprocessing, feature engineering, and classification components using Decision Trees, Random Forest, XGBoost, AdaBoost, and CatBoost to balance performance and interpretability. We validate DQMAF on a publicly available Airbnb dataset, showing its effectiveness in detecting and classifying data issues with high accuracy. 
The results highlight its scalability and adaptability for real-world big data pipelines, supporting user protection, document and text-based classification, and proactive data governance while improving trust in analytics and AI-driven applications. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)
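A toy version of the profiling-and-banding idea (per-record quality dimensions aggregated into a score, then mapped to high/medium/low) might look like the sketch below. The field names, equal weights, and thresholds are invented for illustration; the actual framework also scores accuracy and structural conformity and feeds the resulting profiles into supervised classifiers.

```python
def profile_record(record: dict, required: list, checks: dict) -> float:
    """Aggregate completeness and consistency into one quality score in [0, 1].

    Hypothetical sketch: equal 50/50 weighting is an assumption, and only
    two of the framework's quality dimensions are modeled here.
    """
    # Completeness: fraction of required fields that are present and non-empty.
    completeness = sum(
        record.get(f) not in (None, "") for f in required
    ) / len(required)
    # Consistency: fraction of domain rules the record satisfies.
    consistency = sum(check(record) for check in checks.values()) / len(checks)
    return 0.5 * completeness + 0.5 * consistency

def quality_band(score: float) -> str:
    """Map a score to the high/medium/low bands used by downstream filters."""
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "medium"
    return "low"

# Example with invented Airbnb-like fields: one required field is missing,
# but both consistency rules hold.
listing = {"price": 120, "city": "Paris", "rating": 4.7}
rules = {
    "price_positive": lambda r: r.get("price", 0) > 0,
    "rating_in_range": lambda r: 0 <= r.get("rating", -1) <= 5,
}
score = profile_record(listing, ["price", "city", "rating", "host_id"], rules)
```

Downstream systems can then route or filter records by band, e.g. excluding "low" records from model training.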

26 pages, 19488 KB  
Article
A Joint Method on Dynamic States Estimation for Digital Twin of Floating Offshore Wind Turbines
by Hao Xie, Ling Wan, Fan Shi, Jianjian Xin, Hu Zhou, Ben He, Chao Jin and Constantine Michailides
J. Mar. Sci. Eng. 2025, 13(10), 1981; https://doi.org/10.3390/jmse13101981 - 16 Oct 2025
Viewed by 239
Abstract
Dynamic state estimation of floating offshore wind turbines (FOWTs) in complex marine environments is a core challenge for digital twin systems. This study proposes a joint estimation framework that integrates windowed dynamic mode decomposition (W-DMD) and an adaptive strong tracking Kalman filter (ASTKF). W-DMD extracts dominant modes under stochastic excitations through a sliding-window strategy and constructs an interpretable reduced-order state-space model. ASTKF is then employed to enhance estimation robustness against environmental uncertainties and noise. The framework is validated through numerical simulations under turbulent wind and wave conditions, demonstrating high estimation accuracy and strong robustness against sudden environmental disturbances. The results indicate that the proposed method provides a computationally efficient and interpretable tool for FOWT digital twins, laying the foundation for predictive maintenance and optimal control. Full article
(This article belongs to the Section Ocean Engineering)
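The core of the reduced-order modeling step, exact dynamic mode decomposition on a single window of snapshots, can be sketched with NumPy as below. This omits the paper's sliding-window update and the ASTKF stage entirely, and the rank truncation shown is a standard DMD choice rather than the authors' specific procedure.

```python
import numpy as np

def dmd_modes(snapshots: np.ndarray, rank: int):
    """Exact DMD on one window of snapshots (columns = states over time).

    Returns the eigenvalues of the reduced operator and the corresponding
    DMD modes. Sketch only: W-DMD wraps this in a sliding window, and the
    paper pairs the reduced model with an adaptive strong tracking KF.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]      # paired snapshot matrices
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]     # rank-r truncation
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s      # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T / s @ W                 # exact DMD modes
    return eigvals, modes
```

For data generated by a linear system, the recovered eigenvalues match those of the true state-transition matrix, which is what makes the reduced model interpretable (each eigenvalue encodes one mode's growth rate and frequency).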
