Search Results (375)

Search Parameters:
Keywords = FAIR datasets

19 pages, 3081 KB  
Article
Temporal and Statistical Insights into Multivariate Time Series Forecasting of Corn Outlet Moisture in Industrial Continuous-Flow Drying Systems
by Marko Simonič and Simon Klančnik
Appl. Sci. 2025, 15(16), 9187; https://doi.org/10.3390/app15169187 - 21 Aug 2025
Abstract
Corn drying is a critical post-harvest process to ensure product quality and compliance with moisture standards. Traditional optimization approaches often overlook dynamic interactions between operational parameters and environmental factors in industrial continuous-flow drying systems. This study integrates statistical analysis and deep learning to predict outlet moisture content, leveraging a dataset of 3826 observations from an operational dryer. The effects of inlet moisture, target air temperature, and material discharge interval on the thermal behavior of the system were evaluated through linear regression and t-tests, which provided interpretable insights into process dependencies. Three neural network architectures (LSTM, GRU, and TCN) were benchmarked for multivariate time-series forecasting of outlet corn moisture, with hyperparameters optimized using grid search to ensure fair performance comparison. Results demonstrated GRU’s superior performance in terms of absolute deviations, achieving the lowest mean absolute error (MAE = 0.304%) and competitive mean squared error (MSE = 0.304%), compared to LSTM (MAE = 0.368%, MSE = 0.291%) and TCN (MAE = 0.397%, MSE = 0.315%). While GRU excelled in average prediction accuracy, LSTM’s lower MSE highlighted its robustness against extreme deviations. The hybrid methodology bridges statistical insights for interpretability with deep learning’s dynamic predictive capabilities, offering a scalable framework for real-time process optimization. By combining traditional analytical methods (e.g., regression and t-tests) with deep learning-driven forecasting, this work advances intelligent monitoring and control of industrial drying systems, enhancing process stability, ensuring compliance with moisture standards, and indirectly supporting energy efficiency by reducing over-drying and enabling more consistent operation.

27 pages, 4153 KB  
Article
Mitigating Context Bias in Vision–Language Models via Multimodal Emotion Recognition
by Constantin-Bogdan Popescu, Laura Florea and Corneliu Florea
Electronics 2025, 14(16), 3311; https://doi.org/10.3390/electronics14163311 - 20 Aug 2025
Abstract
Vision–Language Models (VLMs) have become key contributors to the state of the art in contextual emotion recognition, demonstrating a superior ability to understand the relationship between context, facial expressions, and interactions in images compared to traditional approaches. However, their reliance on contextual cues can introduce unintended biases, especially when the background does not align with the individual’s true emotional state. This raises concerns for the reliability of such models in real-world applications, where robustness and fairness are critical. In this work, we explore the limitations of current VLMs in emotionally ambiguous scenarios and propose a method to overcome contextual bias. Existing VLM-based captioning solutions tend to overweight background and contextual information when determining emotion, often at the expense of the individual’s actual expression. To study this phenomenon, we created synthetic datasets by automatically extracting people from the original images using YOLOv8 and placing them on randomly selected backgrounds from the Landscape Pictures dataset. This allowed us to reduce the correlation between emotional expression and background context while preserving body pose. Through discriminative analysis of VLM behavior on images with both correct and mismatched backgrounds, we find that in 93% of the cases, the predicted emotions vary based on the background—even when models are explicitly instructed to focus on the person. To address this, we propose a multimodal approach (named BECKI) that incorporates body pose, full image context, and a novel description stream focused exclusively on identifying the emotional discrepancy between the individual and the background. Our primary contribution is not just in identifying the weaknesses of existing VLMs, but in proposing a more robust and context-resilient solution. Our method achieves up to 96% accuracy, highlighting its effectiveness in mitigating contextual bias. 
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence)

21 pages, 2544 KB  
Article
Towards Fair Graph Neural Networks via Counterfactual and Balance
by Zhiguo Xiao, Yangfan Zhou, Dongni Li and Ke Wang
Information 2025, 16(8), 704; https://doi.org/10.3390/info16080704 - 19 Aug 2025
Abstract
In recent years, graph neural networks (GNNs) have shown powerful performance in processing non-Euclidean data. However, similar to other machine-learning algorithms, GNNs can amplify data bias in high-risk decision-making systems, which can easily lead to unfairness in the final decision-making results. At present, a large number of studies focus on solving the fairness problem of GNNs, but the existing methods mostly rely on building complex model architectures or on techniques from outside the GNN field. To this end, this paper proposes FairCNCB (Fair Graph Neural Network based on Counterfactual and Category Balance) to address the problem of class imbalance in minority sensitive attribute groups. First, we conduct a causal analysis of fair representation and employ an adversarial network to generate counterfactual node samples, effectively mitigating bias induced by sensitive attributes. Second, we calculate weights for minority sensitive attribute groups and reconstruct the loss function to achieve fairness of sensitive attribute classes among different groups. The synergy between the two modules optimizes GNNs from multiple dimensions and significantly improves the performance of GNNs in terms of fairness. Experimental results on three datasets show the effectiveness and fairness of FairCNCB. The performance metrics (such as AUC, F1, and ACC) have been improved by approximately 2%, and the fairness metrics (△sp, △eo) have been enhanced by approximately 5%.

28 pages, 2383 KB  
Article
CIM-LP: A Credibility-Aware Incentive Mechanism Based on Long Short-Term Memory and Proximal Policy Optimization for Mobile Crowdsensing
by Sijia Mu and Huahong Ma
Electronics 2025, 14(16), 3233; https://doi.org/10.3390/electronics14163233 - 14 Aug 2025
Abstract
In the field of mobile crowdsensing (MCS), a large number of tasks rely on the participation of ordinary mobile device users for data collection and processing. This model has shown great potential for applications in environmental monitoring, traffic management, public safety, and other areas. However, the enthusiasm of participants and the quality of uploaded data directly affect the reliability and practical value of the sensing results. Therefore, the design of incentive mechanisms has become a core issue in driving the healthy operation of MCS. The existing research, when optimizing long-term utility rewards for participants, has often failed to fully consider dynamic changes in trustworthiness. It has typically relied on historical data from a single point in time, overlooking the long-term dependencies in the time series, which results in suboptimal decision-making and limits the overall efficiency and fairness of sensing tasks. To address this issue, a credibility-aware incentive mechanism based on long short-term memory and proximal policy optimization (CIM-LP) is proposed. The mechanism employs a Markov decision process (MDP) model to describe the decision-making process of the participants. Without access to global information, an incentive model combining long short-term memory (LSTM) networks and proximal policy optimization (PPO), collectively referred to as LSTM-PPO, is utilized to formulate the most reasonable and effective sensing duration strategy for each participant, aiming to maximize the utility reward. After task completion, the participants’ credibility is dynamically updated by evaluating the quality of the uploaded data, which then adjusts their utility rewards for the next phase. 
Simulation results based on real datasets show that, compared with several existing incentive algorithms, the CIM-LP mechanism increases the average utility of the participants by 6.56% to 112.76% and the task completion rate by 16.25% to 128.71%, demonstrating its significant advantages in improving data quality and task completion efficiency.

22 pages, 3139 KB  
Article
A Counterfactual Fine-Grained Aircraft Classification Network for Remote Sensing Images Based on Normalized Coordinate Attention
by Zeya Zhao, Wenyin Tuo, Shuai Zhang and Xinbo Zhao
Appl. Sci. 2025, 15(16), 8903; https://doi.org/10.3390/app15168903 - 12 Aug 2025
Abstract
Fine-grained aircraft classification is a critical task in remote sensing image processing, aiming to precisely distinguish between different types of aircraft in aerial images. Due to the high visual similarity among aircraft targets in remote sensing images, accurately capturing subtle and discriminative features becomes a key technical challenge for fine-grained aircraft classification. In this context, we propose a Normalized Coordinate Attention-Based Counterfactual Classification Network (NCC-Net), which emphasizes the spatial positional information of aircraft targets and effectively captures long-range dependencies, thereby enabling precise localization of various aircraft components. Furthermore, we analyze the proposed network from a causal perspective, encouraging the model to focus on key discriminative features of the aircraft while minimizing distraction from the surrounding environment and background. Experimental results on three benchmark datasets demonstrate the superiority of our method. Specifically, NCC-Net achieves Top-1 classification accuracies of 97.7% on FAIR1M, 95.2% on MTARSI2, and 98.4% on ARSI120, outperforming several state-of-the-art methods. These results highlight the effectiveness and generalizability of our proposed method for fine-grained remote sensing target recognition.

20 pages, 3389 KB  
Article
A Reputation-Aware Defense Framework for Strategic Behaviors in Federated Learning
by Yixuan Cai, Jianbo Xu, Zhuotao Lian, Kei Chi Wing Brian, Yuxing Li and Jiantao Xu
Telecom 2025, 6(3), 60; https://doi.org/10.3390/telecom6030060 - 11 Aug 2025
Abstract
Federated Learning (FL) enables privacy-preserving model training across distributed clients. However, its reliance on voluntary client participation makes it vulnerable to strategic behaviors—actions that are not overtly malicious but significantly impair model convergence and fairness. Existing defense methods primarily focus on explicit attacks, overlooking the challenges posed by economically motivated “pseudo-honest” clients. To address this gap, we propose a Reputation-Aware Defense Framework to mitigate strategic behaviors in FL. This framework introduces a multi-dimensional dynamic reputation model that evaluates client behaviors based on gradient alignment, participation consistency, and update stability. The resulting reputation scores are incorporated into both aggregation and incentive mechanisms, forming a behavior-feedback loop that rewards honest participation and penalizes opportunistic strategies. We theoretically prove the convergence of reputation scores, the suppression of low-quality updates in aggregation, and the emergence of honest participation as a Nash equilibrium under the incentive mechanism. Experiments on datasets such as CIFAR-10, FEMNIST, and MIMIC-III demonstrate that our approach significantly outperforms baseline methods in accuracy, fairness, and robustness, even when up to 60% of clients act strategically. This study bridges trust modeling and robust optimization in FL, offering a secure foundation for federated systems operating in open and incentive-driven environments.

23 pages, 8286 KB  
Article
Context-Guided SAR Ship Detection with Prototype-Based Model Pretraining and Check–Balance-Based Decision Fusion
by Haowen Zhou, Zhe Geng, Minjie Sun, Linyi Wu and He Yan
Sensors 2025, 25(16), 4938; https://doi.org/10.3390/s25164938 - 10 Aug 2025
Abstract
To address the challenging problem of multi-scale inshore–offshore ship detection in synthetic aperture radar (SAR) remote sensing images, we propose a novel deep learning-based automatic ship detection method within the framework of compositional learning. The proposed method is supported by three pillars: context-guided region proposal, prototype-based model pretraining, and multi-model ensemble learning. To reduce false alarms induced by discrete ground clutter, prior knowledge of the harbour’s layout is exploited to generate land masks for terrain delimitation. To prepare the model for the diverse ship targets of different sizes and orientations it might encounter in the test environment, a novel cross-dataset model pretraining strategy is devised, where the SAR images of several key ship target prototypes from the auxiliary dataset are used to support class-incremental learning. To combine the advantages of diverse model architectures, an adaptive decision-level fusion framework is proposed, which consists of three components: a dynamic confidence threshold assignment strategy based on the sizes of targets, a weighted fusion mechanism based on president–senate check–balance, and Soft-NMS-based Dense Group Target Bounding Box Fusion (Soft-NMS-DGT-BBF). The performance enhancements brought by contextual knowledge-aided terrain delimitation, cross-dataset prototype-based model pretraining, and check–balance-based adaptive decision-level fusion are validated with a series of ingeniously devised experiments based on the FAIR-CSAR-Ship dataset.
(This article belongs to the Special Issue SAR Imaging Technologies and Applications)

21 pages, 655 KB  
Article
A Novel Framework Leveraging Large Language Models to Enhance Cold-Start Advertising Systems
by Albin Uruqi, Iosif Viktoratos and Athanasios Tsadiras
Future Internet 2025, 17(8), 360; https://doi.org/10.3390/fi17080360 - 8 Aug 2025
Abstract
The cold-start problem remains a critical challenge in personalized advertising, where users with limited or no interaction history often receive suboptimal recommendations. This study introduces a novel, three-stage framework that systematically integrates transformer architectures and large language models (LLMs) to improve recommendation accuracy, transparency, and user experience throughout the entire advertising pipeline. The proposed approach begins with transformer-enhanced feature extraction, leveraging self-attention and learned positional encodings to capture deep semantic relationships among users, ads, and context. It then employs an ensemble integration strategy combining enhanced state-of-the-art models with optimized aggregation for robust prediction. Finally, an LLM-driven enhancement module performs semantic reranking, personalized message refinement, and natural language explanation generation while also addressing cold-start scenarios through pre-trained knowledge. The LLM component further supports diversification, fairness-aware ranking, and sentiment sensitivity in order to ensure more relevant, diverse, and ethically grounded recommendations. Extensive experiments on the DigiX and Avazu datasets demonstrate notable gains in click-through rate (CTR) prediction, while an in-depth real user evaluation showcases improvements in perceived ad relevance, message quality, transparency, and trust. This work advances the state of the art by combining CTR models with interpretability and contextual reasoning. The strengths of the proposed method, such as its innovative integration of components, empirical validation, multifaceted LLM application, and ethical alignment, highlight its potential as a robust, future-ready solution for personalized advertising.
(This article belongs to the Special Issue Information Networks with Human-Centric LLMs)

23 pages, 2640 KB  
Article
DenseNet-Based Classification of EEG Abnormalities Using Spectrograms
by Lan Wei and Catherine Mooney
Algorithms 2025, 18(8), 486; https://doi.org/10.3390/a18080486 - 5 Aug 2025
Abstract
Electroencephalogram (EEG) analysis is essential for diagnosing neurological disorders but typically requires expert interpretation and significant time. Purpose: This study aims to automate the classification of normal and abnormal EEG recordings to support clinical diagnosis and reduce manual workload. Automating the initial screening of EEGs can help clinicians quickly identify potential neurological abnormalities, enabling timely intervention and guiding further diagnostic and treatment strategies. Methodology: We utilized the Temple University Hospital EEG dataset to develop a DenseNet-based deep learning model. To enable a fair comparison of different EEG representations, we used three input types: signal images, spectrograms, and scalograms. To reduce dimensionality and simplify computation, we focused on two channels: T5 and O1. For interpretability, we applied Local Interpretable Model-agnostic Explanations (LIME) and Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize the EEG regions influencing the model’s predictions. Key Findings: Among the input types, spectrogram-based representations achieved the highest classification accuracy, indicating that time-frequency features are especially effective for this task. The model demonstrated strong performance overall, and the integration of LIME and Grad-CAM provided transparent explanations of its decisions, enhancing interpretability. This approach offers a practical and interpretable solution for automated EEG screening, contributing to more efficient clinical workflows and better understanding of complex neurological conditions.
(This article belongs to the Special Issue AI-Assisted Medical Diagnostics)

25 pages, 5488 KB  
Article
Biased by Design? Evaluating Bias and Behavioral Diversity in LLM Annotation of Real-World and Synthetic Hotel Reviews
by Maria C. Voutsa, Nicolas Tsapatsoulis and Constantinos Djouvas
AI 2025, 6(8), 178; https://doi.org/10.3390/ai6080178 - 4 Aug 2025
Abstract
As large language models (LLMs) gain traction among researchers and practitioners, particularly in digital marketing for tasks such as customer feedback analysis and automated communication, concerns remain about the reliability and consistency of their outputs. This study investigates annotation bias in LLMs by comparing human and AI-generated annotation labels across sentiment, topic, and aspect dimensions in hotel booking reviews. Using the HRAST dataset, which includes 23,114 real user-generated review sentences and a synthetically generated corpus of 2000 LLM-authored sentences, we evaluate inter-annotator agreement between a human expert and three LLMs (ChatGPT-3.5, ChatGPT-4, and ChatGPT-4-mini) as a proxy for assessing annotation bias. Our findings show high agreement among LLMs, especially on synthetic data, but only moderate to fair alignment with human annotations, particularly in sentiment and aspect-based sentiment analysis. LLMs display a pronounced neutrality bias, often defaulting to neutral sentiment in ambiguous cases. Moreover, annotation behavior varies notably with task design, as manual, one-to-one prompting produces higher agreement with human labels than automated batch processing. The study identifies three distinct AI biases—repetition bias, behavioral bias, and neutrality bias—that shape annotation outcomes. These findings highlight how dataset complexity and annotation mode influence LLM behavior, offering important theoretical, methodological, and practical implications for AI-assisted annotation and synthetic content generation.
(This article belongs to the Special Issue AI Bias in the Media and Beyond)

16 pages, 2412 KB  
Article
Measuring Equitable Prosperity in the EU-27: Introducing the IDDO, a Composite Index of Growth and Income Inequality (2005–2024)
by Narcis Eduard Mitu and George Teodor Mitu
World 2025, 6(3), 103; https://doi.org/10.3390/world6030103 - 1 Aug 2025
Abstract
This article introduces the Index of Distributive and Developmental Outlook (IDDO), a composite indicator designed to jointly assess economic performance and income inequality across EU-27 Member States. While GDP per capita is widely used to evaluate national prosperity, and the Gini coefficient captures income distribution, their separate use often obscures the interaction between growth and equity—an essential dimension of sustainable development. To address this gap, the IDDO integrates normalized values of both indicators using arithmetic and geometric means. The study applies the IDDO to a longitudinal dataset covering the years 2005, 2014, and 2024, allowing for comparative and temporal analysis. Based on IDDO scores, countries are classified into four development types: balanced development, growth with inequality, equity with stagnation, and dual vulnerability. Results show that while some Member States, such as Luxembourg, Czechia, and Slovenia, maintain consistently high IDDO levels, others—including Bulgaria, Romania, and Latvia—exhibit persistent challenges in aligning growth with equitable outcomes. The findings underscore the need for cohesion policies that prioritize not only economic convergence but also distributive fairness. The IDDO provides a practical and adaptable tool for diagnosing development patterns, benchmarking performance, and informing policy design within the EU framework.

22 pages, 2909 KB  
Article
Novel Federated Graph Contrastive Learning for IoMT Security: Protecting Data Poisoning and Inference Attacks
by Amarudin Daulay, Kalamullah Ramli, Ruki Harwahyu, Taufik Hidayat and Bernardi Pranggono
Mathematics 2025, 13(15), 2471; https://doi.org/10.3390/math13152471 - 31 Jul 2025
Abstract
Malware evolution presents growing security threats for resource-constrained Internet of Medical Things (IoMT) devices. Conventional federated learning (FL) often suffers from slow convergence, high communication overhead, and fairness issues in dynamic IoMT environments. In this paper, we propose FedGCL, a secure and efficient FL framework integrating contrastive graph representation learning for enhanced feature discrimination, a Jain-index-based fairness-aware aggregation mechanism, an adaptive synchronization scheduler to optimize communication rounds, and secure aggregation via homomorphic encryption within a Trusted Execution Environment. We evaluate FedGCL on four benchmark malware datasets (Drebin, Malgenome, Kronodroid, and TUANDROMD) using 5 to 15 graph neural network clients over 20 communication rounds. Our experiments demonstrate that FedGCL achieves 96.3% global accuracy within three rounds and converges to 98.9% by round twenty—reducing required training rounds by 45% compared to FedAvg—while incurring only approximately 10% additional computational overhead. By preserving patient data privacy at the edge, FedGCL enhances system resilience without sacrificing model performance. These results indicate FedGCL’s promise as a secure, efficient, and fair federated malware detection solution for IoMT ecosystems.

25 pages, 1319 KB  
Article
Beyond Performance: Explaining and Ensuring Fairness in Student Academic Performance Prediction with Machine Learning
by Kadir Kesgin, Salih Kiraz, Selahattin Kosunalp and Bozhana Stoycheva
Appl. Sci. 2025, 15(15), 8409; https://doi.org/10.3390/app15158409 - 29 Jul 2025
Abstract
This study addresses fairness in machine learning for student academic performance prediction using the UCI Student Performance dataset. We comparatively evaluate logistic regression, Random Forest, and XGBoost, integrating the Synthetic Minority Oversampling Technique (SMOTE) to address class imbalance and 5-fold cross-validation for robust model training. A comprehensive fairness analysis is conducted, considering sensitive attributes such as gender, school type, and socioeconomic factors, including parental education (Medu and Fedu), cohabitation status (Pstatus), and family size (famsize). Using the AIF360 library, we compute the demographic parity difference (DP) and equalized odds difference (EO) to assess model biases across diverse subgroups. Our results demonstrate that XGBoost achieves high predictive performance (accuracy: 0.789; F1 score: 0.803) while maintaining low bias for socioeconomic attributes, offering a balanced approach to fairness and performance. A sensitivity analysis of bias mitigation strategies further enhances the study, advancing equitable artificial intelligence in education by incorporating socially relevant factors.
(This article belongs to the Special Issue Challenges and Trends in Technology-Enhanced Learning)

22 pages, 3267 KB  
Article
Identifying Deformation Drivers in Dam Segments Using Combined X- and C-Band PS Time Series
by Jonas Ziemer, Jannik Jänichen, Gideon Stein, Natascha Liedel, Carolin Wicker, Katja Last, Joachim Denzler, Christiane Schmullius, Maha Shadaydeh and Clémence Dubois
Remote Sens. 2025, 17(15), 2629; https://doi.org/10.3390/rs17152629 - 29 Jul 2025
Abstract
Dams play a vital role in securing water and electricity supplies for households and industry, and they contribute significantly to flood protection. Regular monitoring of dam deformations holds fundamental socio-economic and ecological importance. Traditionally, this has relied on time-consuming in situ techniques that offer either high spatial or temporal resolution. Persistent Scatterer Interferometry (PSI) addresses these limitations, enabling high-resolution monitoring in both domains. Sensors such as TerraSAR-X (TSX) and Sentinel-1 (S-1) have proven effective for deformation analysis with millimeter accuracy. Combining TSX and S-1 datasets enhances monitoring capabilities by leveraging the high spatial resolution of TSX with the broad coverage of S-1. This improves monitoring by increasing PS point density, reducing revisit intervals, and facilitating the detection of environmental deformation drivers. This study aims to investigate two objectives: first, we evaluate the benefits of a spatially and temporally densified PS time series derived from TSX and S-1 data for detecting radial deformations in individual dam segments. To support this, we developed the TSX2StaMPS toolbox, integrated into the updated snap2stamps workflow for generating single-master interferogram stacks using TSX data. Second, we identify deformation drivers using water level and temperature as exogenous variables. The five-year study period (2017–2022) was conducted on a gravity dam in North Rhine-Westphalia, Germany, which was divided into logically connected segments. The results were compared to in situ data obtained from pendulum measurements. Linear models demonstrated a fair agreement between the combined time series and the pendulum data (R2 = 0.5; MAE = 2.3 mm). Temperature was identified as the primary long-term driver of periodic deformations of the gravity dam. 
Following the filling of the reservoir, the RMSE of the PS data increased from 0.9 mm to 3.9 mm, suggesting that water level changes, rather than temperature, drive short-term variations in the SAR signal. Upon full impoundment, the mean deformation amplitude decreased by approximately 1.7 mm toward the downstream side of the dam, which was attributed to the higher water pressure. The last five meters of water level rise showed higher feature importance due to interaction effects with temperature. The study concludes that integrating multiple PS datasets for dam monitoring is beneficial, particularly for dams where few PS points can be identified using a single sensor or where pendulum systems are not installed. Identifying the drivers of deformation is feasible and can be incorporated into existing monitoring frameworks. Full article
(This article belongs to the Special Issue Dam Stability Monitoring with Satellite Geodesy II)
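The linear-model comparison of deformation against exogenous drivers described in the abstract can be sketched in Python as an ordinary-least-squares fit. This is a minimal illustration, not the study's pipeline: the data, coefficients, value ranges, and noise level below are all synthetic assumptions.

```python
import numpy as np

# Hypothetical data: regress radial deformation (mm) on the two exogenous
# drivers named in the abstract (air temperature and reservoir water level).
rng = np.random.default_rng(0)
n = 120
temp = rng.uniform(-5, 25, n)        # air temperature in degrees C (assumed range)
level = rng.uniform(230, 245, n)     # water level in m a.s.l. (assumed range)
defo = 0.12 * temp - 0.3 * (level - 240) + rng.normal(0, 1.0, n)  # synthetic mm

X = np.column_stack([np.ones(n), temp, level])   # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, defo, rcond=None)  # ordinary least squares
pred = X @ beta

# Same agreement metrics the abstract reports (R2 and MAE)
mae = np.mean(np.abs(defo - pred))
r2 = 1 - np.sum((defo - pred) ** 2) / np.sum((defo - defo.mean()) ** 2)
print(f"R2 = {r2:.2f}, MAE = {mae:.2f} mm")
```

On real PS and pendulum time series, the same R2 and MAE computations would quantify how well temperature and water level explain the observed deformation.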

27 pages, 3211 KB  
Article
Hybrid Deep Learning-Reinforcement Learning for Adaptive Human-Robot Task Allocation in Industry 5.0
by Claudio Urrea
Systems 2025, 13(8), 631; https://doi.org/10.3390/systems13080631 - 26 Jul 2025
Viewed by 801
Abstract
Human-Robot Collaboration (HRC) is pivotal for flexible, worker-centric manufacturing in Industry 5.0, yet dynamic task allocation remains difficult because operator states (fatigue and skill) fluctuate abruptly. I address this gap with a hybrid framework that couples real-time perception and double-estimating reinforcement learning. A Convolutional Neural Network (CNN) classifies nine fatigue–skill combinations from synthetic physiological cues (heart rate, blink rate, posture, wrist acceleration); its outputs feed a Double Deep Q-Network (DDQN) whose state vector also includes task-queue and robot-status features. The DDQN optimises a multi-objective reward balancing throughput, workload, and safety, and executes at 10 Hz within a closed-loop pipeline implemented in MATLAB R2025a and RoboDK v5.9. Benchmarking on a 1000-episode HRC dataset (2500 allocations·episode⁻¹) shows the hybrid CNN+DDQN controller raises throughput to 60.48 ± 0.08 tasks·min⁻¹ (+21% vs. rule-based, +12% vs. SARSA, +8% vs. Dueling DQN, +5% vs. PPO), trims operator fatigue by 7%, and sustains 99.9% collision-free operation (one-way ANOVA, p < 0.05; post-hoc power 1 − β = 0.87). Visual analyses confirm responsive task reallocation as fatigue rises or skill varies. The approach outperforms strong baselines (PPO, A3C, Dueling DQN) by mitigating Q-value over-estimation through double learning, providing robust policies under stochastic human states and offering a reproducible blueprint for multi-robot Industry 5.0 factories. Future work will validate the controller on a physical Doosan H2017 cell and incorporate fairness constraints to avoid workload bias across multiple operators. Full article
(This article belongs to the Section Systems Engineering)
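The double-learning step that the abstract credits with mitigating Q-value over-estimation can be sketched as follows. This is a generic Double DQN target computation, not the paper's controller; the Q-value vectors, reward, and discount factor are hypothetical.

```python
import numpy as np

def ddqn_target(q_online_next, q_target_next, reward, gamma=0.99, done=False):
    """Double DQN target: the online network selects the next action,
    the target network evaluates it, decoupling selection from evaluation."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))         # action selection (online net)
    return reward + gamma * q_target_next[a_star]  # action evaluation (target net)

# Hypothetical Q-value vectors over three task-allocation actions
q_online_next = np.array([1.2, 3.4, 2.0])
q_target_next = np.array([1.0, 2.5, 2.2])
print(ddqn_target(q_online_next, q_target_next, reward=1.0))
```

Because the evaluated Q-value comes from the target network rather than the maximizing network itself, the upward bias of the plain max operator in standard DQN is reduced, which is the over-estimation mitigation the abstract refers to.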
