Search Results (81)

Search Parameters:
Keywords = post-comparative ethics

10 pages, 208 KB  
Article
Evaluating the Competence of AI Chatbots in Answering Patient-Oriented Frequently Asked Questions on Orthognathic Surgery
by Ezgi Yüceer-Çetiner, Dilara Kazan, Mobin Nesiri and Selçuk Basa
Healthcare 2025, 13(17), 2114; https://doi.org/10.3390/healthcare13172114 - 26 Aug 2025
Viewed by 280
Abstract
Objectives: This study aimed to evaluate the performance of three widely used artificial intelligence (AI) chatbots—ChatGPT-4, Gemini 2.5 Pro, and Claude Sonnet 4—in answering patient-oriented frequently asked questions (FAQs) related to orthognathic surgery. Given the increasing reliance on AI tools in healthcare, it is essential to evaluate their ability to provide accurate, empathetic, readable, and clinically appropriate information. Methods: Twenty FAQs in Turkish about orthognathic surgery were presented to each chatbot. The responses were evaluated by three oral and maxillofacial surgeons using a modified Global Quality Score (GQS), binary clinical appropriateness judgment, and a five-point empathy rating scale. The evaluation process was conducted in a double-blind manner. The Ateşman Readability Formula was applied to each response using an automated Python-based script. Comparative statistical analyses—including ANOVA, Kruskal–Wallis, and post hoc tests—were used to determine significant differences in performance among chatbots. Results: Gemini outperformed both GPT-4 and Claude in GQS, empathy, and clinical appropriateness (p < 0.001). GPT-4 demonstrated the highest readability scores (p < 0.001) but frequently lacked empathetic tone and safety-oriented guidance. Claude showed moderate performance, balancing ethical caution with limited linguistic clarity. A moderate positive correlation was found between empathy and perceived response quality (r = 0.454; p = 0.044). Conclusions: AI chatbots vary significantly in their ability to support surgical patient education. While GPT-4 offers superior readability, Gemini provides the most balanced and clinically reliable responses. These findings underscore the importance of context-specific chatbot selection and continuous clinical oversight to ensure safe and ethical AI-driven communication. Full article
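To make the readability step concrete, here is a minimal sketch of the Ateşman formula as commonly stated, 198.825 - 40.175 x (syllables/words) - 2.610 x (words/sentences), with syllables approximated by Turkish vowel counts; this is an illustration, not the study's own Python script.

```python
# Minimal sketch of the Ateşman readability formula for Turkish, assuming the
# usual formulation: 198.825 - 40.175*(syllables/words) - 2.610*(words/sentences).
# Syllables are approximated by counting Turkish vowels; illustration only.
import re

TURKISH_VOWELS = set("aeıioöuüAEIİOÖUÜ")

def count_syllables(word: str) -> int:
    # For Turkish, the syllable count equals the vowel count for nearly all words.
    return sum(1 for ch in word if ch in TURKISH_VOWELS) or 1

def atesman_score(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 198.825 - 40.175 * (syllables / len(words)) - 2.610 * (len(words) / len(sentences))

print(atesman_score("Ameliyat sonrası iyileşme genellikle birkaç hafta sürer."))
```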
18 pages, 3219 KB  
Article
Designing Trustworthy AI Systems for PTSD Follow-Up
by María Cazares, Jorge Miño-Ayala, Iván Ortiz and Roberto Andrade
Technologies 2025, 13(8), 361; https://doi.org/10.3390/technologies13080361 - 15 Aug 2025
Viewed by 347
Abstract
Post-Traumatic Stress Disorder (PTSD) poses complex clinical challenges due to its emotional volatility, contextual sensitivity, and need for personalized care. Conventional AI systems often fall short in therapeutic contexts due to lack of explainability, ethical safeguards, and narrative understanding. We propose a hybrid neuro-symbolic architecture that combines Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), symbolic controllers, and ensemble classifiers to support clinicians in PTSD follow-up. The proposal integrates real-time anonymization, session memory through patient-specific RAG, and a Human-in-the-Loop (HITL) interface. It ensures clinical safety via symbolic logic rules derived from trauma-informed protocols. The proposed architecture enables safe, personalized AI-driven responses by combining statistical language modeling with explicit therapeutic constraints. Through modular integration, it supports affective signal adaptation, longitudinal memory, and ethical traceability. A comparative evaluation against state-of-the-art approaches highlights improvements in contextual alignment, privacy protection, and clinician supervision. Full article
(This article belongs to the Special Issue AI-Enabled Smart Healthcare Systems)
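As a rough illustration of the symbolic safety layer and human-in-the-loop flow described above, the sketch below gates a drafted reply behind keyword rules. The rule patterns, the generate_draft() stub, and the escalation path are hypothetical placeholders, not the authors' architecture.

```python
# Illustrative-only sketch of a symbolic safety gate with human-in-the-loop
# escalation; SAFETY_RULES, generate_draft(), and the flow are hypothetical.
import re

SAFETY_RULES = [
    # placeholder (pattern in the patient's message, action to take) rules
    (re.compile(r"\b(suicide|self[- ]harm|hurt myself)\b", re.I), "escalate_to_clinician"),
    (re.compile(r"\b(flashback|panic attack)\b", re.I), "append_grounding_resources"),
]

def generate_draft(message: str) -> str:
    # Stand-in for an LLM call backed by patient-specific retrieval (RAG).
    return f"Draft supportive reply to: {message!r}"

def respond(message: str) -> dict:
    draft = generate_draft(message)
    triggered = [action for pattern, action in SAFETY_RULES if pattern.search(message)]
    if "escalate_to_clinician" in triggered:
        # Hold the reply and notify the treating clinician instead of sending it.
        return {"delivered": False, "actions": triggered, "held_draft": draft}
    return {"delivered": True, "actions": triggered, "reply": draft}

print(respond("I had another flashback last night"))
```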
23 pages, 676 KB  
Review
Current Neuroethical Perspectives on Deep Brain Stimulation and Neuromodulation for Neuropsychiatric Disorders: A Scoping Review of the Past 10 Years
by Jonathan Shaw, Sagar Pyreddy, Colton Rosendahl, Charles Lai, Emily Ton and Rustin Carter
Diseases 2025, 13(8), 262; https://doi.org/10.3390/diseases13080262 - 14 Aug 2025
Viewed by 418
Abstract
Background: The use of neuromodulation for the treatment of psychiatric disorders has become increasingly common, but this emerging treatment modality comes with ethical concerns. This scoping review aims to synthesize the neuroethical discourse from the past 10 years on the use of neurotechnologies for psychiatric conditions. Methods: A total of 4496 references were imported from PubMed, Embase, and Scopus. The inclusion criteria required a discussion of the neuroethics of neuromodulation and studies published between 2014 and 2024. Results: Of the 77 references, a majority discussed ethical concerns of patient autonomy and informed consent for neuromodulation, with neurotechnologies being increasingly seen as autonomy enablers. Concepts of changes in patient identity and personality, especially after deep brain stimulation, were also discussed extensively. The risks and benefits of neurotechnologies were also compared, with deep brain stimulation being seen as the riskiest but also possessing the highest efficacy. Concerns about equitable access and justice were raised regarding the rise of private transcranial magnetic stimulation clinics and the current experimental status of deep brain stimulation. Conclusions: Neuroethics discourse, particularly for deep brain stimulation, has continued to focus on how post-intervention changes in personality and behavior influence patient identity. Multiple conceptual frameworks have been proposed, though each faces critiques for addressing only parts of this complex phenomenon, prompting calls for pluralistic models. Emerging technologies, especially those involving artificial intelligence through brain computer interfaces, add new dimensions to this debate by raising concerns about neuroprivacy and legal responsibility for actions, further blurring the lines for defining personal identity. Full article
(This article belongs to the Section Neuro-psychiatric Disorders)
23 pages, 362 KB  
Article
Research on Sustainable Food Literacy Education Talent Cultivation
by Meng Lei Hu and Kuan Ting Chen
Sustainability 2025, 17(16), 7172; https://doi.org/10.3390/su17167172 - 8 Aug 2025
Viewed by 381
Abstract
This research aims to develop a model for cultivating talents in sustainable food literacy education in Taiwan. The project adopts the professional and theoretical axes of the food industry, sustainable development, and food literacy. The research employs a mixed-method approach, combining qualitative and quantitative techniques, to construct sustainable food literacy assessment indicators for Taiwan. In the first year, through literature analysis and qualitative research, the core content of “sustainable food literacy” in Taiwan was extracted, resulting in four major dimensions with 24 indicator items. Then, using the Fuzzy Delphi method, the indicators were constructed, defining the core content and dimension indicators of sustainable food literacy, which include “sustainable agriculture and production”, “healthy diet and culture”, “green environmental protection and consumption”, and “food social responsibility and ethics”, encompassing a total of 20 indicators. In the second year, based on the dimensions identified in the first year, a sustainable food literacy curriculum was developed. A 10-week quasi-experimental teaching curriculum was conducted for students enrolled in the “Vegetable and Fruit Carving” elective course in two classes of the Department of Food and Beverage Management at Jingwen University of Science and Technology. By comparing the pre-test and post-test scores of students’ sustainable food literacy and their sustainable food works, as well as analyzing student learning portfolios and teacher reflections, it was shown that the curriculum developed in this research significantly enhanced students’ sustainable food literacy and their performance. The results of this two-year study can be used for the assessment of sustainable food literacy talents in Taiwan, contributing both academically and practically. Full article
26 pages, 2260 KB  
Review
Transcatheter Aortic Valve Implantation in Cardiogenic Shock: Current Evidence, Clinical Challenges, and Future Directions
by Grigoris V. Karamasis, Christos Kourek, Dimitrios Alexopoulos and John Parissis
J. Clin. Med. 2025, 14(15), 5398; https://doi.org/10.3390/jcm14155398 - 31 Jul 2025
Viewed by 545
Abstract
Cardiogenic shock (CS) in the setting of severe aortic stenosis (AS) presents a critical and high-risk scenario with limited therapeutic options and poor prognosis. Transcatheter aortic valve implantation (TAVI), initially reserved for inoperable or high-risk surgical candidates, is increasingly being considered in patients with CS due to improvements in device technology, operator experience, and supportive care. This review synthesizes current evidence from large registries, observational studies, and meta-analyses that support the feasibility, safety, and potential survival benefit of urgent or emergent TAVI in selected CS patients. Procedural success is high, and early intervention appears to confer improved short-term and mid-term outcomes compared to balloon aortic valvuloplasty or medical therapy alone. Critical factors influencing prognosis include lactate levels, left ventricular ejection fraction, renal function, and timing of intervention. The absence of formal guidelines, logistical constraints, and ethical concerns complicate decision-making in this unstable population. A multidisciplinary Heart Team/Shock Team approach is essential to identify appropriate candidates, manage procedural risk, and guide post-intervention care. Further studies and the development of TAVI-specific risk models in CS are anticipated to refine patient selection and therapeutic strategies. TAVI may represent a transformative option for stabilizing hemodynamics and improving outcomes in this otherwise high-mortality group. Full article
(This article belongs to the Special Issue Aortic Valve Implantation: Recent Advances and Future Prospects)
30 pages, 893 KB  
Review
A Comprehensive Review and Benchmarking of Fairness-Aware Variants of Machine Learning Models
by George Raftopoulos, Nikos Fazakis, Gregory Davrazos and Sotiris Kotsiantis
Algorithms 2025, 18(7), 435; https://doi.org/10.3390/a18070435 - 16 Jul 2025
Viewed by 772
Abstract
Fairness is a fundamental virtue in machine learning systems, alongside four other critical virtues: Accountability, Transparency, Ethics, and Performance (FATE + Performance). Ensuring fairness has been a central research focus, leading to the development of various mitigation strategies in the literature. These approaches can generally be categorized into three main techniques: pre-processing (modifying data before training), in-processing (incorporating fairness constraints during training), and post-processing (adjusting outputs after model training). Beyond these, an increasingly explored avenue is the direct modification of existing algorithms, aiming to embed fairness constraints into their design while preserving or even enhancing predictive performance. This paper presents a comprehensive survey of classical machine learning models that have been modified or enhanced to improve fairness concerning sensitive attributes (e.g., gender, race). We analyze these adaptations in terms of their methodological adjustments, impact on algorithmic bias, and ability to maintain predictive performance comparable to the original models. Full article
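For orientation, here is a compact sketch of one of the three mitigation families named above, post-processing via per-group decision thresholds that equalise selection rates; the synthetic data and the 30% target rate are assumptions for illustration, not the survey's benchmark.

```python
# Post-processing sketch: per-group decision thresholds chosen so that selection
# rates match across a binary sensitive attribute. Synthetic data; assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)                   # binary sensitive attribute
X = rng.normal(size=(n, 3)) + group[:, None] * 0.5   # mild distribution shift by group
y = (X.sum(axis=1) + rng.normal(size=n) > 0.7).astype(int)

scores = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

target_rate = 0.30                                   # desired selection rate per group
thresholds = {g: np.quantile(scores[group == g], 1 - target_rate) for g in (0, 1)}
row_threshold = np.where(group == 1, thresholds[1], thresholds[0])
y_hat = (scores >= row_threshold).astype(int)

for g in (0, 1):
    print(f"group {g}: selection rate = {y_hat[group == g].mean():.2f}")
```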
16 pages, 767 KB  
Article
Male Layer-Type Birds (Lohmann Brown Classic Hybrid) as a Meat Source for Chicken Pâtés
by Nikolay Kolev, Desislav Balev, Stefan Dragoev, Teodora Popova, Evgeni Petkov, Krasimir Dimov, Surendranath Suman, Ana Paula Salim and Desislava Vlahova-Vangelova
Appl. Sci. 2025, 15(12), 6702; https://doi.org/10.3390/app15126702 - 14 Jun 2025
Viewed by 587
Abstract
The valorisation of underutilized male layer-type chickens offers a sustainable and ethically aligned opportunity for the poultry industry. This study evaluated the feasibility of using male layer-type chicken meat in the production of chicken pâtés and compared the effects of different meat sources—commercial broiler (CP), and 5-week-old (5wP) and 9-week-old (9wP) male layer-type chickens—on product quality during refrigerated storage using the general linear model with the Tukey–Kramer post-hoc test. Pâtés made from 5wP meat exhibited the most favourable technological properties, including lower (p < 0.05) total expressible fluid (TEF), higher (p < 0.05) water retention (TEFWater), and lower (p < 0.05) fat content (TEFFat) than CP and 9wP, indicating superior emulsion stability. The 5wP pâtés also presented the lowest (p < 0.05) TBARS values on day 1, along with reduced colour deterioration (ΔE) over 7 days of storage. CP samples demonstrated the greatest (p < 0.05) hardness, cohesiveness, and gumminess, but lower (p < 0.05) springiness and resilience compared to 5wP and 9wP, which yielded softer, more elastic pâtés. Overall, pâtés formulated with 5wP can be a promising option for the development of value-added poultry products. The incorporation of male layer-type chicken meat into commercial formulations will encourage further research into their market potential. Full article
16 pages, 3593 KB  
Article
Preservation of Synagogues in Greece: Using Digital Tools to Represent Lost Heritage
by Elias Messinas
Heritage 2025, 8(6), 211; https://doi.org/10.3390/heritage8060211 - 5 Jun 2025
Viewed by 895
Abstract
In the wake of the Holocaust and the post-war reconstruction of Greece’s historic city centers, many Greek synagogues were demolished, abandoned, or appropriated, erasing centuries of Jewish architectural and communal presence. This study presents a thirty year-long research and documentation initiative aimed at preserving, recovering, and eventually digitally reconstructing these “lost” synagogues, both as individual buildings and within their urban context. Drawing on architectural surveys, archival research, oral histories, and previously unpublished materials, including the recently rediscovered Shemtov Samuel archive, the project grew through the use of technology. Beginning with in situ surveys in the early 1990s, it evolved into full-scale digitally enhanced architectural drawings that formed the basis for further digital exploration, 3D models, and virtual reality outputs. With the addition of these new tools to existing documentation, the project can restore architectural detail and cultural context with a high degree of fidelity, even in cases where only fragmentary evidence survives. These digital reconstructions have informed physical restoration efforts as well as public exhibitions, heritage education, and urban memory initiatives across Greece. By reintroducing “invisible” Jewish landmarks into contemporary consciousness, the study addresses the broader implications of post-war urban homogenization, the marginalization of minority heritage, and the ethical dimensions of digital preservation. This interdisciplinary approach, which bridges architectural history, digital humanities, urban studies, and cultural heritage, demonstrates the value of digital tools in reconstructing “lost” pasts and highlights the potential for similar projects in other regions facing comparable erasures. Full article
26 pages, 4445 KB  
Review
Effectiveness of Artificial Intelligence Models in Predicting Lung Cancer Recurrence: A Gene Biomarker-Driven Review
by Niloufar Pourakbar, Alireza Motamedi, Mahta Pashapour, Mohammad Emad Sharifi, Seyedemad Seyedgholami Sharabiani, Asra Fazlollahi, Hamid Abdollahi, Arman Rahmim and Sahar Rezaei
Cancers 2025, 17(11), 1892; https://doi.org/10.3390/cancers17111892 - 5 Jun 2025
Cited by 1 | Viewed by 1951
Abstract
Background/Objectives: Lung cancer recurrence, particularly in NSCLC, remains a major challenge, with 30–70% of patients relapsing post-treatment. Traditional predictors like TNM staging and histopathology fail to account for tumor heterogeneity and immune dynamics. This review evaluates AI models integrating gene biomarkers (TP53, KRAS, FOXP3, PD-L1, and CD8) to enhance the recurrence prediction and improve the personalized risk stratification. Methods: Following the PRISMA guidelines, we systematically reviewed AI-driven recurrence prediction models for lung cancer, focusing on genomic biomarkers. Studies were selected based on predefined criteria, emphasizing AI/ML approaches integrating gene expression, radiomics, and clinical data. Data extraction covered the study design, AI algorithms (e.g., neural networks, SVM, and gradient boosting), performance metrics (AUC and sensitivity), and clinical applicability. Two reviewers independently screened and assessed studies to ensure accuracy and minimize bias. Results: A literature analysis of 18 studies (2019–2024) from 14 countries, covering 4861 NSCLC and small cell lung cancer patients, showed that AI models outperformed conventional methods. AI achieved AUCs of 0.73–0.92 compared to 0.61 for TNM staging. Multi-modal approaches integrating gene expression (PDIA3 and MYH11), radiomics, and clinical data improved accuracy, with SVM-based models reaching a 92% AUC. Key predictors included immune-related signatures (e.g., tumor-infiltrating NK cells and PD-L1 expression) and pathway alterations (NF-κB and JAK-STAT). However, small cohorts (41–1348 patients), data heterogeneity, and limited external validation remained challenges. Conclusions: AI-driven models hold potential for recurrence prediction and guiding adjuvant therapies in high-risk NSCLC patients. Expanding multi-institutional datasets, standardizing validation, and improving clinical integration are crucial for real-world adoption. Optimizing biomarker panels and using AI trustworthily and ethically could enhance precision oncology, enabling early, tailored interventions to reduce mortality. Full article
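As a hedged illustration of the SVM-plus-AUC evaluations summarised above, the snippet below cross-validates an RBF SVM on a synthetic expression-like matrix; it does not reproduce any reviewed model or dataset.

```python
# Hedged sketch of the SVM-plus-AUC evaluation pattern: an RBF SVM cross-validated
# with ROC-AUC on a synthetic matrix standing in for biomarker data (TP53, KRAS,
# PD-L1, ...); not a reproduction of any reviewed model.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 50))                                  # 300 patients x 50 pseudo-genes
w = rng.normal(size=50)
y = (X @ w + rng.normal(scale=4.0, size=300) > 0).astype(int)   # synthetic recurrence label

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean AUC = {auc.mean():.2f} +/- {auc.std():.2f}")
```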
10 pages, 267 KB  
Article
Dataset on Programming Competencies Development Using Scratch and a Recommender System in a Non-WEIRD Primary School Context
by Jesennia Cárdenas-Cobo, Cristian Vidal-Silva and Nicolás Máquez
Data 2025, 10(6), 86; https://doi.org/10.3390/data10060086 - 3 Jun 2025
Viewed by 557
Abstract
The ability to program has become an essential competence for individuals in an increasingly digital world. However, access to programming education remains unequal, particularly in non-WEIRD (Western, Educated, Industrialized, Rich, and Democratic) contexts. This study presents a dataset resulting from an educational intervention designed to foster programming competencies and computational thinking skills among primary school students aged 8 to 12 years in Milagro, Ecuador. The intervention integrated Scratch, a block-based programming environment that simplifies coding by eliminating syntactic barriers, and the CARAMBA recommendation system, which provided personalized learning paths based on students’ progression and preferences. A structured educational process was implemented, including an initial diagnostic test to assess logical reasoning, guided activities in Scratch to build foundational skills, a phase of personalized practice with CARAMBA, and a final computational thinking evaluation using a validated assessment instrument. The resulting dataset encompasses diverse information: demographic data, logical reasoning test scores, computational thinking test results pre- and post-intervention, activity logs from Scratch, recommendation histories from CARAMBA, and qualitative feedback from university student tutors who supported the intervention. The dataset is anonymized, ethically collected, and made available under a CC-BY 4.0 license to encourage reuse. This resource is particularly valuable for researchers and practitioners interested in computational thinking development, educational data mining, personalized learning systems, and digital equity initiatives. It supports comparative studies between WEIRD and non-WEIRD populations, validation of adaptive learning models, and the design of inclusive programming curricula. Furthermore, the dataset enables the application of machine learning techniques to predict educational outcomes and optimize personalized educational strategies. By offering this dataset openly, the study contributes to filling critical gaps in educational research, promoting inclusive access to programming education, and fostering a more comprehensive understanding of how computational competencies can be developed across diverse socioeconomic and cultural contexts. Full article
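A small sketch of the kind of reuse the dataset is meant to support, comparing pre- and post-intervention computational-thinking scores; the in-line rows and the column names ct_pre, ct_post, and age are hypothetical stand-ins, not the published schema.

```python
# Hypothetical reuse sketch: pre/post computational-thinking comparison.
# The rows and column names are placeholders, not the dataset's actual schema.
import pandas as pd
from scipy.stats import wilcoxon

df = pd.DataFrame({
    "ct_pre":  [10, 12, 9, 14, 11, 8, 13, 10],
    "ct_post": [15, 14, 13, 18, 12, 11, 16, 14],
    "age":     [8, 9, 10, 11, 12, 9, 10, 8],
})  # replace with pd.read_csv(...) on the published CSV

gain = df["ct_post"] - df["ct_pre"]
stat, p = wilcoxon(df["ct_post"], df["ct_pre"])
print(f"median gain = {gain.median():.1f}, Wilcoxon p = {p:.3f}")
print(gain.groupby(df["age"]).mean())   # average gain per age
```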
13 pages, 1609 KB  
Article
Comparative Evaluation of Natural Mouthrinses and Chlorhexidine in Dental Plaque Management: A Pilot Randomized Clinical Trial
by Ioana Elena Lile, Tareq Hajaj, Ioana Veja, Tiberiu Hosszu, Ligia Luminița Vaida, Liana Todor, Otilia Stana, Ramona-Amina Popovici and Diana Marian
Healthcare 2025, 13(10), 1181; https://doi.org/10.3390/healthcare13101181 - 19 May 2025
Viewed by 2445
Abstract
Aim: This study evaluated the efficacy of mouthrinses containing natural compounds—specifically, propolis and green tea extracts—in reducing bacterial dental plaque compared to a placebo and a 0.2% chlorhexidine rinse. We hypothesized that these natural compounds would significantly reduce plaque accumulation, with efficacy comparable to chlorhexidine. Objective: The objective was to evaluate the short-term efficacy of two natural mouthrinses—10% propolis and 5% green tea—compared to a placebo and 0.2% chlorhexidine in reducing dental plaque. Trial Design: The trial design was a randomized, placebo-controlled, parallel-group clinical trial with a 1:1:1:1 allocation ratio. Materials and Methods: In a single-blind, randomized, controlled trial, 60 healthy adult volunteers received a professional mechanical plaque removal (PMPR) and were then randomized into four groups (n = 15 each): a propolis mouthwash, a green tea mouthwash, a 0.2% chlorhexidine mouthwash (positive control), and a placebo rinse. The participants rinsed twice daily for four weeks in addition to standard tooth brushing. The plaque levels were assessed using the Silness–Löe plaque index at baseline and after four weeks. The data were analyzed using ANOVA and post hoc tests (α = 0.05). Ethical approval and informed consent were obtained. Results: All groups had similar baseline plaque scores (≈2.5). After four weeks, the propolis and green tea groups showed significant reductions in plaque (mean indices of 1.02 and 1.12, respectively) compared to the placebo group (mean index = 2.01, p < 0.001). The chlorhexidine group achieved a mean plaque index of 0.90. The propolis rinse showed no significant difference from chlorhexidine (p = 0.40), indicating comparable efficacy. The green tea rinse had a slightly higher plaque index than chlorhexidine (p = 0.03). No significant adverse effects were reported. Conclusions: Mouthwashes containing 10% propolis or 5% green tea significantly reduced dental plaque, with propolis demonstrating efficacy comparable to 0.2% chlorhexidine. Full article
(This article belongs to the Special Issue Contemporary Oral and Dental Health Care: Issues and Challenges)
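For readers who want to reproduce the style of analysis named in the abstract, here is a minimal sketch of a one-way ANOVA followed by a Tukey post hoc test at alpha = 0.05 on placeholder plaque-index values drawn near the reported group means; the numbers are not the trial data.

```python
# Minimal sketch: one-way ANOVA followed by a Tukey post hoc test (alpha = 0.05).
# Values are random draws centred near the reported group means, not trial data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = {
    "propolis":      rng.normal(1.0, 0.2, 15),
    "green_tea":     rng.normal(1.1, 0.2, 15),
    "chlorhexidine": rng.normal(0.9, 0.2, 15),
    "placebo":       rng.normal(2.0, 0.2, 15),
}

F, p = f_oneway(*groups.values())
print(f"ANOVA: F = {F:.2f}, p = {p:.4g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 15)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```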
29 pages, 7061 KB  
Article
Mitigating Conceptual Learning Gaps in Mixed-Ability Classrooms: A Learning Analytics-Based Evaluation of AI-Driven Adaptive Feedback for Struggling Learners
by Fawad Naseer and Sarwar Khawaja
Appl. Sci. 2025, 15(8), 4473; https://doi.org/10.3390/app15084473 - 18 Apr 2025
Cited by 3 | Viewed by 2082
Abstract
Adaptation through Artificial Intelligence (AI) creates individual-centered feedback strategies to reduce academic achievement disparities among students. The study evaluates the effectiveness of AI-driven adaptive feedback in mitigating these gaps by providing personalized learning support to struggling learners. A learning analytics-based evaluation was conducted on 700 undergraduate students enrolled in STEM-related courses across three different departments at Beaconhouse International College (BIC). The study employed a quasi-experimental design, where 350 students received AI-driven adaptive feedback while the control group followed traditional instructor-led feedback methods. Data were collected over 20 weeks, utilizing pre- and post-assessments, real-time engagement tracking, and survey responses. Results indicate that students receiving AI-driven adaptive feedback demonstrated a 28% improvement in conceptual mastery, compared to 14% in the control group. Additionally, student engagement increased by 35%, with a 22% reduction in cognitive overload. Analysis of interaction logs revealed that frequent engagement with AI-generated feedback led to a 40% increase in retention rates. Despite these benefits, variations in impact were observed based on prior knowledge levels and interaction consistency. The findings highlight the potential of AI-driven smart learning environments to enhance educational equity. Future research should explore long-term effects, scalability, and ethical considerations in adaptive AI-based learning systems. Full article
(This article belongs to the Special Issue Application of Smart Learning in Education)
13 pages, 313 KB  
Article
Flexibility and Moral Cultivation in the Analects of Confucius
by Henry Allen
Religions 2025, 16(4), 441; https://doi.org/10.3390/rel16040441 - 28 Mar 2025
Viewed by 657
Abstract
Flexibility, or acting in line with the needs of the situation rather than strictly adhering to prefigured rules and principles, has long been seen as a primary feature of early Confucian ethics as articulated in the Analects of Confucius. This paper develops an understanding of flexibility in the Analects through a close reading of two passages, 18.8 and 17.4. The goal is to both add nuance to standard readings of flexibility in the text and contribute to contemporary discourse, where consideration of moral flexibility is lacking. I show that while flexibility in the Analects is presented as an exemplary ethical approach, it requires a high level of moral cultivation, making it inaccessible to many. For those incapable of a flexible approach, a rigid approach that strictly adheres to rules and principles provides a means of both proper conduct and further ethical development. The Analects thus offers a fairly nuanced consideration of the notion of flexibility and the role of moral cultivation that can enliven contemporary ethical discourse. Full article
30 pages, 1605 KB  
Article
From Misinformation to Insight: Machine Learning Strategies for Fake News Detection
by Despoina Mouratidis, Andreas Kanavos and Katia Kermanidis
Information 2025, 16(3), 189; https://doi.org/10.3390/info16030189 - 28 Feb 2025
Cited by 2 | Viewed by 7435
Abstract
In the digital age, the rapid proliferation of misinformation and disinformation poses a critical challenge to societal trust and the integrity of public discourse. This study presents a comprehensive machine learning framework for fake news detection, integrating advanced natural language processing techniques and deep learning architectures. We rigorously evaluate a diverse set of detection models across multiple content types, including social media posts, news articles, and user-generated comments. Our approach systematically compares traditional machine learning classifiers (Naïve Bayes, SVMs, Random Forest) with state-of-the-art deep learning models, such as CNNs, LSTMs, and BERT, while incorporating optimized vectorization techniques, including TF-IDF, Word2Vec, and contextual embeddings. Through extensive experimentation across multiple datasets, our results demonstrate that BERT-based models consistently achieve superior performance, significantly improving detection accuracy in complex misinformation scenarios. Furthermore, we extend the evaluation beyond conventional accuracy metrics by incorporating the Matthews Correlation Coefficient (MCC) and Receiver Operating Characteristic–Area Under the Curve (ROC–AUC), ensuring a robust and interpretable assessment of model efficacy. Beyond technical advancements, we explore the ethical implications of automated misinformation detection, addressing concerns related to censorship, algorithmic bias, and the trade-off between content moderation and freedom of expression. This research not only advances the methodological landscape of fake news detection but also contributes to the broader discourse on safeguarding democratic values, media integrity, and responsible AI deployment in digital environments. Full article
(This article belongs to the Special Issue Information Extraction and Language Discourse Processing)
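Below is a brief sketch of the classical baseline the study compares against its deep models, a TF-IDF plus Naive Bayes pipeline scored with MCC and ROC-AUC; the six-document corpus is invented for illustration and is not one of the study's datasets.

```python
# Sketch of the classical baseline described above: a TF-IDF + Naive Bayes
# pipeline scored with MCC and ROC-AUC. Toy corpus; the BERT models are not shown.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import matthews_corrcoef, roc_auc_score

texts = ["shocking miracle cure doctors hate", "city council approves budget",
         "aliens endorse candidate in secret deal", "quarterly inflation report released",
         "anonymous blog reveals hidden plot", "university publishes study results"]
labels = [1, 0, 1, 0, 1, 0]   # 1 = fake, 0 = real (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, stratify=labels, random_state=0)

model = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(X_train, y_train)
print("MCC :", matthews_corrcoef(y_test, model.predict(X_test)))
print("AUC :", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```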
21 pages, 2702 KB  
Article
Analyzing Fairness of Computer Vision and Natural Language Processing Models
by Ahmed Rashed, Abdelkrim Kallich and Mohamed Eltayeb
Information 2025, 16(3), 182; https://doi.org/10.3390/info16030182 - 27 Feb 2025
Viewed by 2479
Abstract
Machine learning (ML) algorithms play a critical role in decision-making across various domains, such as healthcare, finance, education, and law enforcement. However, concerns about fairness and bias in these systems have raised significant ethical and social challenges. To address these challenges, this research utilizes two prominent fairness libraries, Fairlearn by Microsoft and AIF360 by IBM. These libraries offer comprehensive frameworks for fairness analysis, providing tools to evaluate fairness metrics, visualize results, and implement bias mitigation algorithms. The study focuses on assessing and mitigating biases in unstructured datasets using Computer Vision (CV) and Natural Language Processing (NLP) models. The primary objective is to present a comparative analysis of the performance of mitigation algorithms from the two fairness libraries. This analysis involves applying the algorithms individually at a single stage of the ML lifecycle (pre-processing, in-processing, or post-processing), as well as sequentially across more than one stage. The results reveal that some sequential applications improve the performance of mitigation algorithms by effectively reducing bias while maintaining the model’s performance. Publicly available datasets from Kaggle were chosen for this research, providing a practical context for evaluating fairness in real-world machine learning workflows. Full article
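As a hedged sketch of the group-wise auditing that Fairlearn (one of the two libraries compared above) supports, the snippet below builds a MetricFrame over toy predictions; the labels and the gender attribute are invented for illustration and are not the paper's Kaggle data.

```python
# Hedged sketch of a group-wise fairness audit with Fairlearn; toy data only.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
gender = np.array(["f", "f", "f", "m", "m", "m", "m", "f"])

mf = MetricFrame(metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
                 y_true=y_true, y_pred=y_pred, sensitive_features=gender)
print(mf.by_group)   # per-group accuracy and selection rate
print("demographic parity difference:",
      demographic_parity_difference(y_true, y_pred, sensitive_features=gender))
```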