Search Results (834)

Search Parameters:
Keywords = human-AI interaction

25 pages, 738 KB  
Article
Investigating Decision-Support Chatbot Acceptance Among Professionals: An Application of the UTAUT Model in a Marketing and Sales Context
by Sven Kottmann and Jürgen Seitz
J. Theor. Appl. Electron. Commer. Res. 2026, 21(4), 113; https://doi.org/10.3390/jtaer21040113 - 7 Apr 2026
Abstract
This study investigates the acceptance of an AI-powered decision-support chatbot among professionals in a marketing and sales context, addressing a gap in technology acceptance research by examining data-intensive decision environments that remain underexplored. Building on the Unified Theory of Acceptance and Use of Technology (UTAUT), the study proposes an extended model incorporating Behavioral Intention, Performance Expectancy, Effort Expectancy, Social Influence, Output Quality, Time Saving, Source Trustworthiness, Cognitive Load, and Chatbot Self-Efficacy. An experimental study was conducted with 106 professionals using a chatbot-enhanced business analytics platform to complete marketing KPI analysis tasks. Data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results demonstrate that Behavioral Intention to use decision-support chatbots is significantly influenced by Performance Expectancy, Effort Expectancy, and Social Influence. Performance Expectancy is strongly driven by Output Quality, Time Saving, and Source Trustworthiness, while Effort Expectancy is significantly shaped by reduced Cognitive Load and higher Chatbot Self-Efficacy. The findings suggest that chatbot acceptance in professional decision-making depends not only on usability and performance beliefs but also on cognitive relief, trust in information sources, and efficiency gains, highlighting important implications for both theory and the design of AI-based decision-support systems. Full article
(This article belongs to the Special Issue Emerging Technologies and Marketing Innovation)

46 pages, 3809 KB  
Review
Overview on Predictive Maintenance Techniques for Turbomachinery
by Pierpaolo Dini, Damiano Nardi and Sergio Saponara
Machines 2026, 14(4), 396; https://doi.org/10.3390/machines14040396 - 5 Apr 2026
Abstract
Within the Industry 5.0 paradigm, the management of critical assets requires advanced digital architectures capable of ensuring resilience and operational sustainability. The present systematic review analyzes the state of the art in predictive maintenance (PdM) technologies for turbines and turbomachinery, providing a technical examination of anomaly and fault detection frameworks, extended to remaining useful life (RUL) estimation and root cause analysis (RCA). The work addresses inherent sectoral challenges, ranging from the processing of high-dimensional multivariate time series (MTS) from Supervisory Control and Data Acquisition (SCADA) systems to labeled data scarcity and signal non-stationarity in real-world environments. Both purely data-driven frameworks and hybrid physics-informed models, such as Physics-Informed Neural Networks (PINNs), are critically evaluated against performance indicators. A significant contribution of this study lies in the classification of methodologies based on their readiness for real-time inference, emphasizing the role of Explainable AI (XAI) in providing transparent insights to domain experts, who remain central to decision-making processes. The primary objective of this review is to offer an analytical overview of progress to date against current technological gaps, tracing a clear trajectory for future developments. In this regard, the adoption of Generative AI and Large Language Models (LLMs) is identified as a fundamental step toward evolving into interactive, human-centric decision support systems. Full article
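A minimal, hedged sketch of the simplest family of techniques the review covers: statistical anomaly detection on a univariate sensor stream. Real PdM pipelines operate on multivariate SCADA series with learned models; the signal, window size, and threshold below are illustrative assumptions.

```python
from collections import deque
import math
import statistics

def rolling_zscore_anomalies(signal, window=20, threshold=3.0):
    """Flag samples deviating more than `threshold` standard deviations
    from the trailing-window mean (illustrative, univariate detector)."""
    buf = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(signal):
        if len(buf) == window:
            mu = statistics.fmean(buf)
            sd = statistics.pstdev(buf) or 1e-9
            if abs(x - mu) / sd > threshold:
                flagged.append(i)
        buf.append(x)
    return flagged

# Synthetic vibration-like signal with an injected fault spike at t = 150.
signal = [math.sin(0.1 * t) for t in range(300)]
signal[150] += 5.0
anomalies = rolling_zscore_anomalies(signal)
print(anomalies)
```

The methods surveyed (autoencoders, PINNs, RUL regressors) replace this fixed threshold with learned notions of "normal", but the detect-and-flag loop is the same shape.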

31 pages, 3970 KB  
Review
Impact of Generative AI on Author’s Metrics and Copyright Ownership: Digital Labour, Ethical Attribution, and Traceability Frameworks for Future Internet Systems
by Chukwuebuka Joseph Ejiyi, Sandra Chukwudumebi Obiora, Ijuolachi Obiora, Gladys Wauk, Maryjane Ejiako, Temitope Omotayo and Olusola Bamisile
Future Internet 2026, 18(4), 196; https://doi.org/10.3390/fi18040196 - 4 Apr 2026
Abstract
The integration of generative artificial intelligence (GAI) into digital learning environments is a profound socio-technical transformation. While GAI promises enhanced accessibility and efficiency, it simultaneously obscures the human creativity and intellectual labour that underpins digital knowledge production. This opacity limits creators’ visibility into how their work is used, evaluated, and monetised. This review-and-application work investigates how several leading large language models, including ChatGPT (GPT-4o), Gemini (1.5 Flash), and DeepSeek (V3), interact with a creative platform hosting over 300 original essays, poems, and artworks from various human creatives. Our review reveals that despite clear evidence of models engaging with original materials, standard platform analytics for the average creative work record no attribution, referrals, or traceable interactions, rendering creators’ labour invisible. This compels critical examination of knowledge provenance and power within AI-mediated education. To address this, we propose a socio-technical framework, Chujoyi-TraceNet, not as a technical fix, but as a mechanism to re-centre ethics, justice, and recognition in digital governance. By integrating real-time tracking, blockchain-enabled licensing, and metadata watermarking, Chujoyi-TraceNet operationalises the principles of equitable attribution. This study argues for a re-imagining of digital ecosystems in education, one that links the technical act of attribution to broader debates on digital labour, platform ethics, and the pursuit of social justice, thereby contributing to more democratic and accountable learning media in the era of Industry 4.0 and 5.0. Full article

27 pages, 1840 KB  
Review
Human-Centric Modeling in Metastatic Breast Cancer: Organoids, Organ-on-Chip Systems, and New Approach Methodologies in the Post-FDA Modernization Act 2.0 Era
by Hissah Alatawi, Haritha H. Nair, Asif Raza, Emiliana Velez, Arun K. Sharma and Satya Narayan
Cancers 2026, 18(7), 1166; https://doi.org/10.3390/cancers18071166 - 4 Apr 2026
Abstract
Metastatic breast cancer (MBC) remains an overwhelming clinical challenge due to its inherent clonal evolution and the frequent development of drug resistance. A significant hurdle in therapeutic discovery is the reliance on traditional 2D cell cultures and animal models, which often fail to accurately replicate human tumor pathophysiology or predict clinical responses. Consequently, the field of oncology is increasingly exploring a transition towards human-centric research that prioritizes biological data derived directly from patients. Considering the FDA Modernization Act 2.0 and the 2025 FDA Roadmap, frameworks are being established to explore the integration of new approach methodologies (NAMs)—including patient-derived organoids (PDOs) and organ-on-a-chip (OoC) systems—into the drug development pipeline. This review examines how these platforms aim to better simulate the human physiological environment by capturing the complex architecture and microenvironment of the tumor. We further discuss how the integration of these models with Artificial Intelligence (AI), spatial multi-omics, and real-time liquid biopsies is being investigated to enhance the speed and precision of therapeutic testing. While still in the translational phase, emerging evidence suggests that human-centric platforms may eventually support rapid functional drug screening, potentially informing patient treatment responses within clinically relevant timeframes. Strengthening the biological link between the patient and their longitudinal data represents a promising strategy to address the complexities of MBC and improve clinical outcomes. These human-centric platforms preserve patient-specific tumor heterogeneity, recapitulate microenvironmental interactions, and enable functional drug testing under physiologically relevant conditions, thereby improving translational accuracy compared to conventional models. Full article
(This article belongs to the Special Issue Advancements in Preclinical Models for Solid Cancers)

19 pages, 10048 KB  
Article
How AI-Assisted Decision-Making Paradigms and Explainability Shape Human-AI Collaboration
by Yingying Wang, Qin Ni, Tingjiang Wei, Haoxin Xu, Lu Liu and Liang He
Sustainability 2026, 18(7), 3516; https://doi.org/10.3390/su18073516 - 3 Apr 2026
Abstract
The increasing integration of artificial intelligence (AI) in educational decision-making raises a critical question: how to design AI systems that can effectively support teachers while maintaining an appropriate level of trust. Addressing this question requires not only continuous improvements in the technical capabilities of AI systems but also an examination from a human-AI interaction perspective of how different system designs influence users’ cognitive performance and affective responses, thereby providing guidance for system optimization and design. Therefore, this study conducted a randomized controlled experiment with 120 pre-service teachers to investigate how AI-assisted decision-making paradigms and AI explainability jointly influence teachers’ task performance and trust in AI, and whether these effects transfer to subsequent independent tasks. The results indicate that the effect of explanatory interface on task performance is context dependent and yields an immediate positive impact. Under the concurrent paradigm, the explanatory interface of the AI system significantly improves immediate task performance, whereas no significant effect is observed under the sequential paradigm. Moreover, this improvement is confined to the task execution stage and does not transfer to subsequent independent tasks. In contrast, the effect of explanatory interface on trust exhibits a delayed and negative pattern. The explanatory interface has no significant impact on situational trust, while it exerts a negative effect on learned trust and suppresses the natural development of both cognitive trust and emotional trust. In addition, different AI-assisted decision-making paradigms exhibit distinct patterns of influence on task performance and trust. Although the concurrent paradigm performs worse than the sequential paradigm in terms of immediate task performance, it is more effective in promoting users’ emotional trust. 
Overall, these findings extend the theoretical understanding of the mechanisms of explainability in human-AI interaction and provide empirical evidence for the joint design of explainable AI systems and human-AI collaboration paradigms. Full article
(This article belongs to the Special Issue AI for Sustainable and Creative Learning in Education)

42 pages, 1024 KB  
Review
From Concrete to Code: A Survey of AI-Driven Transportation Infrastructure, Security, and Human Interaction
by Nuri Alperen Kose, Kubra Kose and Fan Liang
Sensors 2026, 26(7), 2219; https://doi.org/10.3390/s26072219 - 3 Apr 2026
Abstract
The transition to AI-driven Cyber–Physical Systems has fundamentally reshaped transportation, introducing systemic risks that transcend traditional physical boundaries. Unlike prior reviews focused on isolated technological domains, this survey proposes a novel “End-to-End” analytical framework that models the causal propagation of vulnerabilities from physical sensing hardware to human cognitive responses. Synthesizing 140 research contributions (2017–2025), we evaluate the paradigm shift from deterministic control to Generative AI and Large Language Models (Transportation 5.0). To substantiate our framework, we introduce a structured cross-layer threat matrix and mathematically formalize the technology–cognition cascade, explicitly mapping how physical layer perturbations, such as optical jamming, bypass digital edge security to trigger hazardous behavioral reactions in human drivers. We conclude that ensuring the resilience of next-generation infrastructure requires a unified analytical architecture that formally bounds hardware constraints, algorithmic safety, and human trust. Full article
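The "cross-layer threat matrix" idea can be sketched as a plain lookup structure mapping (layer, attack vector) pairs to their cascade effects; the entries below are hypothetical examples echoing the survey's optical-jamming scenario, not the paper's actual matrix.

```python
# Hypothetical sketch of a cross-layer threat matrix: keys pair a system
# layer with an attack vector; values record what the attack bypasses and
# its downstream impact. Entries are illustrative, not the survey's matrix.
THREAT_MATRIX = {
    ("physical", "optical_jamming"): {"bypasses": "edge_security",
                                      "impact": "hazardous_driver_reaction"},
    ("digital", "model_poisoning"): {"bypasses": "input_validation",
                                     "impact": "unsafe_control_output"},
    ("human", "alert_fatigue"): {"bypasses": "operator_oversight",
                                 "impact": "delayed_intervention"},
}

def trace(layer, vector):
    """Look up the cascade entry for a (layer, attack-vector) pair."""
    return THREAT_MATRIX.get((layer, vector), {"impact": "unmapped"})

print(trace("physical", "optical_jamming")["impact"])
```

The survey's contribution is formalizing how one row's impact becomes the next layer's attack surface; a data structure like this makes that propagation queryable.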

16 pages, 589 KB  
Article
Exploring the Mechanisms Influencing Graduate Students’ Adoption of Generative AI: Insights from the Technology Acceptance Model
by Qing Chen, Yujie Xue, Jie Lin and Chang Zhu
Big Data Cogn. Comput. 2026, 10(4), 108; https://doi.org/10.3390/bdcc10040108 - 3 Apr 2026
Abstract
The rapid development of Generative Artificial Intelligence (GenAI) in graduate education has changed human–AI interaction within knowledge-intensive environments, leading to important questions about user-side cognitive adaptation in probabilistic AI systems. While many studies focus on ethical implications, limited attention has been paid to the cognitive mechanisms underlying graduate students’ adoption of GenAI. Drawing on the Technology Acceptance Model (TAM), this study explores the cognitive and interactional mechanisms shaping graduate students’ adoption and usage of GenAI. Using thematic analysis of in-depth interviews with 20 graduate students from diverse academic backgrounds, the study identifies seven interrelated constructs: perceived usefulness, perceived ease of use, external environment, risk perception, attitude, behavioral intention, and interaction subjectivity. This study demonstrates that the adoption of GenAI is not merely a result of perceived efficiency but is shaped by cognitive calibration between trust and risk evaluation. Moreover, interaction subjectivity emerges as a metacognitive factor that determines whether engagement results in human–AI collaboration or passive automation. By integrating external environment, risk perception, and interaction subjectivity, this study provides a cognitively grounded framework for understanding human–AI adoption and interaction dynamics. Practically, the findings provide design-relevant insights for developing GenAI systems that support calibrated trust, uncertainty awareness, and adaptive cognitive participation. Full article

19 pages, 3221 KB  
Tutorial
Cyber–Physical Systems: The Last Defense
by Frank J. Furrer
Appl. Sci. 2026, 16(7), 3467; https://doi.org/10.3390/app16073467 - 2 Apr 2026
Abstract
The development, evolution, and operation of a cyber–physical system are cross-domain, holistic processes. The process encompasses all elements of a cyber–physical system, including computation infrastructure, software, interfaces to the physical world, human interactions, and safety and security engineering. The process is holistic because it must assure conceptual integrity and correct interoperability across all elements of the CPS. Unfortunately, at every stage of this process, vulnerabilities can be introduced into the system (due to negligence, mistakes, lack of skills, malicious activities, etc.). These dormant vulnerabilities can cause failures of the runtime system, possibly resulting in damage, loss of property or life, safety accidents, or security incidents. A promising approach to mitigate such risks is runtime anomaly detection using artificial intelligence/machine learning. This tutorial paper introduces the fundamental concepts of AI/ML anomaly detection and describes the corresponding intervention mechanisms. Automated intervention mechanisms are the last line of defense against failures, faults, malfunctions, and malicious activities—and their unfortunate consequences. The paper remains at the conceptual level and defers implementation details to subsequent publications. The content addresses advanced students (at the master’s level) and researchers entering this fascinating field. Full article
(This article belongs to the Special Issue New Advances in Cybersecurity Technology and Cybersecurity Management)
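The tutorial's detect-then-intervene concept can be sketched as follows, assuming a simple statistical baseline in place of a trained AI/ML detector; the readings, thresholds, and safe-state response are illustrative assumptions, not the paper's mechanisms.

```python
import statistics

def last_defense(readings, baseline, z_threshold=4.0):
    """Illustrative runtime loop: compare live readings to a statistical
    baseline and trigger a safe-state intervention on the first gross
    deviation. A stand-in for the trained AI/ML detectors discussed."""
    mu = statistics.fmean(baseline)
    sd = statistics.pstdev(baseline) or 1e-9
    for r in readings:
        if abs(r - mu) / sd > z_threshold:
            return ("INTERVENE: enter safe state", r)
    return ("nominal", None)

# Hypothetical temperature baseline and a run with one faulty reading.
baseline = [20.0 + 0.1 * (i % 5) for i in range(100)]
status = last_defense([20.1, 20.3, 35.0], baseline)
print(status)
```

The tutorial's point is precisely that this intervention path must work even when every earlier engineering stage has let a vulnerability through.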

25 pages, 869 KB  
Article
Fostering Sustainable Learning via Embodied Intelligence: The E3-HOT Framework for Higher-Order Thinking in the AI Era
by Hanzi Zhu, Xin Jiang, Xiaolei Zhang, Huiying Xu, Deang Su, Zhendong Chen and Xinzhong Zhu
Sustainability 2026, 18(7), 3469; https://doi.org/10.3390/su18073469 - 2 Apr 2026
Abstract
Artificial intelligence (AI) can help students accelerate assignment completion, but it may also foster cognitive outsourcing and learning detached from authentic contexts. This paper presents E3-HOT, a conceptual framework that leverages embodied intelligence to sustain learners’ cognitive agency and higher-order thinking for sustainable learning, aligned with SDG 4 (Sustainable Development Goal 4) and its emphasis on inclusive and equitable quality education and lifelong learning. Using an iterative conceptual synthesis, we distill three embodied pathways—situational embedding, embodied participation, and cognitive creation—and translate them into a practical system design with a three-module E3 core. It includes a virtual–real integrated learning environment for rich scenarios, embodied interaction for action and sensing, and an intelligent core that provides bounded and teacher-controlled support. To facilitate equitable adoption across resource-diverse settings, we specify multi-fidelity enactment options and an auditable set of evidence artifacts for subsequent expert review and future validation studies. We further provide an illustrative university human–AI design project that outlines a week-by-week workflow and corresponding evidence plan, presented as a worked example rather than a report of an implemented study. E3-HOT offers a traceable design-and-evidence blueprint without claiming measured learning gains. Full article
(This article belongs to the Section Sustainable Education and Approaches)

26 pages, 1520 KB  
Article
Dynamic Anthropomorphism and Artificial Empathy in Conversational Agents: A Wizard-of-Oz Experimental Evaluation
by Dimos Nanos and Georgios Lappas
Digital 2026, 6(2), 28; https://doi.org/10.3390/digital6020028 - 2 Apr 2026
Abstract
Conversational agents increasingly incorporate socio-emotional cues to support more natural and socially engaging digital interactions. Prior research has shown that anthropomorphism and artificial empathy influence user evaluations; however, these dimensions are typically examined as static design features and often in isolation, leaving limited evidence on how users perceive socio-emotional behavior that adapts dynamically during real-time interaction. This study investigates the perception-based evaluation of adaptive socio-emotional behavior in conversational agents using a controlled Wizard-of-Oz design. In total, 72 participants (N = 72) interacted with a simulated agent across four digital communication channels under conditions of high versus low anthropomorphism and artificial empathy, enabling systematic variation in socio-emotional expression while preserving participants’ perception of autonomous system operation. User evaluations were assessed using established perceptual constructs, including trust, perceived reliability, satisfaction, service quality, perceived empathy, and anthropomorphism. The findings demonstrate that conversational agents exhibiting dynamically adaptive anthropomorphic and empathic behavior elicit consistently more positive user evaluations across all measured constructs compared to non-adaptive interaction. Validation analysis using the Godspeed scale confirmed clear differentiation between experimental conditions, highlighting the role of interaction-contingent adaptation relative to static socio-emotional cues in perceived human likeness and positive user responses. These results indicate that user perception can function as a human-centered evaluation layer for assessing adaptive conversational systems, enabling systematic measurement of socio-emotional performance under controlled conditions. 
More broadly, this study supports the design of adaptive AI systems that leverage real-time socio-emotional feedback to enhance trust, perceived service quality, and behavioral acceptance in digital service environments within a controlled Wizard-of-Oz evaluation context. Full article

27 pages, 3026 KB  
Article
Administrative Perspectives on Digital Workflow Transformation and Artificial Intelligence Implementation in Dental Clinics
by Alin Flavius Cozmescu, Ana Cernega, Andreea Cristiana Didilescu, Marina Meleșcanu Imre, Bogdan Dimitriu and Silviu-Mirel Pițuru
Dent. J. 2026, 14(4), 206; https://doi.org/10.3390/dj14040206 - 2 Apr 2026
Abstract
Background/Objectives: The digital transformation of dental practice is positioning artificial intelligence (AI) as a key tool for both clinical support and administrative optimization. While clinical uses of AI are well documented, there is limited evidence on managerial perspectives. This study explored how dental clinic managers view digital workflow transformation and AI implementation. Methods: A cross-sectional questionnaire-based study was conducted among 200 managers of dental clinics from urban and rural areas in Bucharest, Romania. The survey evaluated perceived difficulty and availability related to digitalization, current use of digital tools, demographic characteristics (age, professional experience, practice environment), and attitudinal dimensions reflecting digital pragmatism and efficiency versus human impact. Results: Managers demonstrated moderate digital pragmatism (median 2.84, IQR 2.29–3.44), embracing AI mainly when linked to efficiency, operational control, and economic sustainability. Lower perceived difficulty was associated with higher availability, current use of digital tools, younger age, and fewer years of managerial experience. Urban managers were more likely than rural managers to report higher availability and current use of digital tools, although this comparison should be interpreted cautiously given the small rural subgroup. Efficiency considerations outweighed human-impact concerns (median 3.9, IQR 3.46–4.2), yet caution persisted toward solutions requiring major organizational restructuring or potentially affecting clinician–patient interaction. This study is a pilot, exploratory investigation aimed at generating preliminary insights into the phenomenon of interest and refining the methodological approach and hypotheses for subsequent, larger-scale research.
Conclusions: Dental clinic managers approach AI adoption through an efficiency-driven and risk-aware framework, favoring incremental digital integration over disruptive transformation. The results underline the need for context-sensitive implementation strategies, managerial training, and targeted support, to ensure that AI-enhanced workflows improve efficiency while preserving organizational stability and patient-centered care. Full article

18 pages, 601 KB  
Article
The Double-Edged Sword of AI Efficiency: Self-Efficacy Erosion as a Mediator Linking Instant Gratification and Perceived AI Efficacy to AI Dependency
by Xuehan Zhu, Aiai Zhang and Jiacheng Zhang
Behav. Sci. 2026, 16(4), 530; https://doi.org/10.3390/bs16040530 - 1 Apr 2026
Abstract
Generative AI is becoming integral to daily workflows, fostering a novel form of functional cognitive AI dependency distinct from pathological addiction. While emerging research acknowledges this phenomenon, the specific psychological mechanisms underpinning its development remain underexplored. Incorporating self-efficacy erosion into the reinforcement-based framework, this study investigates whether instant gratification and perceived AI efficacy act as key drivers of AI dependency. We examine the model using Structural Equation Modeling (SEM) with cross-sectional data collected from 576 users who have engaged with AI. The results show that both instant gratification and efficient rewards are positively associated with individuals’ AI dependency. Furthermore, users’ self-efficacy erosion significantly mediates the positive relation, supporting the hypothesis that greater reliance on AI is related to lower self-belief and stronger AI dependency. Moderation analyses further indicate that task-domain self-efficacy and social norms strengthen these positive associations. These findings provide empirical support for a mechanism associated with functional AI dependency and offer insights for navigating human–AI interaction while promoting balanced AI adoption. Full article
(This article belongs to the Section Social Psychology)

23 pages, 276 KB  
Article
Idols as My Cyber Lovers: A Behavioral Research on the Figurational Relationship Between Fans and AI-Customized Virtual Idols
by Xin Wang and Yaxin Zhang
Soc. Sci. 2026, 15(4), 225; https://doi.org/10.3390/socsci15040225 - 1 Apr 2026
Abstract
Unlike conventional virtual idols like Hatsune Miku, which rely on pre-set voice libraries and stage scripts, AI-customized virtual idols achieve real-time interaction through generative artificial intelligence, continuously iterating their personality traits, language style, and even value expression along with fan and user interactions. Existing research, however, tends to treat AI-customized virtual idols as pre-defined cultural commodities of the digital age, offering static, functional interpretations and overlooking their dynamic construction as “subjects in the process of generation.” This study, based on a deep mediation perspective, employs a research method combining app roaming and semi-structured interviews to focus on the sociological examination of young fan groups’ use of AI tools to customize virtual idol companionship. It explores the reciprocal relationship between fan groups and customized virtual idols. The study finds that the AI-customized idols fan group constitutes a typical “actor group,” and its interaction practices are essentially a “fluid interaction” of human–machine intimacy. Young fan groups mainly interact with AI-customized virtual idols based on materiality, cognition, visibility, and emotional frames, thereby generating rich meaning production and symbolic imagination during the usage process. Fan groups and AI-customized virtual idols have developed different relationship paths, including mutual attachment, returning to normalcy, seeking substitutes, or direct withdrawal, revealing the inherent contradictions and tensions in digital intimacy, as well as the self-adjustment strategies of individuals under the mediation of technology. This process presents a “human-machine-idol” triadic relationship framework, becoming a new paradigm for intimacy in the digital age. Full article
(This article belongs to the Topic Personality and Cognition in Human–AI Interaction)
17 pages, 2368 KB  
Article
LANTERN-XGB: An Interpretable Multi-Modal Machine Learning for Improving Clinical Decision-Making in Lung Cancer
by Davide Dalfovo, Carolina Sassorossi, Elisa De Paolis, Annalisa Campanella, Dania Nachira, Leonardo Petracca Ciavarella, Luca Boldrini, Esther G. C. Troost, Róza Ádány, Núria Farré, Ece Öztürk, Angelo Minucci, Rocco Trisolini, Emilio Bria, Steffen Löck, Stefano Margaritora and Filippo Lococo
Int. J. Mol. Sci. 2026, 27(7), 3128; https://doi.org/10.3390/ijms27073128 - 30 Mar 2026
Viewed by 290
Abstract
Non-small cell lung cancer (NSCLC) remains the leading cause of cancer-related mortality globally. While multi-modal artificial intelligence (AI) models offer significant predictive potential, their translation into routine clinical practice is hindered by the “black box” nature of complex algorithms and the fragmentation of heterogeneous data. We present LANTERN-XGB, a hierarchical machine learning workflow designed to bridge this gap by generating interpretable “digital human avatars” for precision oncology. The methodology employs a multi-stage XGBoost (scalable tree boosting) architecture that uses Shapley additive explanations (SHAP) for rigorous hierarchical feature selection, missing-value management, and patient-specific decision support. The workflow was developed and benchmarked using a retrospective cohort of 437 patients with clinical N0 NSCLC, followed by validation on a prospective dataset (n = 100) and an independent external dataset (n = 100). The pipeline integrates diverse data modalities to predict occult lymph node metastasis (OLM). LANTERN-XGB identified a robust consensus signature driven by non-linear interactions among CT textural fragmentation, PET metabolic heterogeneity, tumor density distribution, and systemic clinical modulators. Exploratory transcriptomic pathway analysis (GSVA) revealed that high-risk predictions strongly correlate with systemic molecular dysregulation, such as the enrichment of immune-inflammatory signaling and metabolic stress pathways. The model achieved robust discrimination in external validation (AUC ≈ 0.77), performing comparably to state-of-the-art nomogram benchmarks. Crucially, the LANTERN-XGB framework demonstrated superior utility in handling diagnostic ambiguity; local force plots allowed for the correct reclassification of “borderline” predictions by visualizing feature interactions that standard linear models fail to capture.
LANTERN-XGB provides a validated, open-source framework that successfully balances predictive power with clinical transparency. By empowering clinicians to visualize and verify the logic behind AI predictions, this workflow offers a pragmatic path for integrating reliable multi-modal avatars into daily medical decision-making. Full article
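The hierarchical, importance-driven feature selection the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' pipeline: the actual workflow uses XGBoost models and SHAP values as the importance measure, whereas here a simple univariate rank-AUC serves as a dependency-free stand-in score, and all function and modality names are hypothetical.

```python
# Illustrative two-stage feature selection in the spirit of LANTERN-XGB.
# Assumption: a univariate AUC replaces SHAP importances so the sketch
# runs without xgboost/shap; the real pipeline scores features with SHAP.

def univariate_auc(feature, labels):
    """Rank-based AUC of one feature against binary labels (0/1)."""
    pos = [x for x, y in zip(feature, labels) if y == 1]
    neg = [x for x, y in zip(feature, labels) if y == 0]
    if not pos or not neg:
        return 0.5
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def select_features(modalities, labels, top_k=2):
    """Stage 1: score each feature within its modality (e.g., CT, PET,
    clinical). Stage 2: keep the top_k most discriminative features per
    modality as candidates for a downstream consensus model."""
    selected = {}
    for name, features in modalities.items():
        scored = sorted(
            features.items(),
            key=lambda kv: abs(univariate_auc(kv[1], labels) - 0.5),
            reverse=True,
        )
        selected[name] = [feat for feat, _ in scored[:top_k]]
    return selected
```

For example, a modality containing one perfectly discriminative feature and one constant feature would retain only the former when `top_k=1`.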
(This article belongs to the Special Issue Omics Science and Research in Human Health and Disease)
14 pages, 466 KB  
Review
Fidelity, Virtual Human Assistants, and Engagement in Immersive Virtual Learning Environments: The Role of Temporal Functional Fidelity
by Thomas Gaudi, Bill Kapralos and Alvaro Quevedo
Encyclopedia 2026, 6(4), 77; https://doi.org/10.3390/encyclopedia6040077 - 30 Mar 2026
Viewed by 352
Abstract
Advances in consumer virtual reality (VR) and artificial intelligence (AI) have accelerated the use of immersive virtual learning environments (iVLEs) for skills training. Learner engagement, a critical determinant of training effectiveness, can be shaped by VR system features (e.g., visual, auditory, and tactile immersion) together with interaction mechanics and instructional design, integrated with the instructional behaviors of virtual human assistants (VHAs). Although visual and behavioral fidelity in VHAs have been extensively studied, functional fidelity (i.e., the extent to which the iVLE and/or VHAs support the cognitive, perceptual, and motor processes required to perform a task, regardless of visual realism), and particularly the temporal alignment of instructional guidance with learners' cognitive and motor demands, remains underexamined. This article highlights research on VHAs in iVLEs with a special emphasis on temporal functional fidelity as an emerging requirement for synchronizing instructional support with user workload and task phases. By consolidating existing findings and highlighting gaps in current empirical work, this article outlines key implications for the design and evaluation of VHAs and identifies directions for future research aimed at optimizing instructional timing in iVLEs. The goal is to inform principled VHA design and clarify how fidelity dimensions should be integrated to support effective, pedagogically grounded immersive learning experiences. Full article
(This article belongs to the Section Mathematics & Computer Science)