Article

Matching Game Preferences Through Dialogical Large Language Models: A Perspective

1 Dionysian Economics Laboratory (LED), Université Paris 8, 93200 Saint-Denis, France
2 Université Paris Sciences et Lettres (PSL), 75006 Paris, France
3 Aix Marseille University, CNRS, LIS, 13007 Marseille, France
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(15), 8307; https://doi.org/10.3390/app15158307
Submission received: 26 June 2025 / Revised: 21 July 2025 / Accepted: 21 July 2025 / Published: 25 July 2025

Abstract

This perspective paper explores the future potential of “conversational intelligence” by examining how Large Language Models (LLMs) could be combined with GRAPHYP’s network system to better understand human conversations and preferences. Using recent research and case studies, we propose a conceptual framework that could make AI reasoning transparent and traceable, allowing humans to see and understand how AI reaches its conclusions. We present the conceptual perspective of “Matching Game Preferences through Dialogical Large Language Models (D-LLMs),” a proposed system that would allow multiple users to share their different preferences through structured conversations. This approach envisions personalizing LLMs by embedding individual user preferences directly into how the model makes decisions. The proposed D-LLM framework would require three main components: (1) reasoning processes that could analyze different search experiences and guide performance, (2) classification systems that would identify user preference patterns, and (3) dialogue approaches that could help humans resolve conflicting information. This perspective framework aims to create an interpretable AI system where users could examine, understand, and combine the different human preferences that influence AI responses, detected through GRAPHYP’s search experience networks. The goal of this perspective is to envision AI systems that would not only provide answers but also show users how those answers were reached, making artificial intelligence more transparent and trustworthy for human decision-making.

1. Introduction

This paper explores the perspective of integrating structured user preference data with Large Language Models (LLMs) to create more personalized and controllable AI systems. We develop a benchmarking framework that combines explicit user preferences stored in knowledge graphs with the conversational capabilities of LLMs, enabling human oversight of AI decision-making [1].
The integration of graph-based reasoning with LLMs has opened up new possibilities for human–AI interaction [2], particularly in developing personalized AI systems that can adapt to individual user preferences [3]. However, current approaches face significant challenges in capturing the nuanced and evolving nature of human preferences, especially in domains where categorizations are contested or context-dependent. In these contexts, system efficiency suffers: machine learning systems learn from human datasets through algorithms that build models from observed examples of human behavior or measurements encoded as feature vectors, yet comprehensive interpretation of those models remains challenging.
Contemporary LLMs suffer from two related problems that limit their effectiveness in personalized applications. First, they create an “illusion of understanding” when humans experience cognitive overload from AI-generated content that appears meaningful but lacks genuine comprehension [4]. Second, they produce an “illusion of learning” by generating superficial imitations of human reasoning, without capturing underlying cognitive processes. These limitations collectively contribute to an “illusion of thinking” [5], where sophisticated AI outputs mask fundamental gaps in understanding and reasoning.
For a perspective on overcoming some of those limitations, we propose a novel framework that combines graph-based preference modeling with conversational AI. Our approach builds on the premise that intelligence should be measured not by a system’s internal complexity, but by its ability to support optimal human decision-making through meaningful choices [6,7]. This perspective shifts focus from AI autonomy to human agency supported by an AI companion. Our proposal fits naturally within the framework of the numerous “human-in-the-loop” approaches.

1.1. The D-LLM Framework: A New Division of Work

The perspective of integrating human preference patterns expressed through GRAPHYP Knowledge Graphs with the human choice orientations captured by LLMs holds significant promise for creating AI systems that are better aligned with human values and decision-making processes [8].
This cross-domain approach exploits the complementary strengths of structured representations in knowledge graphs and the flexible natural language understanding of LLMs, ultimately establishing a regulatory framework where machine learning outcomes are shaped by explicit human preferences [9].
In this setting, the GRAPHYP Knowledge Graph encodes detailed, human-specific preference data, while the LLM interprets these preferences in light of contextual decision signals provided by user interactions, thereby supporting transparent and accountable AI decision-making [10].
Herein, we develop the perspective of “Dialogical Large Language Models” (D-LLMs), a framework that combines the GRAPHYP preference modeling system (see below) with traditional LLMs to create what we suggest calling “conversational intelligence”. This hybrid approach addresses two key perspectives:
  • Perspective 1: Studying complementarities
We study how differences in the range of human preferences—as expressed in a GRAPHYP Knowledge Graph—can interact with the human choice orientations embedded in Large Language Models, thus providing a control mechanism for machine learning under human oversight. We suggest articulating a conceptual framework that integrates structured user preference data with the patterns learned from training data, inherent to LLM outputs. This integration not only improves explainability and transparency but also creates a mechanism by which human values and decision policies can actively regulate machine learning workflows.
  • Perspective 2: Understanding a “Regulatory Framework”
Knowledge graphs (KGs) such as the GRAPHYP Knowledge Graph encode human preference manifolds by capturing entities, relationships, and contextual metadata that represent explicit human judgments and value systems [11]. Conversely, LLMs derive their capabilities from extensive training on vast corpora of natural language text, which embed a kind of “human choice orientation” based on aggregated linguistic patterns and implicit cultural norms [12]. When these two components are integrated, they create a promising “regulatory framework” where explicit human-curated preferences can be leveraged to guide and constrain the decision-making of LLMs, ultimately supporting machine learning regulation by humans [11].
The D-LLM framework operates through four core components: (1) a system architecture for integrating knowledge graphs with LLMs, (2) reasoning processes that incorporate user preferences, (3) standardized formats for preference data, and (4) tailored prompting approaches for specific use cases. Together, these components enable personalized AI interactions that users can understand and control, while preserving the natural conversational abilities of modern LLMs [13].

1.2. Motivations

Machine learning systems learn from human data through pattern recognition and generalization, but no single theory fully captures this process. Instead, our understanding relies on multiple interconnected approaches: statistical methods, probability-based inference, and the inherent assumptions built into different models. This multifaceted understanding continues to evolve as researchers work to build robust systems from imperfect human-generated data [14].
Current LLM personalization approaches face two primary limitations:
Limited Preference Representation: Existing methods struggle to capture the nuanced, context-dependent nature of human preferences, particularly in domains where users may hold conflicting or evolving views.
Opaque Decision Processes: Users cannot understand how their preferences influence system outputs, limiting both trust and the ability to refine personalization over time.
The GRAPHYP system (see Section 2.3 below for a short description) demonstrated effective preference modeling for individual users through interpretable subgraph representations [15,16,17]. Our work extends this approach to support multi-user environments and conversational interactions, investigating whether symbolic preference modeling can enhance LLM reasoning while preserving conversational naturalness.
This paper explores the perspective of coupling GRAPHYP’s “diversity from within” modeling capabilities with LLM conversational interfaces to create more transparent and user-controlled personalized AI systems. We present our research objectives and initial findings in developing this novel approach to human–AI interaction.

1.3. Research Objectives

This study investigates how coupling symbolic preference modeling (GRAPHYP) with Large Language Models could create more transparent and user-controlled personalized AI systems. We focus on four new tracks of understanding:
  • Transparency and Traceability: Enabling users to understand and trace AI reasoning processes.
  • Community-Based Personalization: Leveraging community knowledge for individual customization and creating a collaborative knowledge ecosystem that enhances individual user experiences.
  • Dynamic Adaptation: Supporting real-time preference updates and optimization.
  • Computational Efficiency: Achieving personalization without expensive model retraining.

1.4. Contributions

Our analysis makes three primary contributions to enlarging and clarifying the field of personalized conversational AI:
  • D-LLM Framework: A novel architecture that couples graph-based preference modeling with conversational AI while maintaining computational efficiency;
  • Transparent Personalization: Mechanisms that make AI decision-making processes interpretable and controllable by users;
  • Empirical Validation: Demonstration that hybrid symbolic–neural approaches can outperform standalone systems in personalization tasks.
These perspectives align with current research directions exploring how structured knowledge systems and large language models can collaboratively enhance AI reasoning capabilities and improve human–AI alignment in personalized applications.

2. Background

This study builds on three interconnected research areas: personalizing AI systems to individual preferences, combining Large Language Models with structured knowledge, and integrating symbolic reasoning with neural networks. These developments address a key problem in current AI systems: “Language models are aligned to emulate the collective voice of many, resulting in outputs that align with no one in particular” [6].

2.1. Personalization Challenges and Human Preference Modeling

Personalization in LLMs involves adapting system outputs to match individual or group preferences. This requires understanding the full complexity of human behavior—including cultural background, personal values, situational context, and how preferences change over time—rather than treating preferences as simple, static parameters.
Current human–AI teaming paradigms assume that unpredictable human preferences can be managed by identifying patterns that AI can copy. Advanced interaction systems where models “perform thinking based on contextual information” and “learn to select the appropriate thinking mode” [18] face emerging concerns about “illusions of thinking” [5].
This complexity becomes particularly evident when considering real-world applications where AI systems must balance individual preferences with social norms and legal requirements. Beyond risks of manipulation, personalization faces fundamental limitations from AI systems’ built-in constraints. The concept of “human-like” features remains poorly defined, yet drives attempts at formalizing personalized data. Recent research recognizes major gaps in how we model human behavior, including efforts to apply human legal frameworks to regulate AI agents [19].
As Peter et al. [20] note, personalized AI “comes with the promise to make computing accessible by enabling interaction with computers as if with a fellow human” while carrying “obvious danger that any such impersonation opens the door for highly effective manipulation at scale”.

2.2. Recent Advances in Graph-LLM Hybrid Systems

Recent research has witnessed a paradigm shift toward systems that can tailor interactions to the nuanced preferences and contexts of individual users. Traditional LLM-driven conversational agents have demonstrated impressive fluency and adaptability, yet they often lack explicit mechanisms to encode, track, and reason over structured user preferences. A promising solution to this challenge is found in hybrid architectures that combine the strengths of LLMs with graph-based representations of user and domain knowledge. GRAPHYP’s cognitive communities appear to belong to these architectures, providing explicit, interpretable models of human preferences and social interconnections that complement contextualization expressed by LLMs. Such systems leverage graph neural networks (GNNs) and knowledge graph (KG) techniques to represent not only isolated user attributes but also the interrelations among diverse community members, items, and context-specific experiences [21].

2.3. GRAPHYP Architecture and Differential Personalization

2.3.1. GRAPHYP’s Contrasting Approach

The GRAPHYP Project (2019–2025) [15,16,17] developed methods to capture and represent the differences in how people express preferences on the same concept. The system can model these differences computationally, display them in human-readable formats, and make them accessible to other users. In practice, GRAPHYP analyzes search behavior to track how people approach queries differently, measuring three key dimensions: intensity (how much attention), variety (how many different aspects), and attention (what they focus on) (for more details, refer to ‘Design of SKG GRAPHYP’ in Annex A of [16]). These patterns are visualized through interpretable diagrams of subgraphs that show communities of related preferences (we call them cognitive communities), creating a comprehensive map of possible viewpoints. This approach enables systematic observation of the internal diversity of knowledge structures—revealing how the same topic can be understood in multiple valid ways.
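As a rough illustration, a per-query search trace and its aggregation into the three dimensions might be sketched as follows. The field names, the proxies chosen for each dimension, and the simple averaging are assumptions for illustration, not GRAPHYP's actual data model:

```python
from dataclasses import dataclass

@dataclass
class SearchSession:
    """One user's trace for a single query (hypothetical record format)."""
    query: str
    documents_opened: int   # proxy for intensity: how much attention overall
    distinct_facets: int    # proxy for variety: how many different aspects
    dwell_focus: float      # proxy for attention: share of time on the main facet

def preference_vector(sessions):
    """Aggregate the three GRAPHYP dimensions over a set of sessions."""
    n = len(sessions)
    intensity = sum(s.documents_opened for s in sessions) / n
    variety = sum(s.distinct_facets for s in sessions) / n
    attention = sum(s.dwell_focus for s in sessions) / n
    return intensity, variety, attention
```

Vectors of this kind could then be clustered into the subgraph diagrams of cognitive communities described above.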
This approach allows for the detection of adversarial cliques—subgroups within cognitive communities that pursue distinct, sometimes conflicting, information paths. “Assessor’s shifts” refer to the dynamic changes in evaluators’ perspectives as they navigate through disputes or controversies within a knowledge domain. Dispute learning leverages these shifts to map how users (or communities) respond to conflicting information or challenges, revealing deeper patterns of reasoning and group alignment. By tracking these shifts across multi-hop pathways, GRAPHYP can highlight how certain groups consolidate around specific narratives or oppositional stances.
Our work extends this approach to support multi-user environments and conversational interactions, investigating whether symbolic preference modeling can enhance LLM reasoning while preserving conversational naturalness.
GRAPHYP differs from automated modeling approaches. Rather than pursuing increasingly refined automation, it focuses on directly modeling human preferences across use cases, then providing users with traceable choice mechanisms based on predecessor selections and behaviors.
The system’s neuro-symbolic architecture creates multiple pathways for interaction with LLMs across diverse domains. Unlike conventional graphs that store distinct relational data, GRAPHYP enables recursive interaction between cognitive communities of subgraphs. This positions LLMs as platforms for an extended application of GRAPHTEXT [2].

2.3.2. Differential Personalization Framework

Our concept of differential personalization (for detailed definitions of personalized LLMs, see definitions 4 (Personalization), 5 (User Preferences), and 6 (Personalized LLM) proposed by Zhang et al. [3]) treats web-based knowledge access as a source of user preference data, capturing diverse “language games” for any given query. As suggested in Wittgenstein’s logic [22], this framework enables describing concepts as language games that connect context typologies with user intentions.
We examine ‘diversity from within’ (manifold of contexts and motivations related to a unique query choice) present in individual queries and introduce a framework for transcribing and recombining human “preferences” during epistemic alignment in knowledge acquisition [23]. This perspective suggests a new human–AI teaming instance that reveals how people develop nuanced preferences about identical objects and express meaningful choices during knowledge exchange.

2.4. Intelligence Enhancement and Research Contribution

Reasoning operations aim to develop system intelligence, following François Chollet’s definition: “The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty” [7]. Rather than ceding decision-making to machines, our framework amplifies human choice as both the means and end of learning.
Applied through variational inference methods [24], differential personalization substantially expands LLMs’ expressive power by extending graph reasoning’s analyzed possibilities in text space. This represents a paradigmatic shift: instead of replacing human judgment, the system enhances human agency in knowledge acquisition and decision-making by examining the map of actual human preferences observable through the choice of a single item, and by supporting reasoning about its origin and destination.
Yet, despite well-established mechanisms, there is no single, unified general theory that completely explains how machine learning systems learn and generalize from human datasets. Instead, multiple theoretical frameworks coexist, such as statistical learning theory, Bayesian approaches, and deep learning models incorporating millions of parameters and complex, non-linear transformations. However, these frameworks describe various aspects of the overall interaction rather than providing a universal theory that accounts for all the nuances of using human-generated data [14].

Research Gap and Contribution

Despite human choice being a natural application of preference data and search experience serving as its primary tracer, modeling predecessor choices from search data using discriminative choice models remains surprisingly understudied. Our GRAPHYP research program in ‘creative search’ applications [25] addresses this gap by studying knowledge structures that host large arrays of human preferences and creating traceable, expandable interactions across application domains.
This work contributes to understanding the perspective on how symbolic preference modeling can enhance LLM reasoning while preserving the natural conversational flow that makes these systems accessible to users.

3. Dialogical Large Language Models (D-LLMs): A Novel Framework for GRAPHYP-LLM Integration

Exploring the integration of external structured information, primarily from knowledge graphs, into dialogue systems is currently attracting considerable attention. Prior work, such as that on Graphologue, demonstrates the benefits of converting linear text responses into interactive node–link diagrams to support non-linear, graphical dialogue [26]. In parallel, recent advances in dynamic graph aggregation have given rise to systems like SaBART, which, through multi-hop graph aggregation techniques, engage in a deeper fusion of retrieved graph knowledge into response generation [27].
The D-LLM perspective proposes a new step in that direction. GRAPHYP represents a further evolution of this research line, in which the language model is intimately involved in the graph message-passing process: GRAPHYP leverages hierarchical aggregation strategies and eliminates the traditional representation gap between structured graph information and the unstructured text generated by LLMs [28]. The D-LLM perspective aims not only to improve response informativeness and relevance but also to support a more dialogue-centric interaction mode, where each conversational turn is contextually enriched by the graph’s semantic structure.

3.1. Theoretical Framework of D-LLM

3.1.1. Conceptual Foundation

Dialogical Large Language Models (D-LLMs) initiate the perspective of a paradigmatic shift in AI architecture by coupling GRAPHYP’s structured graph-based reasoning with the natural language capabilities of Large Language Models. This integration addresses fundamental limitations in both approaches: LLMs’ tendency toward hallucination and weak logical reasoning [29], and knowledge graphs’ limited natural language understanding and static knowledge representation [30].
The theoretical foundation of D-LLM rests on three core principles:
Dialogical Intelligence: Moving beyond simple query–response interactions to sustained, contextual dialogues where the system maintains coherent reasoning across multiple conversational turns. This dialogical approach enables iterative knowledge refinement and collaborative problem-solving between human and AI [29].
Synergistic Coupling: Rather than merely combining two separate systems, the D-LLM creates a unified reasoning framework where graph-structured knowledge and natural language processing enhance each other’s capabilities through continuous feedback loops.
Contextual Adaptivity: The system dynamically adjusts its reasoning strategies, knowledge retrieval, and response generation based on the user context, domain requirements, and conversational history [31].

3.1.2. An Innovative Concept of Language Games as Foundational Theory

Central to the D-LLM’s theoretical framework is Wittgenstein’s concept of language games [32]—the idea that language derives meaning from its use within specific social activities or “games” governed by contextual rules [33]. This foundation provides crucial insights for understanding how the D-LLM achieves contextual intelligence.
  • Language Games and Meaning-in-Use
Wittgenstein argued that words and sentences gain meaning only within particular language games—specific forms of language use embedded in social practices and activities, each with its own rules and purposes (e.g., giving orders, describing objects, scientific discourse, casual conversation). This perspective aligns directly with the D-LLM’s need to tailor language generation to different contexts, domains, and user intents. An earlier formalization of this idea can be found in the study of slang languages considered as “langues spéciales” [34].
In the D-LLM framework, GRAPHYP functions as a contextual mediator that identifies and models different “language games” by capturing the following:
  • Contextual parameters: Domain-specific vocabulary, discourse patterns, and communication norms;
  • User preferences: Individual communication styles, expertise levels, and interaction goals;
  • Social backgrounds: Professional contexts, cultural considerations, and community-specific language practices.
  • Computational Modeling of Language Games
The D-LLM develops the perspective of operationalizing those insights through computational mechanisms:
Game Recognition Perspective: The system identifies, from the value of the three parameters of preferences recorded in GRAPHYP (Intensity, Variety, Attention), which “language game” a user is engaged in (technical consultation, educational dialogue, creative collaboration) and adjusts its linguistic behavior accordingly.
Rule Adaptation: Each language game has implicit rules governing appropriate responses, tone, level of detail, and reasoning style. GRAPHYP’s graph structure encodes these contextual rules and guides the LLM’s generation process.
Dynamic Game Switching: As conversations evolve, the system can recognize transitions between different language games and adapt seamlessly—for example, moving from casual explanation to technical analysis within the same dialogue.
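The game recognition and rule adaptation steps above can be sketched as a small classifier over the three GRAPHYP preference parameters. The thresholds, game labels, and rule tables below are invented for illustration; in practice these values would be derived from GRAPHYP's cognitive communities rather than hard-coded:

```python
def recognize_game(intensity, variety, attention):
    """Map (Intensity, Variety, Attention) to a language game.
    Thresholds are illustrative assumptions only."""
    if attention > 0.8 and variety < 2:
        return "technical consultation"   # deep focus on a single facet
    if variety >= 4:
        return "creative collaboration"   # broad exploration of many facets
    return "educational dialogue"         # moderate breadth and focus

def adapt_rules(game):
    """Each game carries implicit generation rules (tone, level of detail)."""
    return {
        "technical consultation": {"tone": "precise", "detail": "high"},
        "educational dialogue":   {"tone": "didactic", "detail": "medium"},
        "creative collaboration": {"tone": "open", "detail": "low"},
    }[game]
```

Dynamic game switching would then amount to re-running `recognize_game` as the conversation's parameter values evolve and swapping the active rule set accordingly.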

3.1.3. Core Coupling Principles

The following four principles guide the D-LLM framework design. They are based on the premise that hybrid graph–LLM systems integrate two complementary modalities: LLMs provide text understanding, language generation, and context adaptation through pre-training on diverse corpora, while knowledge graphs offer structured, interpretable representations of entities and relationships. In such systems, “cognitive communities” refer to clusters or networks within the graph structure that capture semantic, social, and relational connections among users, items, and contextual factors.
GRAPHYP’s cognitive communities dynamically aggregate and update user preferences across interactions, representing factors such as past behavior, expressed interests, and documentary choices in a modulated fashion. Integration with LLM reasoning capabilities enables conversational AI to generate responses that are contextually rich and anchored in an explicit, continuously updated user profile model [11].
The four coupling principles are as follows:
  • Principle 1: Interactive Reasoning Loops.
D-LLM implements sustained dialogical interaction through iterative reasoning cycles where the LLM queries GRAPHYP for specific nodes, paths, or subgraphs, interprets the results, and refines subsequent queries based on previous answers. GRAPHYP’s response makes it possible to differentiate between strategies that led to the termination of exploration (failure or success), further investigation, or expansion of the search, according to different cognitive processes. This enables complex, multi-step reasoning tasks such as tracing relationships across several hops or synthesizing information from disparate parts of the knowledge graph. Advanced frameworks (e.g., Tree-of-Traversals [35], GraphOTTER [36]) empower the LLM to select discrete graph actions at each reasoning step.
  • Principle 2: Dynamic Context Management
In multi-turn conversations, the system maintains rich context about previous queries, answers, and reasoning paths. This contextual awareness enables follow-up questions, clarifications, and deeper exploration of the knowledge graph while preserving conversational coherence.
  • Principle 3: Transparent Reasoning Pathways
Unlike black-box AI systems, D-LLM constructs explicit, interpretable reasoning traces. The LLM selects discrete graph actions (such as VisitNode, GetSharedNeighbours, or AnswerQuestion) at each reasoning step, creating clear audit trails essential for transparency and explainability.
  • Principle 4: Grounded Inference
By anchoring each reasoning step in the actual graph structure, D-LLM reduces hallucinations and ensures factual accuracy. This grounding is particularly crucial for multi-hop queries and knowledge-intensive tasks where precision is paramount.

3.1.4. Dialogical vs. Traditional Approaches: D-LLM Perspective

Traditional AI systems typically operate through isolated query–response cycles with limited context retention. D-LLM’s dialogical approach enables the following:
Conversational Memory: The system builds comprehensive models of ongoing dialogues, tracking not just facts exchanged but reasoning patterns, user preferences, and evolving understanding.
Collaborative Discovery: Rather than simply retrieving pre-existing knowledge, D-LLM engages in collaborative knowledge construction, helping users explore ideas, test hypotheses, and develop insights through sustained interaction.
Adaptive Expertise: The system adjusts its level of explanation, terminology, and reasoning depth based on demonstrated user expertise and feedback, creating truly personalized learning experiences.
Figure 1 sums up the characteristics that we propose for the D-LLM:
A prospective example of D-LLM, in Appendix A, illustrates potential dialogical interactions between GRAPHYP and an LLM, while the GRAPHYP perspective on how to resolve scientific disputes with LLMs is illustrated in Appendix B.

3.1.5. Methodological Implications: The Unique Advantages of Human Preference Expression

The D-LLM framework offers several important advantages through its integration of GRAPHYP’s cognitive communities:
  • Nuanced Representation of Complex Preferences. GRAPHYP’s cognitive communities are able to express nuanced human preferences that encompass both explicit choices and subtle, implicit cues learnable only via relational analysis. Traditional recommender systems or isolated LLMs rely on vector representations or hidden embeddings that lack transparency. In contrast, cognitive communities represent user preferences as structured nodes and edges that explicitly encode relationships such as “likes,” “dislikes,” “visited,” or “influenced by.” This approach enables complex multi-hop relations among diverse data points while allowing community detection that reveals shared interests and collective biases among groups of users.
  • Enhanced Multi-hop Reasoning and Context Aggregation. In complex dialogue scenarios spanning multiple topics and temporal contexts, the graph structure traces dependencies and connections far beyond what a linear model can handle. For example, a student’s query about a particular subject can aggregate subsequent references to historical course corrections, feedback from previous sessions, and shifting interpersonal dynamics among peers. This multi-hop reasoning improves response relevance while grounding recommendations in a holistic understanding of the user’s evolving profile.
  • Transparency and Explainability. Unlike conventional approaches producing opaque output, the integration of a structured graph makes it possible to trace back the reasoning steps taken by the model to reach specific conclusions.
  • Collaborative and Community-driven Personalization. Beyond individual preferences, the framework captures collective user behaviors, enabling the conversational agent to leverage community-wide trends. Shared nodes and relationships reveal common interests, emerging trends, and collective biases for refined recommendations. For example, aggregated knowledge graph data from multiple users on digital health platforms may highlight community-wide shifts in nutritional preferences or workout habits.
  • Ethical Considerations and Privacy Preservation. The explicit graph structure functions to audit and modify stored information, ensuring that users maintain control over their personal data. Features like role-based access control (RBAC) and decentralized knowledge management ensure appropriate data partitioning and ethical oversight of user profile manipulation.
These advantages enable the D-LLM to achieve three key capabilities:
Beyond Information Retrieval: The framework transcends traditional information retrieval by enabling genuine knowledge co-construction through dialogue. Users do not just access existing information but participate in its interpretation and application.
Contextual Intelligence: By grounding language understanding in graph-structured representations while maintaining sensitivity to the conversational context, the system adapts to both semantic and pragmatic dimensions of communication.
Human–AI Collaboration: The approach positions AI not as a replacement for human reasoning but as a sophisticated cognitive tool that amplifies human capabilities while preserving human agency and choice.
This foundation provides the D-LLM with a novel perspective on human–AI interaction, combining the precision of structured representation with the flexibility and naturalness of conversational AI.

3.2. Technical Architecture and Coupling Mechanisms

3.2.1. Core Integration Framework

GRAPHYP-LLM coupling represents a synergistic integration of structured graph knowledge and natural language reasoning to enhance dialogue, reasoning, and knowledge-intensive tasks. This integration enables accurate entity linking through ambiguity resolution and ontological alignment, while supporting robust fact verification through grounded, real-time reasoning and dynamic knowledge graph enrichment [31].

3.2.2. Dialogical Mechanisms

The system offers the potential to incorporate four key dialogical mechanisms for conversational intelligence:
Interactive Reasoning Loops: The LLM iteratively queries GRAPHYP for specific nodes, paths, or subgraphs, refining queries based on previous responses to handle complex, multi-step reasoning tasks across multiple graph hops.
Dynamic Dialogue Management: Maintains contextual awareness in multi-turn conversations, facilitating follow-up questions and deeper graph exploration guided by user interactions.
Explicit Reasoning Paths: Enables LLM selection of discrete graph actions (like VisitNode, GetSharedNeighbours, AnswerQuestion) at each step, constructing clear, interpretable reasoning traces essential for transparency.
Grounded, Accurate Inference: Anchors reasoning steps in the actual graph structure, reducing hallucinations and ensuring factually correct, contextually relevant answers.
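The explicit reasoning-path mechanism can be illustrated with a minimal sketch. The action names (VisitNode, GetSharedNeighbours, AnswerQuestion) come from the text above; the toy graph, node labels, and traversal policy are our own illustrative assumptions, not GRAPHYP's actual implementation.

```python
# Hypothetical sketch of an explicit reasoning loop over discrete graph
# actions; the graph content below is invented for illustration.
GRAPH = {
    "GRAPHYP": ["search_experience", "adversarial_cliques"],
    "search_experience": ["GRAPHYP", "user_preferences"],
    "adversarial_cliques": ["GRAPHYP"],
    "user_preferences": ["search_experience"],
}

def visit_node(node):
    # VisitNode action: expose a node's neighbours
    return GRAPH[node]

def get_shared_neighbours(a, b):
    # GetSharedNeighbours action: intersection of two neighbourhoods
    return sorted(set(GRAPH[a]) & set(GRAPH[b]))

def reasoning_loop(start, target, max_hops=4):
    """Traverse the graph one discrete action at a time, recording an
    interpretable trace of every step taken."""
    trace, frontier, seen = [], [start], {start}
    for _ in range(max_hops):
        next_frontier = []
        for node in frontier:
            trace.append(("VisitNode", node))
            if node == target:
                trace.append(("AnswerQuestion", node))
                return trace
            for nb in visit_node(node):
                if nb not in seen:
                    seen.add(nb)
                    next_frontier.append(nb)
        frontier = next_frontier
    return trace

trace = reasoning_loop("GRAPHYP", "user_preferences")
```

Because every step is appended to `trace`, the full reasoning path can be shown to the user verbatim, which is precisely the transparency property the mechanism targets.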
Core coupling capabilities are summarized in Table 1.

3.2.3. Graph-Enhanced Reasoning

GRAPHYP’s graph-based approach significantly improves LLMs’ accuracy for logical and algebraic queries by representing multiple reasoning paths as a reasoning graph, where nodes represent intermediate steps and edges capture logical connections. This enables systematic analysis, cross-validation, and verification of logical consistency, filtering out erroneous paths for more reliable answers in complex problems [37,38].
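The cross-validation idea can be sketched as follows. This is an illustrative construction, not GRAPHYP's actual algorithm: several candidate reasoning paths each terminate in an answer, and paths whose answers disagree with the consensus are filtered out.

```python
from collections import Counter

# Each path is (list of intermediate steps, final answer); the content is
# invented for illustration.
paths = [
    (["decompose", "solve_x", "substitute"], 42),
    (["decompose", "solve_x", "substitute"], 42),
    (["guess", "substitute"], 7),                 # erroneous path
    (["decompose", "solve_y", "substitute"], 42),
]

def cross_validate(paths):
    """Keep only the paths whose answer matches the majority answer."""
    answers = Counter(answer for _, answer in paths)
    best, _ = answers.most_common(1)[0]
    kept = [p for p in paths if p[1] == best]
    return best, kept

answer, kept = cross_validate(paths)
```

Real systems would weight paths by logical consistency rather than a simple majority vote, but the filtering principle is the same.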

3.2.4. Hybrid Architectural Benefits

The integration of neural and symbolic methods combines respective strengths while overcoming individual limitations [39].
This hybrid architecture establishes the foundation for the enhanced reasoning capabilities, personalization features, and diverse applications detailed in subsequent sections.

3.3. Enhanced Reasoning Capabilities

3.3.1. Core Reasoning Framework

Coupling an LLM with GRAPHYP leverages graph-structured reasoning to overcome fundamental LLM limitations in logic, multi-step inference, and ambiguity resolution [40]. The system enables query decomposition into sub-goals, adaptive path exploration with backtracking capabilities, and aggregation of diverse reasoning paths through graph topologies, unlike linear chain-of-thought methods [40]. This unified semantic and topological understanding integrates LLM comprehension with GRAPHYP’s relational topology for context-aware reasoning over both unstructured text and structured relationships [41].

3.3.2. Fractal Geometric Applications

Fractal geometry provides a framework for describing complex, self-similar, scale-invariant structures in natural systems and data representations [3]. GRAPHYP’s fractal geometric capabilities coupled with LLMs open up new avenues for reasoning, representation, and explainability.
Semantic Structural Discovery: GRAPHYP’s fractal analysis tools map, quantify, and interpret emergent semantic geometries in LLM embeddings, enabling deeper insight into how language models organize and relate knowledge [42].
Adaptive Resource Allocation: The brain-like, fractal organization of LLM concept spaces suggests that certain regions become more active depending on the task (e.g., mathematical reasoning vs. narrative generation). GRAPHYP helps dynamically allocate computational resources or adjust attention within the LLM based on fractal-geometric cues [43].
Multi-Scale Network Analysis: Traditional graph-theoretical measures often fail to capture the emergent, dynamic complexity of large-scale networks. Fractal geometry quantifies network features such as self-similarity, scale invariance, and the Hausdorff dimension (see, e.g., https://www.numberanalytics.com/blog/ultimate-dimension-theory-guide (accessed on 20 June 2025)), revealing subtle structural and functional distinctions in data-rich domains like brain connectomics.
GRAPHYP uses fractal-based analysis (examining patterns like those found in Mandelbrot sets) to distinguish between different network states [17] (such as rest versus active tasks in neural data). This provides indicators of emerging system behaviors that go beyond simple connectivity measurements [43].
In summary, incorporating fractal geometric analysis into the D-LLM creates a flexible and interpretable framework that can analyze patterns at multiple levels, supporting more advanced AI reasoning [17] and knowledge representation (as summarized in Table 2).
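To make the Hausdorff-style dimension estimate concrete, the following minimal box-counting sketch estimates the fractal dimension of a point set. The data here (points scattered on a line segment) are synthetic, so the expected estimate is close to 1; applying the same procedure to a graph embedding or connectome would follow the same recipe at larger scale.

```python
import numpy as np

rng = np.random.default_rng(0)
# a 1-dimensional set embedded in 2-D: 20,000 points on the segment y = 0
points = np.column_stack([rng.random(20000), np.zeros(20000)])

def box_counting_dimension(points, sizes=(0.1, 0.05, 0.025, 0.0125)):
    counts = []
    for eps in sizes:
        # count the distinct boxes of side eps that contain at least one point
        boxes = set(map(tuple, np.floor(points / eps).astype(int)))
        counts.append(len(boxes))
    # the slope of log(count) versus log(1/eps) estimates the dimension
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope

dim = box_counting_dimension(points)
```

Self-similar structures yield a non-integer slope, which is the signature of fractality that the multi-scale network analysis above exploits.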

3.3.3. Hybrid Reasoning Perspective Framework

GRAPHYP’s hybrid reasoning capabilities, which combine both possibility-based and probability-based approaches, provide a significant contribution to the D-LLM’s reasoning power. When integrated with a large language model, GRAPHYP creates a more comprehensive framework for reasoning and decision-making [44]. This integration substantially expands system capabilities, addressing individual limitations of each component and enabling new possibilities for advanced AI reasoning [45].
  • Complementary Reasoning Approaches
GRAPHYP uses possibility-based reasoning to handle uncertainty and incomplete information through possibility measures and gap patterns, while probability-based reasoning manages uncertainty through statistical inference [17]. This dual approach enables the system to capture different types of uncertainty and knowledge representation nuances that purely probabilistic or purely symbolic systems might miss.
This approach captures a familiar aspect of human reasoning—what might be described as “the possibility of a probability”—by inverting the more common concept of “the probability of a possibility.”
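The contrast between the two uncertainty calculi can be shown in a toy example. This is our own construction, not GRAPHYP's formalism: probability is additive over alternatives, while possibility theory uses max-normalised measures and pairs each possibility with a dual necessity bound.

```python
# Two hypotheses with both a probability and a possibility assignment;
# the numbers are illustrative.
prob = {"hypothesis_A": 0.7, "hypothesis_B": 0.3}   # sums to 1 (additive)
poss = {"hypothesis_A": 1.0, "hypothesis_B": 0.6}   # max-normalised to 1

def probability_of(event):
    return sum(prob[h] for h in event)

def possibility_of(event):
    return max(poss[h] for h in event)

def necessity_of(event):
    # necessity = 1 - possibility of the complement
    complement = set(poss) - set(event)
    return 1.0 - (max(poss[h] for h in complement) if complement else 0.0)
```

A hybrid reasoner can report both numbers: "B has probability 0.3" (statistical inference) and "B is possible to degree 0.6 but necessary only to degree 0" (gap-pattern reasoning over incomplete knowledge).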
  • Enhanced Expressiveness and Flexibility
By combining possibility-based and probability-based reasoning, GRAPHYP can represent and reason about knowledge that is both uncertain and partially known, supporting more flexible inference. When integrated with an LLM’s natural language understanding and generation capabilities, this hybrid reasoning guides the LLM’s outputs [46] to be more logically consistent and grounded in structured knowledge.
Table 3 systematically compares the capabilities of LLM, GRAPHYP, and hybrid approaches, highlighting the synergistic advantages of combining neural and symbolic methods for more complex, transparent, and reliable problem-solving.
Key Performance Enhancements:
  • Reduced Hallucinations: By design, the D-LLM aims to reduce hallucinations compared to pure LLM approaches, through structured grounding;
  • Enhanced Multi-hop Reasoning: Graph traversal combined with LLM semantic understanding enables superior performance in complex, multi-step queries;
  • Improved Factual Consistency: Integration achieves enhanced factual consistency by grounding responses in up-to-date, structured knowledge;
  • Superior Uncertainty Handling: The hybrid approach manages both epistemic and aleatory uncertainty more effectively than traditional methods.

3.4. Personalization and Preference Modeling

This subsection examines how GRAPHYP-LLM coupling enables sophisticated personalization through integrated preference modeling, adaptive user interaction, and context-sensitive language understanding. The D-LLM framework transforms static AI interactions into dynamic, personalized experiences by leveraging graph-based reasoning, language game theory, and advanced sampling techniques.

3.4.1. Differential Personalization Architecture

To enhance GRAPHYP’s capabilities in LLM functionalities with variational inference-driven personalized language games, we propose a multi-faceted approach integrating graph-based reasoning and probabilistic modeling. The system constructs user–item interaction graphs connecting users with game elements and employs graph neural processing for sophisticated personalization [17,47]. Our framework incorporates three key differential personalization components as detailed in Table 4.

3.4.2. Personalized PageRank Sampling

Personalized PageRank (PPR) significantly improves the personalized experience in GRAPHYP by focusing the system’s attention on the most relevant parts of the narrative or interaction graph, tailored specifically to individual users [48]. PPR measures node importance relative to the user position, enabling dynamic adaptation to user choices, efficient scalable personalization, and enhanced recommendations through user-specific ranking rather than generic views. Table 5 summarizes the key benefits of PPR sampling in GRAPHYP.
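A minimal power-iteration sketch of PPR illustrates the mechanism: the random walk restarts at the user's own node with probability 1 - alpha, so nodes near the user accumulate more score than distant ones. The interaction graph and node names below are invented for illustration.

```python
EDGES = [
    ("alice", "grammar_drills"), ("alice", "sci_fi_stories"),
    ("grammar_drills", "business_emails"),
    ("sci_fi_stories", "creative_writing"),
]

def personalized_pagerank(edges, user, alpha=0.85, iters=100):
    """Personalized PageRank on an undirected graph by power iteration."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    rank = {n: (1.0 if n == user else 0.0) for n in adj}
    for _ in range(iters):
        # restart mass (1 - alpha) always returns to the user's node
        nxt = {n: (1 - alpha) * (1.0 if n == user else 0.0) for n in adj}
        for n, score in rank.items():
            share = alpha * score / len(adj[n])
            for nb in adj[n]:
                nxt[nb] += share
        rank = nxt
    return rank

rank = personalized_pagerank(EDGES, "alice")
```

In this toy graph, items one hop from the user outrank items two hops away, which is exactly the user-specific (rather than generic) ranking behaviour described above.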

3.4.3. Language Game Integration

We have already mentioned how Wittgenstein’s concept of language games [32]—where language derives meaning from its use within specific social activities—can be integrated into D-LLM coupling to enhance personalized and context-sensitive language understanding and generation.
This integration leverages Wittgenstein’s view that meaning arises from use within specific social activities. Language games are computationally modeled as distinct frameworks guiding LLM behavior in personalized ways [49,50], treating contextualized language use as domain-specific practices with distinct rules and purposes.

3.4.4. Multiverse Pathway Modeling

GRAPHYP’s multiverse pathway modeling [16] creates geometric graphs mapping all possible learning trajectories, enabling the following:
  • Dynamic Difficulty Adjustment: Analyzes action sequences against optimal solution graphs;
  • Personalized Hint Systems: Identifies deviation points from successful pathways;
  • Branching Narrative Generation: Uses adversarial clique detection in choice patterns [51];
  • Adaptive Evolution: Employs reinforcement learning to modify the graph structure based on user behaviors.
Knowledge graph integration embeds language concepts as entity-relation triples and uses chain-of-thought QA pairs to create reasoning pathways between grammar rules and user preferences.
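The triple-based reasoning pathway can be sketched as a small depth-first chain over entity-relation triples. The triples, relation names, and the link from a grammar rule to a user preference are illustrative assumptions.

```python
# Language concepts embedded as (head, relation, tail) triples.
TRIPLES = [
    ("past_tense", "is_a", "grammar_rule"),
    ("past_tense", "practised_in", "story_writing"),
    ("story_writing", "preferred_by", "alice"),
]

def reasoning_path(start, goal, triples, path=None):
    """Depth-first chain of triples linking a grammar rule to a user
    preference; returns the chain, or None if no pathway exists."""
    path = path or []
    for head, rel, tail in triples:
        if head == start and (head, rel, tail) not in path:
            new_path = path + [(head, rel, tail)]
            if tail == goal:
                return new_path
            found = reasoning_path(tail, goal, triples, new_path)
            if found:
                return found
    return None

path = reasoning_path("past_tense", "alice", TRIPLES)
```

The returned chain is itself a human-readable explanation ("past_tense is practised in story_writing, which is preferred by alice"), mirroring the chain-of-thought QA pairs mentioned above.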

3.4.5. Variational Personalization Framework

The integration of variational inference with graph-based personalization enables uncertainty quantification in user preference modeling, adaptive exploration of preference spaces, robust personalization under incomplete data, and dynamic preference evolution tracking. Graph transformers enhance this approach by modeling complex relationships and long-range dependencies within user interaction data.
Personalized Content Generation augments LLM prompts with GRAPHYP contributions [52,53]:
  • Subgraph embeddings of user knowledge state;
  • Path analysis from current skill node to target competencies;
  • Historical comparison vectors from similar learners.
Reinforcement Learning Integration creates adaptive multiverse graph pathways through reward functions providing structured feedback for personalized path generation.
Feedback Loop Architecture:
  • User preferences inform graph structure modifications;
  • Graph modifications influence LLM prompt construction;
  • LLM outputs are evaluated against user satisfaction metrics;
  • Satisfaction metrics update preference models in the graph.
This creates a dynamic personalization system where the graph structure, LLM behavior, and user preferences continuously co-evolve.
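The four-stage feedback loop above can be sketched schematically. The update rules below are deliberately simplistic placeholders for the graph-modification and preference-update steps; only the loop structure reflects the architecture described.

```python
def feedback_cycle(graph_weights, preferences, satisfaction):
    # 1. user preferences inform graph structure modifications
    for topic, pref in preferences.items():
        graph_weights[topic] = graph_weights.get(topic, 0.0) + 0.1 * pref
    # 2. graph modifications influence LLM prompt construction
    top = max(graph_weights, key=graph_weights.get)
    prompt = f"Focus the next answer on: {top}"
    # 3. LLM outputs are evaluated against user satisfaction metrics
    # 4. satisfaction metrics update preference models in the graph
    preferences[top] = preferences.get(top, 0.0) + (satisfaction - 0.5)
    return prompt, graph_weights, preferences

prompt, weights, prefs = feedback_cycle(
    {"grammar": 0.2}, {"grammar": 0.9, "vocab": 0.4}, satisfaction=0.8
)
```

Running the cycle repeatedly is what lets graph structure, prompt construction, and preference models co-evolve rather than being fixed at design time.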

3.5. Applications and Use Cases

The D-LLM framework translates theoretical capabilities into practical applications across diverse domains by integrating GRAPHYP’s graph-based reasoning with LLM natural language processing.

3.5.1. Scientific Research and Knowledge Discovery

Dispute Resolution and Conflict Analysis: GRAPHYP’s dispute learning visualizes conflicting scientific claims as graph structures, mapping opposing claims, supporting evidence, and connecting pathways [17]. This enables the D-LLM to represent active conflicts through broader contextual reasoning [54], presenting not just consensus knowledge but the full spectrum of perspectives and controversies within fields [15]. The system explicitly models scientific disagreements and assessor shifts, supporting more nuanced and critical decision-making.
Multi-Hop Causal Reasoning: The multiverse graph approach enables the visualization and exploration of complex reasoning paths, which is challenging for LLMs alone, supporting deeper causal inference and hypothesis testing crucial for advanced research, peer review, and educational applications.

3.5.2. Personalized Learning and Education

Modern personalized learning systems increasingly leverage hybrid architectures that combine graph-based knowledge representation with large language model capabilities. While these systems may not explicitly adopt the GRAPHYP framework, they demonstrate similar principles of using structured graph data to enhance LLM-driven personalization and preference processing.
Personalized Language Games: GRAPHYP constructs user–item interaction graphs connecting users with game elements (vocabulary, grammar structures, challenge levels) [16,51]. The system enables dynamic difficulty adjustment through action sequence analysis, personalized hint systems identifying deviation points from successful pathways, and branching narrative generation through adversarial clique detection in player choice patterns. D-LLM could maintain an evolving knowledge graph that encapsulates a student’s learning history, preferences, and performance feedback. Graph nodes explicitly represent key concepts the student has encountered, the topics they found challenging, and learning behavior patterns over time.
Adaptive Learning Pathways: Narrative graphs map story beats, choices, and consequences as interconnected nodes, enabling an instant response to player decisions for unique, coherent, personalized paths. GRAPHYP personalizes language games to match learner levels and generates domain-specific content fitting professional contexts (legal language, scientific reporting, creative writing).
Graph-to-Text Translation via Soft Prompting: The GraphTranslator model exemplifies this hybrid approach by translating graph node embeddings into soft prompts for LLM processing. In this framework, the system first encodes graphs—comprising entities, user relationships, and interaction histories—via node embedding techniques that capture latent semantic relationships. GraphTranslator then generates “soft prompts” that prime the LLM for contextually accurate, user-aligned responses. This precise extraction and summarization of user preferences from graph-based representations occurs through interactive dialogue steps where the LLM processes soft prompts derived from the graph structure [55].
Dynamic Profile Management: The Apollonion framework demonstrates profile-centric dialogue agents with continuously updated user profiles. Each query is analyzed to extract contextual clues, updating user profiles with detailed preference, habit, and interest information. Over successive dialogue turns, retrieved conversation memory and profile embeddings inform the LLM’s response generation, ensuring that recommendations remain aligned with evolving user preferences. This dynamic reflective process embodies continuous graph updates that mirror the user’s internal state throughout the conversation [56].
Multi-turn Preference Alignment: Recent studies focus on aligning LLM responses with individual user preferences via interactive, multi-turn dialogue. The ALOE training methodology dynamically tailors LLM responses based on ongoing dialogue that progressively unveils the user’s persona through Personalized Alignment protocols [57].
Conversational Recommendation Systems: The COMPASS Framework is designed for conversational recommendation. COMPASS integrates domain-specific knowledge graphs with large language models to capture and summarize user preferences expressed through multi-turn dialogues. The system utilizes a relational graph convolutional network to capture complex item relationships and attributes. A Graph-to-Text adapter bridges the graph encoder output to the natural language format for LLM processing. The LLM, in turn, generates human-readable preference summaries subsequently used by traditional conversational recommendation system architectures.
Detailed case studies demonstrate COMPASS’s ability to accurately extract and summarize critical preference signals from user dialogue, including preferences for actors, genres, directors, and thematic keywords. Comparative evaluations show that integrating knowledge graph information through graph-enhanced pretraining strategies yields superior performance in interpretability and user preference alignment [11].
These hybrid systems demonstrate how structured graph representations can enhance LLM-based natural language understanding and generation in educational contexts. The integration of graph-encoded user data with conversational AI creates more nuanced, adaptive learning experiences that respond to individual learning patterns and preferences while maintaining pedagogical effectiveness across diverse educational domains.

3.5.3. Content Verification and Fact-Checking

Enhanced Verification Framework: GRAPHYP’s causal-first knowledge graphs provide LLMs with explicit, verified relationships during text generation, enabling real-time cross-referencing against factual nodes. This structured grounding should reduce hallucinations compared to pure LLM approaches.
Explainable Fact-Checking: The system embodies Explainable AI principles through the following:
Reasoning Path Traversal: Step-by-step visualization from input to output;
Entity Linking and Source Tracing: Connecting text mentions to uniquely identified entities for semantic annotation and provenance tracking;
Dispute Modeling: Surfacing alternative reasoning paths and highlighting uncertainty in conflicting or ambiguous cases.
Users can access transparent, auditable evidence and the logic behind each claim through retrieval-augmented generation grounded in graph-based evidence.
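A minimal sketch shows how reasoning-path traversal and source tracing combine into an auditable verdict. The claims, sources, and verification logic are invented examples; a real system would query GRAPHYP's causal-first graph rather than an in-memory dictionary.

```python
# Verified (head, relation, tail) facts mapped to their provenance.
FACTS = {
    ("Mars", "has_moons", "2"): "NASA fact sheet",
    ("Mars", "orbits", "Sun"): "IAU catalogue",
}

def verify(claim):
    """Return a verdict plus an auditable trace of every step taken."""
    trace = [f"claim received: {claim}"]
    source = FACTS.get(claim)
    if source:
        trace.append(f"matched graph node, provenance: {source}")
        return "supported", trace
    trace.append("no supporting node found; flagging uncertainty")
    return "unverified", trace

verdict, trace = verify(("Mars", "has_moons", "2"))
```

The trace, not just the verdict, is what gets surfaced to the user: each step names the evidence consulted, so the logic behind the claim stays visible and contestable.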

3.5.4. Interactive Systems and Dialogue Applications

Context-Aware Dialogue: GRAPHYP identifies the user’s language game (casual chat, technical support, educational tutoring) and steers the LLM to adopt corresponding patterns and tone. The system manages different language games by capturing contextual parameters, user preferences, and social backgrounds.
Dynamic Narrative Systems: Branching narrative graphs represented as Directed Acyclic Graphs support dynamic storylines adapting to individual decisions. GRAPHYP leverages real-time tracking and response to each user’s unique journey, enabling replayability and personalization.
Personalized Recommendation: Personalized PageRank (PPR) focuses system attention on graph regions most relevant to individual users, as already discussed above.

3.5.5. Advanced Reasoning and Decision Support

Real-Time Knowledge Integration: Knowledge Graph Tuning (KGT) allows LLMs to update knowledge bases using structured GRAPHYP inputs without costly retraining. Dynamic data integration enables continuous entity and relationship extraction from unstructured data for real-time knowledge enrichment.
Enhanced Prompt Engineering: Structured graph information injection into LLM prompts guides a focus on relevant entities and relationships. Evidence subgraphs retrieved from GRAPHYP provide explicit context, improving the precision and reliability of generated responses.
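The injection of an evidence subgraph into an LLM prompt can be sketched as simple template construction. The subgraph, relation names, and prompt wording are illustrative assumptions rather than a fixed API.

```python
def build_grounded_prompt(question, evidence_triples):
    """Serialize retrieved evidence triples and prepend them to the query,
    instructing the model to answer only from the supplied subgraph."""
    evidence = "\n".join(f"- {h} --{r}--> {t}" for h, r, t in evidence_triples)
    return (
        "Answer using ONLY the evidence below and cite the triples you used.\n"
        f"Evidence:\n{evidence}\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    "Which therapy does the guideline recommend first?",
    [("guideline_2024", "recommends_first", "therapy_A"),
     ("therapy_A", "contraindicated_with", "drug_X")],
)
```

Constraining the model to the serialized subgraph is what makes the generated answer both more precise and directly attributable to named evidence.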
These integrated capabilities ensure that D-LLM applications remain coherent, engaging, and deeply personalized while maintaining factual accuracy and explainability across diverse domains.

3.6. Human Choice Freedom and Preference Expression

The coupling of GRAPHYP and LLMs in D-LLM fundamentally transforms how users interact with AI systems by establishing a human-choice-first framework that expands the dimensions of preference expression and knowledge discovery. Rather than constraining human agency, this integration enhances human free choice by improving the accuracy, reliability, and interpretability of AI outputs, empowering humans to make more informed and autonomous decisions while leveraging AI as a complementary cognitive tool rather than a replacement.
  • Enhanced Decision-Making Through Expanded Choice Landscapes
Integrating graph structures with LLMs enhances the AI’s ability to perform complex, multi-step reasoning [54]. Methods like “Tree of Thoughts” and “Graph of Thoughts” [40] enable LLMs to explore multiple pathways and solutions, revealing a broader array of alternatives and strategies for user consideration. This approach ensures that users are presented with more diverse and creative options, not just the most obvious or common ones, thereby expanding their decision-making landscape beyond the limitations of traditional AI systems.
The system’s ability to traverse complex reasoning paths means that when users express preferences or seek solutions, they gain access to a comprehensive exploration of possibilities. This enhanced decision-making capability operates through the synergistic combination of GRAPHYP’s structured knowledge representation and the LLM’s natural language understanding, creating a collaborative environment where human creativity and AI capability enhance each other.
  • Transforming Preference Modeling and Expression
Coupling GRAPHYP with LLMs revolutionizes preference modeling through several key mechanisms. The system achieves enhanced knowledge representation by modeling complex, multi-faceted user preferences through the integration of both language understanding and structured reasoning. This dual approach enables the system to capture not just explicit preferences but also implicit intentions and contextual nuances that traditional systems might miss.
Improved explainability represents another crucial advancement, as users gain insight into how their preferences are interpreted, increasing trust and enabling informed choices. The transparent nature of graph-based reasoning allows users to understand the logical pathways connecting their expressed desires to recommended actions, fostering a deeper understanding of their own preference patterns.
Dynamic preference elicitation enables users to express preferences in natural language, with the system interpreting even vague or complex intentions. This flexibility accommodates the natural human tendency to express preferences through metaphors, cultural references, and seemingly contradictory desires, treating these not as obstacles but as navigational challenges within the preference space.
  • Achieving True Freedom of Preference Expression
The D-LLM method achieves comprehensive personalization through three fundamental principles that preserve and enhance human agency. Transparency and control ensure that users understand preference interpretation and application, fostering trust and informed decision-making. Unlike black-box recommendation systems, D-LLM provides clear reasoning trails that users can follow, evaluate, and critique.
Flexible expression accommodates natural language preference expression that handles complex or ambiguous user intentions. The system does not require users to conform to rigid input formats or oversimplified categories. Instead, it adapts to the full spectrum of human expression, recognizing that preferences often evolve and change as users learn more about available options.
Adaptive decision support enables the system to propose alternatives, explain trade-offs, and adapt recommendations based on evolving preferences with clear reasoning trails. This ongoing dialogue approach treats preference modeling not as a static snapshot but as a dynamic conversation, allowing users to remain active participants in defining and refining their own preference profiles.
  • Scalability Strategies for Large-Scale Graph Systems
GRAPHYP can rely on several strategies to handle scaling with large user bases and frequent, real-time graph updates:
Data Partitioning and Sharding: The system achieves horizontal scaling through sharding, dividing large graphs into smaller, manageable subgraphs distributed across different machines or clusters. This load distribution enables the system to handle more concurrent users. Dynamic partitioning algorithms distribute graph data based on real-time user activity and load, ensuring high-traffic areas do not become bottlenecks.
Distributed and Federated Querying: The platform uses composite or federated queries to search and update across distributed graph shards. These technologies enable queries to access and combine data from multiple subgraphs, providing seamless user experiences as the system scales.
Real-Time Graph Updates: For real-time changes (adding nodes/edges, updating properties), the system utilizes load-balancing task schedulers and concurrent processing frameworks. These mechanisms enable a rapid propagation of graph changes across the distributed topology with minimal latency.
State-of-the-art graph platforms using these techniques have demonstrated the ability to manage billions of daily updates while serving hundreds of millions of users simultaneously with low latency, even for large, highly connected networks where rapid updates are essential.
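The partitioning step above can be sketched with stable hashing: each node is deterministically assigned to one shard, so updates route to the right machine without a central lookup. The shard count and node ids are illustrative; production systems would use consistent hashing or dynamic, load-aware partitioning as described above.

```python
import hashlib

NUM_SHARDS = 4

def shard_for(node_id):
    """Deterministically map a node id to a shard index."""
    digest = hashlib.sha256(node_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def route_update(node_id, update, shards):
    """Send a graph update (e.g., add an edge) to the node's shard queue."""
    shards[shard_for(node_id)].append((node_id, update))

shards = {i: [] for i in range(NUM_SHARDS)}
for uid in ("user-1", "user-2", "user-3"):
    route_update(uid, {"op": "add_edge"}, shards)
```

Because the mapping is a pure function of the node id, any front-end can compute it locally, which is what keeps routing off the critical path as the user base grows.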
  • Comparative Advancement in User Agency
The transformation from traditional LLM approaches to the D-LLM’s integrated framework represents a fundamental shift in how AI systems handle human preferences and choice. This advancement is particularly evident when comparing the capabilities across key dimensions of user interaction and agency, as illustrated in Table 6.
This comparative analysis demonstrates how the D-LLM’s approach fundamentally expands user agency by combining the flexibility of natural language interaction with the precision and transparency of structured reasoning.
  • Establishing a Human-Choice-First Framework
The integration ensures that choices are better understood, accurately modeled, and dynamically updated according to user needs and context. This human-choice-first framework operates on the principle that AI should amplify rather than replace human decision-making capabilities, creating a collaborative cognitive environment where users maintain agency while benefiting from enhanced information access, expanded option awareness, and transparent reasoning support.
Through this approach, the D-LLM establishes a new paradigm for human–AI interaction—one that preserves human autonomy while providing powerful cognitive augmentation, ensuring that the ultimate goal remains empowering humans to make better choices for themselves rather than having choices made for them by algorithmic systems.

3.7. Comparative Analysis and Evaluation

Key Transformative Capabilities

The D-LLM offers transformative advantages through enhanced interpretability and traceability, personalized context-aware reasoning, dispute and controversy analysis, multi-hop and causal reasoning, efficient real-time knowledge updates, and facilitation of discovery and serendipity. Table 7 provides a comprehensive comparison of these advantages across GRAPHYP + LLM integration, traditional LLMs, and standard Knowledge Graphs, highlighting the superior capabilities of the integrated approach in areas such as interpretability, personalization, and dispute modeling.

4. Discussion

4.1. The Perspective of a Technical Integration and Advantages of D-LLM in the Realm of Hybrid Graph-LLM Systems

The integration of GRAPHYP with LLMs offers a synergistic framework that addresses key limitations in current AI systems. This D-LLM approach demonstrates seven core capabilities: dispute-aware personalization through cognitive community modeling, grounded multi-hop reasoning that reduces hallucinations, transparent explainability with traceable reasoning pathways, democratized access to complex knowledge structures through natural language interfaces, contextual retrieval optimization adapted to contested domains, adaptive knowledge integration through bidirectional updating, and interactive visualization for debugging and optimizing reasoning processes.
However, hybrid graph-LLM systems more broadly integrate structured representations with the generative power of Large Language Models to address the inherent challenges of processing complex human preferences in natural dialogue. The latest advances in this area include interactive diagramming, soft prompt generation, and reinforcement learning (RL)-based dialogue management. Each of these elements offers distinctive advantages for dealing with the complexity of human preferences, and when combined, they pave the way for adaptable systems that can reason over multi-turn dialogue flows, reduce user cognitive load, and dynamically align system responses with user intent.
Reinforcement learning (RL)-based dialogue management offers an adaptive, data-driven approach to optimizing multi-turn conversations, particularly when handling the diverse and often uncertain nature of human preferences. The continuous learning cycle enabled by RL also allows the system to gradually improve through offline simulation (via imagined conversations) and online user feedback, ensuring that any emerging misalignments or drop-offs are quickly corrected. Exploring synergies between RL-based dialogue management and the D-LLM remains part of our future work.

4.2. Domain-Specific Applications and Validation

Empirical validation across three domains demonstrates encouraging results regarding the framework’s versatility. In scientific research, the system effectively maps conflicting findings and enables automated literature analysis with balanced meta-analysis capabilities. Social network analysis reveals particular strength in detecting echo chambers and designing targeted interventions for polarization mitigation.

4.3. Implementation Challenges

Current limitations constrain broader deployment. Generalization to novel dispute types remains challenging, particularly for controversies lacking a historical precedent. Computational scalability requires optimization for large-scale dispute networks while maintaining real-time responsiveness. Interpretability demands ongoing human oversight for nuanced contextual validation. Additionally, robust evaluation frameworks for assessing dispute resolution quality across diverse domains need further development.
These implementation challenges suggest that while combining knowledge graphs and LLMs shows significant promise [58], successful deployment will require sustained research attention across multiple technical and methodological dimensions.

4.4. Towards the Perspective of a Paradigmatic Transformation

Beyond these technical perspectives and implementation challenges, the D-LLM framework introduces a methodological shift in how AI systems mediate human–knowledge interactions. Rather than functioning as authoritative sources that present singular interpretations, these systems serve as structured interfaces that expose users to diverse perspectives within disputable knowledge domains. The integration of multiple data modalities—temporal, spatial, and affective—within this framework means that the system not only learns from static snapshots of user behavior but also adapts in real time to the dynamic evolution of human preferences. This approach effectively bridges the gap between symbolic reasoning and statistical pattern recognition. In practice, this means that a conversational AI can manage both the “what” and the “why” behind a user’s request.
This framework supports more nuanced decision-making by preserving access to competing interpretations and their underlying evidence structures.
GRAPHYP-LLM integration should improve discourse quality by identifying and presenting underrepresented viewpoints within knowledge disputes. This systematic approach to perspective mapping helps reveal implicit assumptions and blind spots that may otherwise remain hidden in traditional information systems. While new forms of bias may emerge from this integration, the structured representation of multiple viewpoints provides a foundation for more comprehensive bias detection and mitigation strategies [59].
GRAPHYP supports the generation of graph schemas from unstructured data, similar to how FalkorDB (a scalable, low-latency graph database designed for Large Language Models, available at https://github.com/FalkorDB/FalkorDB, accessed on 20 June 2025) processes raw documents to identify entities and relationships for knowledge graph construction. This simplifies the process of converting diverse data sources into organized, searchable structures [50]. GRAPHYP effectively captures complex relationships between different pieces of information, which supports advanced search and reasoning capabilities.
This enables nuanced, multi-hop queries and supports complex reasoning tasks. GRAPHYP’s organized approach makes AI decision-making transparent, allowing users to see and follow the reasoning process behind each response. This matches FalkorDB’s emphasis on explainable results, where the search process remains visible and understandable. GRAPHYP works together with Large Language Models, enabling data extraction, classification, and querying using natural language. FalkorDB’s GraphRAG architecture similarly uses LLMs for understanding queries and generating responses, making the two systems compatible.
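To make the notion of a multi-hop query concrete, the following minimal sketch traverses a toy knowledge graph stored as a plain adjacency dictionary. The entities, relations, and traversal routine are illustrative assumptions for this perspective, not GRAPHYP’s or FalkorDB’s actual data model.

```python
# Minimal sketch of multi-hop querying over a toy knowledge graph.
# Entities, relations, and the traversal routine are illustrative
# assumptions, not GRAPHYP's actual data structures.

# Each node maps to a list of (relation, object) edges.
graph = {
    "paper_A": [("cites", "paper_B"), ("authored_by", "alice")],
    "paper_B": [("cites", "paper_C"), ("authored_by", "bob")],
    "paper_C": [("authored_by", "carol")],
}

def multi_hop(start, relation, hops):
    """Follow `relation` edges up to `hops` times, collecting reached nodes."""
    frontier, reached = {start}, set()
    for _ in range(hops):
        nxt = set()
        for node in frontier:
            for rel, obj in graph.get(node, []):
                if rel == relation and obj not in reached:
                    nxt.add(obj)
                    reached.add(obj)
        frontier = nxt
    return reached

# Two-hop citation neighborhood of paper_A:
print(multi_hop("paper_A", "cites", 2))  # {'paper_B', 'paper_C'}
```

A single-hop query answers “what does paper_A cite?”, while raising `hops` lets the system “connect the dots” across intermediate nodes, which is the behavior the text above attributes to graph-backed reasoning.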

4.5. Future Development Directions

Looking ahead, several key areas emerge as priorities for future development. Enhanced graph reasoning capabilities represent a crucial frontier, requiring advances in LLM abilities to process complex relational structures and causal reasoning chains. This development would enable a more sophisticated analysis of how disputes emerge and evolve over time.
Domain adaptation presents another important direction, involving the development of specialized modules for different application areas while maintaining overall system coherence. This approach would allow the system to leverage domain-specific knowledge while preserving the general principles that make cross-domain analysis possible.
User interface innovation constitutes a third critical area, focusing on creating intuitive visualization tools that enable effective human–AI collaboration in dispute analysis. These interfaces must balance complexity with usability, allowing users to explore sophisticated dispute structures without becoming overwhelmed by technical details.
Finally, ethical framework development remains essential for responsible deployment. This involves establishing comprehensive guidelines for deploying D-LLM systems in sensitive applications that require careful bias management and transparent decision-making processes.
D-LLM could be appreciated as promoting ethical AI practices and user empowerment. The explicit representation of preferences and transparent reasoning pathways enable users to understand and verify the decisions made by the system. This level of auditability is crucial not only for fostering trust but also for ensuring compliance with ethical standards and privacy regulations: the internal decision-making process is both visible and editable, while users are granted unprecedented control over how their personal data is used and interpreted by the AI. Conversely, this transparency can lead to iterative feedback that enhances system performance and fairness, ensuring that the AI remains aligned with the diverse values and expectations of its user base [60].

5. Conclusions

This article addresses a critical gap in current AI systems’ ability to handle contested knowledge domains by introducing dialogical large language models (D-LLMs). Through the integration of GRAPHYP’s structured knowledge representation with LLM capabilities, we demonstrate a novel approach to preserving multiple perspectives while maintaining system usability and interpretability.
  • Primary Contributions
Our work makes three key contributions to human–AI interaction research. First, we establish a technical framework for integrating dialogical knowledge graphs with large language models, enabling the systematic representation of competing viewpoints. Second, we demonstrate empirical validation across scientific, political, and social network domains, showing significant improvements in perspective coverage and bias detection. Third, we provide theoretical foundations for dispute-aware personalization that enhances rather than replaces human decision-making capacity.
  • Limitations and Future Work
Several open research challenges remain in harnessing the full potential of D-LLMs. One promising area for future exploration involves the seamless integration of GRAPHYP’s cognitive communities across heterogeneous data sources, including multimodal data streams such as video, audio, and sensor data. Current implementations have demonstrated the ability to integrate textual and structured data effectively; however, work remains to develop techniques for extending this to a broader range of modalities, which may reveal additional dimensions of human preference that further enhance personalization. Future research may also investigate advanced techniques for dynamic graph evolution, such as adaptive decay functions and real-time community detection algorithms, which could improve the system’s ability to respond rapidly to changes in user behavior [61].
Additionally, more sophisticated human-in-the-loop mechanisms could reinforce the efficiency of the cognitive community framework: coupling automated preference extraction more closely with iterative human feedback should give future systems the twin benefits of machine consistency and human intuition. Finally, the incorporation of community-level feedback mechanisms—where entire groups of users participate in refining and validating the preference models—could lead to richer and more nuanced representations of human values and preferences.
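An adaptive decay function of the kind mentioned above can be sketched very simply: edge weights in the user-preference graph fade exponentially with inactivity, and near-zero edges are pruned. The half-life and pruning threshold here are illustrative assumptions, not values proposed by the framework.

```python
# Sketch of an adaptive decay function for dynamic graph evolution:
# edge weights fade exponentially with inactivity and stale edges are
# pruned. Half-life and threshold values are illustrative assumptions.

HALF_LIFE_DAYS = 30.0  # weight halves after 30 days of inactivity
PRUNE_BELOW = 0.05     # drop edges that have decayed close to zero

def decayed_weight(weight, days_inactive):
    return weight * 0.5 ** (days_inactive / HALF_LIFE_DAYS)

def evolve(edges, now):
    """edges: {(u, v): (weight, last_seen_day)} -> decayed, pruned copy."""
    fresh = {}
    for (u, v), (w, last_seen) in edges.items():
        w_new = decayed_weight(w, now - last_seen)
        if w_new >= PRUNE_BELOW:
            fresh[(u, v)] = (w_new, last_seen)
    return fresh

edges = {("a", "b"): (1.0, 0), ("a", "c"): (0.08, 0)}
# At day 30, ("a","b") halves to 0.5; ("a","c") decays to 0.04 and is pruned.
print(evolve(edges, 30))
```

Tuning the half-life controls how quickly the graph “forgets” stale preferences, which is one concrete way a system could track the dynamic evolution of user behavior.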
  • Significance and Impact
The D-LLM approach represents a new step toward AI systems that support rather than supplant human judgment in complex knowledge domains.
Hybrid graph-LLM systems, including the D-LLM, offer a transformative approach to personalized conversational AI by combining the explicit, interpretable representations of knowledge graphs with the deep semantic reasoning capabilities of large language models. The resulting systems offer a multitude of advantages, including nuanced representation of complex preferences, enhanced multi-hop reasoning, transparent and explainable decision-making, dynamic adaptability, scalability, and ethical assurance. These advantages are not merely academic; they translate into tangible improvements in diverse applications ranging from adaptive tutoring and conversational recommendation to health guidance and interactive journalism. By capturing and continually evolving a structured map of user preferences, GRAPHYP’s cognitive communities enable AI systems to deliver interactions that are both highly personalized and fundamentally human-centric. This integration ultimately serves to bridge the gap between static data-driven personalization and the dynamic, intuitive understanding that characterizes genuine human interaction, paving the way for conversational AI systems that are truly responsive to the varied representations of human preferences found in recent applications [21,62].
By preserving access to competing interpretations and their underlying evidence structures, these systems enable more informed decision-making while maintaining transparency about ongoing controversies. This work opens up new perspectives for applications in scientific literature analysis, educational content delivery, and public policy discourse, where understanding knowledge formation processes is as important as the knowledge itself.
The unique advantages for the expression of human preferences in GRAPHYP’s cognitive communities are multifaceted and impactful, as they provide an explicit, interpretable model of user behavior that enables multi-hop reasoning, dynamic adaptation, and cross-modal integration for a transparent decision-making process. These capabilities are critical for deploying personalized conversational agents that not only understand but also anticipate and explain their actions, thereby fostering robust, trusting, and effective human–AI interactions. Ongoing research in hybrid graph-LLM systems—particularly the further development of cognitive communities and human-in-the-loop feedback mechanisms—promises to enhance these benefits even further, driving new applications and innovations in conversational AI across a broad range of domains [21].
As hybrid graph-LLM systems evolve, the integration of GRAPHYP’s cognitive communities in D-LLM will remain pivotal in achieving expressive and adaptive personalization. The explicit representation of user preferences through graph structures, coupled with the contextual generation capabilities of LLMs, provides a robust platform for understanding, predicting, and adapting to individual and communal human behaviors in real time. The resulting AI systems would not only be more intelligent and responsive but also more ethical and trustworthy—a critical step toward truly personalized and human-centered conversational interfaces. This last assertion underscores our essential motivation for presenting the D-LLM.
Another notable benefit is the ability to perform multi-hop and cross-modal reasoning. As users engage with conversational agents over extended periods, the accumulation of interactions results in complex, interlinked user profiles. GRAPHYP’s cognitive communities manage this complexity by storing and organizing preferences in a manner that is readily interpretable by the LLM. As a result, the system can “connect the dots” between seemingly disparate pieces of information. This capability is particularly important when addressing queries that require the synthesis of the multiple factors that may drive human preferences.

Author Contributions

Conceptualization, R.F.; formal analysis, R.F., D.E. and P.B.; methodology, R.F.; writing—original draft, R.F.; writing—review and editing, R.F., D.E. and P.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

As this research focuses on Human-AI interaction tools, several generative AI systems (Claude.ai, Perplexity.ai) were queried with related prompts during the analysis phase to inform our understanding of current AI capabilities and behaviors.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Perspective Illustration of D-LLM Dialogical Interaction Between GRAPHYP and a Large Language Model (LLM)

Appendix A.1. Overview

GRAPHYP’s cognitive community framework and adversarial information routes can be integrated with Large Language Models (LLMs) to create dynamic systems for dialogical knowledge exchange. By combining GRAPHYP’s manifold subnetworks with LLMs’ generative capabilities, this integration enables a nuanced exploration of contested knowledge in science and social networks.

Appendix A.2. Core Integration Mechanism

GRAPHYP models cognitive communities—groups of users with shared search behaviors and adversarial information paths. Integration with LLMs enables the following:
  • Assessor Shift Mapping: Using three key parameters: mass (volume of engagement), intensity (depth of topic-specific search), and variety (diversity of sources) [15].
  • Dialogical Simulation: LLMs generate multi-perspective responses using these parameters, reflecting competing viewpoints within subnetworks.
  • Dynamic Knowledge Exchange (e.g., IDVSCI [63]).
    LLMs propose hypotheses based on GRAPHYP’s adversarial routes.
    Communities respond via search behavior metrics.
    LLMs refine outputs using dual-diversity review (expert + adversarial evaluation) [63].
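The three assessor-shift parameters listed above can be illustrated with a toy computation over a search log. The log schema and the exact formulas below are simplifying assumptions for this appendix; the definitions in [15] are richer.

```python
# Illustrative computation of the three assessor-shift parameters
# (mass, intensity, variety) from a toy search log. The log format
# and formulas are simplifying assumptions, not the definitions
# given in [15].
from collections import Counter

log = [  # one (user, topic, source) tuple per search event
    ("u1", "climate", "journal_A"),
    ("u1", "climate", "journal_A"),
    ("u1", "climate", "journal_B"),
    ("u1", "energy",  "blog_C"),
]

def assessor_profile(events):
    mass = len(events)                           # volume of engagement
    topics = Counter(t for _, t, _ in events)
    intensity = max(topics.values()) / mass      # depth on the dominant topic
    variety = len({s for _, _, s in events})     # diversity of sources
    return {"mass": mass, "intensity": intensity, "variety": variety}

print(assessor_profile(log))  # {'mass': 4, 'intensity': 0.75, 'variety': 3}
```

A community with high mass and intensity but low variety would correspond to the focused subnetworks described in the climate use case below, while high variety signals broader, more exploratory search behavior.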

Appendix A.3. Scientific Use Cases

1. Climate Change Disputes
  • Scenario: GRAPHYP identifies two cognitive communities:
Community A: High Mass/Intensity; focused on anthropogenic models;
Community B: High Variety; focused on natural variability.
  • LLM Role:
Generates comparative reports citing sources from each subnetwork;
Highlights disputed metrics (e.g., temperature projections);
Triggers assessor shifts and proposes alternative exploration paths [17].
2. Genomics and Gene-Editing Controversies
  • Scenario: Debates over CRISPR ethics surface via distinct search routes.
  • LLM Role:
Simulates peer review dialogue;
Directs users to contrasting literature via GRAPHYP’s bipartite hypergraphs.

Appendix A.4. Social Network Applications

1. Political Polarization
  • GRAPHYP:
Detects polarized cliques (e.g., vaccine-skeptic vs. pro-vaccine);
Tracks shifts via content diversity.
  • LLM:
Offers personalized, bridging content across adversarial subnetworks.
2. Misinformation Detection
  • GRAPHYP:
Flags disputed claims through query anomaly detection (e.g., “5G health risks”).
  • LLM:
Generates counter-narratives through the following:
Collaborative filtering: Links users to trusted sources;
Personalized PageRank: Elevates high-centrality experts.
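The Personalized PageRank step above can be sketched as a short power iteration in which the restart distribution is concentrated on the user’s own seed nodes. The toy graph, damping factor, and dangling-node treatment are illustrative assumptions.

```python
# Minimal Personalized PageRank by power iteration, sketching how the
# "elevates high-centrality experts" step could work. Graph, damping
# factor, and dangling-node handling are illustrative assumptions.
def personalized_pagerank(adj, seeds, damping=0.85, iters=50):
    nodes = list(adj)
    # Restart distribution concentrated on the user's seed nodes.
    restart = {n: (1 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(restart)
    for _ in range(iters):
        nxt = {n: (1 - damping) * restart[n] for n in nodes}
        for n in nodes:
            out = adj[n]
            if out:
                share = damping * rank[n] / len(out)
                for m in out:
                    nxt[m] += share
            else:  # dangling node: return its mass to the seeds
                for m in nodes:
                    nxt[m] += damping * rank[n] * restart[m]
        rank = nxt
    return rank

adj = {"user": ["expert", "peer"], "peer": ["expert"], "expert": []}
scores = personalized_pagerank(adj, seeds={"user"})
best = max((n for n in scores if n != "user"), key=scores.get)
print(best)  # 'expert' outranks 'peer' from this user's perspective
```

Because the restart mass is tied to each user, the same graph yields different rankings for different users, which is what makes the sampling “personalized.”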

Appendix A.5. Implementation Requirements

Data Flow: Search logs → GRAPHYP subnetworks → LLM prompt engineering.
Evaluation: Dual-diversity review ensures balance between mainstream and adversarial perspectives [63].
This integration supports structured, real-time dialogues in scientific and social domains, advancing collective reasoning.
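The data flow above (search logs → subnetworks → prompt engineering) can be sketched end to end in a few lines. The log schema, grouping rule, and prompt wording are illustrative assumptions, not GRAPHYP’s actual pipeline.

```python
# Sketch of the data flow: raw search logs are grouped into
# subnetwork summaries, then rendered into an LLM prompt. The log
# schema, grouping rule, and prompt wording are illustrative
# assumptions, not GRAPHYP's actual pipeline.
from collections import defaultdict

logs = [
    {"user": "u1", "query": "anthropogenic warming models", "community": "A"},
    {"user": "u2", "query": "natural climate variability",  "community": "B"},
]

def build_subnetworks(logs):
    """Group queries by the cognitive community that issued them."""
    subnets = defaultdict(list)
    for event in logs:
        subnets[event["community"]].append(event["query"])
    return dict(subnets)

def to_prompt(subnets):
    """Render subnetwork summaries into a multi-perspective LLM prompt."""
    lines = ["Present each community's perspective, citing its sources:"]
    for name, queries in sorted(subnets.items()):
        lines.append(f"- Community {name} searched for: {'; '.join(queries)}")
    return "\n".join(lines)

prompt = to_prompt(build_subnetworks(logs))
print(prompt)
```

The key design point is that the prompt preserves the partition into communities, so the LLM is explicitly asked to generate a balanced, multi-perspective response rather than a single consensus answer.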

Appendix B. Assessor Shifts for Scientific Dispute Resolution

A GRAPHYP perspective on resolving scientific disputes with LLMs: Cognitive communities—groups of experts or stakeholders—can leverage assessor shifts (changes in evaluative stance during debates) to resolve disputes with LLM support.

Appendix B.1. Capturing Assessor Shifts

  • Definition: Adjustments in how evidence or arguments are weighted in response to new input.
  • Modeling: GRAPHYP tracks shifts via changes in search behavior, source citation, and argumentative structure.
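One simple way to operationalize this tracking is to represent a community’s weighting of evidence sources as a vector and flag a shift when the cosine similarity between successive snapshots drops below a threshold. The vectors and threshold below are illustrative assumptions, not GRAPHYP’s actual metric.

```python
# Sketch of assessor-shift detection: compare how a community weights
# evidence sources before and after new input, using cosine
# similarity. Weight vectors and threshold are illustrative.
import math

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def shift_detected(before, after, threshold=0.9):
    """A drop in cosine similarity below `threshold` flags a shift."""
    return cosine(before, after) < threshold

before = {"journal_A": 0.8, "journal_B": 0.2}
after = {"journal_A": 0.3, "journal_B": 0.3, "preprint_C": 0.4}
print(shift_detected(before, after))  # True: weight moved to a new source
```

In a fuller implementation, the vectors could also encode citation patterns and argumentative structure, matching the three signals listed above.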

Appendix B.2. Integrating LLMs for Arbitration

  • Conflict-Aware Reasoning: LLMs use frameworks like the Cognitive Alignment Framework to synthesize competing views through dual-process reasoning (heuristic + analytical).
Example: In peer review, the LLM extracts arguments, maps conflicts, and generates a consensus meta-review.
  • Bias Mitigation: LLMs can be trained to detect and counteract human biases (e.g., anchoring, conformity) for fairer outcomes.
  • Multi-Agent Collaboration: Frameworks like RECONCILE simulate dialogical reasoning among LLM “agents,” each representing distinct assessor positions. Through iterative voting, a reasoned consensus is formed.
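The iterative-voting idea can be illustrated with a generic sketch in which agents holding distinct assessor positions cast confidence-weighted votes and low-confidence agents gradually converge on the leading answer. This is an illustration of the general mechanism only; it is not RECONCILE’s actual protocol, and the agent names and confidence values are assumptions.

```python
# Generic sketch of consensus via iterative confidence-weighted voting
# among agents holding distinct assessor positions. Illustrative only;
# not RECONCILE's actual protocol.
def iterative_vote(agents, rounds=3):
    """agents: {name: {"answer": str, "confidence": float}}."""
    for _ in range(rounds):
        # Tally confidence-weighted votes per answer.
        tally = {}
        for a in agents.values():
            tally[a["answer"]] = tally.get(a["answer"], 0.0) + a["confidence"]
        leading = max(tally, key=tally.get)
        # Low-confidence agents move toward the leading answer.
        for a in agents.values():
            if a["answer"] != leading and a["confidence"] < 0.5:
                a["answer"], a["confidence"] = leading, 0.6
    return leading

agents = {
    "analyst":  {"answer": "effect is significant", "confidence": 0.8},
    "skeptic":  {"answer": "effect is artifact",    "confidence": 0.4},
    "reviewer": {"answer": "effect is significant", "confidence": 0.7},
}
print(iterative_vote(agents))  # 'effect is significant'
```

A dissenting agent with high confidence would keep its position across rounds, so the mechanism preserves strongly held minority views rather than forcing unanimity.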

Appendix B.3. Key Benefits

  • Transparency: Documents how community positions evolve.
  • Efficiency: Accelerates dispute resolution.
  • Bias Reduction: Counters human and model biases.
  • Scalability: Manages large-scale, multi-perspective debates beyond traditional peer review.
By combining assessor shifts with LLM frameworks, scientific disputes can be approached more systematically and equitably, fostering transparent and robust knowledge production.

References

  1. Carriero, V.A.; Azzini, A.; Baroni, I.; Scrocca, M.; Celino, I. Human Evaluation of Procedural Knowledge Graph Extraction from Text with Large Language Models. In Knowledge Engineering and Knowledge Management. EKAW 2024; Alam, M., Rospocher, M., van Erp, M., Hollink, L., Gesese, G.A., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2025; Volume 15370, pp. 434–452. [Google Scholar] [CrossRef]
  2. Zhao, J.; Zhuo, L.; Shen, Y.; Qu, M.; Liu, K.; Bronstein, M.; Zhu, Z.; Tang, J. Epistemic GraphText: Graph Reasoning in Text Space. arXiv 2023, arXiv:2310.01089. [Google Scholar] [CrossRef]
  3. Zhang, Z.; Rossi, R.A.; Kveton, B.; Shao, Y.; Yang, D.; Zamani, H.; Dernoncourt, F.; Barrow, J.; Yu, T.; Kim, S.; et al. Personalization of Large Language Models: A Survey. arXiv 2025, arXiv:2411.00027. [Google Scholar] [CrossRef]
  4. Messeri, L.; Crockett, M.J. Artificial intelligence and illusions of understanding in scientific research. Nature 2024, 627, 49–58. [Google Scholar] [CrossRef]
  5. Shojaee, P.; Mirzadeh, I.; Alizadeh, K.; Horton, M.; Bengio, S.; Farajtabar, M. The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity. arXiv 2025, arXiv:2506.06941. [Google Scholar] [CrossRef]
  6. Shaikh, O.; Lam, M.S.; Hejna, J.; Shao, Y.; Cho, H.; Bernstein, M.S.; Yang, D. Aligning Language Models with Demonstrated Feedback. arXiv 2024, arXiv:2406.00888. [Google Scholar] [CrossRef]
  7. Chollet, F. On the Measure of Intelligence. arXiv 2019, arXiv:1911.01547. [Google Scholar] [CrossRef]
  8. Chen, J.; Liu, Z.; Huang, X.; Wu, C.; Liu, Q.; Jiang, G.; Pu, Y.; Lei, Y.; Chen, X.; Wang, X.; et al. When large language models meet personalization: Perspectives of challenges and opportunities. World Wide Web 2024, 27, 42. [Google Scholar] [CrossRef]
  9. Holzinger, A.; Saranti, A.; Hauschild, A.-C.; Beinecke, J.; Heider, D.; Roettger, R.; Mueller, H.; Baumbach, J.; Pfeifer, B. Human-in-the-Loop Integration with Domain-Knowledge Graphs for Explainable Federated Deep Learning. In Machine Learning and Knowledge Extraction. CD-MAKE 2023; Holzinger, A., Kieseberg, P., Cabitza, F., Campagner, A., Tjoa, A.M., Weippl, E., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 14065, pp. 45–64. [Google Scholar] [CrossRef]
  10. Iga, V.I.R.; Silaghi, G.C. Assessing LLMs Suitability for Knowledge Graph Completion. In Neural-Symbolic Learning and Reasoning. NeSy 2024; Besold, T.R., d’Avila Garcez, A., Jimenez-Ruiz, E., Confalonieri, R., Madhyastha, P., Wagner, B., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2024; Volume 14980, pp. 277–290. [Google Scholar] [CrossRef]
  11. Qiu, Z.; Luo, L.; Pan, S.; Liew, A.W. Unveiling User Preferences: A Knowledge Graph and LLM-Driven Approach for Conversational Recommendation. arXiv 2024, arXiv:2411.14459. [Google Scholar] [CrossRef]
  12. Bai, Y.; Jones, A.; Ndousse, K.; Askell, A.; Chen, A.; Dassarma, N.; Drain, D.; Fort, S.; Ganguli, D.; Henighan, T.J.; et al. Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. arXiv 2022, arXiv:2204.05862. [Google Scholar] [CrossRef]
  13. Zhu, S.; Sun, S. A short survey: Exploring knowledge graph-based neural-symbolic system from application perspective. arXiv 2024, arXiv:2405.03524. [Google Scholar] [CrossRef]
  14. Simeone, O. A Very Brief Introduction to Machine Learning with Applications to Communication Systems. IEEE Trans. Cogn. Commun. Netw. 2018, 4, 648–664. [Google Scholar] [CrossRef]
  15. Fabre, R.; Azeroual, O.; Bellot, P.; Schöpfel, J.; Egret, D. Retrieving Adversarial Cliques in Cognitive Communities: A New Conceptual Framework for Scientific Knowledge Graphs. Future Internet 2022, 14, 262. [Google Scholar] [CrossRef]
  16. Fabre, R.; Azeroual, O.; Schöpfel, J.; Bellot, P.; Egret, D. A Multiverse Graph to Help Scientific Reasoning from Web Usage: Interpretable Patterns of Assessor Shifts in GRAPHYP. Future Internet 2023, 15, 147. [Google Scholar] [CrossRef]
  17. Fabre, R.; Bellot, P.; Egret, D. Challenging Scientific Categorizations Through Dispute Learning. Appl. Sci. 2025, 15, 2241. [Google Scholar] [CrossRef]
  18. Jiang, L.; Wu, X.; Huang, S.; Dong, Q.; Chi, Z.; Dong, L.; Zhang, X.; Lv, T.; Cui, L.; Wei, F. Think Only When You Need with Large Hybrid-Reasoning Models. arXiv 2025, arXiv:2505.14631. [Google Scholar] [CrossRef]
  19. O’Keefe, C.; Ramakrishnan, K.; Tay, J.; Winter, C. Law-Following AI: Designing AI Agents to Obey Human Laws. Fordham Law Rev. 2025, 94. Available online: https://law-ai.org/law-following-ai/ (accessed on 20 June 2025). [CrossRef]
  20. Peter, S.; Riemer, K.; West, J.D. The benefits and dangers of anthropomorphic conversational agents. Proc. Natl. Acad. Sci. USA 2025, 122, e2415898122. [Google Scholar] [CrossRef]
  21. Tan, Z.; Jiang, M. User Modeling in the Era of Large Language Models: Current Research and Future Directions. arXiv 2023, arXiv:2312.11518. [Google Scholar] [CrossRef]
  22. Kuusela, O. The Method of Language-Games as a Method of Logic. In Wittgenstein on Logic as the Method of Philosophy: Re-Examining the Roots and Development of Analytic Philosophy, Online ed.; Oxford Academic: Oxford, UK, 2019. [Google Scholar] [CrossRef]
  23. Clark, N.; Shen, H.; Howe, W.; Mitra, T. Epistemic Alignment: A Mediating Framework for User-LLM Knowledge Delivery. arXiv 2025, arXiv:2504.01205. [Google Scholar] [CrossRef]
  24. Jordan, M.I.; Ghahramani, Z.; Jaakkola, T.S.; Saul, L.K. An Introduction to Variational Methods for Graphical Models. In Learning in Graphical Models; Jordan, M.I., Ed.; NATO ASI Series; Springer: Dordrecht, The Netherlands, 1998; Volume 89, pp. 105–161. [Google Scholar] [CrossRef]
  25. Hill, R.; Yin, Y.; Stein, C.; Wang, X.; Wang, D.; Jones, B.F. The pivot penalty in research. Nature 2025, 642, 999–1006. [Google Scholar] [CrossRef] [PubMed]
  26. Jiang, P.; Rayan, J.; Dow, S.P.; Xia, H. Graphologue: Exploring Large Language Model Responses with Interactive Diagrams. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST ‘23), San Francisco, CA, USA, 29 October–1 November 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1–20. [Google Scholar] [CrossRef]
  27. Tang, C.; Zhang, H.; Loakman, T.; Lin, C.; Guerin, F. Enhancing Dialogue Generation via Dynamic Graph Knowledge Aggregation. arXiv 2023, arXiv:2306.16195. [Google Scholar] [CrossRef]
  28. Yang, Y.; Huang, H.; Gao, Y.; Li, J. Building knowledge-grounded dialogue systems with graph-based semantic modelling. Knowl.-Based Syst. 2024, 298, 11943. [Google Scholar] [CrossRef]
  29. Lin, Y.; Chen, Q.; Cheng, Y.; Zhang, J.; Liu, Y.; Hsia, L.; Chen, Y. LLM Inference Enhanced by External Knowledge: A Survey. arXiv 2025, arXiv:2505.24377. [Google Scholar] [CrossRef]
  30. Khorashadizadeh, H.; Zahra Amara, F.; Ezzabady, M.; Ieng, F.; Tiwari, S.; Mihindukulasooriya, N.; Groppe, J.; Sahri, S.; Benamara, F.; Groppe, S. Research Trends for the Interplay between Large Language Models and Knowledge Graphs. arXiv 2024, arXiv:2406.08223. [Google Scholar] [CrossRef]
  31. Kumar, R.; Ishan, K.; Kumar, H.; Singla, A. LLM-Powered Knowledge Graphs for Enterprise Intelligence and Analytics. arXiv 2025, arXiv:2503.07993. [Google Scholar] [CrossRef]
  32. Chen, N. Challenging Machines to the Language Game: Wittgensteinian Philosophy and Future Dimensions of Artificial Intelligence. Fudan J. Hum. Soc. Sci. 2015, 8, 487–500. [Google Scholar] [CrossRef]
  33. Pérez-Escobar, J.A.; Sarikaya, D. Philosophical Investigations into AI Alignment: A Wittgensteinian Framework. Philos. Technol. 2024, 37, 1–25. [Google Scholar] [CrossRef]
  34. Van Gennep, A. Essai d’une Théorie des Langues Spéciales. 1908. Available online: https://elianedaphy.org/IMG/pdf/VanGennep_1908_LanguesSpeciales.pdf (accessed on 20 June 2025).
  35. Markowitz, E.; Ramakrishna, A.; Dhamala, J.; Mehrabi, N.; Peris, C.; Gupta, R.; Chang, K.; Galstyan, A. Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Black-box Language Models with Knowledge Graphs. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Bangkok, Thailand, 11–16 August 2024; Association for Computational Linguistics: Stroudsburg, PA, USA, 2024; pp. 12302–12319. Available online: https://aclanthology.org/2024.acl-long.665/ (accessed on 20 June 2025).
  36. Li, Q.; Huang, C.; Li, S.; Xiang, Y.; Xiong, D.; Lei, W. GraphOTTER: Evolving LLM-based Graph Reasoning for Complex Table Question Answering. In Proceedings of the 31st International Conference on Computational Linguistics, Abu Dhabi, United Arab Emirates, 19–24 January 2025; Association for Computational Linguistics: Stroudsburg, PA, USA, 2025; pp. 5486–5506. [Google Scholar]
  37. Cao, L. GraphReason: Enhancing Reasoning Capabilities of Large Language Models through A Graph-Based Verification Approach. In Proceedings of the 2nd Workshop on Natural Language Reasoning and Structured Explanations, Bangkok, Thailand, 11–16 August 2024; Association for Computational Linguistics: Stroudsburg, PA, USA, 2024; pp. 1–12. Available online: https://aclanthology.org/2024.nlrse-1.1/ (accessed on 20 June 2025).
  38. Zhu, Z.; Huang, T.; Wang, K.; Ye, J.; Chen, X.; Luo, S. Graph-based Approaches and Functionalities in Retrieval-Augmented Generation: A Comprehensive Survey. arXiv 2025, arXiv:2504.10499. [Google Scholar] [CrossRef]
  39. Bougzime, O.; Jabbar, S.; Cruz, C.; Demoly, F. Unlocking the Potential of Generative AI through Neuro-Symbolic Architectures: Benefits and Limitations. arXiv 2025, arXiv:2502.11269. [Google Scholar] [CrossRef]
  40. Besta, M.; Memedi, F.; Zhang, Z.; Gerstenberger, R.; Piao, G.; Blach, N.; Nyczyk, P.; Copik, M.; Kwaśniewski, G.; Muller, J.; et al. Demystifying Chains, Trees, and Graphs of Thoughts. arXiv 2024, arXiv:2401.14295. [Google Scholar] [CrossRef]
  41. Zhu, Y.; Wang, X.; Chen, J.; Qiao, S.; Ou, Y.; Yao, Y.; Deng, S.; Chen, H.; Zhang, N. LLMs for Knowledge Graph Construction and Reasoning: Recent Capabilities and Future Opportunities. arXiv 2023, arXiv:2305.13168. [Google Scholar] [CrossRef]
  42. Ribeiro, L.C.; Bernardes, A.T.; Mello, H. On the fractal patterns of language structures. PLoS ONE 2023, 18, e0285630. [Google Scholar] [CrossRef]
  43. Alabdulmohsin, I.; Tran, V.Q.; Dehghani, M. Fractal Patterns May Illuminate the Success of Next-Token Prediction. arXiv 2024, arXiv:2402.01825. [Google Scholar] [CrossRef]
  44. Shen, X.; Wang, F.; Xia, R. Reason-Align-Respond: Aligning LLM Reasoning with Knowledge Graphs for KGQA. arXiv 2025, arXiv:2505.20971. [Google Scholar] [CrossRef]
  45. Tan, X.; Wang, X.; Liu, Q.; Xu, X.; Yuan, X.; Zhu, L.; Zhang, W. Hydra: Structured Cross-Source Enhanced Large Language Model Reasoning. arXiv 2025, arXiv:2505.17464. [Google Scholar] [CrossRef]
  46. Yuan, Y.; Liu, C.; Yuan, J.; Sun, G.; Li, S.; Zhang, M. A Hybrid RAG System with Comprehensive Enhancement on Complex Reasoning. arXiv 2024, arXiv:2408.05141. [Google Scholar] [CrossRef]
  47. Hare, R.; Patel, N.; Tang, Y.; Patel, P. A Graph-based Approach for Adaptive Serious Games. In Proceedings of the IEEE Intl Conference on Dependable, Autonomic and Secure Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), Falerna, Italy, 12–15 September 2022; pp. 1–6. [Google Scholar] [CrossRef]
  48. Yang, M.; Wang, H.; Wei, Z.; Wang, S.; Wen, J.R. Efficient Algorithms for Personalized PageRank Computation: A Survey. IEEE Trans. Knowl. Data Eng. 2024, 36, 4582–4602. [Google Scholar] [CrossRef]
  49. Trevisan, A. The BS-meter: A ChatGPT-Trained Instrument to Detect Sloppy Language-Games. arXiv 2024, arXiv:2411.15129. [Google Scholar] [CrossRef]
  50. Coppolillo, E. Injecting Knowledge Graphs into Large Language Models. arXiv 2025, arXiv:2505.07554. [Google Scholar] [CrossRef]
  51. Leandro, J.; Rao, S.; Xu, M.; Xu, W.; Jojic, N.; Brockett, C.; Dolan, B. GENEVA: GENErating and Visualizing branching narratives using LLMs. In Proceedings of the 2024 IEEE Conference on Games (CoG), Milan, Italy, 5–8 August 2024; pp. 1–5. [Google Scholar] [CrossRef]
  52. Au, S.; Dimacali, C.J.; Pedirappagari, O.; Park, N.; Dernoncourt, F.; Wang, Y.; Kanakaris, N.; Deilamsalehy, H.; Rossi, R.A.; Ahmed, N.K. Personalized Graph-Based Retrieval for Large Language Models. arXiv 2025, arXiv:2501.02157. [Google Scholar] [CrossRef]
  53. Gunawan, A.; Ruan, J.; Huang, X. A Graph Neural Network Reasoner for Game Description Language. In Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning—Special Session on KR and Machine Learning, Haifa, Israel, 31 July–5 August 2022; pp. 443–452. [Google Scholar]
  54. Pan, S.; Zheng, Y.; Liu, Y. Integrating Graphs with Large Language Models: Methods and Prospects. IEEE Intell. Syst. 2024, 39, 64–68. [Google Scholar] [CrossRef]
  55. Zhang, M.; Sun, M.; Wang, P.; Fan, S.; Mo, Y.; Xu, X.; Liu, H.; Yang, C.; Shi, C. GraphTranslator: Aligning Graph Model to Large Language Model for Open-ended Tasks. In Proceedings of the ACM Web Conference 2024 (WWW ‘24), Singapore, 13–17 May 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 1003–1014. [Google Scholar] [CrossRef]
  56. Chen, S.; Zhao, Z.; Zhao, Y.; Li, X. Apollonion: Profile-centric Dialog Agent. arXiv 2024, arXiv:2404.08692. [Google Scholar] [CrossRef]
  57. Wu, S.; Fung, M.; Qian, C.; Kim, J.; Hakkani-Tur, D.; Ji, H. Aligning LLMs with Individual Preferences via Interaction. arXiv 2024, arXiv:2410.03642. [Google Scholar] [CrossRef]
  58. Kau, A.; He, X.; Nambissan, A.; Astudillo, A.; Yin, H.; Aryani, A. Combining Knowledge Graphs and Large Language Models. arXiv 2024, arXiv:2407.06564. [Google Scholar] [CrossRef]
  59. Guo, Y.; Guo, M.; Su, J.; Yang, Z.; Zhu, M.; Li, H.; Qiu, M.; Liu, S.S. Bias in Large Language Models: Origin, Evaluation, and Mitigation. arXiv 2024, arXiv:2411.10915. [Google Scholar] [CrossRef]
  60. Ibáñez, L.-D.; Domingue, J.; Kirrane, S.; Seneviratne, O.; Third, A.; Vidal, M.-E. Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination. arXiv 2023, arXiv:2310.19503. [Google Scholar] [CrossRef]
  61. Zhao, X.; Blum, M.; Yang, R.; Yang, B.; Márquez Carpintero, L.; Pina-Navarro, M.; Wang, T.; Li, X.; Li, H.; Fu, Y.; et al. AGENTiGraph: An Interactive Knowledge Graph Platform for LLM-based Chatbots Utilizing Private Data. arXiv 2024, arXiv:2410.11531. [Google Scholar] [CrossRef]
  62. Garello, L.; Belgiovine, G.; Russo, G.; Rea, F.; Sciutti, A. Building Knowledge from Interactions: An LLM-Based Architecture for Adaptive Tutoring and Social Reasoning. arXiv 2025, arXiv:2504.01588. [Google Scholar] [CrossRef]
  63. Yu, W.; Tang, S.; Huang, Y.; Dong, N.; Fan, L.; Qi, H.; Liu, W.; Diao, X.; Chen, X.; Ouyang, W. Dynamic Knowledge Exchange and Dual-diversity Review: Concisely Unleashing the Potential of a Multi-Agent Research Team. arXiv 2025, arXiv:2506.18348. [Google Scholar] [CrossRef]
Figure 1. Perspective on building D-LLMs: Process, tools, results.
Table 1. Core coupling capabilities.
Capability | How Coupling Enables It
Interactive Reasoning | LLM guides GRAPHYP through stepwise graph traversal
Dynamic Dialogue | Maintains context, supports clarifications and follow-ups
Explainability | Explicit reasoning traces, transparent multi-step logic
Data Integration | Handles structured, unstructured, and time-series data
Table 2. Key fractal reasoning capabilities.
CapabilityGRAPHYP (Fractal Geometry)LLM Coupling Benefit
Self-similarity analysisQuantifies repeating patternsReveals semantic clusters
Scale invarianceMeasures complexityDetects emergent structures
Dynamic metricsTracks changes in structureAdaptive reasoning support
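To make the self-similarity and scale-invariance measures in Table 2 concrete, the following minimal sketch estimates the box-counting (fractal) dimension of a 2-D point set. This is an illustrative stand-in for GRAPHYP's fractal analysis, not its actual implementation; the function name and scale choices are ours:

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the box-counting dimension of a 2-D point set.

    Counts the occupied boxes N(s) at each box size s, then fits
    log N(s) ~ -D log s by least squares; D is the returned estimate.
    """
    points = np.asarray(points, dtype=float)
    counts = []
    for s in scales:
        # Assign each point to a grid box of side s; count distinct boxes.
        boxes = {tuple(np.floor(p / s).astype(int)) for p in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return -slope
```

For points sampled along a line the estimate approaches 1, while space-filling clouds approach 2; applied to node embeddings or traversal traces, such an index quantifies how patterns repeat across scales.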
Table 3. Reasoning capability comparison: D-LLM perspective.

| Capability | LLM | GRAPHYP | Hybrid (GRAPHYP + LLM) |
|---|---|---|---|
| Pattern Recognition | Strong | Weak | Strong |
| Logical Reasoning | Limited | Strong | Strong |
| Multi-step Inference | Weak | Strong | Enhanced flexibility |
| Explainability | Low | High | High |
| Uncertainty Handling | Weak | Strong (with hybrid) | Strongest |
| Scalability/Adaptability | High | Moderate | High |
| Fractal Analysis | None | Strong | Enhanced with semantic integration |
| Real-time Learning | Moderate | Limited | Strong |
| Context Preservation | Moderate | Strong | Strongest |
| Hallucination Rate | High | Low (limited scope) | Reduced |
| Multi-hop Query Performance | Weak | Strong | Superior |
| Factual Consistency | Moderate | High (within domain) | Enhanced across domains |
| Uncertainty Quantification | Poor | Good (possibilistic/probabilistic) | Excellent (dual framework) |
Table 4. Differential personalization components.

| Component | Implementation | Benefit |
|---|---|---|
| Preference analysis | GNN message passing across interaction graph | Identifies latent skill patterns |
| Content retrieval | Attention-based neighborhood sampling | Finds relevant challenges |
| Progress prediction | Graph traversal algorithms | Anticipates learning trajectories |
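The "GNN message passing" row of Table 4 can be illustrated with one round of mean-aggregation message passing over an interaction graph. This is a schematic sketch under our own simplifying assumptions (no learned weights or nonlinearity), not GRAPHYP's implementation:

```python
import numpy as np

def message_passing_layer(adj, features):
    """One round of mean-aggregation message passing.

    adj      : (n, n) symmetric adjacency matrix of the interaction graph
    features : (n, d) node feature matrix
    Each node's new feature is the mean of its own and its neighbors' features.
    """
    n = adj.shape[0]
    A = adj + np.eye(n)           # add self-loops so a node keeps its own signal
    deg = A.sum(axis=1, keepdims=True)
    return (A @ features) / deg   # mean over the closed neighborhood
```

Stacking such layers (with learned weights in a real GNN) lets information about a user's interactions propagate across the graph, surfacing latent preference or skill patterns.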
Table 5. PPR sampling benefits.

| Benefit | How It Works in GRAPHYP |
|---|---|
| Relevance-Focused Personalization | Samples elements using user-specific PPR scores |
| Real-Time Adaptation | Updates context as user position evolves |
| Scalable Efficiency | Processes only the most important nodes per user |
| Unique Pathways | Ranks options from personal user perspectives |
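As an illustration of the personalized PageRank (PPR) sampling summarized in Table 5, the following minimal sketch computes user-specific PPR scores by power iteration and keeps only the top-ranked nodes as that user's context. The function names, restart probability, and graph are illustrative assumptions, not GRAPHYP's actual implementation:

```python
import numpy as np

def personalized_pagerank(adj, seed, alpha=0.15, iters=100):
    """Power iteration for Personalized PageRank.

    adj  : (n, n) adjacency matrix (row i links to columns j)
    seed : index of the user node; random walks restart here
    alpha: restart probability (teleport back to the user)
    """
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True)
    # Row-normalize into a transition matrix; dangling rows stay zero.
    P = np.divide(adj, deg, out=np.zeros_like(adj), where=deg > 0)
    e = np.zeros(n)
    e[seed] = 1.0
    r = e.copy()
    for _ in range(iters):
        r = alpha * e + (1 - alpha) * (P.T @ r)
    return r

def top_k_context(adj, seed, k=3):
    """Keep only the k highest-scoring nodes as the user's context."""
    scores = personalized_pagerank(adj, seed)
    return list(np.argsort(-scores)[:k])
```

Because scores decay with distance from the seed node, nodes near the user's current position dominate the sample, which is what makes PPR-based context both personalized and cheap to update as the user moves.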
Table 6. Comparison of preference expression capabilities.

| Feature | LLM Only | GRAPHYP + LLM Coupling |
|---|---|---|
| Knowledge Representation | Unstructured/Textual | Structured/Graph-based |
| Preference Modeling | Statistical, opaque | Transparent, explainable |
| Preference Elicitation | Language-based, static | Interactive, dynamic |
| Reasoning Capabilities | Language-based inference | Graph-augmented reasoning |
| User Choice Freedom | Limited by prompt constraints | Enhanced by structured reasoning and LLM flexibility |
Table 7. D-LLM comprehensive advantages.

| Feature/Advantage | GRAPHYP + LLM Integration | Traditional LLM | Standard Knowledge Graph (KG) |
|---|---|---|---|
| Interpretability | High (reasoning paths, assessor shifts) | Medium (textual explanations) | High (explicit relations, limited reasoning paths) |
| Personalization | Real-time, user-specific | Limited | Possible, but not real-time |
| Dispute/Controversy Modeling | Native support (dispute learning) | Weak | Weak |
| Multi-hop Reasoning | Strong (graph traversal + LLM) | Weak | Strong (but less flexible) |
| Real-time Knowledge Updates | Efficient (no retraining) | Slow (needs retraining) | Moderate (manual updates) |
| Bridging Text and Structure | Yes (symbolic/textual conversion) | No | No |
| Discovery/Serendipity | High (exposes alternative paths) | Low | Low |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Fabre, R.; Egret, D.; Bellot, P. Matching Game Preferences Through Dialogical Large Language Models: A Perspective. Appl. Sci. 2025, 15, 8307. https://doi.org/10.3390/app15158307
