Background: Effective communication with patients is vital for improving health outcomes in chronic disease management. In this study, we investigated WoundScribeAI's Scribe AI (also known as Ambient Technology) and its patient education and engagement app, Pingoo.AI. The system employed a multi-agent AI model that leveraged Large Language Models (LLMs) and NotebookLM to enhance patient communication in clinical settings.
Methods: The system comprised specialized agents that transcribed healthcare provider–patient conversations through ambient dictation. From this transcription, the system generated medical notes in the Subjective, Objective, Assessment, and Plan (SOAP) format, a structured document used by healthcare providers to record and communicate information about patient encounters. Comprehensive visit summaries were generated in parallel. These visit summaries were then transformed into conversational, educational content using NotebookLM, a Google AI tool that can generate podcast-style conversations from provided material. Integrating these agents allowed clinicians to deliver engaging, empathetic, and actionable information to patients. Medical experts conducted a two-phase evaluation of the system's performance against multiple criteria, with a particular focus on diabetes education and diabetic foot care. The first phase used pre-recorded training videos; the second involved simulated consultations in which clinicians used the system. To validate the AI-generated educational content, we applied several established health communication frameworks that closely align with our enhancement goals.
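The multi-agent flow described above (transcript → SOAP note and visit summary → podcast-style conversation) can be sketched as follows. This is a minimal illustration only: the agent functions, prompts, and the `call_llm` placeholder are hypothetical and stand in for the actual Scribe AI agents and the NotebookLM step, which the abstract does not specify at the code level.

```python
from dataclasses import dataclass

@dataclass
class VisitArtifacts:
    """Outputs of the pipeline for one patient encounter."""
    soap_note: str
    visit_summary: str
    podcast_script: str

def call_llm(instruction: str, text: str) -> str:
    # Placeholder for a real LLM call (e.g., an API client).
    # Here it simply tags the output so the data flow is visible.
    return f"[{instruction}]\n{text}"

def run_pipeline(transcript: str) -> VisitArtifacts:
    # Agent 1: structure the ambient-dictation transcript as a SOAP note.
    soap = call_llm(
        "Format as SOAP (Subjective, Objective, Assessment, Plan)", transcript
    )
    # Agent 2: produce a plain-language visit summary in parallel.
    summary = call_llm("Summarize this visit for the patient", transcript)
    # Agent 3: turn the summary into a podcast-style dialogue,
    # analogous to prompting NotebookLM with the visit summary.
    podcast = call_llm(
        "Rewrite as an empathetic two-host educational conversation", summary
    )
    return VisitArtifacts(
        soap_note=soap, visit_summary=summary, podcast_script=podcast
    )
```

Note that only the visit summary, not the raw transcript or SOAP note, feeds the conversational step, mirroring the staged hand-off described in the Methods.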
Results: The AI model generated accurate clinical documentation, meeting the criteria for accurate SOAP notes, visit summaries, and engaging patient education content. Because hallucination is a significant concern with large language models, especially in critical fields like healthcare, we meticulously analyzed the generated outputs for any signs of hallucinated information. All three outputs passed the validation criteria, which covered accuracy, completeness, comprehensiveness, absence of potential harm, and absence of hallucination. Additionally, the conversational education content was validated against established patient education frameworks and met criteria such as the use of metaphors, an empathetic tone, and appropriate language, providing additional detail to help patients manage their condition.
Conclusions: By providing specific instructions and prompts to NotebookLM to transform visit summaries into educational conversations, we significantly enhanced the comprehensiveness and engagement of the content for patients. In contrast to a traditional summary of the clinical visit, the podcast-style conversation enriched the content with background information, encouraging language, an empathetic tone, and helpful metaphors. Our analysis confirmed that the system did not exhibit any hallucinations, highlighting the effectiveness of our approach in mitigating this risk. These findings support the use of multi-agent AI models, combined with ambient dictation and tools like NotebookLM, to improve patient communication in ways that surpass traditional paper-based brochures, which are often impersonal, minimal, and inconsistent with recommended health literacy practices.