Opinion

Readability Metrics in Patient Education: Where Do We Innovate?

Som Singh, Aleena Jamal and Fawad Qureshi
1 Department of Internal Medicine, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
2 Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA 19107, USA
3 Department of Nephrology and Hypertension, Mayo Clinic Alix School of Medicine, Rochester, MN 55905, USA
* Author to whom correspondence should be addressed.
Clin. Pract. 2024, 14(6), 2341-2349; https://doi.org/10.3390/clinpract14060183
Submission received: 13 August 2024 / Revised: 9 October 2024 / Accepted: 18 October 2024 / Published: 4 November 2024

Abstract

The increasing use of digital applications in healthcare has led to a greater need for patient education materials. These materials, often in the form of pamphlets, booklets, and handouts, are designed to supplement physician–patient communication and aim to improve patient outcomes. However, their effectiveness can be hindered by variations in patient health literacy. Readability, a measure of text comprehension, is a key factor influencing how well patients understand these educational materials. While there has been growing interest in readability assessment in medicine, many studies have demonstrated that digital texts frequently do not meet the recommended sixth-to-eighth-grade reading level. The purpose of this opinion article is to review readability from the perspective of studies in pediatric medicine, internal medicine, preventative medicine, and surgery. This article aims to communicate that while readability is important, it does not fully capture the complexity of health literacy or effective patient communication. Moreover, generative artificial intelligence may offer a promising avenue to improve readability, as few existing tools have demonstrated similar effectiveness.

1. Introduction

A tremendous rise in the use of digital applications in healthcare has marked this past decade. For clinicians, this includes growing innovation in the electronic medical record, which has reshaped clinical documentation and information dissemination [1,2]. Likewise, for patients, there has been increasing innovation in digital access to medical information through provider–patient portals and in the convenient dissemination of educational materials on patient ailments in both digital and non-digital modalities [3]. The latter educational materials are a critical component within the infrastructure of inpatient and outpatient patient education programs. Specifically, patients receive an array of pamphlets, booklets, and paper handouts that aim to educate them about their diseases and critical management points [4]. These materials serve as supplementary sources of information in addition to the patient’s physician, and they play a significant role when physician–patient communication is inadequate or when time constraints limit the ability of physicians and other healthcare professionals to educate a patient in the acute setting [4]. Educational materials are reported to contribute to positive patient outcomes and to patients’ perception of their care [5]. However, these educational materials are static, whereas the health literacy of the patients who receive them varies widely. Patients with below-average health literacy have been shown to have worse hospitalization outcomes and to spend more on prescriptions and healthcare utilization than those with higher health literacy, indicating an inverse relationship between health literacy and healthcare utilization [6,7].

Readability

Clinicians and researchers have explored the behaviors of patient health literacy, and within this concept lies the comprehension quality of the texts provided to patients in educational materials. While there are numerous contributors to the quality of text comprehension, readability is a prominent concept that the literature has often assumed to be associated with comprehension in this area [8,9]. Historically, readability was defined by George Klare as “the ease of understanding or comprehension due to the style of writing” [10]. Efforts to quantify readability date to the mid-19th century, when texts in the United States began to be associated with grade reading levels for students, and grade-level scoring has since become a relatively common metric used by educators to assess student reading levels in the United States education system [9]. Readability scoring eventually became more common for adult readers as well. These readability metrics were adopted in medicine, and the body of literature experienced tremendous growth in the 21st century, as demonstrated in Figure 1 [11,12,13].
The readability metrics currently used to score texts are numerous, and DuBay reports that over 200 readability formulas may have been published [9]. Several of these are used frequently in the medical education literature. The Simple Measure of Gobbledygook (SMOG) is a prominent readability formula with reported high utility in consumer healthcare literature compared to other measures [14,15,16]. Wang et al. reported greater consistency between scores using the SMOG readability formula compared to other metrics [14]. However, these findings may be variable, as studies comparing readability metrics within a narrower area of medicine, such as a single disease topic, have demonstrated no significant difference between other metrics and the SMOG [17,18]. It is, therefore, imperative to review the most prominent readability metrics studied in the literature. The Flesch Reading Ease formula has also been used prominently in the clinical literature. It combines the number of sentences, syllables, and words into a score on a 100-point scale, where a higher score indicates easier-to-read text [19,20]. The Flesch Reading Ease score is one of the few metrics scaled this way, as most other metrics are scored so that the result corresponds numerically to a grade reading level. For example, the Flesch–Kincaid formula uses similar variables but scales the score to approximate a grade reading level, so a score of 11.2 would suggest that the text is written at the level of an 11th-grade student. The Gunning Fog metric uses a similar scaling but additionally counts words with three or more syllables, which it treats as “difficult” or “complex” words [9,21,22]. The New Dale–Chall metric instead employs a preselected list of familiar words; words not on this list are counted as unfamiliar and weighted differently in the calculation [23].
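All four of these formulas reduce to simple arithmetic over sentence, word, and syllable counts. The sketch below is a rough illustration only (the syllable counter is a crude vowel-group heuristic, and established packages such as textstat implement these formulas more carefully); it applies the published constants for each metric.

```python
import re

def count_syllables(word: str) -> int:
    """Crude vowel-group syllable counter; real tools use pronunciation dictionaries."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_scores(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    n_sent, n_words = len(sentences), len(words)

    # Published constants for each formula.
    flesch_ease = 206.835 - 1.015 * (n_words / n_sent) - 84.6 * (syllables / n_words)
    fk_grade = 0.39 * (n_words / n_sent) + 11.8 * (syllables / n_words) - 15.59
    gunning_fog = 0.4 * ((n_words / n_sent) + 100 * complex_words / n_words)
    smog = 1.0430 * (complex_words * (30 / n_sent)) ** 0.5 + 3.1291  # calibrated on 30-sentence samples

    return {"flesch_reading_ease": round(flesch_ease, 1),
            "flesch_kincaid_grade": round(fk_grade, 1),
            "gunning_fog": round(gunning_fog, 1),
            "smog": round(smog, 1)}

print(readability_scores(
    "Take one tablet by mouth every morning. "
    "Call your doctor if you notice swelling in your legs."))
```

On short passages such as this example, the SMOG value in particular should be read loosely, since the formula was calibrated on 30-sentence samples.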
As demonstrated in Figure 1, the body of literature on readability and patient education has grown over time across numerous aspects of patient communication. These metrics have demonstrated that research consent forms often fail to meet the recommended sixth-to-eighth-grade reading level [24]. These findings have been consistent across the educational materials of interest, including consent forms, discharge paperwork, and online educational readings. The published literature frequently suggests that patient education materials must become easier to read, as poor readability increases the risk that patients will comprehend information incorrectly, leading to misinformation [25,26,27]. The goal of this article is to explore the key components of the literature that focuses on the readability of patient education materials (Figure 2).
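For readers who wish to reproduce trend counts like those behind Figure 1, PubMed exposes its search through the NCBI E-utilities. The sketch below reuses the query string from the figure caption, but the year-by-year loop and date parameters are assumptions of this example rather than the authors’ documented method.

```python
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
QUERY = ('"readability"[Title/Abstract] AND "patient education"[Title/Abstract] '
         'AND "surgery"[Title/Abstract]')

def pubmed_count(year: int) -> int:
    """Number of PubMed records matching QUERY with a publication date in the given year."""
    params = {
        "db": "pubmed",
        "term": QUERY,
        "retmode": "json",
        "retmax": 0,           # we only need the total count, not the record IDs
        "datetype": "pdat",    # filter on publication date
        "mindate": str(year),
        "maxdate": str(year),
    }
    resp = requests.get(EUTILS, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

for year in range(2000, 2025):
    print(year, pubmed_count(year))
```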

2. Readability and Patient Education in Adult Internal Medicine

Online educational material on common inpatient admission diagnoses has also been shown to have elevated grade reading levels. Specifically, Hansberry et al. showed that resources had a mean readability above the 10th-grade level for conditions including pancreatitis, pulmonary embolism, and diverticulitis [28]. For individuals in geriatric care, Weiss et al. found that while the Health-in-Aging website aimed for simple language, its content was written at about a 10th-grade level [29]. Regarding patients in intensive and critical care settings, Hanci et al. reported that the mean Flesch Reading Ease score of online patient education materials from intensivist societies fell in the “difficult to read” range (below 50 on this scale) [15]. Still, no study has shown significant improvement toward meeting recommended grade reading levels. As a subset within internal medicine, educating patients with oncologic disease is imperative, as patients who undergo immunotherapy and radiation therapy require a great degree of longitudinal care and monitoring. Maintaining this longitudinal monitoring requires clinicians and oncology team members to educate patients appropriately on their condition. Moreover, Papadakos et al. report that oncology centers invest over USD 60,000 annually in developing educational pamphlets for patients alone [30]. Recent literature has demonstrated the use of large language models to produce educational texts on oncologic diseases [22,31,32]. These models have also been shown not to meet recommended reading levels. Similarly to the literature in surgery, these generative artificial intelligence models have been able to improve reading levels; however, the improvement was still insufficient to meet recommended grade reading levels, as the texts were only brought down to approximately a ninth-grade level. Likewise, online education materials for adult endocrine diseases were reported to have Flesch Reading Ease scores ranging between “fairly difficult to read” and “very difficult to read” [21]. This poor readability has likewise been demonstrated in numerous studies of rheumatologic conditions, with texts exceeding the eighth-grade reading level [11,16].

3. Readability and Patient Education in Pediatric Medicine

The relationship between pediatric medical teams and their patients is unique compared to other populations. Patient education requires communication both with parents at an appropriate level and with the patients themselves, given their age [33]. Readability assessments of pediatric patient education materials have similarly found them to be too advanced, and poor readability has even been shown to be a barrier to enrolling patients in research studies. However, a study in 2005 showed that some education brochures from the American Academy of Pediatrics were written at an acceptably low grade reading level [34]. There is a relative paucity of more recent, longitudinal results, which are essential given the tremendous growth of online literature since that study was published. Okuhara et al. published a systematic review of the readability of online and offline vaccine informational materials [35]. Some of the included studies on childhood vaccinations reported both poor reading proficiency among the patients’ parents and poorly readable materials.

4. Readability and Patient Education in Preventative Medicine

There is growing interest in preventative healthcare, as chronic diseases remain a primary driver of healthcare utilization costs [36]. Despite this, the readability of materials for many preventative measures has stagnated. For example, a 2020 study of public online educational materials on coronary artery calcium scans demonstrated an overall mean readability score of 10.9 [37]. A 2023 study demonstrated a mean Flesch–Kincaid score of 11.3, which may indicate no change or even worse readability over this period [38]. A study by Skrzypczak and Mamak observed that the readability of websites with information regarding colonoscopy was classified as “very hard to comprehend” [39]. The mean Flesch–Kincaid score for lung cancer screening materials exceeds the recommended reading levels, at approximately the 10th-grade level or above [40]. This above-recommended level was also found in breast-cancer-screening educational materials [41]. Gu et al., in a systematic review and meta-analysis of online patient education materials on breast cancer, found that readability metrics were inconsistently reported in this body of literature [42]; nevertheless, most of the materials analyzed did not meet recommended grade reading levels. AlKhalili et al. showed that 42 internet-based education materials on mammography and breast cancer screening had a mean Gunning Fog score of over 14, indicating a grade reading level nearly twice as high as recommended [43]. Parry et al. reported a mean readability of 9.5 for online articles regarding Pap smears [44]. For common medications, Ngo et al. found that online educational materials on statins have a mean readability score over 10 [45].

5. Readability and Patient Education in Surgery

Within surgery, informed consent plays one of the most vital roles in patient care. Physicians and surgical team members must assess a patient’s health literacy regarding their disease and management options [46]. Lin et al. analyzed procedure consent forms from 104 hospitals in the United States and found that their mean grade reading level was 11th grade, which does not meet the recommended grade reading levels [47]. Peer-reviewed articles have assessed the readability of digital and internet-based materials across numerous surgical areas (i.e., neurological surgery, gastrointestinal surgery, plastic surgery, sinus surgery, and trauma surgery) [48,49,50,51,52,53,54]. These studies have reported poor readability of the educational materials examined. Furthermore, content from national organizations and high-impact articles showed no difference in meeting recommended grade reading levels [11,50]. Moreover, Zhang et al. report that there was limited improvement in the readability of online patient educational materials regarding hand surgery between 2008 and 2015 [49]. Since 2020, the literature on surgical patient education and readability has increasingly examined readability in the setting of generative artificial intelligence [32]. An article by Ali et al. demonstrated the ability of a generative pre-trained transformer (GPT) to improve the readability of surgical consent forms [55]; after GPT revision, readability improved from a college reading level to an eighth-grade reading level.

6. Discussion

Readability metrics may not give a complete picture of comprehension. All sections reported in this review demonstrated mean readability levels that do not meet the recommended reading level of sixth to eighth grade. Rather, readability may be a much more nuanced, personalized attribute that cannot be entirely encompassed by a grade reading level. Moreover, scores such as the Flesch Reading Ease and Flesch–Kincaid do not account for the subjective matter of which words are familiar to readers, and familiar words may not warrant the same weight as unfamiliar ones [56]. Although the Dale–Chall and New Dale–Chall metrics provide a word list to account for familiarity, they may still not account for words that are more familiar to patients within particular topic areas [23,57]. This creates an opportunity for clinicians and researchers in readability to develop a metric that also accounts for familiarity specific to a subject topic. For example, patients undergoing dialysis may have greater familiarity with words regarding their treatment than patients with no reported history of renal disease.
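As an illustration of this kind of topic-aware scoring, the sketch below applies the published New Dale–Chall constants but lets the caller extend the familiar-word list with domain terms, such as words a dialysis patient already encounters in routine care. Both word lists are tiny, hypothetical stand-ins (the real New Dale–Chall list contains roughly 3000 familiar words), so the outputs are illustrative only.

```python
import re

# Tiny illustrative stand-ins; the real New Dale-Chall list has ~3000 familiar words.
BASE_FAMILIAR = {"take", "one", "pill", "every", "day", "your", "doctor",
                 "call", "if", "you", "feel", "sick", "with", "food"}
# Hypothetical domain list: terms a dialysis patient likely already knows.
DIALYSIS_FAMILIAR = {"dialysis", "fistula", "potassium", "fluid", "session"}

def dale_chall(text: str, extra_familiar=frozenset()) -> float:
    """New Dale-Chall score, optionally treating domain-specific terms as familiar."""
    familiar = BASE_FAMILIAR | set(extra_familiar)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    difficult = sum(1 for w in words if w not in familiar)
    pct_difficult = 100 * difficult / len(words)
    score = 0.1579 * pct_difficult + 0.0496 * (len(words) / len(sentences))
    if pct_difficult > 5:
        score += 3.6365  # adjustment term from the New Dale-Chall formula
    return round(score, 2)

text = "Limit fluid between dialysis sessions. Call your doctor if your fistula feels warm."
print(dale_chall(text))                      # scored against the generic list only
print(dale_chall(text, DIALYSIS_FAMILIAR))   # dialysis vocabulary treated as familiar
```

Scoring the same passage with and without the domain list shows how treating treatment-specific vocabulary as familiar lowers the estimated difficulty for that particular audience.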
Concerns about readability criteria have been raised across many cross-sectional studies in the prior literature. However, there is a paucity of transparent ways to improve the readability of text, given the cross-sectional design of many of these studies. Hackos and Stevens describe a number of ways to improve texts that could be employed, including avoiding jargon, using the present tense, and using culture- and gender-neutral language [58]. Moreover, researchers ought to investigate the design of innovative tools that can review texts and suggest these simplifications. One such tool may be generative artificial intelligence, given its demonstrated ability to reduce readability scores.
Generative artificial intelligence models have demonstrated an ability to decrease readability scores when texts are passed through them [22,59,60]. This is a promising avenue for improving readability, given that online medical content has not shown statistically significant longitudinal improvement toward recommended grade reading levels over time. Although many studies implementing generative artificial intelligence did not reach the recommended grade reading levels, it is important to recognize that the literature on these models continues to grow, and the models may eventually leverage their deep learning attributes to adapt readability further. However, it remains imperative that these models be monitored longitudinally, as deep learning may also produce artificial intelligence hallucinations, impairing our understanding of proper health literacy and readability.
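None of the cited studies publish a reusable pipeline, but the general pattern they describe, rewriting with a large language model and then re-scoring the output, can be sketched as follows. This is an assumption-laden illustration: the model name, prompt wording, grade target, and the third-party textstat package are choices of this example, not the methods of the cited works, and any output would still require clinician review given the hallucination risk noted above.

```python
# pip install openai textstat
from openai import OpenAI
import textstat

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def simplify(text: str, target_grade: int = 7, max_passes: int = 3) -> str:
    """Rewrite text with an LLM until an independent readability check passes."""
    draft = text
    for _ in range(max_passes):
        if textstat.flesch_kincaid_grade(draft) <= target_grade:
            break  # at or below the recommended sixth-to-eighth-grade range
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any capable chat model could be substituted
            messages=[
                {"role": "system",
                 "content": (f"Rewrite the following patient education text at a "
                             f"grade-{target_grade} reading level. Keep every medical "
                             "fact and do not add new claims.")},
                {"role": "user", "content": draft},
            ],
        )
        draft = response.choices[0].message.content
    return draft  # still requires clinician review for accuracy and hallucinations

original = ("Anticoagulation therapy necessitates meticulous adherence to the prescribed "
            "dosing regimen to mitigate thromboembolic and hemorrhagic complications.")
print(textstat.flesch_kincaid_grade(original))
print(simplify(original))
```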
This opinion article is limited in scope in that it focuses on four major avenues of medical care: surgery, adult medicine, pediatric medicine, and preventative medicine. Likewise, not all subspecialties or medical topics were discussed. However, the readability studies reviewed here all demonstrated the similar finding that texts do not meet recommended grade reading levels. This article instead provides an updated status of the literature, indicating that promising avenues to improve readability may be found in the use of generative artificial intelligence models, which may be applied across numerous aspects of medicine. These four areas were chosen because they cover common areas of outpatient medical care that patients receive. Likewise, this review also notes the need for more investigation of readability in pediatric medicine, as there is a lack of literature in that area compared to internal medicine and the surgical fields.

7. Conclusions

Many studies report similar findings showing that their sampled texts do not meet the recommended sixth-to-eighth-grade reading level. One area to explore is the recognition that readability alone may not be an appropriate metric for determining health literacy or the quality of patient communication; rather, these findings demonstrate that texts urgently need to improve. One promising avenue for improvement may be training generative artificial intelligence, as few tools in the literature have shown a similar degree of readability improvement. Another possible avenue may be to improve readability scoring using metrics that employ updated or more personalized strategies, like the New Dale–Chall formula, which uses an updated list of familiar words weighted as a separate variable. This may help improve our understanding of readability at a more personalized level.

Author Contributions

Conceptualization, S.S. and F.Q.; investigation, S.S. and A.J.; resources, F.Q.; data curation, A.J.; writing—original draft preparation, S.S.; writing—review and editing, A.J.; visualization, A.J.; supervision, F.Q.; project administration, F.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Honavar, S.G. Electronic Medical Records—The Good, the Bad and the Ugly. Indian J. Ophthalmol. 2020, 68, 417–418. [Google Scholar] [CrossRef] [PubMed]
  2. Evans, R.S. Electronic Health Records: Then, Now, and in the Future. Yearb. Med. Inform. 2016, 25, S48–S61. [Google Scholar] [CrossRef] [PubMed]
  3. Carini, E.; Villani, L.; Pezzullo, A.M.; Gentili, A.; Barbara, A.; Ricciardi, W.; Boccia, S. The Impact of Digital Patient Portals on Health Outcomes, System Efficiency, and Patient Attitudes: Updated Systematic Literature Review. J. Med. Internet Res. 2021, 23, e26189. [Google Scholar] [CrossRef] [PubMed]
  4. Shank, J.C.; Murphy, M.; Schulte-Mowry, L. Patient Preferences Regarding Educational Pamphlets in the Family Practice Center. Fam. Med. 1991, 23, 429–432. [Google Scholar] [PubMed]
  5. Giguère, A.; Zomahoun, H.T.V.; Carmichael, P.-H.; Uwizeye, C.B.; Légaré, F.; Grimshaw, J.M.; Gagnon, M.-P.; Auguste, D.U.; Massougbodji, J. Printed Educational Materials: Effects on Professional Practice and Healthcare Outcomes. Cochrane Database Syst. Rev. 2020, 31, CD004398. [Google Scholar] [CrossRef]
  6. Shahid, R.; Shoker, M.; Chu, L.M.; Frehlick, R.; Ward, H.; Pahwa, P. Impact of Low Health Literacy on Patients’ Health Outcomes: A Multicenter Cohort Study. BMC Health Serv. Res. 2022, 22, 1148. [Google Scholar] [CrossRef]
  7. Rasu, R.S.; Bawa, W.A.; Suminski, R.; Snella, K.; Warady, B. Health Literacy Impact on National Healthcare Utilization and Expenditure. Int. J. Health Policy Manag. 2015, 4, 747–755. [Google Scholar] [CrossRef]
  8. Pickren, S.E.; Stacy, M.; Del Tufo, S.N.; Spencer, M.; Cutting, L.E. The Contribution of Text Characteristics to Reading Comprehension: Investigating the Influence of Text Emotionality. Read. Res. Q. 2022, 57, 649–667. [Google Scholar] [CrossRef]
  9. DuBay, W.H. The Principles of Readability; Online Submission; ERIC: Washington, DC, USA, 2004. [Google Scholar]
  10. Klare, G.R. Measurement of Readability; Wageningen University and Research Library Catalog: Wageningen, The Netherlands, 1963; Available online: https://library.wur.nl/WebQuery/titel/495567 (accessed on 10 August 2024).
  11. Rooney, M.K.; Santiago, G.; Perni, S.; Horowitz, D.P.; McCall, A.R.; Einstein, A.J.; Jagsi, R.; Golden, D.W. Readability of Patient Education Materials From High-Impact Medical Journals: A 20-Year Analysis. J. Patient Exp. 2021, 8, 2374373521998847. [Google Scholar] [CrossRef]
  12. Noblin, A.M.; Zraick, R.I.; Miller, A.N.; Schmidt-Owens, M.; Deichen, M.; Tran, K.; Patel, R. Readability and Suitability of Information Presented on a University Health Center Website. Perspect. Health Inf. Manag. 2022, 19, 1f. [Google Scholar]
  13. Li, J.; Lin, F.; Duan, T. Exploring Two Decades of Research on Online Reading by Using Bibliometric Analysis. Educ. Inf. Technol. 2024, 29, 12831–12862. [Google Scholar] [CrossRef]
  14. Wang, L.-W.; Miller, M.J.; Schmitt, M.R.; Wen, F.K. Assessing Readability Formula Differences with Written Health Information Materials: Application, Results, and Recommendations. Res. Soc. Adm. Pharm. RSAP 2013, 9, 503–516. [Google Scholar] [CrossRef] [PubMed]
  15. Hanci, V.; Otlu, B.; Biyikoğlu, A.S. Assessment of the Readability of the Online Patient Education Materials of Intensive and Critical Care Societies. Crit. Care Med. 2024, 52, e47–e57. [Google Scholar] [CrossRef] [PubMed]
  16. Oliffe, M.; Thompson, E.; Johnston, J.; Freeman, D.; Bagga, H.; Wong, P.K.K. Assessing the Readability and Patient Comprehension of Rheumatology Medicine Information Sheets: A Cross-Sectional Health Literacy Study. BMJ Open 2019, 9, e024582. [Google Scholar] [CrossRef] [PubMed]
  17. Kher, A.; Johnson, S.; Griffith, R. Readability Assessment of Online Patient Education Material on Congestive Heart Failure. Adv. Prev. Med. 2017, 2017, 9780317. [Google Scholar] [CrossRef] [PubMed]
  18. Mac, O.; Ayre, J.; Bell, K.; McCaffery, K.; Muscat, D.M. Comparison of Readability Scores for Written Health Information Across Formulas Using Automated vs Manual Measures. JAMA Netw. Open 2022, 5, e2246051. [Google Scholar] [CrossRef]
  19. Jindal, P.; MacDermid, J.C. Assessing Reading Levels of Health Information: Uses and Limitations of Flesch Formula. Educ. Health 2017, 30, 84. [Google Scholar] [CrossRef]
  20. Wrigley Kelly, N.E.; Murray, K.E.; McCarthy, C.; O’Shea, D.B. An Objective Analysis of Quality and Readability of Online Information on COVID-19. Health Technol. 2021, 11, 1093–1099. [Google Scholar] [CrossRef]
  21. Singh, S.P.; Qureshi, F.M.; Borthwick, K.G.; Singh, S.; Menon, S.; Barthel, B. Comprehension Profile of Patient Education Materials in Endocrine Care. Kans. J. Med. 2022, 15, 247–252. [Google Scholar] [CrossRef]
  22. Singh, S.P.; Jamal, A.; Qureshi, F.; Zaidi, R.; Qureshi, F. Leveraging Generative Artificial Intelligence Models in Patient Education on Inferior Vena Cava Filters. Clin. Pract. 2024, 14, 1507–1514. [Google Scholar] [CrossRef]
  23. Chall, J.S.; Dale, E. Readability Revisited: The New Dale-Chall Readability Formula; CiNii Research; Brookline Books: Brookline, MA, USA, 1995; Available online: https://cir.nii.ac.jp/crid/1130282268845043712 (accessed on 10 August 2024).
  24. Perni, S.; Rooney, M.K.; Horowitz, D.P.; Golden, D.W.; McCall, A.R.; Einstein, A.J.; Jagsi, R. Assessment of Use, Specificity, and Readability of Written Clinical Informed Consent Forms for Patients With Cancer Undergoing Radiotherapy. JAMA Oncol. 2019, 5, e190260. [Google Scholar] [CrossRef] [PubMed]
  25. Battineni, G.; Baldoni, S.; Chintalapudi, N.; Sagaro, G.G.; Pallotta, G.; Nittari, G.; Amenta, F. Factors Affecting the Quality and Reliability of Online Health Information. Digit. Health 2020, 6, 2055207620948996. [Google Scholar] [CrossRef] [PubMed]
  26. Crabtree, L.; Lee, E. Assessment of the Readability and Quality of Online Patient Education Materials for the Medical Treatment of Open-Angle Glaucoma. BMJ Open Ophthalmol. 2022, 7, e000966. [Google Scholar] [CrossRef] [PubMed]
  27. Boroumand, M.A.; Sedghi, S.; Adibi, P.; Panahi, S.; Rahimi, A. Patients’ Perspectives on the Quality of Online Patient Education Materials: A Qualitative Study. J. Educ. Health Promot. 2022, 11, 402. [Google Scholar] [CrossRef] [PubMed]
  28. Hansberry, D.R.; D’Angelo, M.; White, M.D.; Prabhu, A.V.; Cox, M.; Agarwal, N.; Deshmukh, S. Quantitative Analysis of the Level of Readability of Online Emergency Radiology-Based Patient Education Resources. Emerg. Radiol. 2018, 25, 147–152. [Google Scholar] [CrossRef]
  29. Weiss, B.D.; Mollon, L.; Lee, J.K. Readability of Patient Education Information on the American Geriatrics Society Foundation’s Health-in-Aging Website. J. Am. Geriatr. Soc. 2013, 61, 1845–1846. [Google Scholar] [CrossRef]
  30. Papadakos, J.; Samoil, D.; Giannopoulos, E.; Jain, P.; McBain, S.; Mittmann, N.; Papadakos, T.; Fox, C.; Moody, L.; McLeod, R. The Cost of Patient Education Materials Development: Opportunities to Identify Value and Priorities. J. Cancer Educ. Off. J. Am. Assoc. Cancer Educ. 2022, 37, 834–842. [Google Scholar] [CrossRef]
  31. Cureus. Can Generative Artificial Intelligence Enhance Health Literacy About Lateral Epicondylitis? Available online: https://www.cureus.com/articles/257423-can-generative-artificial-intelligence-enhance-health-literacy-about-lateral-epicondylitis#!/ (accessed on 10 August 2024).
  32. Rouhi, A.D.; Ghanem, Y.K.; Yolchieva, L.; Saleh, Z.; Joshi, H.; Moccia, M.C.; Suarez-Pierre, A.; Han, J.J. Can Artificial Intelligence Improve the Readability of Patient Education Materials on Aortic Stenosis? A Pilot Study. Cardiol. Ther. 2024, 13, 137–147. [Google Scholar] [CrossRef]
  33. Bell, J.; Condren, M. Communication Strategies for Empowering and Protecting Children. J. Pediatr. Pharmacol. Ther. JPPT 2016, 21, 176–184. [Google Scholar] [CrossRef]
  34. Freda, M.C. The Readability of American Academy of Pediatrics Patient Education Brochures. J. Pediatr. Health Care Off. Publ. Natl. Assoc. Pediatr. Nurse Assoc. Pract. 2005, 19, 151–156. [Google Scholar] [CrossRef]
  35. Okuhara, T.; Ishikawa, H.; Ueno, H.; Okada, H.; Kato, M.; Kiuchi, T. Readability Assessment of Vaccine Information: A Systematic Review for Addressing Vaccine Hesitancy. Patient Educ. Couns. 2022, 105, 331–338. [Google Scholar] [CrossRef] [PubMed]
  36. Levine, S.; Malone, E.; Lekiachvili, A.; Briss, P. Health Care Industry Insights: Why the Use of Preventive Services Is Still Low. Prev. Chron. Dis. 2019, 16, E30. [Google Scholar] [CrossRef] [PubMed]
  37. Rodriguez, F.; Ngo, S.; Baird, G.; Balla, S.; Miles, R.; Garg, M. Readability of Online Patient Educational Materials for Coronary Artery Calcium Scans and Implications for Health Disparities. J. Am. Heart Assoc. Cardiovasc. Cerebrovasc. Dis. 2020, 9, e017372. [Google Scholar] [CrossRef] [PubMed]
  38. Singh, S.P.; Ramprasad, A.; Luu, A.; Zaidi, R.; Siddiqui, Z.; Pham, T. Health Literacy Analytics of Accessible Patient Resources in Cardiovascular Medicine: What Are Patients Wanting to Know? Kans. J. Med. 2023, 16, 309–315. [Google Scholar] [CrossRef] [PubMed]
  39. Skrzypczak, T.; Mamak, M. Assessing the Readability of Online Health Information for Colonoscopy—Analysis of Articles in 22 European Languages. J. Cancer Educ. 2023, 38, 1865–1870. [Google Scholar] [CrossRef]
  40. Gagne, S.M.; Fintelmann, F.J.; Flores, E.J.; McDermott, S.; Mendoza, D.P.; Petranovic, M.; Price, M.C.; Stowell, J.T.; Little, B.P. Evaluation of the Informational Content and Readability of US Lung Cancer Screening Program Websites. JAMA Netw. Open 2020, 3, e1920431. [Google Scholar] [CrossRef]
  41. Lamb, L.R.; Baird, G.L.; Roy, I.T.; Choi, P.H.S.; Lehman, C.D.; Miles, R.C. Are English-Language Online Patient Education Materials Related to Breast Cancer Risk Assessment Understandable, Readable, and Actionable? Breast 2022, 61, 29–34. [Google Scholar] [CrossRef]
  42. Gu, J.Z.; Baird, G.L.; Escamilla Guevara, A.; Sohn, Y.-J.; Lydston, M.; Doyle, C.; Tevis, S.E.A.; Miles, R.C. A Systematic Review and Meta-Analysis of English Language Online Patient Education Materials in Breast Cancer: Is Readability the Only Story? Breast Edinb. Scotl. 2024, 75, 103722. [Google Scholar] [CrossRef]
  43. AlKhalili, R.; Shukla, P.A.; Patel, R.H.; Sanghvi, S.; Hubbi, B. Readability Assessment of Internet-Based Patient Education Materials Related to Mammography for Breast Cancer Screening. Acad. Radiol. 2015, 22, 290–295. [Google Scholar] [CrossRef]
  44. Parry, M.J.; Dowdle, T.S.; Steadman, J.N.; Guerra, T.R.; Cox, K.L. Pap Smear Readability on Google: An Analysis of Online Articles Regarding One of the Most Routine Medical Screening Tests. Int. J. Med. Stud. 2020, 8, 257–262. [Google Scholar] [CrossRef]
  45. Ngo, S.; Asirvatham, R.; Baird, G.L.; Sarraju, A.; Maron, D.J.; Rodriguez, F. Readability and Reliability of Online Patient Education Materials about Statins. Am. J. Prev. Cardiol. 2023, 16, 100594. [Google Scholar] [CrossRef] [PubMed]
  46. Shah, P.; Thornton, I.; Turrin, D.; Hipskind, J.E. Informed Consent. In StatPearls; StatPearls Publishing: Treasure Island, FL, USA, 2024. [Google Scholar]
  47. Lin, G.T.; Mitchell, M.B.; Hammack-Aviran, C.; Gao, Y.; Liu, D.; Langerman, A. Content and Readability of US Procedure Consent Forms. JAMA Intern. Med. 2024, 184, 214–216. [Google Scholar] [CrossRef] [PubMed]
  48. Massie, P.L.; Arshad, S.A.; Auyang, E.D. Readability of American Society of Metabolic Surgery’s Patient Information Publications. J. Surg. Res. 2024, 293, 727–732. [Google Scholar] [CrossRef] [PubMed]
  49. Zhang, D.; Earp, B.E.; Kilgallen, E.E.; Blazar, P. Readability of Online Hand Surgery Patient Educational Materials: Evaluating the Trend Since 2008. J. Hand Surg. 2021, 47, 186.E1–186.E8. [Google Scholar] [CrossRef]
  50. Eltorai, A.E.M.; Ghanian, S.; Adams, C.A.; Born, C.T.; Daniels, A.H. Readability of Patient Education Materials on the American Association for Surgery of Trauma Website. Arch. Trauma Res. 2014, 3, e18161. [Google Scholar] [CrossRef]
  51. Behmer Hansen, R.; Gold, J.; Lad, M.; Gupta, R.; Ganapa, S.; Mammis, A. Health Literacy among Neurosurgery and Other Surgical Subspecialties: Readability of Online Patient Materials Found with Google. Clin. Neurol. Neurosurg. 2020, 197, 106141. [Google Scholar] [CrossRef]
  52. Mohamed, A.A.; Ali, R.; Johansen, P.M. Readability of Neurosurgical Patient Education Resources by the American Association of Neurological Surgeons. World Neurosurg. 2024, 186, e734–e739. [Google Scholar] [CrossRef]
  53. Cherla, D.V.; Sanghvi, S.; Choudhry, O.J.; Liu, J.K.; Eloy, J.A. Readability Assessment of Internet-Based Patient Education Materials Related to Endoscopic Sinus Surgery. Laryngoscope 2012, 122, 1649–1654. [Google Scholar] [CrossRef]
  54. Nawaz, M.S.; McDermott, L.E.; Thor, S. The Readability of Patient Education Materials Pertaining to Gastrointestinal Procedures. Can. J. Gastroenterol. Hepatol. 2021, 2021, 7532905. [Google Scholar] [CrossRef]
  55. Ali, R.; Connolly, I.D.; Tang, O.Y.; Mirza, F.N.; Johnston, B.; Abdulrazeq, H.F.; Lim, R.K.; Galamaga, P.F.; Libby, T.J.; Sodha, N.R.; et al. Bridging the Literacy Gap for Surgical Consents: An AI-Human Expert Collaborative Approach. NPJ Digit Med. 2024, 7, 63. [Google Scholar] [CrossRef]
  56. Calderón, J.L.; Morales, L.S.; Liu, H.; Hays, R.D. Variation in the Readability of Items Within Surveys. Am. J. Med. Qual. Off. J. Am. Coll. Med. Qual. 2006, 21, 49–56. [Google Scholar] [CrossRef] [PubMed]
  57. Michel, C.; Dijanic, C.; Abdelmalek, G.; Sudah, S.; Kerrigan, D.; Gorgy, G.; Yalamanchili, P. Readability Assessment of Patient Educational Materials for Pediatric Spinal Deformity from Top Academic Orthopedic Institutions. Spine Deform. 2022, 10, 1315–1321. [Google Scholar] [CrossRef] [PubMed]
  58. Hackos, J.T.; Stevens, D.M. Standards for Online Communication: Publishing Information for the Internet, World Wide Web, Help Systems, Corporate Intranets; Wiley Computer Pub.: New York, NY, USA, 1997; Available online: https://demo.locate.ebsco.com/instances/af424a0f-0ee2-4d56-a732-396ed28edda0?option=subject&query=Invisibility (accessed on 13 August 2024).
  59. Golan, R.; Ripps, S.J.; Reddy, R.; Loloi, J.; Bernstein, A.P.; Connelly, Z.M.; Golan, N.S.; Ramasamy, R. ChatGPT’s Ability to Assess Quality and Readability of Online Medical Information: Evidence From a Cross-Sectional Study. Cureus 2023, 15, e42214. [Google Scholar] [CrossRef] [PubMed]
  60. Moons, P.; Van Bulck, L. Using ChatGPT and Google Bard to Improve the Readability of Written Patient Information: A Proof of Concept. Eur. J. Cardiovasc. Nurs. 2024, 23, 122–126. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Publications on readability and patient education indexed in PubMed have increased over time (as of 8 August 2024). The query employed was (“readability”[Title/Abstract] AND “patient education”[Title/Abstract] AND “surgery”[Title/Abstract]).
Figure 2. Readability metrics frequently used in the clinical literature. Of note, numerous readability metrics exist that go beyond the scope of this figure.