Review

Methodological Quality of Systematic Reviews for Questions of Therapy and Prevention Published in the Urological Literature (2016–2021) Fails to Improve

by Maylynn Ding 1,†, Jared Johnson 2,3,†, Onuralp Ergun 2,3, Gustavo Ariel Alvez 4 and Philipp Dahm 2,3,*

1 Division of Urology, Department of Surgery, McMaster University, Hamilton, ON L8S 4L8, Canada
2 Department of Urology, University of Minnesota, Minneapolis, MN 55455, USA
3 Minneapolis VA Medical Center, Urology Section, Minneapolis, MN, USA
4 Hospital Italiano de Buenos Aires, Buenos Aires, Argentina
* Author to whom correspondence should be addressed.
Soc. Int. Urol. J. 2023, 4(5), 415-422; https://doi.org/10.48083/WURA1857
Submission received: 2 March 2023 / Revised: 2 March 2023 / Accepted: 13 May 2023 / Published: 19 September 2023

Abstract

Objectives: Prior studies have suggested that few systematic reviews (SRs) published in the urological literature provide reliable evidence. We performed this study to provide a longitudinal analysis of the methodological quality of SRs published in 5 major urology journals over a 6-year period (2016–2021). Methods: As an extension of a prior study with a written a priori protocol, we systematically searched for and analyzed all SRs related to questions of therapy or prevention published in the 5 major urology journals. Three independent reviewers working in pairs selected eligible studies and abstracted the data in duplicate. We used the updated Assessment of Multiple Systematic Reviews (AMSTAR-2) instrument to assess SR quality. We performed pre-planned statistical hypothesis testing by time period and journal of publication in SPSS Version 27.0. Results: Our updated search (2019–2021) identified 563 references, of which 114 ultimately met inclusion criteria; these were added to the database of 144 studies from the prior period (2016–2018). Overall, among 258 SRs, only 6 (2.3%) and 9 (3.5%) achieved a “high” (no critical weakness; up to one non-critical weakness) or “moderate” (no critical weakness; more than one non-critical weakness) confidence rating, respectively. Most SRs published had a “critically low” confidence rating (195; 75.6%). The proportion of studies with a high or moderate rating did not increase over time (6.1% versus 4.9%; P = 0.481). Conclusions: Most SRs published in the urological literature continue to have serious methodological limitations and should not be relied upon. There is a critical need for greater awareness of established methodological standards.

1. Introduction

Systematic reviews (SRs) are central to evidence-based clinical practice, informing individual decision-making at the point of care and serving as the foundation of clinical practice guidelines and health policy decisions [1]. They are being published in increasing numbers, in part because of their ability to generate a large number of citations. As a result, there are areas of medicine where the number of SRs exceeds that of the individual studies being synthesized [2]. In addition, the methodological quality of many reviews is modest, undermining the confidence that readers can place in their results.
In a previous publication, we applied the updated Assessment of Multiple Systematic Reviews (AMSTAR-2) instrument [3] to assess the methodological quality of SRs in the urological literature from 2016 to 2018 [4]. We found that only a small proportion of SRs achieved a “high” or “moderate” confidence rating according to AMSTAR-2, owing to failure to meet basic quality criteria such as the provision of an a priori, registered protocol. We conducted the present study to provide a longitudinal analysis of SR quality over time, using identical methods to assess the subsequent 3 years (2019–2021).

2. Methods

This study was an extension of a previously published study [4] and followed a written a priori protocol. Given its focus on methodology, it could not be registered in PROSPERO. We performed a comprehensive search in PubMed, which indexes all 5 included journals (BJU International, European Urology, The Journal of Urology, Urology, and World Journal of Urology), to identify all SRs published either electronically or in print between January 1, 2016, and December 31, 2018, using the SR search filter under Clinical Queries. We extended this search through June 30, 2021, with an overlapping search start date of 3 months (October 1 to December 31, 2018) to ensure we captured all SRs from that year that may have been indexed late. The references were then imported into Rayyan, a dedicated online software program for study screening. We included studies that self-identified as SRs in the title, abstract, or methods and addressed clinical questions of therapy and/or prevention. Individual participant data meta-analyses and co-published, abbreviated Cochrane reviews were included, as were health technology assessment reviews by a funding agency (e.g., the National Institute for Health and Care Research). We excluded SRs of diagnostic test accuracy, prognosis, or cost-effectiveness. We further excluded narrative reviews (see Table 1 for differences from systematic reviews) and clinical practice guidelines.
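For readers who wish to reproduce or adapt this type of journal-restricted search programmatically rather than through the PubMed web interface, the short sketch below queries the NCBI E-utilities esearch endpoint for the same journals and date window. It is an illustrative approximation, not the search strategy used in this study: the journal abbreviations, the use of the “systematic[sb]” subset filter as a stand-in for the Clinical Queries systematic review filter, and the date parameters are assumptions.

```python
import requests

# Minimal sketch (not the authors' actual search): query the NCBI E-utilities
# esearch endpoint for systematic reviews published in the five journals over
# the updated search window. "systematic[sb]" is assumed here as a stand-in
# for the PubMed Clinical Queries systematic review filter.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

journals = ["BJU Int", "Eur Urol", "J Urol", "Urology", "World J Urol"]
journal_clause = " OR ".join(f'"{j}"[Journal]' for j in journals)
term = f"({journal_clause}) AND systematic[sb]"

params = {
    "db": "pubmed",
    "term": term,
    "datetype": "pdat",        # filter on publication date
    "mindate": "2018/10/01",   # 3-month overlap with the earlier search
    "maxdate": "2021/06/30",
    "retmax": 1000,
    "retmode": "json",
}

resp = requests.get(EUTILS, params=params, timeout=30)
resp.raise_for_status()
result = resp.json()["esearchresult"]

print("Records found:", result["count"])
print("First PMIDs:", result["idlist"][:10])
```

The resulting PubMed identifiers could then be exported and imported into a screening tool such as Rayyan, as described above.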
Three reviewers (M.D., J.J., and G.A.A.) working in pairs independently screened references in duplicate in 2 stages (title/abstract and full-text stage). Discrepancies were resolved by discussion and consensus; select discrepancies were resolved in consultation with the senior author (P.D.). Data abstraction was similarly performed by 2 of 3 reviewers (M.D., J.J., and G.A.A.) independently and in duplicate using a dedicated Google form based on the AMSTAR-2 instrument that was pilot-tested on 2 sets of 10 SRs up front. Discrepancies were once again resolved by discussion and consensus and secondary arbitration by the senior author (P.D.) in select cases. Each individual item was scored as “met,” “partially met,” or “not met,” as per AMSTAR-2 guidance. For the confidence ratings, we collapsed the “met” and “partially met” categories. Individual studies were then rated as “high,” “moderate,” “low,” or “critically low” quality based on the extent to which studies were determined to have met the 7 critical domains and 9 non-critical domains according to the AMSTAR-2 scoring guidance [3].
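The AMSTAR-2 confidence rating itself reduces to a simple decision rule over the critical and non-critical domains, which the sketch below makes explicit. This is an illustrative implementation of the published scoring guidance, with “partially met” collapsed into “met” as in this study; it is not code used for the analysis, and the item numbers shown in the example are assumed to follow the standard AMSTAR-2 numbering.

```python
def amstar2_rating(critical, non_critical):
    """Illustrative AMSTAR-2 confidence rating.

    `critical` and `non_critical` map AMSTAR-2 item numbers to "met",
    "partially met", or "not met". As in this study, "partially met" is
    collapsed into "met", so only "not met" counts as a weakness.
    """
    critical_flaws = sum(1 for v in critical.values() if v == "not met")
    non_critical_weaknesses = sum(1 for v in non_critical.values() if v == "not met")

    if critical_flaws == 0 and non_critical_weaknesses <= 1:
        return "high"
    if critical_flaws == 0:
        return "moderate"
    if critical_flaws == 1:
        return "low"
    return "critically low"


# Example (item numbering assumed from the AMSTAR-2 guidance): a single unmet
# critical item, such as the lack of a registered protocol, pulls a review
# down to a "low" rating even when every other item is met.
example_critical = {2: "not met", 4: "met", 7: "met", 9: "met",
                    11: "met", 13: "met", 15: "met"}
example_non_critical = {i: "met" for i in (1, 3, 5, 6, 8, 10, 12, 14, 16)}
print(amstar2_rating(example_critical, example_non_critical))  # -> "low"
```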
The primary outcome of this study was the proportion of studies classified into these 4 categories. Given the paucity of studies classified as “high” or “moderate,” we collapsed these categories for some of the reporting. We used descriptive statistics to calculate proportions and corresponding confidence intervals. Pre-planned statistical analyses by time period and journal of publication were performed using SPSS, version 27.0, with a predefined alpha of 0.05.
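As an illustration of this pre-planned hypothesis testing, the sketch below compares the proportion of “high/moderate”-rated SRs between the two time periods with a chi-square test. It uses scipy rather than SPSS, and the counts are approximate reconstructions from the reported percentages (roughly 7 of 144 SRs in 2016–2018 and 7 of 114 in 2019–2021), so the result will not exactly reproduce the reported P = 0.481.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Approximate 2x2 table reconstructed from the reported percentages
# (about 4.9% of 144 SRs in 2016-2018 and 6.1% of 114 SRs in 2019-2021
# rated "high" or "moderate"); these counts are illustrative only.
high_moderate = np.array([7, 7])               # 2016-2018, 2019-2021
totals = np.array([144, 114])
table = np.column_stack([high_moderate, totals - high_moderate])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, P = {p:.3f}")
# The study's test in SPSS on the actual data reported P = 0.481; this
# approximation likewise indicates no significant change over time.
```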

3. Results

Our search identified 563 references for the 2019–2021 time period, of which 114 ultimately met inclusion criteria (see Figure 1 for PRISMA flow diagram; see Online Supplement 1 for lists of included and excluded studies). The largest contributor by journal of publication for this time period was the World Journal of Urology (33; 28.9%), whereas it had been European Urology (53; 36.8%) from 2016 to 2018 (Table 2). Oncology (53; 46.5%) and voiding dysfunction (21; 18.4%) remained the 2 leading topic areas. Compared with the earlier time period, a larger proportion of studies (58.8% versus 38.9%; P < 0.001) included both randomized and non-randomized study designs. The number of included studies per SR was similar with medians of 16 (interquartile range: 9 to 31) and 20.0 (interquartile range: 8.75 to 42.75; P = 0.264) for the 2 time periods.
Figure 2 summarizes the reporting of the 7 critical AMSTAR-2 criteria, comparing the 2 time periods. Overall, across both time periods, criterion #4, which refers to the comprehensiveness of the search, was met by the largest number of studies (241; 93.4%). Further details are summarized in Table 3: of the 8 sub-criteria related to the comprehensiveness of the literature search, 2, relating to the search of trial registries (67.5% versus 31.9%; P < 0.001) and of the grey literature (37.7% versus 23.6%; P = 0.014), improved significantly. Criterion #7, which relates to the explanation and explicit referencing of all studies excluded at the full-text stage, was met by the lowest number of studies (28; 10.2%). Methodological quality changed significantly over time for 2 criteria: the proportion of studies with an a priori registered protocol (criterion #2) rose from 36.1% to 59.6% (P < 0.001), whereas the proportion that considered the risk of bias when interpreting the study results (criterion #9) declined from 76.4% to 50.7% (P < 0.001). Other relatively large changes, such as the decline in the proportion of studies assessing the presence and likely impact of publication bias (criterion #15), were not statistically significant (P = 0.106).
With respect to non-critical criteria, criterion #1, which refers to an explicit PICO question, was met by nearly all SRs (256; 99.2%), whereas criterion #10, which relates to the reporting of the funding sources of the individual studies being synthesized, was met by the lowest number of SRs (19; 7.4%; Figure 3). A statistically significant improvement over time was seen for criterion #3 (explanation provided for included study designs), which rose from 54.9% to 73.7% (P = 0.001). Methodological quality declined for criterion #8 (description of included studies in adequate detail) and criterion #12 (potential impact of risk of bias assessed if a meta-analysis was performed), which fell from 88.2% to 65.8% (P < 0.001) and from 74.5% to 56.4% (P = 0.008), respectively.
Overall, only 6 SRs (2.3%) and 9 SRs (3.5%) achieved a “high” (no critical weakness; up to one non-critical weakness) or “moderate” (no critical weakness; more than one non-critical weakness) confidence rating, respectively. Most SRs published had a “critically low” confidence rating (195; 75.6%). The proportion of studies with a “high/moderate” confidence rating increased only slightly, from 4.9% in 2016–2018 to 6.1% in 2019–2021 (P = 0.481). BJU International (6/39; 15.4%) and The Journal of Urology (3/31; 9.7%) had a higher proportion of SRs with a “high/moderate” confidence rating than the other 3 journals, namely the World Journal of Urology (3/57; 5.3%), Urology (1/53; 1.9%), and European Urology (1/78; 1.3%; Figure 4).

4. Discussion

4.1. Statement of principal findings

This study provides the first longitudinal assessment of SRs published in the urological literature based on the updated AMSTAR-2 instrument. The main finding is that approximately 3 of 4 SRs published in 5 major general urology journals continue to have a “critically low” confidence rating. Only about 1 in 20 reviews had a “high” or “moderate” confidence rating, with some journals faring better than others. Among the 7 critical criteria of the AMSTAR-2 instrument, criterion #7, which refers to the need to fully reference and explain the exclusion of studies at the full-text review stage, was met by only approximately 1 in 10 studies and did not improve over time. Although fewer than half of SRs overall had an a priori protocol (criterion #2), compliance with this criterion improved considerably, reaching nearly 6 in 10 studies by 2019–2021.

4.2. Strengths and weaknesses of the study

This study represents an extension of a previously published study with a written protocol. The use of the same methods and data abstraction form, and the involvement of 2 of the authors of the prior study, facilitated consistent data abstraction and interpretation of the criteria. The assessment of SR methodological quality was performed using AMSTAR-2, a widely used, validated instrument [3]. We once again pilot-tested the data abstraction form and completed all assessments independently and in duplicate. One potential limitation is that investigators were not blinded to the journal of publication and authors of each study, which could have biased the ratings. Only SRs published in 5 major urology journals were assessed, thereby omitting reviews published in other urology journals or non-specialty journals. Our goal was not to conduct an exhaustive evaluation of all SRs in urology but rather to examine the quality of the reviews published in 5 of the most prominent journals in our field. The AMSTAR-2 instrument was designed and validated only to appraise the methodological rigor of SRs related to questions of therapy and prevention [3]. Therefore, the conclusions of this study apply only to these types of SRs and not, for example, to those addressing questions of diagnosis, prognosis, or cost-effectiveness. Fortunately, SRs for questions of therapy and prevention are the most common form of evidence synthesis. Lastly, AMSTAR-2 assesses only the quality of the methodological “handiwork” used to conduct a given SR, not how confident we can be in its results, which is the focus of frameworks such as Grading of Recommendations Assessment, Development, and Evaluation (GRADE) that assess the confidence in the estimates of effect [5]. Therefore, an SR with a “high” or “moderate” confidence rating (reflecting the methodological rigor that has gone into its development) may nevertheless provide evidence of very low certainty (according to GRADE) by virtue of the methodological limitations of the included studies, as well as issues related to inconsistency, indirectness, imprecision, and possible publication bias [6].

4.3. Strengths and weaknesses in relation to other studies

Previous studies have longitudinally assessed the methodological quality of SRs published in the urological literature from 1998 to 2015, documenting both a large increase in the number of reviews published each year and their modest quality [6,7,8]. Han et al. found mean AMSTAR scores ± standard deviations of 4.8 ± 2.0, 5.4 ± 2.3, and 4.8 ± 2.4 for the 1998–2008 (n = 57), 2009–2012 (n = 113), and 2013–2015 (n = 125) time periods, respectively, suggesting no improvement over time [8]. Ding et al. [4] were the first to apply the updated AMSTAR-2 instrument, which introduced confidence ratings (“high,” “moderate,” “low,” and “critically low”) to replace the AMSTAR score on a 0 to 11 scale (with higher scores reflecting higher methodological quality), and found that most SRs published in the urological literature had a “low” or “critically low” confidence rating; the current study indicates that this has not changed.
Two published studies have assessed the quality of urology-relevant SRs with a specific clinical focus. O’Kelly et al. [9] assessed 227 SRs in clinical pediatric urology published from 1992 to 2018, thereby applying AMSTAR-2 retrospectively to a time period before the instrument had been developed. They found no study with a “high” confidence rating; “moderate,” “low,” and “critically low” ratings were assigned to 15%, 65%, and 20% of SRs, respectively. These results would indicate that 80% of SRs had no more than one critical flaw, which would be considerably better than what we found in a general sample of more recent SRs from some of the same journals, and would suggest that pediatric urology authors are more compliant than those in other fields of urology. However, we worry that this discrepancy may instead reflect differences in the interpretation and application of the AMSTAR-2 criteria when scoring SRs. The second study, by Bole et al. [10], investigated 17 SRs on Peyronie’s disease and rated 65% (11/17) as having a “critically low” confidence rating. These findings correspond largely with ours, as do the authors’ conclusions that many SRs “fail to meet accepted methodological criteria” [10].
Studies from outside the field of urology using AMSTAR-2 have also found serious methodological limitations [11]. For example, Dettori et al. assessed 28 SRs from 2018 related to spinal surgery and rated most (26/28; 93%) as “critically low” and the remainder (2/28; 7%) as “low” [12]. Martinez-Monedero et al. found that nearly all (95%) SRs published in the 10 highest-impact otolaryngology journals from 2012 to 2017 were reviews of “critically low” confidence [13]. Yu et al. evaluated 141 SRs in surgery and found 2.8%, 2.1%, 5.7%, and 89.5% to have a “high,” “moderate,” “low,” and “critically low” confidence rating, respectively [14]. In a methodological review of SRs from 2010 to 2019 related to advanced cancer, most (230/261; 85.1%) had a “critically low” confidence rating [15]. This study also provided detailed information on which criteria were not met, which closely mirrored our findings: the majority (209/261; 80.1%) of studies were classified as “critically low” because of the lack of an a priori protocol (criterion #2; 222, 85.1%) and the failure to reference excluded full-text studies and provide justifications for exclusion (criterion #7; 218, 83.5%). Lastly, in a comparative study of recent updates of previously published SRs, 96.7% had a rating of “critically low” [16]. In aggregate, these studies provide compelling evidence that our findings are not unique to the specialty of urology but that poor SR quality is widespread across the health sciences.

4.4. Implications for clinicians and policymakers

SRs have a preeminent role in evidence-based clinical practice [17]: they are not only used by healthcare providers to inform decision-making for individual patients but also provide the foundation for evidence-based clinical practice guidelines, such as those of the American Urological Association (AUA) and the European Association of Urology (EAU), as well as for policy decision-making [18,19]. For that reason, they need to provide a trustworthy synthesis of the individual studies informing a given clinical question. Our study indicates that this is rarely the case: most SRs do not provide reliable evidence summaries because their authors fail to adhere to established methodological standards. This contributes to avoidable research waste, with the potential to distract from and misguide patient care decisions [2,20]. It therefore appears imperative to raise awareness of the criteria that determine the quality of an SR. These include a written protocol that is registered in advance and describes all relevant aspects of SR development, including the search, study selection, risk of bias assessment, meta-analysis (if appropriate), and interpretation. Ideally, authors should report the certainty of evidence on a per-outcome basis using GRADE, which has become the de facto standard for determining the confidence in the estimates of effect and has been used in the evidence syntheses underpinning the AUA guidelines [5]. It is encouraging that the proportion of SRs in the urological literature with a protocol has improved substantially, likely due to explicit requirements by individual journals [21,22]. Introduction of an AMSTAR-2 checklist that editors could apply to every SR submitted to any of these major urological journals might help address this issue.

4.5. Unanswered questions and future research

Multiple studies have highlighted the issue of low SR quality across many areas of medicine, including urology, but appear to have been unable to effect a meaningful improvement. Future studies that identify the barriers to higher-quality SR production, and to more rigorous evaluation by authors and editors, may point the way toward change. Authorship analyses that examine whether the involvement of individuals with specific methodological expertise or of information specialists [23] improves SR quality may yield opportunities for targeted interventions. Lastly, similar studies should continue to monitor SR quality until there is assurance of consistently higher-quality content.

5. Conclusions

Most SRs published in the urological literature continue to have serious methodological limitations and should not be relied upon. There is a critical need to raise awareness among SR authors, journal editors, and consumers of the urological literature of established standards for high-quality SRs in order to promote improvement.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/2563-6499/4/5/415/s1.

Acknowledgements

Funding: Authors report no funding for this study.
Ethical Compliance: As a methodological review of the published literature this study was not subject to IRB approval.
Author Contributions in CRediT Format: Maylynn Ding: Data abstraction, writing – review & editing; Jared Johnson: Data abstraction, writing – review & editing; Onuralp Ergun: writing – review & editing; Gustavo Ariel Alvez: Data abstraction; Philipp Dahm: Conceptualization, analyses, writing – drafting, review & editing.
Data Availability: The data collected and analyzed during this study are available from the corresponding author upon request.

Conflicts of Interest

None declared.

Abbreviations

AMSTAR-2 Assessment of Multiple Systematic Reviews Version 2
AUA American Urological Association
EAU European Association of Urology
GRADE Grading of Recommendations Assessment, Development, and Evaluation
SRs systematic reviews

References

  1. Dickersin, K. Health-care policy. To reform U.S. health care, start with systematic reviews. Science 2010, 329, 516–517. [Google Scholar] [CrossRef] [PubMed]
  2. Ioannidis, J.P. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016, 94, 485–514. [Google Scholar] [CrossRef]
  3. Shea, B.J.; Reeves, B.C.; Wells, G.; Thuku, M.; Hamel, C.; Moran, J.; et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 2017, 358, j4008. [Google Scholar] [CrossRef]
  4. Ding, M.; Soderberg, L.; Jung, J.H.; Dahm, P. Low methodological quality of systematic reviews published in the urological literature (2016–2018). Urology 2020, 138, 5–10. [Google Scholar] [CrossRef] [PubMed]
  5. Gonzalez-Padilla, D.A.; Dahm, P. Evidence-based urology: understanding GRADE methodology. Eur Urol Focus. 2021, 7, 1230–1233. [Google Scholar] [CrossRef]
  6. Canfield, S.E.; Dahm, P. Rating the quality of evidence and the strength of recommendations using GRADE. World J Urol. 2011, 29, 311–7. [Google Scholar] [CrossRef]
  7. Corbyons, K.; Han, J.; Neuberger, M.M.; Dahm, P. Methodological quality of systematic reviews published in the urological literature from 1998 to 2012. J Urol. 2015, 194, 1374–9. [Google Scholar] [CrossRef] [PubMed]
  8. Han, J.L.; Gandhi, S.; Bockoven, C.G.; Nararyan, V.M.; Dahm, P. The landscape of systematic reviews in urology (1998 to 2015): an assessment of methodological quality. BJU Int. 2017, 119, 638–649. [Google Scholar] [CrossRef]
  9. O’Kelly, F.; DeCotiis, K.; Aditya, I.; Braga, L.H.; Koyle, M.A. Assessing the methodological and reporting quality of clinical systematic reviews and meta-analyses in paediatric urology: can practices on contemporary highest levels of evidence be built? J Pediatr Urol. 2020, 16, 207–217. [Google Scholar] [CrossRef]
  10. Bole, R.; Gottlich, H.C.; Ziegelmann, M.J.; Corrigan, D.; Levine, L.A.; Mulhall, J.P.; et al. A critical analysis of reporting in systematic reviews and meta-analyses in the Peyronie’s disease literature. J Sex Med. 2022, 19, 629–640. [Google Scholar] [CrossRef]
  11. Bojcic, R.; Todoric, M.; Puljak, L. Adopting AMSTAR 2 critical appraisal tool for systematic reviews: speed of the tool uptake and barriers for its adoption. BMC Med Res Methodol. 2022, 22, 104. [Google Scholar] [CrossRef] [PubMed]
  12. Dettori, J.R.; Skelly, A.C.; Brodt, E.D. Critically low confidence in the results produced by spine surgery systematic reviews: an AMSTAR-2 evaluation from 4 spine journals. Global Spine J. 2020, 10, 667–673. [Google Scholar] [CrossRef] [PubMed]
  13. Martinez-Monedero, R.; Danielian, A.; Angajala, V.; Dinalo, J.E.; Kezerian, E.J. Methodological quality of systematic reviews and meta-analyses published in high-impact otolaryngology journals. Otolaryngol Head Neck Surg. 2020, 163, 892–905. [Google Scholar] [CrossRef] [PubMed]
  14. Yu, J.; Yang, Z.; Zhang, Y.; Cui, Y.; Tang, J.; Hirst, A.; et al. The methodological quality on systematic reviews of surgical randomised controlled trials: a cross-sectional survey. Asian J Surg. 2022, 45, 1817–1822. [Google Scholar] [CrossRef]
  15. Siemens, W.; Schwarzer, G.; Rohe, M.S.; Buroh, S.; Meerpohl, J.J.; Becker, G. Methodological quality was critically low in 9/10 systematic reviews in advanced cancer patients-a methodological study. J Clin Epidemiol. 2021, 136, 84–95. [Google Scholar] [CrossRef] [PubMed]
  16. Gao, Y.; Cai, Y.; Yang, K.; Liu, M.; Shi, S.; Chen, J.; et al. Methodological and reporting quality in non-Cochrane systematic review updates could be improved: a comparative study. J Clin Epidemiol. 2020, 119, 36–46. [Google Scholar] [CrossRef] [PubMed]
  17. Tseng, T.Y.; Dahm, P.; Poolman, R.W.; Preminger, G.M.; Canales, B.J.; Montori, V.M.; et al. How to use a systematic literature review and meta-analysis. J Urol. 2008, 180, 1249–56. [Google Scholar] [CrossRef]
  18. Faraday, M.; Hubbard, H.; Kosiak, B.; Dmochowski, R.; et al. Staying at the cutting edge: a review and analysis of evidence reporting and grading; the recommendations of the American Urological Association. BJU Int. 2009, 104, 294–297. [Google Scholar] [CrossRef] [PubMed]
  19. Knoll, T.; Omar, M.I.; Maclennan, S.; Hernández, V.; Canfield, S.; Yuan, Y.; et al. Key steps in conducting systematic reviews for underpinning clinical practice guidelines: methodology of the European Association of Urology. Eur Urol. 2018, 73, 290–300. [Google Scholar] [CrossRef]
  20. Glasziou, P.; Altman, D.G.; Bossuyt, P.; Boutron, I.; Clarke, M.; Julious, S.; et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet 2014, 383, 267–276. [Google Scholar] [CrossRef]
  21. Dahm, P. Raising the bar for systematic reviews with Assessment of Multiple Systematic Reviews (AMSTAR). BJU Int. 2017, 119, 193. [Google Scholar] [CrossRef] [PubMed]
  22. Jung, J.H.; Dahm, P. Reaching for the stars - rating the quality of systematic reviews with the Assessment of Multiple Systematic Reviews (AMSTAR) 2. BJU Int. 2018, 122, 717–718. [Google Scholar] [CrossRef] [PubMed]
  23. Koffel, J.B. Use of recommended search strategies in systematic reviews and the impact of librarian involvement: a cross-sectional survey of recent authors. PLoS One 2015, 10, e0125931. [Google Scholar] [CrossRef] [PubMed]
Figure 1. PRISMA flow diagram of the literature search.
Figure 2. Proportion of studies meeting critical AMSTAR-2 criteria, comparing 2016–2018 with 2019–2021.
Figure 3. Proportion of studies meeting non-critical AMSTAR-2 criteria, comparing 2016–2018 with 2019–2021.
Figure 4. Proportion of studies with “high/moderate”, “low”, or “critically low” confidence rating by journal of publication.
Table 1. Differences between narrative and systematic reviews.
Table 2. Baseline characteristics of the 258 systematic reviews published between 2016 and 2021.
Table 3. Details of the reported literature search informing the adequacy of the search (criterion #4).