## **7. Casting Doubt**

When the rest of the evidence converges on the conclusion that screening saves lives, even for women aged 40–49, why continue to include the poorly performed outlier study in evidence analyses? One can certainly speculate that there is strong motivation to perpetuate the use of studies such as CNBSS. The outlier creates doubt around the benefit of screening women 40–49 and keeps the mammography screening controversy alive. In fact, the various techniques used to challenge the benefits of mammographic screening have been extensively discussed by Dr. Daniel Kopans in his analyses [43,44].

Have we seen this pattern of perpetuating doubt for financial benefit in the past? In fact, this strategy is known as "manufactured doubt" and has been employed for decades by large organizations [45,46]. In its typical form, it is used by industry to delay regulation by creating doubt about whether evidence converges on a particular outcome. It was famously used by the tobacco industry to delay regulation for decades, while the industry continued to reap billions of dollars of profits. Other examples include the opiate, silicates, talc, diesel, alcohol, and sugar industries. Doubt is manufactured by stressing outlier studies (such as CNBSS), cherry-picking data (such as excluding all observational data), and many other methods.

Strategies for manufacturing doubt are well documented [47], as many of the abovementioned industries have undergone scrutiny and even litigation for these practices. The following is a selection of known strategies employed to manufacture doubt, listed in the linked article https://ehjournal.biomedcentral.com/articles/10.1186/s12940-021-00723-0 (accessed on 26 May 2022). These have been correlated with examples of their use by the CTFPHC and other critics of screening. Keep in mind that the strategies were written with large commercial industries in mind, and the wording may not be fully applicable to government and screening scenarios. Additionally, I limit most of my examples to breast screening recommendations.

**1. Attack study design**—Characterization of any studies that favour screening as flawed, frequently using the CNBSS study as a comparator [48,49].

**2. Misrepresent data**—Cherry-picking or diluting the evidence by pooling poor- and good-quality studies in meta-analyses and evidence reviews [23,50,51]. Continuing to include CNBSS is an example of this. Another example is noted in the prostate screening literature, mentioned later. Overestimations of overdiagnosis [4,51,52] are also used to create fear and discourage screening.

**3. Suppress incriminating information**—Observational studies, many of which are more modern than the RCTs, demonstrate a large degree of effectiveness. These are, however, excluded from the evaluation of the benefits of screening mammography in CTFPHC analysis [23]. Despite this, observational studies and even questionnaires are permitted in the evaluation of harms.

**4. Contribute misleading literature**—The CTFPHC performed a review of women's values questionnaires [53], interpreted to suggest women would not want to screen, even though the questionnaire review demonstrates that women do desire screening.

**5. Host conferences or seminars**—In 1997, the National Cancer Institute held a Consensus Development Conference of the National Institutes of Health on "Breast Cancer Screening for Women Ages 40–49". Minority opinion was ignored, and the decision not to recommend screening for this age group was called "unanimous" [54].

**6. Blame other causes**—In the case of screening, this strategy is inverted: rather than blaming harms on other causes, the benefits of screening are attributed to other causes, particularly modern treatment [4,49,51].

**7. Invoke liberties/censorship/overregulation**—The recommendation not to screen women aged 40–49 is couched as "shared decision-making" [4], even though the CTFPHC recommendations result in limitation of the option to screen women aged 40–49 in many jurisdictions.

**8. Define how to measure outcome/exposure**—The CTFPHC assesses mortality benefits only, ignoring well-documented non-mortality benefits associated with earlier diagnosis, such as decreased severity of treatments, as well as lower incidence of long-term complications, such as lymphedema in screened populations [55].

**9. Pose as a defender of health or truth**—The CTFPHC emphasizes harms and minimizes benefits, stressing anxiety, biopsies, and exaggerated overdiagnosis rates. While the recommendations appear to put the patient's emotional health first, they are paternalistic and represent a false equivalency in comparison with unnecessarily delayed diagnoses.

**10. Obscure involvement**—The unaccountable structure of the CTFPHC falls into this category.

**11. Normalize negative outcomes**—The CTFPHC stresses a lack of evidence of improvement in all-cause mortality (difficult to prove considering a relatively small proportion of the population dies of breast cancer [49,56]), minimizing the mortality benefits. This implies that excess deaths among non-screened women are acceptable. Additionally, the false equivalency of the potential harms (anxiety, biopsy, overdiagnosis) over the potential benefits of screening (lower likelihood of dying of breast cancer among those screened) normalizes avoidable breast cancer deaths.

**12. Attack opponents (scientifically/personally)**—*Ad hominem* attacks on the motivation of dissenters, discussed earlier.

**13. Abuse of credentials**—Epistemic trespassing by non-content-experts, discussed earlier.
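The dilution effect described in strategy 2 can be shown with a small numerical sketch. All figures below are hypothetical, chosen only to illustrate the mechanism, not the actual trial results: three adequately performed trials showing a mortality benefit are pooled with one large null outlier using a standard inverse-variance fixed-effect method.

```python
import math

# Hypothetical inputs: (label, relative risk, standard error of log RR).
# Three trials showing a ~25-30% mortality reduction, plus one large
# null outlier whose small SE gives it a disproportionately large weight.
studies = [
    ("Trial A", 0.75, 0.10),
    ("Trial B", 0.70, 0.12),
    ("Trial C", 0.72, 0.11),
    ("Outlier", 1.00, 0.08),  # null result from a large (heavily weighted) trial
]

def pooled_rr(data):
    """Fixed-effect inverse-variance pooled relative risk."""
    weights = [1 / se**2 for _, _, se in data]
    log_rrs = [math.log(rr) for _, rr, _ in data]
    pooled_log = sum(w * l for w, l in zip(weights, log_rrs)) / sum(weights)
    return math.exp(pooled_log)

print(round(pooled_rr(studies), 2))      # all four trials -> 0.82
print(round(pooled_rr(studies[:3]), 2))  # outlier excluded -> 0.73
```

Under these assumed numbers, the single null outlier drags the pooled estimate noticeably toward no effect, which is the mechanism by which retaining a flawed study in a meta-analysis manufactures doubt.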

## **8. Broader Problems**

I have largely emphasized the problems with the 2018 CTFPHC breast cancer screening recommendations, but similar problems exist within many of the other major extant CTFPHC guidelines. In personal correspondence, a prominent urologist described the inappropriate handling of prostate screening evidence for the 2014 guideline.

"There is a precise analogy [to CNBSS] in the prostate cancer field, the PLCO study [57] of PSA screening. 85% contamination in the control arm and 15% non-compliance in the study arm (this is documented and published) resulted in no difference in the proportion tested, and therefore no mortality difference between the 2 arms. The other large scale study, ERSPC (European Randomised Study of Screening for Prostate Cancer) [58], was strongly positive. The task force looked at the 2 studies, noted one was positive and one negative, and concluded that therefore no convincing evidence of benefit.

We pointed out the flaw in their reasoning with our 'stakeholders comments' in 2014 and we received no response from the task force, and no evidence that they took our comments into account.

Dr. Laurence Klotz, MD, FRCSC, CM; Professor of Surgery, University of Toronto; Sunnybrook Chair of Prostate Cancer Research; Chairman, World Urologic Oncology Federation; Chairman, SI (Stability Index) UCare Research Office; Chairman, Canadian Urology Research Consortium; Sunnybrook Health Sciences Centre"

Again, this indicates the pooling of poorly performed and well-performed research, creating doubt. Additionally, this demonstrates the lack of meaningful dialogue with highly qualified content experts. The use of the term "stakeholder" [59] is prejudicial, implying a material interest, or "stake", in the guidelines, rather than professional interest and a role as expert advisors. The term "topic advisor" is preferable and is used in the NICE UK methodology [60].
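The dilution Dr. Klotz describes can be sketched arithmetically. In the sketch below, only the 85% contamination and 15% non-compliance figures come from the quoted description; the baseline mortality and true screening benefit are hypothetical placeholders:

```python
def arm_mortality(base_rate, screened_fraction, relative_reduction):
    """Expected cancer mortality in a trial arm where only part of the
    participants are actually screened (intention-to-treat view)."""
    screened = screened_fraction * base_rate * (1 - relative_reduction)
    unscreened = (1 - screened_fraction) * base_rate
    return screened + unscreened

BASE = 0.03          # hypothetical baseline prostate-cancer mortality
TRUE_BENEFIT = 0.20  # hypothetical 20% relative mortality reduction from screening

# PLCO as described: 15% non-compliance in the screening arm,
# 85% contamination (PSA testing) in the control arm.
screen_arm = arm_mortality(BASE, 1 - 0.15, TRUE_BENEFIT)   # 85% effectively screened
control_arm = arm_mortality(BASE, 0.85, TRUE_BENEFIT)      # 85% effectively screened

print(screen_arm == control_arm)  # True: the arms are indistinguishable
```

Because both arms end up with the same effective screened fraction, the expected mortality is identical in each arm even when a true benefit exists, so a null result from such a trial says nothing about whether screening works.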

In fact, multiple other prominent specialists and specialist societies have written rebuttals to the CTFPHC guidelines, many of which are evidence-based [61–69] (Supplementary Materials).

## **9. CTFPHC and the Suppression of Science**

Is there any evidence that the government would deliberately suppress science? In fact, the Harper government did exactly that in the late 2000s. Climate change and environmental scientists were muzzled, and environmental research was inhibited, culminating in a 2012 protest on Parliament Hill, nicknamed the Death of Evidence March [70,71]. Climate change and environmental science have an impact on the development of fossil fuels and thus the Canadian economy. During approximately the same time period and under the same federal government, the current structure of the CTFPHC was initiated in 2010 [72].

## **10. Suggestions for Reform**

The lack of expert guidance in the performance of evidence review and the formation of guidelines is problematic. This requires urgent reform, but the CTFPHC needs a robust accountability structure for any reforms to take place. As it currently stands, the lack of expert guidance constitutes a breach of the public trust. The public should insist on fundamental reform to the structure of the CTFPHC. A new national guidelines body should be formed with appropriate oversight and accountability built in.

While COI is of serious concern, practising Canadian healthcare practitioners should not be conflated with "product defence" and other industry-funded experts. COI should be acknowledged both for content experts and for the government agencies funding the guidelines. COI should not, however, outweigh expertise and clinical experience. *Ad hominem* attacks on motivation should be avoided.

Any CTFPHC guidelines formed without full expert guidance, particularly if Canadian content experts have provided evidence-based rebuttals, should be suspended from use pending content expert review and, if necessary, revision. In the interim, many national specialty societies have their own guidelines, which can be substituted for suspended CTFPHC recommendations.

Full disclosure of the credentials of personnel involved in evidence review and guideline formation is required for rebuilding trust in the processes.

Process transparency should be emphasized, and satisfaction surveys of panel members should be a mandatory element of guideline quality assessment. A tool such as PANELVIEW [73] could be adapted to this purpose.

Guideline quality should not only be evaluated based on adherence to guideline methodology, but also by outcomes. Following the USPSTF recommendation against PSA screening in 2012, metastatic prostate cancer increased, as predicted by modelling [74]. Outcomes follow-up should be mandatory following guideline recommendations, and this should be used to define guideline quality, rather than self-referential adherence to methodologies, which, as we have seen, may be misapplied or misrepresented.

Ethicists should be involved in the restructuring process of the CTFPHC, the formation of guidelines, and ongoing oversight of methodological processes. The Precautionary Principle [75] should be employed in all decisions that impact the well-being and lives of the population.

Where costs and other resource limitations are factored into guideline recommendations, this should be clearly disclosed. Science should not be manipulated to accommodate budgetary concerns.
