Brief Report
Peer-Review Record

Evaluation of Self-Collected Versus Health Care Professional (HCP)-Performed Sampling and the Potential Impact on the Diagnostic Results of Asymptomatic Sexually Transmitted Infections (STIs) in High-Risk Individuals

Infect. Dis. Rep. 2023, 15(5), 470-477; https://doi.org/10.3390/idr15050047
by Simon Weidlich 1,*, Sven Schellberg 2, Stefan Scholten 3, Jochen Schneider 1, Marcel Lee 1, Kathrin Rothe 4, Nina Wantia 4, Christoph D. Spinner 1,5 and Sebastian Noe 1,6
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 29 July 2023 / Revised: 20 August 2023 / Accepted: 23 August 2023 / Published: 25 August 2023
(This article belongs to the Section Sexually Transmitted Diseases)

Round 1

Reviewer 1 Report

This paper presents an analysis of data to compare self-collected versus healthcare-provider-collected samples for STI testing at three anatomical sites (oropharynx, urethra, rectum). Overall, the study is well described. The primary concern I have is the use of a combined measure (i.e., positive on either self- or provider-collected sample) as the 'gold standard' for the sensitivity calculation. This seems rather unorthodox. Why not use the provider-collected samples as the gold standard to compare the self-collected samples to? Additionally, it would be worthwhile to present the specificity of the self-collected samples in addition to sensitivity. 

Another bias worth noting is that the participants were recruited from patients seeking sexual health services; therefore, willingness to self-administer specimen collection among this population might be higher than the general population. 

I was unable to review the supplementary materials. The supplementary file appears to just be the manuscript.

Author Response

Reply to reviewer 1:

While we understand the concern about using a combined measure, we think it is worth pointing out that for both HCP- and patient-collected samples, the diagnostic “gold standard”, namely PCR, is used. In our view, a PCR-confirmed infection should be considered equally “valid” regardless of the pre-analytic sampling method. In particular, if the HCP-collected samples were taken as the standard, positive results obtained by a patient but not by the HCP would be ignored, rather than considering the possibility that patient-performed sampling might have been more efficient. The results would therefore always be biased against patient-collected samples. As the main question of the study is how pre-analytic sampling influences the test result, we believe our approach is valid and hope this explanation aids its understanding.

We completely agree that adding specificities might be of interest to readers and have therefore added them for each collection site in Figure 2. We appreciate this suggestion, as we think it adds relevant information to the manuscript.

We also agree that acceptance of self-sampling might be higher in our cohort than in the general population, even though self-sampling might be even more convenient for individuals who do not see an HCP on a regular basis.

We apologize for uploading the wrong file as supplementary material; this has been corrected and the correct file uploaded.

Reviewer 2 Report

Overall, this is a well-written paper covering the important topic of patient-collected STI testing. The study adds to the existing evidence on the validity and acceptability of self-collection, which is ever more relevant in the post-pandemic world.

Specific comments:

1. Under methods, would make distinct sub-headings - study population, test platforms, statistical methods etc. 

2. Authors define sensitivity as ratio of positive tests by the method of choice divided by all positive tests combined. That is not an accurate method of calculating sensitivity which is a ratio of true positives over TP + FP. The biggest problem with this study is that there is no gold standard for a positive test. In other studies that validate self-collection, DNA sequencing is typically used as the gold standard and a way to resolve discordant results. Would report on concordance or inter-reliability but without a gold standard, you cannot define a TP or a FP therefore you cannot report on sensitivity. 

3. Under results, discordance was reported but it was not explained how discordance was resolved. In other words, which test is correct?

4. line 147-150 - Rates of invalid results are very different between the two collection methods with HCP collected swabs having much higher rates of invalid results. Is there an explanation for this? Especially because one would think that the self-collected method would have higher rates of invalidity and sample collection error. This is something that should be discussed in the discussion. 

Overall English language use is very good, grammatically sound. There are minor revisions one could make where casual language is used instead of formal language which is customary in written language but this is not necessary. 

Author Response

Reply to reviewer 2:

We thank Reviewer 2 for the suggestion of subdividing the Methods section; we will do so if it is in line with the journal’s formatting requirements, as we think it adds to readability.

Regarding point two, we have to disagree with the definition of sensitivity: the ratio given by Reviewer 2, TP / (TP + FP) (assuming that TP is “true positive” and FP is “false positive”), is the positive predictive value.

Sensitivity is the ratio of “true positives” divided by the sum of “true positives” and “false negatives”. A “false negative” in the design of this study is a test identified as “negative” by the technique under investigation but “positive” by the comparator; the denominator therefore equals the number of tests positive by at least one of the two sampling techniques, which is the definition we used.
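The distinction between the two metrics can be made concrete with a small sketch. The counts below are purely hypothetical, chosen for illustration; they are not study data.

```python
# Hypothetical 2x2 confusion-matrix counts (illustrative only, not study data):
tp = 40   # positive by the test under investigation AND by the reference
fp = 5    # positive by the test, negative by the reference
fn = 10   # negative by the test, positive by the reference

# Sensitivity: true positives over all who are actually positive (TP + FN).
sensitivity = tp / (tp + fn)

# Positive predictive value: true positives over all test-positives (TP + FP).
ppv = tp / (tp + fp)

print(f"sensitivity = {sensitivity:.2f}")  # 40/50 = 0.80
print(f"PPV         = {ppv:.2f}")          # 40/45 = 0.89
```

With the same counts the two ratios clearly differ, which is the point of the response: the formula quoted in the review (TP over TP + FP) computes the second quantity, not the first.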

For the remaining parts of comments 2 and 3, we want to point out that the method used for testing the samples is a DNA-amplification technique approved for clinical use, and that the primary question of the study was to evaluate the influence of pre-analytic sampling on the results rather than to evaluate the test itself. As there is no plausible reason to assume that the probability of false-positive results differs between sampling techniques, we assumed detection of specific DNA by the test to indicate ‘true’ infection. In the case of discordant results, the ‘positive’ result was considered ‘true’. While this may bias the prevalence estimate towards higher numbers of infections, it prevents biased results in the comparison of the two techniques.
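The combined-reference approach described above can be sketched as follows. The paired results are hypothetical, purely for illustration, and do not reflect the study’s data; the logic shown is only a plain reading of the response (a participant counts as infected if either sample is PCR-positive, and each technique’s sensitivity is computed against that combined denominator).

```python
# Hypothetical paired PCR results per participant: (self_positive, hcp_positive).
# Illustrative only -- not study data.
pairs = [
    (True, True),    # concordant positive
    (True, False),   # positive only on the self-collected sample
    (False, True),   # positive only on the HCP-collected sample
    (True, True),    # concordant positive
    (False, False),  # concordant negative (excluded from the denominator)
]

# Combined reference: infection counted if either sample is PCR-positive.
infected = [s or h for s, h in pairs]
denom = sum(infected)  # here: 4

# Sensitivity of each sampling technique against the combined reference.
sens_self = sum(s for (s, h) in pairs if s or h) / denom
sens_hcp = sum(h for (s, h) in pairs if s or h) / denom

print(sens_self, sens_hcp)  # 0.75 0.75
```

Note how neither technique is privileged: a discordant positive counts against the technique that missed it, whichever one that is, which is the symmetry the response argues for.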

We agree that the higher number of invalid (especially rectal) tests when performed by HCPs warrants discussion. This could be because sampling might be performed more tentatively on another individual than on oneself, leading to higher numbers of invalid tests. A sentence was added to the discussion section.

 
