Article

The Impact of Repeating COVID-19 Rapid Antigen Tests on Prevalence Boundary Performance and Missed Diagnoses

1 Pathology and Laboratory Medicine, School of Medicine, University of California, Davis, CA 95616, USA
2 Point-of-Care Testing Center for Teaching and Research (POCT•CTR), Knowledge Optimization, Davis, CA 95616, USA
Diagnostics 2023, 13(20), 3223; https://doi.org/10.3390/diagnostics13203223
Submission received: 6 September 2023 / Revised: 7 October 2023 / Accepted: 10 October 2023 / Published: 16 October 2023
(This article belongs to the Special Issue 21st Century Point-of-Care, Near-Patient and Critical Care Testing)

Abstract

A prevalence boundary (PB) marks the point in prevalence at which the false omission rate, RFO = FN/(TN + FN), exceeds the tolerance limit for missed diagnoses. The objectives were to mathematically analyze rapid antigen test (RAgT) performance, determine why PBs are breached, and evaluate the merits of testing three times over five days, now required by the US Food and Drug Administration for asymptomatic persons. Equations were derived to compare test performance patterns, calculate PBs, and perform recursive computations. An independent July 2023 FDA–NIH–university–commercial evaluation of RAgTs provided performance data used in theoretical calculations. Tiered sensitivity/specificity comprise the following: tier 1: 90%, 95%; tier 2: 95%, 97.5%; and tier 3: 100%, ≥99%. Repeating a tier 2 test improves the PB by 44.6%, from 50.6% to 95.2% (RFO = 5%). In the FDA–NIH–university–commercial evaluation, RAgTs generated a sensitivity of 34.4%, which improved to 55.3% when repeated and to 68.5% with a third test. With RFO = 5%, the corresponding PBs are 7.37%, 10.46%, and 14.22%, respectively. PB analysis suggests that RAgTs should achieve a clinically proven sensitivity of 91.0–91.4%. When prevalence exceeds PBs, missed diagnoses can perpetuate virus transmission. Repeating low-sensitivity RAgTs delays diagnosis. In homes, high-risk settings, and hotspots, PB breaches may prolong contagion, defeat mitigation, facilitate new variants, and transform outbreaks into endemic disease. Molecular diagnostics can help avoid these potential vicious cycles.

1. Introduction

The high specificity of Coronavirus disease-19 (COVID-19) rapid antigen tests (RAgTs) helps minimize false positives, although at very low prevalence (e.g., <2%), they may appear [1,2,3,4,5,6,7,8,9,10,11]. However, RAgTs fail to reliably rule out infections because poor clinical sensitivity produces false-negative results [12,13,14]. The prevalence boundary (PB) is defined as the prevalence at which the rate of false omissions, RFO = FN/(TN + FN), exceeds a specified threshold, such as 5% or 1 in 20 diagnoses missed because of false negatives (FNs).
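The relationship between RFO and prevalence, and hence the location of a PB, follows directly from these 2 × 2 definitions. The paper's numbered equations are not reproduced in this excerpt, so the sketch below reconstructs the RFO curve and the PB from the standard definitions of sensitivity, specificity, and prevalence; the function names are illustrative.

```python
def rfo(prev, sens, spec):
    """False omission rate, RFO = FN / (TN + FN), at a given prevalence."""
    fn = prev * (1 - sens)        # infected but test-negative
    tn = (1 - prev) * spec        # uninfected and test-negative
    return fn / (tn + fn)

def prevalence_boundary(sens, spec, rfo_limit=0.05):
    """Prevalence at which RFO reaches the tolerance limit t.

    Solving RFO(p) = t for p gives p = t*Sp / (t*Sp + (1 - Se)(1 - t)).
    """
    t = rfo_limit
    return t * spec / (t * spec + (1 - sens) * (1 - t))

# A tier 2 test (Se 95%, Sp 97.5%) with a 5% tolerance for missed
# diagnoses yields a PB near 50.6%; the WHO criteria (Se 80%, Sp 97%)
# yield a PB near 20.3%.
pb_tier2 = prevalence_boundary(0.95, 0.975)
pb_who = prevalence_boundary(0.80, 0.97)
```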
The objectives of this research are to mathematically reveal patterns of RAgT performance, to understand intrinsic limitations imposed on RAgT performance, to determine why and where PBs are breached, and to evaluate the merits of repeating RAgTs twice at intervals of 48 h over 5 days for a total of three tests, the temporal protocol now required by the US Food and Drug Administration (FDA) for people who are asymptomatic. The overall goal is to develop a sound mathematical basis for improving RAgTs and designing new ones well in advance of the next pandemic.

2. Methods

2.1. Viewpoint

The viewpoint here is post hoc Bayesian conditional probability, that is, the perspective of the healthcare provider or self-testing layperson who must judge whether a positive COVID-19 test result is believable, and likewise, decide whether or not to trust a negative test result to rule out infection. For quantitative 2 × 2 tables illustrating how prevalence affects test performance and generates false negatives, please see Tables 3–6 in reference [1].

2.2. Performance Tiers and Mathematical Foundations

Table 1 presents the performance tiers and quantitates the effects of repeating tests on prevalence boundaries when RFO is 5% [1,13,14]. Tier 2 performance produces a PB of 95.2% upon repeating a test, which is numerically about the same as the sensitivity of 95%. A tier 3 test has clinical sensitivity [TP/(TP + FN)] of 100% and thus has no FNs. It does not need to be repeated unless it is useful to confirm positive test results.
Table 2 lists the equations, dependent variables, and independent variables used to graph RFO versus prevalence [Equation (21)] and the gain in PB (∆PB) [Equation (26)] following repetition of a test versus its sensitivity. Please note that the righthand side of Equation (26) for ∆PB does not depend on prevalence, per se. A graph based on Equation (26) shows how ∆PB changes as a function of sensitivity.
Table 2 also lists the equations used to determine the RFO for a repeated test (RFO/rt) [Equation (22)], the PB for one test given the RFO [Equation (24)], and the PB for a repeated test (PBrt) given the RFO [Equation (25)]. Equations (22), (25), and (26) were newly derived and verified for this research.
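Equations (22), (25), and (26) themselves are not reproduced in this excerpt. A minimal sketch, assuming repeats behave as independent tests with a positive call if either result is positive (an assumption that reproduces the tier 2 values quoted above), is:

```python
def repeat(sens, spec, n=2):
    """Net Se/Sp of n assumed-independent repeats, positive if any test is positive."""
    return 1 - (1 - sens) ** n, spec ** n

def prevalence_boundary(sens, spec, rfo_limit=0.05):
    """Prevalence at which RFO = FN/(TN + FN) reaches the tolerance limit."""
    t = rfo_limit
    return t * spec / (t * spec + (1 - sens) * (1 - t))

# Tier 2 (Se 95%, Sp 97.5%), RFO limit 5%: single test vs. one repeat
pb_one = prevalence_boundary(0.95, 0.975)            # ~50.6%
pb_rt = prevalence_boundary(*repeat(0.95, 0.975))    # ~95.2%
delta_pb = pb_rt - pb_one                            # ~44.6 percentage points
```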

2.3. Prevalence Boundaries

A prevalence boundary is encountered where the RFO curve (plotted as a function of prevalence) intersects the threshold for missed diagnoses. This paper uses a primary threshold of 5%, but it also illustrates the effects of RFO thresholds of 10%, 20%, and 33%. Equation (21) is used to calculate the RFO, and Equation (22) is used to calculate the RFO/rt when a test is repeated. Please note that ∆PB [Equation (26)], the gain in PB from repeating a test, depends only on the sensitivity, specificity, and RFO.

2.4. FDA, NIH, University, and Commercial RAgT Field Evaluation (the “Collaborative Study”)

Rapid antigen test results for an asymptomatic “DPIPP 0–6” group in a collaborative study of COVID-19 diagnosis conducted by Soni et al. were published in a preprint in 2022 [15] and in July 2023 in a peer-reviewed journal [16]. The study was conducted from 18 October 2021 to 31 January 2022. It included subjects over two years of age and was funded by the Rapid Acceleration of Diagnostics (RADx) initiative of the NIH.
The collaborative study involved 7361 participants with 5609 deemed eligible for analysis in the preprint [15] and 5353 in the peer-reviewed paper [16]. Participants who were asymptomatic and negative for SARS-CoV-2 on study day one were eligible. In total, 154 participants had at least one positive RT-PCR result. The collaborative study was approved by the WIRB-Copernicus Group Institutional Review Board (20214875) [16].
Participants were eligible to enroll through a smartphone app if they had not had a SARS-CoV-2 infection in the prior three months, had been without any symptoms in the fourteen days before enrollment, and were able to drop off prepaid envelopes with nasal swab samples at their local FedEx drop-off location.
People self-tested and self-interpreted RAgT results during the spread of SARS-CoV-2 Delta and Omicron variants. The results, which reflected the first week of testing, were used here to analyze repeated RAgT performance. Soni et al. [16] pooled RAgT performance results across the tests used while assuming similar sensitivity for viral loads and thereby created an adequate sample size to fulfill the goals of the FDA–NIH–university–commercial consortium.
Institutions that participated in the collaborative study consortium comprised the US FDA, National Institute of Biomedical Imaging and Bioengineering at the NIH, University of Massachusetts Chan Medical School, Johns Hopkins School of Medicine, and Northwestern University. Quest Diagnostics and CareEvolution also joined the evaluation.

2.5. Rapid Antigen Tests in the Collaborative Study

The RAgTs used in the collaborative study under FDA emergency use authorizations [17] (EUAs) included the (a) Abbott BinaxNOW Antigen Self Test [EUA positive percent agreement (PPA), 84.6%; negative percent agreement (NPA), 98.5%]; (b) Quidel QuickVue OTC COVID-19 Test [PPA, 83.5%; NPA, 99.2%]; and (c) BD Veritor At-Home COVID-19 Test [PPA, 84.6%; NPA, 99.8%]. The EUA NPA range from 98.5 to 99.8% has a narrow span of only 1.3%.
The collaborative study preprint [15] did not report clinical specificity results. The peer-reviewed paper stated that 3.4% (1182) of same-day RT-PCR negative results “were missing a corresponding Ag-RDT result” [16]. Then, the authors estimated the clinical specificity [TN/(TN + FP)] to be 99.6%. Rather than using an estimated specificity, the median EUA NPA of 99.2% was used here for mathematical analyses.
In a study of COVID-19 test performance [18], the median NPA of EUA manufacturer claims for home RAgTs was 99.25% (range 97–100%), which is nearly identical to the 99.2% used here for math computations. For commercial EUA NPA details, please see “Table S1, Part I. Antigen tests, Statistics” [19] in the supplement to reference [18].

2.6. FDA Directive for Rapid Antigen Tests

The collaborative study preprint [15] was followed by a letter from the US FDA titled “Revisions Related to Serial (Repeat) Testing for the EUAs (Emergency Use Authorizations) of Antigen IVDs” [20], published 1 November 2022. Appendix A in the FDA letter states “(1) Where a test was previously authorized for testing of symptomatic individuals (e.g., within the first [number specific to each test] days of symptom onset), the test is now authorized for use at least twice over three days with at least 48 h between tests.”, and “(2) Where a test was previously authorized for testing of asymptomatic individuals (e.g., individuals without symptoms or other epidemiological reasons to suspect COVID-19), the test is now authorized for use at least three times over five days with at least 48 h between tests”.
Intended use EUA documents describing RAgTs must now declare that “negative results are presumptive”, and no longer specify that testing should be performed at least twice over 2–3 days with 24–36 h between tests. Product labeling must be updated along with instructions for users and other manufacturer documents. This research focuses on the interpretation of test results for asymptomatic subjects (FDA no. 2 above).

2.7. Software and Computational Design

Desmos Graphing Calculator v1.9 [https://www.desmos.com/calculator (accessed on 5 September 2023)], free, open-access multivariate graphing software, was used to generate the illustrations so that readers can duplicate the graphical results and explore their own analytic goals at no expense. Mathematica [Wolfram, https://www.wolfram.com/mathematica/ (accessed on 5 September 2023), ver. 13.3] was used to confirm the (x, y) coordinates of graphical intersections and other analytical results.

2.8. Human Subjects

Human subjects were not involved in the mathematical analyses. Sensitivity and specificity data used here were obtained from public-domain-published sources [15,16].

3. Results

Figure 1 shows the changes in the false omission rates, RFO, as a function of prevalence from 0 to 100%. The red curves reflect the results of testing three times for asymptomatic self-testers participating in the collaborative study. Please see the inset table for details.
Repeating RAgTs improved sensitivity from an initial 34.4% to 55.3% on the first repetition and 68.5% on the second repetition when singleton RT-PCR positives were included. The initial PB was 7.37% (red dot). However, subsequent PBs (10.46%, 2nd test; 14.22%, 3rd test) did not match the theoretical predictions for repetitions.
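These PBs can be reproduced by applying the single-test PB relation (reconstructed here from the definition of RFO, since Equation (24) is not printed in this excerpt) to each cumulative sensitivity, using the median EUA NPA of 99.2% adopted in Section 2.5:

```python
def prevalence_boundary(sens, spec, rfo_limit=0.05):
    """Prevalence at which RFO = FN/(TN + FN) reaches the tolerance limit."""
    t = rfo_limit
    return t * spec / (t * spec + (1 - sens) * (1 - t))

spec = 0.992  # median EUA NPA (Section 2.5)
for sens in (0.344, 0.553, 0.685):  # cumulative sensitivity, tests 1-3
    print(f"Se {sens:.1%} -> PB {prevalence_boundary(sens, spec):.2%}")
# Reproduces the PBs of 7.37%, 10.46%, and 14.22% quoted above.
```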
In community settings and hotspots with prevalence >7.37%, the RFO curves predict that more than 1 in 20 diagnoses will be missed with the first test, while with the second and third tests, RFO breaches will occur at 10.46% and 14.22% prevalence, respectively.
Relaxing the RFO threshold to 10%, 20%, and 33.3% for the third test tolerates unacceptable levels of missed diagnoses (1 in 10, 1 in 5, and 1 in 3, respectively) as the PB moves up and to the right to 25.9%, 44.1%, and 61.2% prevalence, indicated by the red symbols on the exponentially increasing red curve for the third test.
The second repetition of the RAgTs (third RAgT) did not achieve the World Health Organization (WHO) performance criteria [21] (blue dot and curve, Figure 1) for RAgT sensitivity of 80% and generated a PB of only 14.22% (RFO = 5%), which is 69.9% of the PB (20.34%) calculated (using Equation (24)) for the WHO specifications.
The highest levels of performance in Figure 1 were attained by the single home molecular loop-mediated isothermal amplification (LAMP, purple curve) assay median performance [22] (sensitivity 91.7%, specificity 98.4%, and PB 38.42%), the mathematically predicted performance of a tier 2 test (PB 50.6%, green curve), and the tier 2 repeated test (PB 95.2%, large green dot). Tier 2 sensitivity is 95%; for RFO = 5%, the predicted PB would increase by 44.6% to 95.2% when the test is repeated.
Figure 2 displays gain in the prevalence boundary, ∆PB, on the vertical (y) axis versus the sensitivity of the test on the horizontal (x) axis. Initially, the ∆PB curve is relatively shallow. As sensitivity increases, ∆PB peaks at 91.0 to 91.4% (see the magnifier at the top). The curves cluster together because of the small span in specificity (see the left column of the inset table). The magnifier at 25% ∆PB shows that the relative order within the cluster is the same as the ranking by specificity in the inset table.
The right-hand columns of the inset table in Figure 2 list actual PBs and theoretical predictions. For the collaborative study, the gain in PB obtained with the first repeated test, 3.09%, approximated the predicted gain of 3.37%. Upon testing a third time, the gain in PB of 3.76% was only 37.1% of the predicted 10.13%. There is no clear explanation for the meager improvement.
The PBs for the second and third tests, 10.46% and 14.22%, respectively, lagged behind the theoretical predictions of 10.74% and 20.59%, respectively. The two red boxes show where the repetition points lie on the red ∆PB curve and explain the progression of PBs. The arrows point to the coordinates of ∆PB (y axis) and sensitivity (x axis).
Looking back at Figure 1, we see that for RFO = 5%, the median of home molecular diagnostic LAMP tests (HMDx, purple curve) performs better with just one test than three serial RAgTs and beats WHO performance by positioning itself between the WHO and tier 2 RFO curves. In general, the plot of ∆PB versus sensitivity in Figure 2 reveals that when one tolerates 1 in 20 missed diagnoses, repeating a test will not increase the PB maximally unless the sensitivity is 91.03–91.41%.
In Figure 2, the curves cluster together (see magnifiers) in the right-skewed peak shape because specificity is uniformly high (95–99.2%). The rate of gain in ∆PB depends primarily on sensitivity (x axis) and follows the slope of the curve cluster. The slope is steepest at sensitivities of roughly 75–85%, implying that improvements in that range yield the greatest gains in PB.
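The location of the ∆PB peak can be checked numerically. The sketch below assumes the independent-repeat model for the second test (an assumption, since Equation (26) is not reproduced in this excerpt) and scans sensitivity for the maximum gain at a representative specificity of 99.2%:

```python
def prevalence_boundary(sens, spec, t=0.05):
    """Prevalence at which RFO = FN/(TN + FN) reaches the tolerance limit t."""
    return t * spec / (t * spec + (1 - sens) * (1 - t))

def delta_pb(sens, spec, t=0.05):
    """Gain in PB from one repeat, positive if either test is positive (assumed model)."""
    se2, sp2 = 1 - (1 - sens) ** 2, spec ** 2
    return prevalence_boundary(se2, sp2, t) - prevalence_boundary(sens, spec, t)

# Scan sensitivities from 50% to 99.99% for the peak of the ΔPB curve
grid = [s / 10000 for s in range(5000, 10000)]
best = max(grid, key=lambda s: delta_pb(s, 0.992))
# best lands near 0.91, consistent with the 91.0-91.4% peak reported for Figure 2
```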

4. Discussion

Clinical evaluations show that the specificity of COVID-19 RAgTs is high [1,2,3,4,5,6,7,8,9,10,11,18,19]. In Figure 2, the ∆PB curves cluster together because the range of specificity (95–99.2%) is narrow. Therefore, the degree to which a repeated RAgT increases the PB depends primarily on the test sensitivity. This mathematical analysis is not exclusive to COVID-19 testing. It applies to other positive/negative qualitative diagnostic tests for infectious diseases and can help optimize future assay designs.
Investigators have addressed the sensitivity of RAgTs in various settings. In hospitalized patients, Kweon et al. [23] found that for RT-PCR cycle thresholds of 25–30, point-of-care antigen test sensitivity ranged from 34.0% to 64.4%, with higher sensitivity within the first week. Hirotsu et al. [24] reported that antigen testing exhibited 55.2% sensitivity and 99.6% specificity in 82 nasopharyngeal specimens from seven hospitalized patients tested serially. Veroniki et al. [25] documented sensitivity of 55% in studies of asymptomatic subjects. Gallardo-Alfaro et al. [26] found RAgT sensitivity of 50% in asymptomatic children.
In twenty community clinical evaluations of asymptomatic subjects, RAgT sensitivity ranged from 37% to 88% (median 55.75%), and specificity ranged from 97.8% to 100% (median 99.70%) [19]. During a nursing home outbreak, Mckay et al. [27] documented a RAgT sensitivity of 52% with asymptomatic patients. For home RAg testing, Chen et al. [28] reported a negative predictive rate of 38.7% in children. With daily testing, Winnett et al. [29] observed a clinical sensitivity of 44% and concluded that RAgTs miss infected and presumably infectious people.
In correctional facilities, Lind et al. [30] showed that serial RAgTs improved sensitivity over time, but with diminishing returns, similar to those seen with repeat testing in the collaborative study. In a university setting, Smith et al. [31] found that serial testing multiple times per week increased the sensitivity of RAgTs. Wide variations in sensitivity in these studies and others indicate that for RAgTs to rule out disease, performance should be improved and made more consistent with less uncertainty [13].
Asymptomatic infections highlight the need to moderate false negatives, that is, curtail missed diagnoses and assure that repeating RAgTs shifts PBs to the right to mitigate the spread of disease. Soni et al. [32] showed that 31.3% were asymptomatic in a clinical study of serial RAgTs. When comparing RAg to RT-PCR positive test results, Sabat et al. [33] found that 59.5% were asymptomatic. In fifteen studies reviewed, Gao et al. [34] concluded that asymptomatic patients had a significantly lower (27.1%) positivity rate than symptomatic patients (68.1%) on day five.
The schematic in Figure 3 illustrates how missed diagnoses might trigger dysfunctional outcomes depending on the increase in local prevalence, the timing of testing, and the pattern of infectivity. Starting in the top left, highly specific tests may generate false positives when prevalence is very low (e.g., <2%) [1]. For graphs of false positive to true positive ratios versus prevalence, please see Figure 1 in reference [1].
Patients with false-positive COVID-19 test results will generally be isolated (upper left, Figure 3) and cannot spread disease because they are not infected with SARS-CoV-2. The prevalence in the collaborative study was in the range of 2.39 to 2.75% (134/5609 to 154/5609) in late 2021 and early 2022 when data were collected [15]. The singleton RT-PCR positives reported by the investigators may have been false-positive RT-PCR reference test results; to avoid bias in the present study, singletons were not excluded.
As prevalence increases, the weighting of RAgT performance shifts from specificity to sensitivity (top sequences in Figure 3). A vicious cycle may develop as diagnoses are missed. Repeating low-sensitivity RAgTs does not advance PBs substantially (see Figure 2). False negatives will increase exponentially (see Figure 1) as prevalence hits double digits. Pollán et al. [35] reported seroprevalence > 10% in Madrid in 2020. Gomez-Ochoa et al. [36] reported healthcare worker prevalence of 11% with 40% asymptomatic.
In 2020, Kalish et al. [37] documented 4.8 undiagnosed infections for every case of COVID-19 in the United States. The 2022 meta-analysis of Dzinamarira et al. [38] found 11% prevalence of COVID-19 among healthcare workers. In a 2021 meta-analysis, Ma et al. [39] discovered that asymptomatic infections were common among COVID-19 confirmed cases, specifically 40.5% overall, 47.5% in nursing home residents or staff, 52.9% in air or cruise travelers, and 54.1% in pregnant women.
Prevalence can be estimated from positivity rates using Equation (30) when high-sensitivity RT-PCR testing is used. For example, if the positivity rate is 5%, sensitivity is 100%, and specificity is 99% (tier 3), the estimated prevalence will be ~4%; if the positivity rate is 20% and the specificity is 97.5% with 100% sensitivity, then the estimated prevalence will be ~18%. Cox-Ganser et al. [40] documented test positivity percentages of up to 28.6% in high-risk occupations. In 2020, the median New York City positivity was 43.6% (range 38–48.1% across zip codes) [41]; the estimated prevalence would be 43.0%.
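Equation (30) is not reproduced in this excerpt; the worked examples above are consistent with the standard Rogan–Gladen correction, sketched below. Applying 99% specificity to the New York City example is an assumption for illustration.

```python
def estimated_prevalence(positivity, sens, spec):
    """Rogan-Gladen estimate: p = (positivity + Sp - 1) / (Se + Sp - 1)."""
    return (positivity + spec - 1) / (sens + spec - 1)

# Worked examples from the text (Se 100% throughout):
estimated_prevalence(0.05, 1.0, 0.99)    # ~0.04  (tier 3 test, positivity 5%)
estimated_prevalence(0.20, 1.0, 0.975)   # ~0.18  (positivity 20%, Sp 97.5%)
estimated_prevalence(0.436, 1.0, 0.99)   # ~0.43  (2020 NYC median; Sp assumed 99%)
```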
Thus, RAgTs and other COVID-19 diagnostic tests must perform well over wide ranges of contagion that varies geographically, in time, and biologically. For example, Golden et al. [42] showed that antigen concentrations are related to viral load; the limit of detection predicts test performance. Higher-sensitivity point-of-care molecular diagnostics (left in Figure 3), such as LAMP assays [22] with EUAs for home testing or other portable molecular diagnostics, offer a way out of the vicious cycle. Exiting the vicious cycle with highly sensitive and highly specific molecular testing will decrease community risk and enhance resilience [43], including now, as new waves of COVID-19 appear.
The Eris variant, EG.5 (a descendant lineage of XBB.1.9.2), and the new BA.2.86 currently threaten well-being, especially that of the elderly. Time spent testing is important. Delaying diagnosis increases the risk of infecting close contacts (see the inner feedback loop in Figure 3). Asymptomatic people carrying SARS-CoV-2 may unknowingly spread the disease to family, friends, workers, and patients as viral loads increase during the protracted three-test, 5-day protocol now mandated by the US FDA for RAgTs. Delays allow new variants to emerge, which in turn increase prevalence.
The US FDA now requires RAgT labeling to state that results are “presumptive”. RT-PCR or other COVID-19 molecular diagnostic tests should be used to confirm negative RAgT results. The WHO and the US declared an end to the pandemic, but people still need to test [44,45]. For the week ending 29 July, 9056 new US hospitalizations were reported, ER cases doubled, and the positivity rate rose to 8.9% for tests reported to the CDC [46]. By September, the positivity rate was over 16% in some regions of the United States [47].
There are limitations to this work. First, Bayesian theory was not formally validated during the pandemic, although it appears to adequately explain testing phenomena. Second, self-testing in the collaborative study was not controlled, the reference test comparison was incomplete, home QC was omitted, and reagents may have degraded. Third, layperson testing technique may have been faulty or inconsistent. Fourth, manufacturer PPA and NPA specifications may have been overstated in the small studies submitted to the FDA to obtain EUAs.
Further, there was no comparison LAMP molecular assay included in the collaborative study for parallel self-testing at points of care. Nonetheless, these limitations do not obviate the need for higher performance standards and the upgrading of RAgT and other diagnostic assays that will be needed for future surges and threats. Timely diagnosis of COVID-19 is important, especially for children this fall. Mellou et al. [48] found that 36% of children who self-tested were asymptomatic, the median lag to testing positivity was two days, and early diagnosis “…probably decreased transmission of the virus…”.

5. Conclusions and Recommendations

Speed and convenience are two of the primary reasons people seek COVID-19 self-testing [18,22]. Repeating RAgTs three times over five days defeats the purpose of rapid point-of-care testing, does not inform public health in a timely manner, could complicate contact tracing, and may not be cost-effective.
Missed diagnoses can perpetuate virus transmission, exponentially more so when prevalence exceeds PBs. Tolerance limits for missed diagnoses have not been established, nor have they been tied to different levels of prevalence. The ∆PB [Equation (26)] does not depend on prevalence and should be optimized if tests are repeated by using tests with very high sensitivity (i.e., tier 2 or tier 3).
No precise temporal trend maps of COVID-19 prevalence in different countries are available for comparison, so the impact of prevalence, per se, is uncertain, although prevalence is known to have been very high in COVID-19 hotspots and high-risk settings [49]. Breaches of RAgT PBs may have generated vicious cycles, adversely transformed outbreaks into endemic disease, prolonged contagion, defeated mitigation, allowed new variants to arise, and fueled the pandemic, as Figure 3 and the prevalence boundary hypothesis [43] suggest.
The FDA allowed manufacturers to support RAgT serial screening claims with new clinical evaluations [20]. Upgraded performance should be demonstrated in multicenter trials with large numbers of diverse subjects. To decrease missed diagnoses with a repeated test, mathematical analysis suggests that RAgT sensitivity should be 91.03 to 91.41% in actual clinical evaluations. The theory also shows that a test with a tier 2 clinical sensitivity of 95% will generate a PB of 95.2% when repeated only once (see Table 1).
Use of RAgTs for COVID-19 or future highly infectious disease threats should be evidence-based [49]. COVID-19 was shown to have positivity rates and/or prevalence as high as 75% in California and Ohio prisons and in emergency rooms in Brooklyn, New York [50], which creates high potential for asymptomatic infections to spread silently. If superior RAgT performance is not attainable, the FDA should retire EUAs. Rapid antigen tests should achieve performance levels proven clinically to be at least tier 2 (95% sensitivity, 97.5% specificity), especially in high-risk settings and infectious disease hotspots.

Funding

This research was supported in part by an Edward A. Dickson Emeritus Professorship, by the Point-of-Care Testing Center for Teaching and Research (POCT•CTR), and by Gerald J. Kost, its Director, who held a US Fulbright Scholar Award, ASEAN Program, during the development of the project and derivation of the mathematical equations.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon reasonable request from the author.

Acknowledgments

The author thanks the creative students, research assistants, and faculty colleagues who participated in the POCT•CTR COVID-19 research, lecturing, and outreach program during the pandemic. Figures and tables are provided courtesy and permission of Knowledge Optimization, Davis, California.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Kost, G.J. Designing and interpreting COVID-19 diagnostics: Mathematics, visual logistics, and low prevalence. Arch. Pathol. Lab. Med. 2021, 145, 291–307. [Google Scholar] [CrossRef]
  2. Alhamid, G.; Tombuloglu, H.; Al-Suhaimi, E. Development of loop-mediated isothermal amplification (LAMP) assays using five primers reduces the false-positive rate in COVID-19 diagnosis. Sci. Rep. 2023, 13, 5066. [Google Scholar] [CrossRef]
  3. Posteraro, P.; Errico, F.M.; De Carolis, A.; Menchinelli, G.; Sanguinetti, M.; Posteraro, B. Setting-specific variability of false-positive result rates with rapid testing for SARS-CoV-2 antigen. J. Clin. Virol. 2022, 149, 105132. [Google Scholar] [CrossRef] [PubMed]
  4. Wertenauer, C.; Michael, G.B.; Dressel, A.; Pfeifer, C.; Hauser, U.; Wieland, E.; Mayer, C.; Mutschmann, C.; Roskos, M.; Wertenauer, H.-J.; et al. Diagnostic performance of rapid antigen testing for SARS-CoV-2: The COVid-19 AntiGen (COVAG) study. Front. Med. 2022, 9, 774550. [Google Scholar] [CrossRef]
  5. Yang, Y.P.; Huang, L.L.; Pan, S.J.; Xu, D.; Jiesisibieke, Z.L.; Tung, T.H. False-positivity results in rapid antigen tests for SARS-CoV-2: An umbrella review of meta-analyses and systematic reviews. Expert Rev. Anti-Infect. Ther. 2022, 20, 1005–1013. [Google Scholar] [CrossRef]
  6. Yusuf, E.; Virginia-Cova, L.; Provacia, L.B.; Koeijers, J.; Brown, V. The importance of disease prevalence in clinical decision making: A real practice study on COVID-19 antigen test in Curacao. Braz. J. Infect. Dis. 2022, 26, 102389. [Google Scholar] [CrossRef] [PubMed]
  7. Caruana, G.; Lebrun, L.L.; Aebischer, O.; Opota, O.; Urbano, L.; de Rham, M.; Marchetti, O.; Greub, G. The dark side of SARS-CoV-2 rapid antigen testing: Screening asymptomatic patients. New Microbes New Infect. 2021, 42, 100899. [Google Scholar] [CrossRef]
  8. Healy, B.; Khan, A.; Metezai, H.; Blyth, I.; Asad, H. The impact of false positive COVID-19 results in an area of low prevalence. Clin. Med. 2021, 21, e54–e56. [Google Scholar] [CrossRef] [PubMed]
  9. Ladhani, S.N.; Chow, J.Y.; Atkin, S.; Brown, K.E.; Ramsay, M.E.; Randell, P.; Sanderson, F.; Junghans, C.; Sendall, K.; Downes, R.; et al. Regular mass screening for SARS-CoV-2 infection in care homes already affected by COVID-19 outbreaks: Implications of false positive test results. J. Infect. 2021, 82, 282–327. [Google Scholar] [CrossRef] [PubMed]
  10. Mouliou, D.S.; Gourgoulianis, K.I. False-positive and false-negative COVID-19 cases: Respiratory prevention and management strategies, vaccination, and further perspectives. Expert Rev. Respir. Med. 2021, 15, 993–1002. [Google Scholar] [CrossRef]
  11. Basile, K.; Maddocks, S.; Kok, J.; Dwyer, D.E. Accuracy amidst ambiguity: False positive SARS-CoV-2 nucleic acid tests when COVID-19 prevalence is low. Pathology 2020, 52, 809–811. [Google Scholar] [CrossRef]
  12. Hayden, M.K.; Hanson, K.E.; Englund, J.A.; Lee, F.; Lee, M.J.; Loeb, M.; Morgan, D.J.; Patel, R.; El Alayli, A.; El Mikati, I.K.; et al. The Infectious Diseases Society of America Guidelines on the Diagnosis of COVID-19: Antigen Testing. Clin. Infect. Dis. 2023, ciad032. [Google Scholar] [CrossRef]
  13. Kost, G.J. The impact of increasing prevalence, false omissions, and diagnostic uncertainty on Coronavirus Disease 2019 (COVID-19) test performance. Arch. Pathol. Lab. Med. 2021, 145, 797–813. [Google Scholar] [CrossRef] [PubMed]
  14. Kost, G.J. Diagnostic strategies for endemic Coronavirus Disease 2019 (COVID-19) Rapid antigen tests, repeated testing, and prevalence boundaries. Arch. Pathol. Lab. Med. 2022, 146, 16–25. [Google Scholar] [CrossRef]
  15. Soni, A.; Herbert, C.; Lin, H.; Yan, Y.; Pretz, C.; Stamegna, P. Performance of rapid antigen tests to detect symptomatic and asymptomatic SARS-CoV-2 infection: Findings from the Test Us at Home prospective cohort study (August 22, 2022, updated). medRxiv 2023, 1–25. [Google Scholar] [CrossRef]
  16. Soni, A.; Herbert, C.; Lin, H.; Yan, Y.; Pretz, C.; Stamegna, P.; Wang, B.; Orwig, T.; Wright, C.; Tarrant, S.; et al. Performance of rapid antigen tests to detect symptomatic and asymptomatic SARS-CoV-2 Infection: A prospective cohort study. Ann. Intern. Med. 2023, 176, 975–982. [Google Scholar] [CrossRef]
  17. Food and Drug Administration. In Vitro Diagnostics EUAs—Antigen Diagnostic Tests for SARS-CoV-2. Individual EUAs for Antigen Diagnostic Tests for SARS-CoV-2. 2023. Available online: https://www.fda.gov/medical-devices/coronavirus-disease-2019-covid-19-emergency-use-authorizations-medical-devices/in-vitro-diagnostics-euas-antigen-diagnostic-tests-sars-cov-2 (accessed on 7 October 2023).
  18. Kost, G.J. The Coronavirus disease 2019 spatial care path: Home, community, and emergency diagnostic portals. Diagnostics 2022, 12, 1216. [Google Scholar] [CrossRef] [PubMed]
  19. Kost, G.J. Table S1. COVID-19 Tests with FDA Emergency Use Authorization for Home Self-testing. Part I. Antigen Tests (pp. 1-2), and Table S2. COVID-19 Rapid Antigen Tests for Symptomatic and Asymptomatic Subjects in Community Settings. Part I. Point-of-care Testing (pp. 3–7). Diagnostics 2022, 12, 1216, Supplementary Materials. Available online: https://www.mdpi.com/article/10.3390/diagnostics12051216/s1 (accessed on 7 October 2023).
  20. Food and Drug Administration. Revisions Related to Serial (Repeat) Testing for the EUAs of Antigen IVDs. 1 November 2022. Available online: https://www.fda.gov/media/162799/download (accessed on 7 October 2023).
  21. World Health Organization. Antigen-Detection in the Diagnosis of SARS-CoV-2 Infection. Interim Guidance. 6 October 2021. Available online: https://www.who.int/publications/i/item/antigen-detection-in-the-diagnosis-of-sars-cov-2infection-using-rapid-immunoassays (accessed on 7 October 2023).
  22. Kost, G.J. Changing diagnostic culture calls for point-of-care preparedness—Multiplex now, open prevalence boundaries, and build community resistance. 21st Century Pathol. 2022, 2, 1–7. [Google Scholar]
  23. Kweon, O.J.; Lim, Y.K.; Kim, H.R.; Choi, Y.; Kim, M.-C.; Choi, S.-H.; Chung, J.-W.; Lee, M.-K. Evaluation of rapid SARS-CoV-2 antigen tests, AFIAS COVID-19 Ag and ichroma COVID-19 Ag, with serial nasopharyngeal specimens from COVID-19 patients. PLoS ONE 2021, 16, e0249972. [Google Scholar] [CrossRef]
  24. Hirotsu, Y.; Maejima, M.; Shibusawa, M.; Nagakubo, Y.; Hosaka, K.; Amemiya, K.; Sueki, H.; Hayakawa, M.; Mochizuki, H.; Tsutsui, T.; et al. Comparison of automated SARS-CoV-2 antigen test for COVID-19 infection with quantitative RT-PCR using 313 nasopharyngeal swabs, including from seven serially followed patients. Int. J. Infect. Dis. 2020, 99, 397–402. [Google Scholar] [CrossRef] [PubMed]
  25. Veroniki, A.A.; Tricco, A.C.; Watt, J.; Tsokani, S.; Khan, P.A.; Soobiah, C.; Negm, A.; Doherty-Kirby, A.; Taylor, P.; Lunny, C.; et al. Rapid antigen-based and rapid molecular tests for the detection of SARS-CoV-2: A rapid review with network meta-analysis of diagnostic test accuracy studies. BMC Med. 2023, 21, 110. [Google Scholar] [CrossRef]
26. Gallardo-Alfaro, L.; Lorente-Montalvo, P.; Cañellas, M.; Carandell, E.; Oliver, A.; Rojo, E.; Riera, B.; Llobera, J.; Bulilete, O. Diagnostic accuracy of Panbio™ rapid antigen test for SARS-CoV-2 in paediatric population. BMC Pediatr. 2023, 23, 433. [Google Scholar] [CrossRef]
  27. McKay, S.L.; Tobolowsky, F.A.; Moritz, E.D.; Hatfield, K.M.; Bhatnagar, A.; LaVoie, S.P.; Jackson, D.A.; Lecy, K.D.; Bryant-Genevier, J.; Campbell, D.; et al. Performance evaluation of serial SARS-CoV-2 rapid antigen testing during a nursing home outbreak. Ann. Intern. Med. 2021, 174, 945–951. [Google Scholar] [CrossRef]
  28. Chen, S.H.; Wu, J.L.; Liu, Y.C.; Yen, T.Y.; Lu, C.Y.; Chang, L.Y.; Lee, W.T.; Chen, J.M.; Lee, P.I.; Huang, L.M. Differential clinical characteristics and performance of home antigen tests between parents and children after household transmission of SARS-CoV-2 during the Omicron variant pandemic. Int. J. Infect. Dis. 2023, 128, 301–306. [Google Scholar] [CrossRef]
  29. Viloria Winnett, A.; Akana, R.; Shelby, N.; Davich, H.; Caldera, S.; Yamada, T.; Reyna, J.R.B.; Romano, A.E.; Carter, A.M.; Kim, M.K.; et al. Daily SARS-CoV-2 nasal antigen tests miss infected and presumably infectious people due to viral load differences among specimen types. Microbiol. Spectr. 2023, 11, e01295-23. [Google Scholar] [CrossRef] [PubMed]
  30. Lind, M.L.; Schultes, O.L.; Robertson, A.J.; Houde, A.J.; Cummings, D.A.; Ko, A.I.; Kennedy, B.S.; Richeson, R.P. Testing frequency matters: An evaluation of the diagnostic performance of a severe acute respiratory syndrome Coronavirus 2 (SARS-CoV-2) rapid antigen test in US correctional facilities. Clin. Infect. Dis. 2023, 76, e327–e335. [Google Scholar] [CrossRef] [PubMed]
  31. Smith, R.L.; Gibson, L.L.; Martinez, P.P.; Ke, R.; Mirza, A.; Conte, M.; Gallagher, N.; Conte, A.; Wang, L.; Fredrickson, R.; et al. Longitudinal assessment of diagnostic test performance over the course of acute SARS-CoV-2 infection. J. Infect. Dis. 2021, 224, 976–982. [Google Scholar] [CrossRef]
  32. Soni, A.; Herbert, C.; Pretz, C.; Stamegna, P.; Filippaios, A.; Shi, Q.; Suvarna, T.; Harman, E.; Schrader, S.; Nowak, C.; et al. Design and implementation of a digital site-less clinical study of serial rapid antigen testing to identify asymptomatic SARS-CoV-2 infection. J. Clin. Transl. Sci. 2023, 7, e120. [Google Scholar] [CrossRef]
33. Sabat, J.; Subhadra, S.; Rath, S.; Ho, L.M.; Satpathy, T.; Pattnaik, D.; Pati, S.; Turuk, J. A comparison of SARS-CoV-2 rapid antigen testing with real-time RT-PCR among symptomatic and asymptomatic individuals. BMC Infect. Dis. 2023, 23, 87. [Google Scholar] [CrossRef]
34. Gao, Y.; Zhao, Y.; Zhang, X.; Tian, J.; Guyatt, G.; Hao, Q. Comparing SARS-CoV-2 testing positivity rates and COVID-19 impact among different isolation strategies: A rapid systematic review and a modelling study. EClinicalMedicine 2023, 61, 102058. [Google Scholar] [CrossRef] [PubMed]
  35. Pollán, M.; Pérez-Gómez, B.; Pastor-Barriuso, R.; Oteo, J.; Hernán, M.A.; Pérez-Olmeda, M.; Sanmartín, J.L.; Fernández-García, A.; Cruz, I.; de Larrea, N.F.; et al. Prevalence of SARS-CoV-2 in Spain (ENE-COVID): A nationwide, population-based seroepidemiological study. Lancet 2020, 396, 535–544. [Google Scholar] [CrossRef] [PubMed]
  36. Gómez-Ochoa, S.A.; Franco, O.H.; Rojas, L.Z.; Raguindin, P.F.; Roa-Díaz, Z.M.; Wyssmann, B.M.; Guevara, S.L.R.; Echeverría, L.E.; Glisic, M.; Muka, T. COVID-19 in health-care workers: A living systematic review and meta-analysis of prevalence, risk factors, clinical characteristics, and outcomes. Am. J. Epidemiol. 2021, 190, 161–175. [Google Scholar] [CrossRef] [PubMed]
37. Kalish, H.; Klumpp-Thomas, C.; Hunsberger, S.; Baus, H.A.; Fay, M.P.; Siripong, N.; Wang, J.; Hicks, J.; Mehalko, J.; Travers, J.; et al. Undiagnosed SARS-CoV-2 seropositivity during the first 6 months of the COVID-19 pandemic in the United States. Sci. Transl. Med. 2021, 13, eabh39826. [Google Scholar] [CrossRef] [PubMed]
  38. Dzinamarira, T.; Murewanhema, G.; Mhango, M.; Iradukunda, P.G.; Chitungo, I.; Mashora, M.; Makanda, P.; Atwine, J.; Chimene, M.; Mbunge, E.; et al. COVID-19 prevalence among healthcare workers. A systematic review and meta-analysis. Int. J. Environ. Res. Public Health 2022, 19, 146. [Google Scholar] [CrossRef]
39. Ma, Q.; Liu, J.; Liu, Q.; Kang, L.; Liu, R.; Jing, W.; Wu, Y.; Liu, M. Global percentage of asymptomatic infections among the tested population and individuals with confirmed COVID-19 diagnosis. A systematic review and meta-analysis. JAMA Netw. Open 2021, 4, e2137257. [Google Scholar] [CrossRef]
40. Cox-Ganser, J.M.; Henneberger, P.K.; Weissman, D.N.; Guthrie, G.; Groth, C.P. COVID-19 test positivity by occupation using the Delphi US COVID-19 trends and impact survey, September–November 2020. Am. J. Ind. Med. 2022, 65, 721–730. [Google Scholar] [CrossRef]
  41. Lamb, M.R.; Kandula, S.; Shaman, J. Differential COVID-19 case positivity in New York City neighborhoods: Socioeconomic factors and mobility. Influenza Other Respir. Viruses 2021, 15, 209–217. [Google Scholar] [CrossRef] [PubMed]
  42. Golden, A.; Oliveira-Silva, M.; Slater, H.; Vieira, A.M.; Bansil, P.; Gerth-Guyette, E.; Leader, B.T.; Zobrist, S.; Braga Ferreira, A.K.; Santos de Araujo, E.C.; et al. Antigen concentration, viral load, and test performance for SARS-CoV-2 in multiple specimen types. PLoS ONE 2023, 18, e0287814. [Google Scholar] [CrossRef]
  43. Kost, G.J. Home, Community, and Emergency Spatial Care Paths—Diagnostic Portals for COVID-19, Critical Care, and Superstorms (and the Prevalence Boundary Hypothesis). In Proceedings of the IFCC Live Webinar on POCT: Developing Community Resilience, Online, 18 January 2023. [Google Scholar]
  44. World Health Organization. EG.5 Initial Risk Evaluation; WHO: Geneva, Switzerland, 2023; Available online: https://www.who.int/docs/default-source/coronaviruse/09082023eg.5_ire_final.pdf?sfvrsn=2aa2daee_1 (accessed on 7 October 2023).
  45. Abbott, B.; Kamp, J.; Hopkins, J. Omicron subvariant “Eris” drives rise in Covid infections. Wall Str. J. 2023, 282, A3. [Google Scholar]
  46. Grant, K.; McNamara, D. It May Be Time to Pay Attention to COVID Again. WebMD Health News, 11 August 2023. Available online: https://www.webmd.com/covid/news/20230810/it-may-be-time-to-pay-attention-to-covid-again (accessed on 7 October 2023).
  47. CDC. United States COVID-19 Hospitalizations, Deaths, Emergency Department (ED) Visits, and Test Positivity by Geographic Area. Available online: https://covid.cdc.gov/covid-data-tracker/#maps_positivity-week (accessed on 7 October 2023).
  48. Mellou, K.; Sapounas, S.; Panagoulias, I.; Gkova, M.; Papadima, K.; Andreopoulou, A.; Kalotychou, D.; Chatzopoulos, M.; Gkolfinopoulou, K.; Papaevangelou, V.; et al. Time lag between COVID-19 diagnosis and symptoms onset for different population groups: Evidence that self-testing in schools was associated with timely diagnosis among children. Life 2022, 12, 1305. [Google Scholar] [CrossRef] [PubMed]
49. Stokes, W.; Berenger, B.M.; Venner, A.A.; Deslandes, V.; Shaw, J.L.V. Point of care molecular and antigen detection tests for COVID-19: Current status and future prospects. Expert Rev. Mol. Diagn. 2022, 22, 797–809. [Google Scholar] [CrossRef] [PubMed]
  50. Kost, G.J. Moderate (20–70%) and High (70–100%) COVID-19 Positivity Rates and Prevalence in Different Geographic Regions. Arch. Pathol. Lab. Med. 2021, 145, 797–813, Supplemental Digital Content. Available online: https://meridian.allenpress.com/aplm/article/145/7/797/462534/The-Impact-of-Increasing-Disease-Prevalence-False (accessed on 7 October 2023). [CrossRef] [PubMed]
Figure 1. False Omission Rates Increase Exponentially with Prevalence. The median performance of a home molecular diagnostic test (HMDx LAMP, purple curve) performed only once beats that for three serial RAgTs in the collaborative study. A repeated tier 2 test (green curve rising on the right) will not miss more than 1 in 500 diagnoses until the prevalence exceeds 43.8%, then 1 in 200 up to 65.9% prevalence, and subsequently 1 in 100 up to 79.4%, 1 in 50 up to 88.6%, and 1 in 20 (large green dot) up to 95.24%. Abbreviations: HMDx, home molecular diagnostic; LAMP, loop-mediated isothermal amplification; NPA, negative percent agreement; PB, prevalence boundary; RAgT, rapid antigen test; RFO, rate of false omissions; and WHO, World Health Organization.
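As a numerical cross-check of the green tier 2 repeated-test curve, the short Python sketch below evaluates Equations (22) and (25) of Table 2. It assumes the tier 2 values from Table 1 (sensitivity 95%, specificity 97.5%) as defaults:

```python
# Eq. (22): false omission rate after a repeated test,
# RFO/rt = p(1 - x)^2 / [p(1 - x)^2 + y^2(1 - p)]
def rfo_repeat(p, x=0.95, y=0.975):
    fn_weight = p * (1 - x) ** 2        # two consecutive false negatives
    tn_weight = y ** 2 * (1 - p)        # two consecutive true negatives
    return fn_weight / (fn_weight + tn_weight)

# Eq. (25): prevalence boundary for a repeated test at tolerance RFO
def pb_repeat(rfo, x=0.95, y=0.975):
    return (y ** 2 * rfo) / (rfo * (y ** 2 - x ** 2 + 2 * x - 1) + (x - 1) ** 2)

p_star = pb_repeat(0.05)                   # tolerate at most 1 missed diagnosis in 20
print(round(100 * p_star, 2))              # 95.24, the large green dot
print(round(100 * rfo_repeat(p_star), 2))  # 5.0, consistent by construction
```

The single-test analogue follows from Equation (21) by dropping the squared terms.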
Figure 2. Gain in Prevalence Boundary as a Function of Test Sensitivity. This figure illustrates three key findings: (1) The curves (color coded to the inset table) cluster together because of the narrow range in clinical specificity (95% to 99.2%), which means that the primary driver of the increase in prevalence boundary (∆PB) is sensitivity; (2) the shallow shape of the curves on the left emphasizes how little is gained by repeating RAgTs that start with very low sensitivity; and (3) when sensitivity is 91.0–91.4%, a repeated test will maximally increase the prevalence boundary, as shown by the peaks on the right, making the higher-performance tests more useful in settings of different prevalence because missed diagnoses are minimized. Please see the inset table for performance metrics. The curves were created using Equation (26). Abbreviations: ∆PB, the increase in PB with repeated testing; PB, prevalence boundary; RAgT, rapid antigen test; RFO, rate of false omissions; T1, tier 1; T2, tier 2; and WHO, World Health Organization.
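The location of the peaks can be reproduced with a simple grid search over sensitivity (a Python sketch; the specificity of 97.5% and RFO tolerance of 5% are one illustrative pairing consistent with the narrow specificity range described above, not the only choice):

```python
# Eq. (24): prevalence boundary for a single test
def pb_once(x, y, rfo):
    return (y * rfo) / (rfo * (x + y - 1) + (1 - x))

# Eq. (25): prevalence boundary for a repeated test
def pb_repeat(x, y, rfo):
    return (y ** 2 * rfo) / (rfo * (y ** 2 - x ** 2 + 2 * x - 1) + (x - 1) ** 2)

# Eq. (26): gain in prevalence boundary from one repetition
def delta_pb(x, y, rfo):
    return pb_repeat(x, y, rfo) - pb_once(x, y, rfo)

# Grid search for the sensitivity that maximizes the gain
xs = [0.50 + 0.0001 * i for i in range(4901)]   # 0.5000 to 0.9900
x_peak = max(xs, key=lambda x: delta_pb(x, y=0.975, rfo=0.05))
print(round(100 * x_peak, 1))                   # ~91.2, inside the 91.0-91.4% band
```

Varying y over the 95% to 99.2% range shifts the peak only slightly, consistent with finding (1) above.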
Figure 3. Potential Vicious Cycle Fueled by Repeating Poorly Performing Rapid Antigen Tests. Poorly performing RAgTs can perpetuate virus transmission by missing diagnoses, more so as prevalence increases and the weighting of test performance shifts from specificity (top left) to sensitivity (top right). In high-risk settings and hotspots, prevalence boundary breaches and evolving variants may compound an outbreak to generate an epidemic. Repeating RAgTs consumes valuable time. Asymptomatic people may unknowingly spread disease to family, friends, workers, and clients, thereby creating a vicious cycle. Abbreviation: RAgTs, rapid antigen tests.
Table 1. Diagnostic Performance Tiers with Systematic Prevalence Boundaries for Repeated Tests.
Prevalence boundaries (PB) are given for an RFO tolerance of 5%.

| Tier | Performance Level | Sensitivity [%] | Specificity [%] | PB, 1st Test [%] | PB, 2nd Test [%] | ∆PB [%] |
|---|---|---|---|---|---|---|
| 1 | Low | 90 | 95 | 33.3 | 82.6 | 49.3 |
| 2 | Marginal | 95 | 97.5 | 50.6 | 95.2 | 44.6 |
| 3 | High | 100 | ≥99 | No boundary | No boundary | n/a |
Abbreviations: PB, prevalence boundary; ∆PB, the gain in PB from repeating the test; and RFO, the rate of false omissions (missed diagnoses).
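The boundary columns follow directly from Equations (24) and (25) of Table 2. A brief Python check (an illustrative sketch using the tier values above and the 5% RFO tolerance):

```python
# Prevalence boundaries at tolerance RFO for a single and a repeated test
def pb_once(x, y, rfo=0.05):                    # Eq. (24)
    return (y * rfo) / (rfo * (x + y - 1) + (1 - x))

def pb_repeat(x, y, rfo=0.05):                  # Eq. (25)
    return (y ** 2 * rfo) / (rfo * (y ** 2 - x ** 2 + 2 * x - 1) + (x - 1) ** 2)

for tier, x, y in [(1, 0.90, 0.95), (2, 0.95, 0.975)]:
    pb1, pb2 = pb_once(x, y), pb_repeat(x, y)
    print(tier, round(100 * pb1, 1), round(100 * pb2, 1), round(100 * (pb2 - pb1), 1))
# 1 33.3 82.6 49.3
# 2 50.6 95.2 44.6
# Tier 3 (sensitivity 100%) yields FN = 0, so RFO = 0 at any prevalence: no boundary.
```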
Table 2. Fundamental Definitions, Derived Equations, Ratios, Rates, Predictive Value Geometric Mean-Squared, Prevalence Boundary, Recursion, and Special Cases.
| Eq. No. | Category and Equations | Dep. Var. | Indep. Var. |
|---|---|---|---|
| | Fundamental Definitions | | |
| (1) | x = Sens = TP/(TP + FN) | x | TP, FN |
| (2) | y = Spec = TN/(TN + FP) | y | TN, FP |
| (3) | s = PPV = TP/(TP + FP) | s | TP, FP |
| (4) | t = NPV = TN/(TN + FN) | t | TN, FN |
| (5) | p = Prev = (TP + FN)/N | p | TP, FN, N |
| (6) | N = TP + FP + TN + FN | N | TP, FP, TN, FN |
| | Derived Equations | | |
| (7) | PPV = [Sens·Prev]/[Sens·Prev + (1 − Spec)(1 − Prev)], or s = [xp]/[xp + (1 − y)(1 − p)] | s | x, y, p |
| (8) | p = [s(y − 1)]/[s(x + y − 1) − x] | p | x, y, s |
| (9) | x = [s(p − 1)(y − 1)]/[p(s − 1)] | x | y, p, s |
| (10) | y = [sp(x − 1) + s − px]/[s(1 − p)] | y | x, p, s |
| (11) | NPV = [Spec·(1 − Prev)]/[Prev·(1 − Sens) + Spec·(1 − Prev)], or t = [y(1 − p)]/[p(1 − x) + y(1 − p)] | t | x, y, p |
| (12) | p = [y(1 − t)]/[t(1 − x − y) + y] | p | x, y, t |
| (13) | x = [pt + y(1 − p)(t − 1)]/[pt] | x | y, p, t |
| (14) | y = [pt(x − 1)]/[t(1 − p) − 1 + p] | y | x, p, t |
| | Ratios | | |
| (15) | TP/FP = PPV/(1 − PPV) = [Sens·Prev]/[(1 − Spec)(1 − Prev)], or [xp]/[(1 − y)(1 − p)] | TP/FP ratio | x, y, p |
| (16) | FP/TP = (1 − PPV)/PPV = [(1 − y)(1 − p)]/(xp) | FP/TP ratio | x, y, p |
| (17) | FN/TN = (1 − NPV)/NPV = [p(1 − x)]/[y(1 − p)] | FN/TN ratio | x, y, p |
| | Rates: true positive (RTP), false positive (RFP), and positive (RPOS) | | |
| (18) | RTP = TP/(TP + FN) = x | RTP | TP, FN |
| (19) | RFP = FP/(TN + FP) = 1 − Spec = 1 − y | RFP | TN, FP |
| (20) | RPOS = (TP + FP)/N | RPOS | TP, FP, N |
| | False omission rate (RFO) | | |
| (21) | RFO = FN/(TN + FN) = 1 − NPV = 1 − t = [p(1 − x)]/[p(1 − x) + y(1 − p)] | RFO | x, y, p |
| | RFO with repeated test (rt) | | |
| (22) | RFO/rt = [p(1 − x)²]/[p(1 − x)² + y²(1 − p)] | RFO/rt | x, y, p |
| | Predictive value geometric mean-squared (range 0 to 1) | | |
| (23) | PV GM2 = PPV·NPV = s·t = {[xp]/[xp + (1 − y)(1 − p)]}·{[y(1 − p)]/[p(1 − x) + y(1 − p)]} | PV GM2 | x, y, p |
| | Prevalence boundary for one test, given RFO | | |
| (24) | PB = [y(1 − t)]/[(1 − x) − (1 − t)(1 − x − y)] = [yRFO]/[(1 − x) − RFO(1 − x − y)] = [yRFO]/[RFO(x + y − 1) + (1 − x)] | PB | x, y, t or x, y, RFO |
| | Prevalence boundary for repeated test (PBrt), given RFO | | |
| (25) | PBrt = [y²RFO]/[RFO(y² − x² + 2x − 1) + (x − 1)²] | PBrt | x, y, RFO |
| | Improvement in prevalence boundary (∆PB) when tested a second time, given RFO | | |
| (26) | ∆PB = {y²RFO/[RFO(y² − x² + 2x − 1) + (x − 1)²]} − {yRFO/[RFO(x + y − 1) + (1 − x)]} | ∆PB | x, y, RFO |
| | Recursion: recursive formulae for PPV (si+1) and NPV (ti+1) | | |
| (27) | si+1 = [xpi]/[xpi + (1 − y)(1 − pi)], where the index i = 1, 2, 3… | si+1 | x, y, pi |
| (28) | ti+1 = [y(1 − pi)]/[pi(1 − x) + y(1 − pi)] | ti+1 | x, y, pi |
| | Special case: PPV when sensitivity is 100% | | |
| (29) | PPV = [Prev]/[Prev + (1 − Spec)(1 − Prev)], or s = [p]/[p + (1 − y)(1 − p)] | s | y, p |
| | Special case: prevalence when sensitivity is 100% (i.e., FN = 0) | | |
| (30) | Prev = 1 − [(1 − N+/N)/Spec], or p = 1 − [(1 − POS%)/y] | p | POS%, y |
| | Special case: sensitivity when given specificity, RFO, and PB (no repeat) | | |
| (31) | x = [PB − RFO(y + PB − y·PB)]/[PB(1 − RFO)] | x | y, RFO, PB |
| | Special case: sensitivity, given RFO and PB, when specificity (y) is 100% | | |
| (32) | x = (PB − RFO)/[PB(1 − RFO)] | x | RFO, PB |
| | Accuracy (not recommended; see note) | | |
| (33) | A = (TP + TN)/N = Sens·Prev(dz) + Spec·Prev(no dz) | A | TP, TN, N |
Abbreviations: Dep. Var., dependent variable; Eq., equation; FN, false negative; FP, false positive; i, an index from 1 to 3 or more (the number of testing events); Indep. Var., independent variable(s); N, total number of people tested; N+, number of positives (TP + FP) in the tested population; NPV, negative predictive value (t); pi+1, pi, indexed partition prevalence in the recursive formula for PPV and NPV; PB, prevalence boundary; PBrt, prevalence boundary for repeated test; ∆PB, improvement in prevalence boundary; POS%, (N+/N), percent positive of the total number tested (same as RPOS); PPV, positive predictive value (s); Prev, prevalence (p); Prev(dz), same as p; Prev(no dz), prevalence of no disease; PV GM2, square of the geometric mean of positive and negative predictive values, (PPV·NPV), expressed as a fraction from 0 to 1; RFO, the rate of false omissions; RFO/rt, rate of false omission with repeated test (rt); RFP, false positive rate, aka false positive alarm (probability that a false alarm will be raised or that a false result will be reported when the true value is negative); RPOS, positivity rate; RTP, true positive rate, the same as sensitivity; Sens, sensitivity (x); Spec, specificity (y); TN, true negative; and TP, true positive. Notes: Sens, Spec, PPV, NPV, and Prev are expressed as percentages from 1 to 100%, or as decimal fractions from 0 to 1 by dividing by 100%. PV GM2 was created for visual logistics comparisons of performance curves of diagnostic tests, not for point comparisons. If the denominators of derived equations become indeterminate, then revert to the fundamental definitions, Equations (1)–(6). The use of the formula for accuracy [Equation (33)] is not recommended because of the duplicity of values with complementary changes in sensitivity and specificity.
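Because the derived equations are algebraic rearrangements of the fundamental definitions, they can be verified mechanically against any 2 × 2 table of counts. The Python sketch below uses arbitrary hypothetical counts to confirm that Equations (7), (11), and (21) reproduce Equations (3), (4), and 1 − NPV:

```python
# Consistency check: derived Eqs. (7), (11), (21) of Table 2 against the
# fundamental definitions (1)-(6), for an arbitrary hypothetical 2x2 table.
TP, FP, TN, FN = 380, 15, 570, 20               # hypothetical counts
N = TP + FP + TN + FN                           # Eq. (6)
x = TP / (TP + FN)                              # Eq. (1): sensitivity
y = TN / (TN + FP)                              # Eq. (2): specificity
p = (TP + FN) / N                               # Eq. (5): prevalence

ppv_direct = TP / (TP + FP)                     # Eq. (3)
ppv_bayes = (x * p) / (x * p + (1 - y) * (1 - p))         # Eq. (7)
npv_direct = TN / (TN + FN)                     # Eq. (4)
npv_bayes = (y * (1 - p)) / (p * (1 - x) + y * (1 - p))   # Eq. (11)
rfo = FN / (TN + FN)                            # Eq. (21): RFO = 1 - NPV

print(abs(ppv_direct - ppv_bayes) < 1e-12)      # True
print(abs(npv_direct - npv_bayes) < 1e-12)      # True
print(abs(rfo - (1 - npv_direct)) < 1e-12)      # True
```

The recursive forms (27) and (28) follow by feeding an updated partition prevalence pi back into the same two expressions.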