Alternative Devices for Heart Rate Variability Measures: A Comparative Test–Retest Reliability Study
Abstract
1. Introduction
- What is the comparative operational reliability of the four devices?
- What are the numerical values of the seven HRV measures in the five behavioral conditions as determined by each device?
- Can different devices be used interchangeably? To address this question, it was necessary to determine inter-device agreement by computing the Bland–Altman Limits of Agreement (LOA). This determination is clinically important because it advises the clinician of the possible implications of replacing one device with another in the course of treatment.
- What constitutes a significant change in an HRV measure? Addressing this question required determining test–retest reliability across repeated measurements for the seven measures in five conditions for each device. Longitudinal clinical use requires a quantitative assessment of test–retest reliability, quantified here by the intraclass correlation coefficient (ICC). Because estimating the ICC requires multiple measurements from a clinically stable population, a healthy control population was used in these studies. ICCs were then used to calculate the Standard Error of Measurement (SEM) and the Minimal Detectable Difference (MDD). The MDD is the smallest change that can be identified as statistically significant and is therefore critical when HRV measures are used longitudinally to assess treatment response or disease progression. The prior literature assessing HRV test–retest reliability [1,2,3] reports encouraging retest reliability, but these studies do not calculate the clinically important MDD. We note, however, that Williams et al. [4] report the related Standard Error of Measurement, from which the MDD can be calculated.
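The agreement and reliability statistics named in these questions reduce to short formulas: the Bland–Altman 95% LOA are the mean inter-device difference ± 1.96 SD of the differences, SEM = SD·√(1 − ICC), and MDD = 1.96·√2·SEM. A minimal sketch of both calculations follows; the function names and all numerical values are illustrative assumptions, not data or results from this study.

```python
import math
import statistics

def bland_altman_loa(a, b):
    """95% Bland-Altman limits of agreement for paired measurements
    of the same quantity taken by two devices."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)      # mean inter-device difference
    sd = statistics.stdev(diffs)       # sample SD of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd

def sem_and_mdd(icc, sd_between):
    """Standard Error of Measurement and 95% Minimal Detectable
    Difference derived from an intraclass correlation coefficient."""
    sem = sd_between * math.sqrt(1.0 - icc)
    mdd = 1.96 * math.sqrt(2.0) * sem  # smallest statistically detectable change
    return sem, mdd

# Illustrative numbers only, e.g., RMSSD (ms) from the same sessions:
ecg      = [37.0, 42.5, 29.8, 51.2, 33.4]   # reference ECG system
wearable = [35.1, 44.0, 31.2, 49.5, 30.9]   # portable device
lo, hi = bland_altman_loa(ecg, wearable)

# Illustrative reliability inputs: ICC = 0.80, between-subject SD = 10 ms
sem, mdd = sem_and_mdd(icc=0.80, sd_between=10.0)
```

With these illustrative inputs, a change smaller than the resulting MDD (about 12.4 ms) could not be distinguished from measurement noise at the 95% level.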
2. Materials and Methods
2.1. Sample
2.2. Procedure
2.3. Signal Acquisition
2.4. Signal Analysis
2.5. Limits of Agreement
2.6. Intraclass Correlation Coefficients
2.7. Minimal Detectable Difference
2.8. Statistical Analysis
3. Results
3.1. Operational Reliability
3.2. Numerical Values of HRV Measures
3.3. Inter-Device Agreement: Limits of Agreement (LOA) of Portable Devices with Standard ECG System
3.4. Test–Retest Reliability: Intraclass Correlation Coefficients, Standard Error of Measurement, Minimal Detectable Difference
4. Discussion
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Eikeseth, F.F.; Saetren, S.S.; Benjamin, B.R.; Eikenaes, I.U.-M.; Sutterlin, S.; Hummelen, B. The test-retest reliability of heart rate variability and its association with personality functioning. Front. Psychiatry 2020, 11, 558145. [Google Scholar] [CrossRef] [PubMed]
- Hoffmann, B.; Flatt, A.A.; Silva, L.E.V.; Mlynczak, M.; Baranowski, R.; Dziedzic, E.; Werner, B.; Gasior, J.S. A pilot study of the reliability and agreement of heart rate, respiratory rate, and short-term heart rate variability in elite modern pentathlon athletes. Diagnostics 2020, 10, 833. [Google Scholar] [CrossRef] [PubMed]
- Huang, C.-J.; Chan, H.-L.; Chang, Y.-J.; Chen, S.-M.; Hsu, M.-J. Validity of the Polar V800 Monitor for Assessing Heart Rate Variability in Elderly Adults under Mental Stress and Dual Task Conditions. Int. J. Environ. Res. Public Health 2021, 18, 869. [Google Scholar] [CrossRef] [PubMed]
- Williams, D.P.; Jarczok, M.N.; Ellis, R.J.; Hillecke, T.K.; Thayer, J.F.; Koenig, J. Two-week test-retest reliability of the Polar RS800CX to record heart rate variability. Clin. Physiol. Funct. Imaging 2017, 37, 776–781. [Google Scholar] [CrossRef] [PubMed]
- Lu, G.; Yang, F. Limitations of oximetry to measure heart rate variability measures. Cardiovasc. Eng. 2009, 9, 119–125. [Google Scholar] [CrossRef] [PubMed]
- Guzik, P.; Piekos, C.; Pierog, O.; Fenech, N.; Krauze, T.; Piskorski, J.; Wykretowicz, A. Classic electrocardiogram-based and mobile technology derived approaches to heart rate variability are not equivalent. Int. J. Cardiol. 2018, 258, 154–156. [Google Scholar] [CrossRef] [PubMed]
- Vovkanych, L.; Boretsky, Y.; Sokolovsky, V.; Berhtraum, D.; Krass, S. Validity of the software-hardware complex “Rhythm” for measuring the RR intervals and heart rate variability at rest. J. Phys. Educ. Sport 2020, 20, 1599–1605. [Google Scholar]
- Correia, B.; Dias, N.; Costa, P.; Pêgo, J.M. Validation of a Wireless Bluetooth Photoplethysmography Sensor Used on the Earlobe for Monitoring Heart Rate Variability Features during a Stress-Inducing Mental Task in Healthy Individuals. Sensors 2020, 20, 3905. [Google Scholar] [CrossRef]
- Dobbs, W.C.; Fedewa, M.V.; MacDonald, H.V.; Holmes, C.J.; Cicone, Z.S.; Plews, D.J.; Esco, M.R. The accuracy of acquiring heart rate variability from portable devices: A systematic review and meta-analysis. Sports Med. 2019, 49, 417–435. [Google Scholar] [CrossRef]
- Gronwall, D.M. Paced auditory serial-addition task: A measure of recovery from concussion. Percept. Motor Skills 1977, 44, 367–373. [Google Scholar] [CrossRef]
- Tarvainen, M.P.; Niskanen, J.-P. Kubios HRV User’s Guide. Available online: http://kubios.uku.fi (accessed on 1 August 2010).
- Billman, G.E. The LF/HF ratio does not accurately measure cardiac sympatho-vagal balance. Front. Physiol. 2013, 4, 26. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Luiz, R.R.; Szklo, M. More than one statistical strategy to assess agreement of quantitative measurements may be usefully reported. J. Clin. Epidemiol. 2005, 58, 215–216. [Google Scholar] [CrossRef]
- Bland, J.M.; Altman, D.G. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1986, 327, 307–310, Republished with corrected equations in Biochimica Clinica. 1987, 11, 399–404. [Google Scholar] [CrossRef]
- Shrout, P.E.; Fleiss, J.L. Intraclass correlations: Uses in assessing rater reliability. Psychol. Bull. 1979, 86, 420–428. [Google Scholar] [CrossRef] [PubMed]
- McGraw, K.O.; Wong, S.P. Forming inferences about some intraclass correlation coefficients. Psychol. Methods 1996, 1, 30–46. [Google Scholar] [CrossRef]
- Müller, R.; Büttner, P. A critical discussion of intraclass correlation coefficients. Stat. Med. 1994, 13, 2465–2476. [Google Scholar] [CrossRef] [PubMed]
- Koo, T.K.; Li, M.Y. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J. Chiropr. Med. 2016, 15, 155–163. [Google Scholar] [CrossRef] [Green Version]
- Fleiss, J.L.; Shrout, P.E. Approximate interval estimation for a certain intraclass correlation coefficient. Psychometrika 1978, 43, 259–262. [Google Scholar] [CrossRef]
- Portney, L.G.; Watkins, M.P. Foundations of Clinical Research. Applications to Practice, 3rd ed.; Prentice Hall Health: Upper Saddle River, NJ, USA, 2009. [Google Scholar]
- Behrens, J.T. Principles and procedures of exploratory data analysis. Psychol. Methods 1997, 2, 131–160. [Google Scholar] [CrossRef]
- Jan, H.-Y.; Chen, M.-F.; Fu, T.-C.; Lin, W.-C.; Tsai, C.-L.; Lin, K.-P. Evaluation of coherence between ECG and PPG derived parameters on heart rate variability and respiration in healthy volunteers with/without controlled breathing. J. Med Biol. Eng. 2019, 39, 783–795. [Google Scholar] [CrossRef] [Green Version]
- Schäfer, A.; Vagedes, J. How accurate is pulse rate variability as an estimate of heart rate variability? A review on studies comparing photoplethysmographic technology with an electrocardiogram. Int. J. Cardiol. 2013, 166, 15–29. [Google Scholar] [CrossRef] [PubMed]
- Zou, G.Y. Sample size formulas for estimating intraclass correlation coefficients with precision and assurance. Stat. Med. 2012, 31, 3972–3981. [Google Scholar] [CrossRef] [PubMed]
- Liao, J.J. Sample size calculations for an agreement study. Pharm. Stat. 2010, 9, 125–132. [Google Scholar] [CrossRef] [PubMed]
- Stratford, P.W.; Binkley, J.; Solomon, P.; Finch, E.; Gill, C.; Moreland, J. Defining the minimum level of detectable change for the Roland-Morris questionnaire. Phys. Ther. 1996, 76, 359–365. [Google Scholar] [CrossRef] [PubMed]
- Head, H. Aphasia and Kindred Disorders of Speech; Cambridge University Press: Cambridge, UK, 1926. [Google Scholar]
- Bleiberg, J.; Garmoe, W.S.; Halpern, E.L.; Reeves, D.L.; Nadler, J.D. Consistency of within-day and across-day performance after mild brain injury. Neuropsychiatry Neuropsychol. Behav. Neurol. 1997, 10, 247–253. [Google Scholar] [PubMed]
| HRV Measure | EPA6-BioPatch | EPA6-HMPC | EPA6-HMHH |
|---|---|---|---|
| Mean RR (ms) | [−46.04, 18.03] | [−133.63, 95.12] | [−71.14, 28.74] |
| SDNN (ms) | [−36.36, 33.02] | [−63.90, 44.73] | [−30.23, 33.99] |
| HRMean (1/min) | [−2.18, 4.77] | [−8.99, 14.22] | [−3.06, 7.10] |
| STD HR (1/min) | [−5.15, 4.33] | [−5.31, 3.43] | [−2.40, 2.69] |
| RMSSD (ms) | [−34.64, 25.25] | [−95.48, 32.73] | [−34.33, 25.04] |
| LF/HF Power | [−9.81, 13.47] | [−5.85, 14.02] | [−8.74, 13.04] |
| HF Power (%PSD) | [−40.00, 48.68] | [−67.57, 40.21] | [−35.12, 45.37] |
| LF Power (%PSD) | [−18.42, 26.15] | [−19.55, 59.76] | [−18.91, 24.47] |
| HRV Measure | EPA6 | BioPatch | HMPC | HMHH |
|---|---|---|---|---|
| Mean RR (ms) | 140.11 ± 20.81 | 131.88 ± 19.40 | 137.20 ± 19.80 | 129.62 ± 9.26 |
| SDNN (ms) | 31.19 ± 15.57 | 34.33 ± 21.52 | 26.62 ± 6.55 | 23.87 ± 6.34 |
| HRMean (1/min) | 13.85 ± 3.24 | 12.69 ± 2.32 | 12.54 ± 1.96 | 12.55 ± 3.04 |
| STD HR (1/min) | 2.883 ± 1.383 | 3.173 ± 1.788 | 3.044 ± 0.582 | 2.452 ± 0.415 |
| RMSSD (ms) | 37.53 ± 8.91 | 44.70 ± 23.52 | 38.13 ± 6.65 | 33.50 ± 8.53 |
| LF/HF Power | 6.861 ± 4.594 | 6.631 ± 4.858 | 4.896 ± 2.507 | 8.365 ± 7.601 |
| HF Power (%PSD) | 25.774 ± 6.284 | 21.486 ± 1.716 | 33.826 ± 8.110 | 29.726 ± 8.390 |
| LF Power (%PSD) | 29.100 ± 7.315 | 24.106 ± 3.707 | 32.658 ± 7.915 | 31.680 ± 9.291 |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Killian, J.M.; Radin, R.M.; Gardner, C.L.; Kasuske, L.; Bashirelahi, K.; Nathan, D.; Keyser, D.O.; Cellucci, C.J.; Darmon, D.; Rapp, P.E. Alternative Devices for Heart Rate Variability Measures: A Comparative Test–Retest Reliability Study. Behav. Sci. 2021, 11, 68. https://doi.org/10.3390/bs11050068