Abstract
(1) Background: While smartphones are among the primary devices used in telemedical applications, smart TV healthcare apps remain scarce despite smart TVs’ penetration of home settings. The present study’s objective was to develop and validate the first smart TV-based visual acuity (VA) test (Democritus Digital Visual Acuity Test (DDiVAT)) that allows a reliable VA self-assessment. (2) Methods: This is a prospective validation study. DDiVAT introduces several advanced features for reliable VA self-testing, among them: automatic calibration, voice recognition, voice guidance, automatic calculation of VA indexes, and a smart TV-based messaging system. Normal and low-vision participants were included in the validation. DDiVAT VA results (VADDiVAT) were compared against those from: (a) the gold-standard conventional ETDRS (VAETDRS); and (b) an independent ophthalmologist who monitored the self-examination testing (VARES). Comparisons were performed by noninferiority testing (margin set at 2.5 letters) and intraclass correlation coefficients (ICCs). DDiVAT’s test-retest reliability was assessed within a 15-day time window. (3) Results: A total of 300 participants (185 and 115 with normal and low vision, respectively) responded to the ETDRS and DDiVAT. The mean difference in letters was −0.05 for VAETDRS–VARES, 0.62 for VARES–VADDiVAT, and 0.67 for VAETDRS–VADDiVAT, significantly lower than the 2.5-letter noninferiority margin. ICCs indicated an excellent level of agreement, collectively and for each group (0.922–0.996). All displayed letters in DDiVAT presented a similar difficulty. The overall accuracy of the voice recognition service was 96.01%. The ICC for the VADDiVAT test-retest was 0.957. (4) Conclusions: The proposed DDiVAT presented non-significant VA differences with the ETDRS, suggesting that it can be used for accurate VA self-assessment in telemedical settings, both for normal and low-vision patients.
1. Introduction
National healthcare systems (NHS) are under constant pressure to address the ever-increasing ophthalmological needs of their beneficiaries. Modern lifestyle and increased life expectancy result in an exponential increase in the overall costs of ophthalmological care. Among the primary ocular diseases that escalate care provision costs are age-related macular degeneration (ARMD) and diabetic retinopathy, since their increasing prevalence results in a growing number of patients with irreversible visual acuity (VA) damage [1,2]. Since VA reduction deteriorates the overall visual capacity, it exerts a devastating impact on productivity and quality of life [3,4]. The need for efficient management of sight-threatening diseases has highlighted the importance of telemedicine, which is further boosted by technological advancements in smart hardware and networking. Numerous telemedical services in ophthalmology have been introduced [5,6], such as the screening of diabetic retinopathy [7], ARMD [8], glaucoma [9], and amblyopia [10].
Smartphones have traditionally been used in telemedicine programs, primarily as sensor interfaces, since their high prevalence (about 80% worldwide) makes them the primary devices for continuous health-data collection [11,12]. Smart TVs are another technology with high prevalence among the general public; more than 120 million Americans owned a smart TV in 2021 [13]. Although smart TVs lack the smartphones’ mobility, their big, high-resolution screens allow for the development of diagnostic tests that cannot be performed on the small screens of smartphones or tablets. Despite that fact, only a few smart TV health-related applications have been introduced, and these are mainly lifestyle-oriented [14,15].
In a hospital setting, VA is the primary clinical parameter for the screening and the diagnosis of the majority of ophthalmological diseases. Therefore, it is no surprise that VA’s importance has also been indicated in telemedical settings. Several conventional VA charts and reading tests were converted to digital applications [16,17], while others were developed solely as digital applications in order to support telemedical initiatives [18,19].
However, the telemedical screening of VA cannot be reliably performed using a smartphone or even a tablet, primarily due to the fact that the required distance between the patient and the screen is at least three meters. A full conventional VA examination requires presentation of five symbols in a row with at least logMAR 1 size. To our knowledge, none of the commercially available smartphones or tablets has a screen large enough to support logMAR 1 VA assessment.
Accordingly, smart TVs have the technical potential to support telemedical VA examination and to replicate full conventional VA testing with five symbols in a row at distances of at least 3 m. Within this context, the primary objective of this study was to develop and validate a smart TV-based VA test (Democritus Digital Visual Acuity Test (DDiVAT)) for the reliable assessment of VA in any telemedical setting.
2. Materials and Methods
2.1. Setting
This was a prospective study, divided into a developmental and a validation phase. The protocol adhered to the tenets of the Declaration of Helsinki and was approved by the Institutional Review Board of Democritus University of Thrace. Written informed consent was provided by all participants. The official registration number of the study is NCT04739137.
2.2. Development of the Democritus Digital Visual Acuity Test-DDiVAT
DDiVAT’s study objectives required that a series of fundamental prerequisites had to be addressed: (a) In addition to assisted VA testing (i.e., by a caregiver in a remote facility), DDiVAT had to support VA self-examination; (b) In addition to normal vision patients, even low-vision ones had to be able to perform the VA self-examination; (c) No specialized hardware (other than a smart TV and a smartphone with internet connection) should be used; (d) The overall DDiVAT service should be cloud-based.
2.2.1. DDiVAT System Architecture
To address the aforementioned prerequisites, DDiVAT was built as a System-as-a-Service (SAS) with the following primary components: (a) a DDiVAT administration site; (b) a smart TV application (TV-app); and (c) a smartphone app (SP-app). DDiVAT’s VA self-examination mode required several advanced services, among them: (a) voice guidance; (b) voice recognition; and (c) automatic calculation of VA scores. In detail, the DDiVAT SAS consisted of:
- An Application Programming Interface (API), accessible through the cloud infrastructure. A representational state transfer (REST) API was selected. The API handles all communication between the components (TV-app, SP-app, and the administration site), manages all related information, and provides the smart feature of voice recognition. MongoDB Atlas™ was selected as the non-SQL database (SQL: Structured Query Language);
- A smart TV application (TV-app), developed in Kotlin and built for Android, that offered all the functionality except for the voice recognition feature;
- A smartphone application (SP-app), developed in Kotlin and built for Android, that performed: (a) application control and navigation; (b) voice guidance; (c) communication with the voice recognition cloud service; and (d) pairing with the TV-app;
- An administrator’s site, built with the Angular 9 framework and accessible from any internet browser on any smart device (smartphone, tablet, or PC), that provided the full administration functionality.
Internet connections between the system’s components were implemented using WebSockets. DDiVAT’s SAS is presented in Figure 1. The unified modeling language (UML) diagram for the voice recognition service is shown in Figure 2, along with the relevant smartphone and smart TV screens.
Figure 1.
DDiVAT System-as-a-Service (SAS) explained.
Figure 2.
DDiVAT’s unified modeling language (UML) diagram for the voice recognition service.
2.2.2. Examination Modes of DDiVAT
Conventional VA tests contain a set of characters, symbols, or phrases of progressively smaller size, read by the patient from a predefined distance. Modern tests such as the MNREAD [20], the DDART [17,18], and the ETDRS [21] use a logarithmic scale, reducing the size of the reading text by a factor of 10^−0.1 between two consecutive sizes. The condition for VA of Snellen fraction 20/20 (85 letters, logMAR = 0, visual acuity score (VAS) = 100) is reading text of a size such that the apparent angle of each character is δφ = 5 min of arc. The height H of a character at any logMAR, when viewed from distance D, is given by Formula (1):

H = D∙tan δφ∙10^logMAR(1)

DDiVAT introduced two modes for remote VA examination: (a) the operator-assisted mode (OAM); and (b) the self-examination mode (SEM). OAM requires the installation of the DDiVAT TV app (TV-app) on a smart TV, while SEM additionally requires the installation of the DDiVAT smartphone app (SP-app) on a smartphone. The TV-app and the SP-app connect via the internet to a dedicated DDiVAT server. The server establishes the connection between the two smart devices, manages the patients, stores VA measurements in a database, hosts a web interface through which the care provider administers the overall DDiVAT service, and enables chat-mode communication between the patient and the care provider through the TV-app. DDiVAT’s control flow is schematically demonstrated in Figure 3.
Figure 3.
DDiVAT Control Flow chart.
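Formula (1) can be evaluated directly; the following is a minimal illustrative sketch (not DDiVAT’s production code):

```python
import math

ARC_MIN = math.pi / (180 * 60)  # one minute of arc, in radians

def letter_height_mm(distance_mm: float, logmar: float) -> float:
    """Formula (1): H = D * tan(5') * 10**logMAR."""
    return distance_mm * math.tan(5 * ARC_MIN) * 10 ** logmar

# At the conventional 3 m distance a logMAR 0 letter is ~4.36 mm tall,
# and a logMAR 1 letter ten times that.
print(round(letter_height_mm(3000, 0.0), 2))  # 4.36
print(round(letter_height_mm(3000, 1.0), 1))  # 43.6
```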
In OAM, a caregiver (operator) navigates DDiVAT via the TV remote control and manually inputs the examinee’s reading errors in the TV-app; in effect, OAM simulates a conventional VA examination in a remote setting. In SEM, all advanced DDiVAT features are enabled and the TV-app and SP-app work synergistically: the patient operates DDiVAT through the smartphone, responding to verbal instructions from the SP-app, while reading errors are identified automatically by DDiVAT’s voice recognition service. To ensure that even low-vision patients with VA of logMAR 1 are able to use DDiVAT, a simple, color-based interface was designed. All user actions are performed using the four colored buttons of the smart TV remote control or, alternatively, the four virtual color buttons on the smartphone screen that correspond to the same colored virtual buttons on the TV screen, as shown in Figure 4.
Figure 4.
DDiVAT color-based navigation: (a) DDiVAT smart TV-app; (b) Smart TV remote control; (c) DDiVAT SP-app.
Although DDiVAT uses a high-end voice recognition service to identify the patient’s responses, an additional verification step has been implemented to ensure that potential failures of the service do not compromise DDiVAT’s accuracy. In this step, each response recognized by DDiVAT is displayed on the TV screen at logMAR 1.0 size (logMAR 1 verification). The examinee then confirms whether the displayed letter is the one he/she actually said, by pressing the corresponding button in the SP-app (Figure 5).
Figure 5.
logMAR 1 verification step: (a) DDiVAT’s TV-app verification screen; (b) DDiVAT’s SP-app verification screen.
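The verification loop described above can be sketched as follows; this is an illustrative Python sketch, and the function and callback names are ours, not DDiVAT’s:

```python
def verified_response(recognize, confirm, max_retries=3):
    """Ask the voice service for a letter, echo it at logMAR 1 size,
    and accept it only once the examinee confirms it is what they said."""
    for _ in range(max_retries):
        letter = recognize()      # voice recognition result
        if confirm(letter):       # examinee presses 'yes' on the SP-app
            return letter
    return None                   # give up after repeated mismatches

# Simulated session: the service first mishears 'B' as 'D'.
heard = iter(["D", "B"])
result = verified_response(lambda: next(heard), lambda l: l == "B")
print(result)  # B
```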
2.2.3. Automatic Text Size Calibration of DDiVAT
In conventional clinical settings, VA assessment is usually performed at distances between 3 and 4 m (depending on the size of the examination room). According to Equation (1), a typical distance D = 300 cm requires height H of a character corresponding to logMAR = 0 to be equal to 4.36 mm. DDiVAT allows a variable, user-defined examination distance with default largest character size of logMAR = 1.0.
The TV-app automatically acquires the physical size and pixel resolution of the smart TV screen, calculates the size of the displayed letters according to the examination distance, and terminates the examination when no smaller logMAR can be displayed (each character requires at least a matrix of 5 × 5 pixels to be correctly displayed). DDiVAT allows testing up to logMAR = −0.3, provided that the TV screen has sufficient resolution.
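The calibration logic can be sketched as follows; an illustrative Python sketch (not DDiVAT’s implementation), assuming a 55-inch 4K screen, roughly 685 mm tall, viewed from 3 m:

```python
import math

ARC_MIN = math.pi / (180 * 60)  # one minute of arc, in radians

def smallest_logmar(screen_h_mm, v_pixels, distance_mm, min_px=5):
    """Smallest logMAR whose letters still span at least `min_px` pixels
    vertically, given the screen's physical height and resolution."""
    pitch_mm = screen_h_mm / v_pixels          # physical size of one pixel
    min_h_mm = min_px * pitch_mm               # smallest printable letter
    h0 = distance_mm * math.tan(5 * ARC_MIN)   # logMAR 0 height, Formula (1)
    return math.log10(min_h_mm / h0)

# A 55-inch 4K TV (screen height ~685 mm, 2160 vertical pixels) at 3 m
# can in principle display optotypes below logMAR -0.3.
print(round(smallest_logmar(685, 2160, 3000), 2))
```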
2.3. Validation of DDiVAT
2.3.1. Participants
Participants were enrolled from the outpatient service of the Department of Ophthalmology on a consecutive-if-eligible basis. The eligibility criteria were the following:
(1) Age between 18 and 75 years;
(2) Best spectacle-corrected distance VA (BSCDVA) ≤ 1.0 logMAR (≥35 letters);
(3) Spherical equivalent (SE) between −8.00 D and +6.00 D.
The exclusion criteria were the following:
(1) Diagnosis of neurological, mental, and/or psychiatric disease, irrespective of medication for these diseases;
(2) Inability to understand the study objectives;
(3) Eye surgery within the last month.
Patients with BSCDVA < 0.6 logMAR (>55 letters) populated the Normal Vision Group (NVG), while the remaining patients populated the Low Vision Group (LVG). DDiVAT’s test-retest assessment was completed for all participants within a 15-day window.
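The letter scores quoted in the criteria above follow the standard ETDRS conversion (85 letters at logMAR 0, 50 letters per logMAR unit); a minimal sketch:

```python
def logmar_to_letters(logmar: float) -> float:
    """ETDRS letter score: 85 letters at logMAR 0, minus 5 letters
    per 0.1 logMAR line (i.e., 50 letters per logMAR unit)."""
    return 85 - 50 * logmar

def letters_to_logmar(letters: float) -> float:
    return (85 - letters) / 50

# The study's cut-offs: logMAR 1.0 is 35 letters (eligibility floor),
# logMAR 0.6 is 55 letters (the NVG/LVG boundary).
print(logmar_to_letters(1.0), logmar_to_letters(0.6))  # 35.0 55.0
```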
2.3.2. Examination Process—Data Collection
DDiVAT’s validation was performed using a 4K (3840 × 2160) 55-inch Android smart TV and a 6.7-inch Android smartphone. The brightness of the TV and smartphone screens was kept constant in all tests, and all measurements were made under the same conditions for all participants. BSCDVA was evaluated in one randomly selected eye of each study participant with the conventional ETDRS test [21] at a 3 m distance. This variable was named VAETDRS and was calculated in logMAR and letters. Subsequently, the participant underwent a 30-min training course on the objectives and the operation of DDiVAT and was asked to perform a self-examination (SEM) in the presence of an independent researcher, who was not allowed to interact with him/her. During SEM, the researcher recorded: (a) the letters actually displayed in the TV-app for each logMAR (LettersTV); (b) the letters identified by the participant (LettersPar); and (c) the letters recognized by the voice recognition service (LettersDDiVAT). At the end of the examination, DDiVAT automatically calculated BSCDVA in logMAR and letters; this variable was named VADDiVAT. The researcher also calculated a BSCDVA score, named VARES, based on the letters that each participant identified during the examination process (LettersPar). Summarizing, the following clinical parameters were calculated for each participant:
(1) Monocular BSCDVA measured with the conventional ETDRS (VAETDRS);
(2) Monocular BSCDVA measured by the researcher through the DDiVAT examination (VARES);
(3) Monocular BSCDVA automatically calculated by the DDiVAT application (VADDiVAT).
Following SEM testing, each participant responded to a structured questionnaire that pertained to his/her views on the DDiVAT test, and familiarization with smart technology (Appendix A).
The main measured quantities are summarized in Table 1.
Table 1.
Definition and measurement of main variables.
2.3.3. Statistical Analysis
According to an a priori power analysis, for an effect size of 0.53 for the BSCDVA, 298 participants would be required for the study to have a power of 0.8 at the 0.05 significance level. The Shapiro–Wilk test assessed the deviation of the parameter values from the normal distribution. Normally distributed data are reported as mean ± standard deviation (SD), while non-normally distributed data are reported as median and interquartile range (IQR) [25%, 75%]. All statistical analyses were performed with MedCalc version 20.0.0 (MedCalc Software, Mariakerke, Belgium).
A noninferiority test was performed between VAETDRS, VARES, and VADDiVAT with a margin of 2.5 letters according to former reports [22]. The level of agreement among the three VA methods was evaluated with the intraclass correlation coefficients (ICCs) and using Bland–Altman plots. Test-retest reliability of the VADDiVAT was evaluated by ICCs.
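The margin check can be sketched as follows; this is an illustrative Python sketch with toy numbers and a normal approximation for the confidence interval, not the MedCalc procedure used in the study:

```python
from math import sqrt
from statistics import mean, stdev

def noninferior(diffs, margin=2.5, z=1.96):
    """Paired noninferiority check: accept if the 95% CI of the mean
    letter difference lies entirely below the margin (normal approx.)."""
    m = mean(diffs)
    half = z * stdev(diffs) / sqrt(len(diffs))
    return m, (m - half, m + half), (m + half) < margin

# Toy paired differences (ETDRS minus DDiVAT, in letters) -- illustrative only.
m, ci, ok = noninferior([1, 0, 1, 2, 0, 1, -1, 1, 0, 2, 1, 0])
print(round(m, 2), ok)  # 0.67 True
```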
To gain further insight into the difficulty that each letter presented to the participants and the accuracy of the voice recognition service, the following confusion matrices were constructed: (a) the letters actually presented in the TV-app (LettersTV) versus the letters read by the participants (LettersPar); and (b) the letters read by the participants (LettersPar) versus the letters identified by the voice recognition service (LettersDDiVAT).
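Both matrices can be assembled from the recorded letter streams; the following is a minimal sketch with hypothetical toy data (function and variable names are ours):

```python
from collections import Counter

def confusion(displayed, read):
    """Tally (displayed, read) letter pairs and derive per-letter
    recall (TPR) and precision (PPV), as in the study's Figure 10."""
    pairs = Counter(zip(displayed, read))
    tpr = {l: pairs[(l, l)] / sum(v for (d, _), v in pairs.items() if d == l)
           for l in set(displayed)}
    ppv = {l: pairs[(l, l)] / sum(v for (_, r), v in pairs.items() if r == l)
           for l in set(read)}
    return pairs, tpr, ppv

# Toy session (hypothetical): 'B' is once misread as 'E'.
shown = list("EEHBB")
said = list("EEHBE")
pairs, tpr, ppv = confusion(shown, said)
print(tpr["B"], round(ppv["E"], 2))  # 0.5 0.67
```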
3. Results
From the 378 enrolled participants, 300 (79.3%) fulfilled the study mandates (144 men, 156 women). A total of 185 populated the NVG and the remaining 115 participants the LVG, with a median BSCDVA of 0.22 logMAR and 0.78 logMAR, respectively. Median age was 69 years, with non-significant differences between the NVG and LVG (p = 0.809). A total of 86 study participants (28.7%) had no eye pathology, 70 (23.3%) had exudative age-related macular degeneration (AMD), 18 (6%) had non-exudative AMD, 76 (25.3%) had diabetic macular edema, 8 (2.67%) had branch retinal vein occlusion, 2 (0.67%) had retinal detachment, 24 (8%) had cataract, 4 (1.33%) had Irvine–Gass syndrome, 2 (0.67%) had macular hole, 2 (0.67%) had corneal transplantation, and 5 (1.67%) had glaucoma. A total of 51% of the NVG and 33% of the LVG participants owned a smartphone, while 42% (NVG) and 32% (LVG) had a smart TV in their home setting. Demographic characteristics and clinical parameters of the two groups are shown in Table 2.
Table 2.
Demographic Characteristics and Clinical Parameters.
The VAETDRS, VARES, and VADDiVAT are presented in Table 3 for all participants as well as for the NVG and LVG. Figure 6, Figure 7 and Figure 8 show Bland–Altman plots evaluating the differences (in letters) between VAETDRS and VARES, between VAETDRS and VADDiVAT, and between VARES and VADDiVAT, for both NVG and LVG.
Table 3.
Comparison of median [IQR 1] of BSCDVA 2 (in logMAR).
Figure 6.
Bland-Altman plots comparing VAETDRS and VARES in NVG (blue) and LVG (red).
Figure 7.
Bland-Altman plots comparing VAETDRS and VADDiVAT in NVG (blue) and LVG (red).
Figure 8.
Bland-Altman plots comparing VARES and VADDiVAT in NVG (blue) and LVG (red).
Mean differences in letters (VAETDRS–VARES), (VARES–VADDiVAT), and (VAETDRS–VADDiVAT), as well as the corresponding 95% confidence intervals (CIs), are presented in Figure 9. The noninferiority margin was set at 2.5 letters (equivalent to 0.05 logMAR). The CI for the difference between VAETDRS and VARES is almost symmetrical around the 0-letter line. On the other hand, the CIs for VAETDRS–VADDiVAT and VARES–VADDiVAT lie on the right side of the 0-letter line. This is expected, since the accuracy of the voice recognition service is 96.01%, which means that: (a) a small number of letters correctly identified by the examinee will be wrongly recognized by the SP-app’s voice recognition service and counted as errors (false negatives); or (b) in extreme cases, the examinee wrongly identifies the displayed letter but the voice recognition service of the SP-app recognizes the response as correct (false positive). However, the latter scenario is highly improbable (probability of wrong character recognition × probability of wrongly recognizing the displayed letter = 0.039 × 1/24). Therefore, VADDiVAT is expected to be consistently worse than both VAETDRS and VARES, but in all cases within the 2.5-letter noninferiority margin.
Figure 9.
Noninferiority analysis (95% CI and mean difference value as “*”, using a 2.5-letter margin), of VAETDRS vs. VARES (denoted as “∆”), VAETDRS vs. VADDiVAT (denoted as “o”), and VARES vs. VADDiVAT (denoted as “□”) for NVG (black), LVG (red), and all patients (blue).
ICCs and LoAs for BSCDVA are presented in Table 4. For all comparisons (VAETDRS vs VARES, VARES vs VADDiVAT, VAETDRS vs VADDiVAT), ICCs indicated excellent level of agreement for both groups (ICCs: NVG: from 0.970 to 0.991; LVG: from 0.922 to 0.968), and for all participants (from 0.988 to 0.996).
Table 4.
Intraclass correlation coefficients for study participants.
The confusion matrix of the letters displayed on the smart TV (LettersTV) versus the letters that were read by the participants (LettersPar) is presented in Figure 10. The sum of each row is the actual number of appearances of each letter in the TV-app, whereas the sum of each column is the number of times each letter was said by the examinees. The numbers in the main diagonal correspond to the letters that were correctly identified by the examinees. It has to be mentioned that, although DDiVAT displays only 10 Latin characters (A, B, E, H, K, O, P, T, Y, X), the confusion matrix contains almost all the characters of the alphabet, since the examinees occasionally read non-displayed characters by mistake. This is verified by the fact that only the rows of the aforementioned Latin characters have a non-zero sum. The star symbol “*” indicates cases in which the examinee declared that he/she was unable to read the displayed letter. The true positive rate (TPR) (or recall, or sensitivity) and the false negative rate (FNR) of examinee reading for each letter are summarized in the rightmost column of the confusion matrix. The bottom row of the confusion matrix shows the positive predictive value (PPV), or precision, and the false discovery rate (FDR) of patient reading for each letter.
Figure 10.
Confusion matrix of the letters displayed in DDiVAT’s TV-app (LettersTV) versus the letters read by the participants (LettersPar).
All letters displayed by the TV-app presented a similar difficulty to the examinees, as demonstrated in Figure 11, which shows the percentage of correct readings for each letter.
Figure 11.
Percentage of correct identification for each letter by the examinees.
The confusion matrix of the letters read by both NVG and LVG participants (LettersPar) versus the letters that were automatically recognized by the SP-app (LettersDDiVAT) is presented in Figure 12. The sum of each column is the number of times the specific letter was recognized by the voice recognition service. The sum of each row is the number of times the specific letter was read by the participants, which should equal the sum of the corresponding column of the confusion matrix in Figure 10. The rightmost column summarizes the true positive rate (TPR) (or recall, or sensitivity) and the false negative rate (FNR) of automatic letter recognition by the smartphone. The bottom row shows the positive predictive value (PPV), or precision, and the false discovery rate (FDR) of letter recognition.
Figure 12.
Confusion matrix of the letters read by participants (LettersPar) versus the letters recognized by the smartphone (LettersDDiVAT).
The sensitivity of the voice recognition service is presented in Figure 13. The majority of the letters were identified with over 90% sensitivity, except for the letter “I”; however, this had no apparent impact on DDiVAT’s reliability, since “I” is not among the letters included in the VA test.
Figure 13.
Percentage of letters correctly identified by the voice recognition service (sensitivity). The number of each letter’s appearance is shown at the top of each column. Blue bars: Latin letters used in the test and appearing in TV-app; orange bars: letters not included in the test, thus not appearing in TV-app.
Participants’ responses to the questionnaire and comparisons with their demographic profile are presented in Table 5. Younger age, male gender, smartphone and smart TV use were associated with better readiness to use a telemedical application such as the DDiVAT.
Table 5.
Responses of study participants.
The ICC for the test-retest of VADDiVAT was 0.957. Of the 300 participants who completed the SEM re-testing within the 15-day window, 34 had to repeat the 30-min training course to refamiliarize themselves with the test mandates.
4. Discussion
There is a growing demand for digital healthcare services. According to conservative estimates, more than 200,000 health-related applications were available on the iTunes and Google Play stores in 2018, and their number of downloads increased from 1.3 billion in 2013 to 3.7 billion in 2017 [23]. Moreover, in 2017 more than 75% of Americans stated that mobile technology was important for managing their health.
Despite the fact that smart TVs show impressive prevalence in Western societies, smart TV health-related applications primarily focus on lifestyle. Unlike smartphones, smart TVs have not been used as clinical data-collecting devices [24], although their big, high-contrast, high-resolution screens make them ideal for displaying text and/or symbols for distance visual acuity examination. It is known that in distance VA examination, the height H of a character or a symbol at any logMAR, when viewed from distance D, is derived from Formula (1). Since a distance VA examination requires the presentation of one line of five characters or symbols, a length L of at least 10 character widths is required (five letters plus inter-letter spacing, assuming square fonts). Therefore, the length L can be calculated by Formula (2):
L = 10H = 10∙D∙tan δφ∙10^logMAR,(2)

Assuming a screen with an aspect ratio of 16:9, the width of the screen equals 16/√(16² + 9²) of the diagonal d, so the diagonal that accommodates a row of length L is derived in the following Formula:

d = (√(16² + 9²)/16)∙L = (√337/16)∙10∙D∙tan δφ∙10^logMAR,(3)

which can be further simplified (for d in inches and D in cm):

d ≈ 0.0066∙D∙10^logMAR(4)
Applying Equation (4) in a conventional distance VA examination setting at 3 or 4 m, we can easily assess the minimal size of the screen for a full testing (Table 6).
Table 6.
Required screen diagonal (inches).
It becomes obvious that a screen size of 20.3 inches is the absolute minimum for a full distance VA examination.
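As a sanity check, the screen-size requirement can be computed directly from Formulas (1) and (2). This sketch assumes a bare row of optotypes with no margins around it, so its values may fall slightly below the figures reported in Table 6:

```python
import math

ARC_MIN = math.pi / (180 * 60)  # one minute of arc, in radians

def min_diagonal_inches(distance_m: float, logmar: float) -> float:
    """Smallest 16:9 diagonal whose width fits one row of five optotypes
    (L = 10 letter-heights, Formula (2)), with no extra margins."""
    length_cm = 10 * distance_m * 100 * math.tan(5 * ARC_MIN) * 10 ** logmar
    diag_cm = length_cm * math.sqrt(16**2 + 9**2) / 16  # width -> diagonal
    return diag_cm / 2.54

print(round(min_diagonal_inches(3, 1.0), 1))  # 19.7 (no margins)
print(round(min_diagonal_inches(4, 1.0), 1))  # 26.3 (no margins)
```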
Regardless of the technical details of the healthcare apps, their reliability should be supported by evidence-based medical outcomes. However, the majority of Western countries lack regulatory oversight for the provided digital healthcare apps, while global access to them makes their regulation even more difficult. Questionable reliability of any digital healthcare app could mislead the general public and contribute to poor overall disease management. Therefore, only validated apps can ensure the accuracy of their measured outcomes [25].
To alleviate concerns about the accuracy of DDiVAT’s VA measurements, we applied a validation phase against the ETDRS, the gold-standard VA test, allowing a 2.5-letter noninferiority margin [22], which is extremely strict even for repeated ETDRS testing in clinical settings. The self-assessment mode of DDiVAT, which incorporates a high-end voice recognition service, achieved a two-letter difference, which is highly acceptable both for clinical and for research settings. The non-significant difference between DDiVAT and ETDRS measurements was achieved by: (a) the automatic identification of the smart TV’s screen characteristics for the accurate display of letters according to the examinee’s distance; and (b) the logMAR 1 voice recognition verification step that allows the examinee to repeat his/her response when needed.
This study combines a smart TV with cloud infrastructure, creating new possibilities in the screening and follow-up of ophthalmological and systemic diseases that present with reduced visual acuity. VA is the fundamental clinical parameter for the screening and follow-up of ARMD and diabetic retinopathy, the leading causes of irreversible damage to visual capacity in Western societies. Smart TVs play a crucial role in home care, since they are the most convenient gadget for entertainment, news briefing, and communication for seniors, who are the primary target population for sight-threatening diseases. With the validation of the DDiVAT medical application, smart TVs become reliable medical data-collecting devices. Thus, the importance of DDiVAT becomes self-evident [1,2].
However, seniors are not the only target population for reliable smart TV-based VA testing. Smart TVs and tablets are the first technological gadgets that preschool minors become familiar with [26]. In fact, DDiVAT’s operator-assisted mode could be used for the visual acuity examination of preschool minors, with a guardian or teacher as the application’s operator. Therefore, it may contribute to the prevention of amblyopia and vision-related learning disabilities by increasing the awareness of parents, especially in vulnerable populations [10,27].
The role of reliable smart TV-based VA self-testing becomes even more important in pandemics, which result in major reductions in ophthalmological services, with many beneficiaries omitting necessary care because of fear of infection, inability to access services, or cancelation of health services [28,29,30]. It is a common belief that in pandemics such as COVID-19, large populations were deprived of necessary or appropriate care, with potentially significant harm to their visual capacity.
DDiVAT offers further advantages beyond accurate VA measurements: (a) the OAM engages the examinee and the family with the eye disorder, which facilitates optimal disease management, especially in chronic diseases [18]; (b) DDiVAT’s innovative bidirectional communication, in which the physician’s messages appear on the patient’s TV screen, fosters the patient–doctor bond; and (c) the global access to DDiVAT and the automatic voice recognition service could support multilanguage screening initiatives with minimal modifications to the application.
As the first report of a smart TV-based distance VA examination, this study does not allow direct comparisons with former similar publications. Nevertheless, an extensive literature review was attempted. The validation of a web-based application for the assessment of distance VA and refractive error, using a smartphone and a computer, was recently reported [31]; unfortunately, that study showed lower reliability and accuracy, especially in low-vision patients. The T-Assito and Motiva projects, which use a smart TV, were also recently reported: T-Assito included emergency calls and notifications related to users’ health, while Motiva introduced a smart TV-based service for monitoring vital signs in patients with chronic diseases [19].
Certain limitations of the study should be taken into consideration prior to the interpretation of our outcomes. Although DDiVAT is linguistically adaptable to any European country, it was validated in a Greek-speaking population. SEM testing in other European languages should therefore be treated with caution, unless a validation study confirms the accuracy of the voice recognition service in that particular language. However, DDiVAT’s OAM can be safely used in any telemedical setting, regardless of the language of the subject population. Finally, it has to be noted that no digital chart can assess refractive errors outside clinical settings without specialized healthcare personnel.
5. Conclusions
In conclusion, the results show that DDiVAT is a smart TV application that provides reliable distance VA measurements in both normal and low-vision patients. Our validation outcomes suggest reliability comparable with the ETDRS, the gold standard for distance VA examination. However, contrary to the ETDRS, DDiVAT supports self-examination and can be used in any remote setting, provided that a smart TV and an internet connection are present. Within this context, DDiVAT introduces a new potential in teleophthalmology for both screening and follow-up initiatives.
Author Contributions
G.L. conceived and supervised the study, designed the validation phase, contributed to the data interpretation, and drafted the manuscript; K.D. contributed to the design and the implementation of DDiVAT application, to the data interpretation, figure preparation, and drafted the manuscript; E.-K.P. contributed to the figure preparation, data acquisition, analyzed and interpreted the data, made the statistical analysis and drafted the manuscript; V.P. contributed to the implementation of DDiVAT application and data interpretation; M.B. contributed to the data acquisition, data analysis, data interpretation, and statistical analysis; C.P. contributed to the data acquisition, data analysis, and data interpretation; D.D. contributed to the data analysis and data interpretation; P.N. contributed to the data analysis and data interpretation. All authors critically revised the manuscript for important intellectual content, had full access to all the data in the study and had final responsibility to submit for publication. All authors have read and agreed to the published version of the manuscript.
Funding
The authors disclosed receipt of the following financial support for the research, authorship and/or publication of this article: This work was supported by a research grant from Bayer Hellas.
Institutional Review Board Statement
The protocol adhered to the tenets of the Declaration of Helsinki and was approved by the Institutional Review Board of Democritus University of Thrace.
Informed Consent Statement
Written informed consent was provided by all participants.
Data Availability Statement
Not applicable.
Conflicts of Interest
The present study was funded by a research grant from Bayer Hellas. The supporting source had no involvement in, or restrictions on, the study design; the collection, analysis, and interpretation of data; or the writing of the paper. The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Abbreviations
| API | Application Programming Interface |
| ARMD | Age-Related Macular Degeneration |
| BSCDVA | Best Spectacle-Corrected Distance Visual Acuity |
| CI | Confidence Interval |
| COVID-19 | Coronavirus Disease 19 |
| DDART | Democritus Digital Acuity & Reading Test |
| DDiVAT | Democritus Digital Visual Acuity Test |
| ETDRS | Early Treatment Diabetic Retinopathy Study |
| FDR | False Discovery Rate |
| FNR | False Negative Rate |
| ICC | Intraclass Correlation Coefficient |
| IQR | Interquartile Range |
| LoA | Limits of Agreement |
| logMAR | Logarithm of the Minimum Angle of Resolution |
| LVG | Low Vision Group |
| MNREAD | Minnesota Low Vision Reading Test |
| NHS | National Healthcare Systems |
| NVG | Normal Vision Group |
| OAM | Operator-Assisted Mode |
| PC | Personal Computer |
| PPV | Positive Predictive Value |
| REST | Representational State Transfer |
| SAS | System-as-a-Service |
| SD | Standard Deviation |
| SEM | Self-Examination Mode |
| Smart TV | Smart Television |
| SP-app | Smartphone app |
| SQL | Structured Query Language |
| TPR | True Positive Rate |
| UML | Unified Modeling Language |
| VA | Visual Acuity |
| VAS | Visual Acuity Score |
Appendix A
Questionnaire

References
- GBD 2019 Blindness and Vision Impairment Collaborators, Vision Loss Expert Group of the Global Burden of Disease Study. Causes of blindness and vision impairment in 2020 and trends over 30 years, and prevalence of avoidable blindness in relation to VISION 2020: The Right to Sight: An analysis for the Global Burden of Disease Study. Lancet Glob. Health 2021, 9, e144–e160. [Google Scholar] [CrossRef]
- Gopalakrishnan, S.; Velu, S.; Raman, R. Low-vision intervention in individuals with age-related macular degeneration. Indian J. Ophthalmol. 2020, 68, 886–889. [Google Scholar] [CrossRef] [PubMed]
- Brown, M.M.; Brown, G.C.; Sharma, S.; Landy, J.; Bakal, J. Quality of life with visual acuity loss from diabetic retinopathy and age-related macular degeneration. Arch. Ophthalmol. 2002, 120, 481–484. [Google Scholar] [CrossRef] [PubMed]
- Shah, V.A.; Gupta, S.K.; Shah, K.V.; Vinjamaram, S.; Chalam, K.V. TTO utility scores measure quality of life in patients with visual morbidity due to diabetic retinopathy or ARMD. Ophthalmic Epidemiol. 2004, 11, 43–51. [Google Scholar] [CrossRef] [PubMed]
- Labiris, G.; Panagiotopoulou, E.K.; Kozobolis, V.P. A systematic review of teleophthalmological studies in Europe. Int. J. Ophthalmol. 2018, 11, 314–325. [Google Scholar] [CrossRef]
- Patel, S.; Hamdan, S.; Donahue, S. Optimising telemedicine in ophthalmology during the COVID-19 pandemic. J. Telemed. Telecare 2020, 28, 498–501. [Google Scholar] [CrossRef]
- Galiero, R.; Pafundi, P.C.; Nevola, R.; Rinaldi, L.; Acierno, C.; Caturano, A.; Salvatore, T.; Adinolfi, L.E.; Costagliola, C.; Sasso, F.C. The Importance of Telemedicine during COVID-19 Pandemic: A Focus on Diabetic Retinopathy. J. Diabetes Res. 2020, 2020, 9036847. [Google Scholar] [CrossRef]
- Mintz, J.; Labiste, C.; DiCaro, M.V.; McElroy, E.; Alizadeh, R.; Xu, K. Teleophthalmology for age-related macular degeneration during the COVID-19 pandemic and beyond. J. Telemed. Telecare 2020, 29, 1357633X20960636. [Google Scholar] [CrossRef]
- Odden, J.L.; Khanna, C.L.; Choo, C.M.; Zhao, B.; Shah, S.M.; Stalboerger, G.M.; Bennett, J.R.; Schornack, M.M. Telemedicine in long-term care of glaucoma patients. J. Telemed. Telecare 2020, 26, 92–99. [Google Scholar] [CrossRef]
- Sabri, K.; Moinul, P.; Tehrani, N.; Wiggins, R.; Fleming, N.; Farrokhyar, F. Video interpretation and diagnosis of pediatric amblyopia and eye disease. J. Telemed. Telecare 2021, 27, 116–122. [Google Scholar] [CrossRef]
- Calabrèse, A.; To, L.; He, Y.; Berkholtz, E.; Rafian, P.; Legge, G.E. Comparing performance on the MNREAD iPad application with the MNREAD acuity chart. J. Vis. 2018, 18, 8. [Google Scholar] [CrossRef] [PubMed]
- Radner, W.; Diendorfer, G.; Kainrath, B.; Kollmitzer, C. The accuracy of reading speed measurement by stopwatch versus measurement with an automated computer program (rad-rd©). Acta Ophthalmol. 2017, 95, 211–216. [Google Scholar] [CrossRef] [PubMed]
- Labiris, G.; Panagiotopoulou, E.K.; Chatzimichael, E.; Tzinava, M.; Mataftsi, A.; Delibasis, K. Introduction of a digital near-vision reading test for normal and low vision adults: Development and validation. Eye Vis. 2020, 7, 51. [Google Scholar] [CrossRef] [PubMed]
- Labiris, G.; Panagiotopoulou, E.K.; Duzha, E.; Tzinava, M.; Perente, A.; Konstantinidis, A.; Delibasis, K. Development and Validation of a Web-Based Reading Test for Normal and Low Vision Patients. Clin. Ophthalmol. 2021, 15, 3915–3929. [Google Scholar] [CrossRef] [PubMed]
- Iyengar, K.; Upadhyaya, G.K.; Vaishya, R.; Jain, V. COVID-19 and applications of smartphone technology in the current pandemic. Diabetes Metab. Syndr. 2020, 14, 733–737. [Google Scholar] [CrossRef] [PubMed]
- Bankmycell. How Many Smartphones Are in the World? Available online: https://www.bankmycell.com/blog/how-many-phones-are-in-the-world (accessed on 5 August 2022).
- Statista. Number of TV Households in the United States from Season 2000–2001 to Season 2021–2022 (in Millions). Available online: https://www.statista.com/statistics/243789/number-of-tv-households-in-the-us/ (accessed on 5 August 2022).
- Pires, G.; Lopes, A.; Correia, P.; Almeida, L.; Oliveira, L.; Panda, R.; Jorge, D.; Mendes, D.; Dias, P.; Gomes, N.; et al. Usability of a telehealth solution based on TV interaction for the elderly: The VITASENIOR-MT case study. Univers. Access Inf. Soc. 2022, 17, 1–12. [Google Scholar] [CrossRef]
- Costa, C.R.; Anido-Rifon, L.E.; Fernandez-Iglesias, M.J. An Open Architecture to Support Social and Health Services in a Smart TV Environment. IEEE J. Biomed. Health Inform. 2017, 21, 549–560. [Google Scholar] [CrossRef]
- Mansfield, J.S.; Ahn, S.J.; Legge, G.E.; Luebker, A. A new reading acuity chart for normal and low vision. Opt. Soc. Am. Technol. Digest 1993, 3, 232–235. [Google Scholar]
- Plainis, S.; Tzatzala, P.; Orphanos, Y.; Tsilimbaris, M.K. A modified ETDRS visual acuity chart for European-wide use. Optom. Vis. Sci. 2007, 84, 647–653. [Google Scholar] [CrossRef]
- Rosser, D.A.; Cousens, S.N.; Murdoch, I.E.; Fitzke, F.W.; Laidlaw, D.A. How sensitive to clinical change are ETDRS logMAR visual acuity measurements? Investig. Ophthalmol. Vis. Sci. 2003, 44, 3278–3281. [Google Scholar] [CrossRef]
- Statista. Number of mHealth App Downloads Worldwide from 2013 to 2017 (in Billions). Available online: https://www.statista.com/statistics/625034/mobile-health-app-downloads/ (accessed on 6 August 2022).
- Michard, F. Smartphones and e-tablets in perioperative medicine. Korean J. Anesthesiol. 2017, 70, 493–499. [Google Scholar] [CrossRef] [PubMed]
- Mathews, S.C.; McShea, M.J.; Hanley, C.L.; Ravitz, A.; Labrique, A.B.; Cohen, A.B. Digital health: A path to validation. NPJ Digit Med. 2019, 2, 38. [Google Scholar] [CrossRef] [PubMed]
- Lorusso, M.L.; Giorgetti, M.; Travellini, S.; Greci, L.; Zangiacomi, A.; Mondellini, M.; Sacco, M.; Reni, G. Giok the Alien: An AR-Based Integrated System for the Empowerment of Problem-Solving, Pragmatic, and Social Skills in Pre-School Children. Sensors 2018, 18, 2368. [Google Scholar] [CrossRef] [PubMed]
- Raghuram, A.; Gowrisankaran, S.; Swanson, E.; Zurakowski, D.; Hunter, D.G.; Waber, D.P. Frequency of Visual Deficits in Children With Developmental Dyslexia. JAMA Ophthalmol. 2018, 136, 1089–1095. [Google Scholar] [CrossRef] [PubMed]
- Moynihan, R.; Sanders, S.; Michaleff, Z.A.; Scott, A.M.; Clark, J.; To, E.J.; Jones, M.; Kitchener, E.; Fox, M.; Johansson, M.; et al. Impact of COVID-19 pandemic on utilisation of healthcare services: A systematic review. BMJ Open 2021, 11, e045343. [Google Scholar] [CrossRef] [PubMed]
- Crossland, M.D.; Dekker, T.M.; Hancox, J.; Lisi, M.; Wemyss, T.A.; Thomas, P.B.M. Evaluation of a Home-Printable Vision Screening Test for Telemedicine. JAMA Ophthalmol. 2021, 139, 271–277. [Google Scholar] [CrossRef]
- Bellsmith, K.N.; Gale, M.J.; Yang, S.; Nguyen, I.B.; Prentiss, C.J.; Nguyen, L.T.; Mershon, S.; Summers, A.I.; Thomas, M. Validation of Home Visual Acuity Tests for Telehealth in the COVID-19 Era. JAMA Ophthalmol. 2022, 140, 465–471. [Google Scholar] [CrossRef]
- Muijzer, M.B.; Claessens, J.L.; Cassano, F.; Godefrooij, D.A.; Prevoo, Y.F.; Wisse, R.P. The evaluation of a web-based tool for measuring the uncorrected visual acuity and refractive error in keratoconus eyes: A method comparison study. PLoS ONE 2021, 16, e0256087. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).