2.7.3. Virtual Reality Headsets

In recent years, virtual reality (VR) systems have become available to the public for entertainment, especially video gaming. VR headsets contain gyroscopes that detect head movement and gaze trackers, allowing users to immerse themselves in a 3D virtual environment that shifts with their head and eye movements. This technology is particularly useful in visual field testing because it eliminates the problem of fixation loss. The accuracy of a conventional visual field test depends on the subject fixating on a target for the duration of the test. This is no longer necessary with a VR system, because the stimulus position can be adjusted to follow changes in fixation. Tsapakis et al. [62] had 20 patients use virtual reality glasses connected to a computer. They ran software that used a fast-threshold 3-decibel step staircase algorithm to test 52 points scattered across 24 degrees of visual field from fixation, and the patients were asked to click the mouse whenever a stimulus was seen. This VR visual field test showed a high correlation coefficient of 0.808 when compared with the Humphrey Visual Field. Commercially available VR visual field systems include the Advanced Vision Analyzer (Elisar; New City, NY, USA), the C3 Field Analyzer (Remidio; Glen Allen, VA, USA), the PalmScan VF2000 Visual Field Analyzer (Micro Medical Devices; Calabasas, CA, USA), Virtual Field (Virtual Field; New York, NY, USA), VirtualEye Perimeter (BioFormatix; San Diego, CA, USA), VisuALL (OllEyes Inc.; Summit, NJ, USA), and Vivid Vision Perimetry (Vivid Vision; San Francisco, CA, USA).
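
The general idea of a step staircase threshold procedure can be sketched as follows. This is a minimal illustration with a simulated, deterministic observer; the function and parameter names are hypothetical and do not reproduce the actual algorithm of the cited software. In perimetry, higher decibel values denote dimmer stimuli, so a stimulus is "seen" when its attenuation is at or below the eye's sensitivity threshold; the staircase dims the stimulus after each "seen" response and brightens it after each "not seen" response, then averages the reversal intensities.

```python
def staircase_threshold(true_threshold_db, start_db=25, step_db=3, max_trials=30):
    """Estimate the sensitivity threshold (dB) at one test point with a
    simple up/down staircase using fixed 3-dB steps (illustrative only)."""
    intensity = start_db
    prev_seen = None
    reversals = []  # intensities at which the response direction flipped
    for _ in range(max_trials):
        # Simulated observer: stimulus is seen if its attenuation (dB) is
        # at or below the eye's true sensitivity threshold.
        seen = intensity <= true_threshold_db
        if prev_seen is not None and seen != prev_seen:
            reversals.append(intensity)
        prev_seen = seen
        if len(reversals) >= 4:
            break
        # Seen -> dim the stimulus (raise dB); not seen -> brighten it.
        intensity += step_db if seen else -step_db
    return sum(reversals) / len(reversals) if reversals else intensity

# Example: an eye with a true sensitivity of 30 dB at this point
estimate = staircase_threshold(30)
```

In a real VR test, this procedure would run at each of the 52 test points, with the patient's mouse click supplying the "seen" response instead of the simulated observer.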

In addition to fixation loss, current visual field tests rely on patients to minimize the false-positive rate (pressing the clicker when no stimulus is presented) and the false-negative rate (failing to press the clicker when a stimulus should be seen, based on previous responses). To eliminate the "human factor" of visual field testing, a VR headset called NGoggle was developed to detect multifocal steady-state visual-evoked potentials when a stimulus is presented. The headset consists of a wireless electroencephalogram, an electrooculogram, and a head-mounted display. In a study in which glaucoma was diagnosed based on stereo photographs of the optic discs, Nakanishi et al. [63] found that the NGoggle had a higher AUC (0.92) than SAP mean deviation (0.81), SAP mean sensitivity (0.80), and SAP mean pattern standard deviation (0.77), suggesting that NGoggle may be better at detecting glaucoma than SAP. A VR headset that can detect visual-evoked potentials may prove to be more accurate and efficient than the current gold standard of SAP, marking a paradigm shift in visual field testing.

## *2.8. Artificial Intelligence*

In ophthalmology, deep learning in artificial intelligence (AI) has become a hot topic as it has demonstrated remarkable accuracy in the detection of disease. Deep learning is a machine learning technique that uses multiple layers of an artificial neural network to extract high-level features from raw data and generate an output. This design is inspired by how neurons connect with each other in the human brain. In order for the machine to generate a highly accurate neural network, it needs to be fed a massive dataset that encompasses all variations. To make the diagnosis of glaucoma and recommend the appropriate management plan via telehealth, the ophthalmologist takes into account the IOP, visual field test report, fundus photography, OCT, and other available test results. Having a specialist review the data may not be necessary in the future, as artificial intelligence technology becomes more powerful with the ability to self-learn like a human.
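
The layered design described above can be sketched minimally as stacked fully connected layers, each transforming the previous layer's output into higher-level features before a final score is produced. The weights below are random placeholders, not a trained glaucoma model, and the four-number input is a stand-in for image-derived features; real systems use convolutional layers over full fundus images.

```python
import math
import random

def relu(vec):
    """Non-linear activation applied after each layer."""
    return [max(0.0, x) for x in vec]

def dense(inputs, weights, biases):
    """One fully connected layer: out_j = sum_i(inputs_i * w_ji) + b_j."""
    return [sum(i * w for i, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]

def rand_layer(n_in, n_out, rng):
    """Random placeholder weights (a real model learns these from data)."""
    return ([[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

rng = random.Random(0)
x = [0.2, 0.7, 0.1, 0.9]          # stand-in for features from a raw image
w1, b1 = rand_layer(4, 8, rng)    # layer 1: raw input -> low-level features
w2, b2 = rand_layer(8, 4, rng)    # layer 2: low-level -> high-level features
w3, b3 = rand_layer(4, 1, rng)    # layer 3: features -> single output score

h1 = relu(dense(x, w1, b1))
h2 = relu(dense(h1, w2, b2))
score = 1 / (1 + math.exp(-dense(h2, w3, b3)[0]))  # sigmoid -> probability
```

Training consists of adjusting the weights so that the output score matches the labels of the massive dataset; the layered structure is what lets the network build increasingly abstract features from raw pixels.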

A number of deep learning AI systems have been developed to diagnose glaucoma based on optic disc photographs. The European Optic Disc Assessment Study [64] compared the performance of the Pegasus v1.0 (Visulytix Ltd., London, UK) AI software with that of ophthalmologists and optometrists in diagnosing glaucoma from stereoscopic optic disc photographs. Pegasus diagnosed with an accuracy of 83.4%, which was statistically similar to the accuracies of ophthalmologists (80.5%) and optometrists (80%). The reference standard for true glaucoma was set by glaucoma specialists, who identified reproducible visual field scotomas that matched the appearance of the optic discs. Several other studies demonstrated that certain deep learning parameters can achieve high accuracy with AUC > 0.9 and sensitivity and specificity levels > 90%; false-positive and false-negative results were commonly due to pathologic myopia [65–70]. Even with the use of different fundus cameras, deep learning artificial intelligence was able to achieve an AUC > 0.9, provided that image augmentation was performed [71]. Al-Aswad et al. [72] used data from the Singapore Malay Eye Study to determine how the Pegasus deep learning system performed compared with ophthalmologists in diagnosing glaucoma solely based on fundus photographs. They found that Pegasus outperformed five out of six ophthalmologists and took only 10% of the time the ophthalmologists did to diagnose glaucoma. Remarkably, Medeiros et al. [73] showed that by training a deep learning algorithm to match disc photographs with OCT RNFL scans, the machine was able to predict the average RNFL thickness from the fundus photograph alone, with a high Pearson correlation coefficient of 0.832 and a mean difference of 7.39 microns. In fact, when deep learning artificial intelligence was applied to fundus disc photographs taken over time, it was able to identify eyes with worsening glaucoma based on a decreasing predicted RNFL thickness [74,75].
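
The two agreement metrics reported above, a Pearson correlation coefficient and a mean difference between predicted and OCT-measured RNFL thickness, can be computed as follows. The thickness values below are invented for illustration and are not data from the cited study.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical OCT-measured vs. model-predicted average RNFL thickness (microns)
measured  = [95.0, 88.0, 72.0, 60.0, 101.0, 79.0]
predicted = [90.0, 85.0, 78.0, 66.0,  94.0, 82.0]

r = pearson_r(measured, predicted)
mean_diff = sum(p - m for p, m in zip(predicted, measured)) / len(measured)
```

A high r indicates the predictions track the measurements across eyes, while the mean difference captures any systematic over- or under-estimation.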

In addition to the analysis of fundus photographs, deep learning artificial intelligence can be trained to diagnose glaucoma based on OCT RNFL and ganglion cell–inner plexiform layer (GCIPL) scans with high accuracy (AUC > 0.9) [76–81]. In fact, one study showed that a deep learning model trained with OCT images outperformed SAP and mean circumpapillary RNFL thickness in detecting glaucoma [76]. When deep learning algorithms are trained on OCT optic nerve images paired with visual field data, they are able to predict visual field parameters accurately [82,83].

Deep learning can also play a role in diagnosing angle closure based on OCT anterior segment images. Fu et al. [84] developed a deep learning system to detect angle closure and tested it on 8270 OCT anterior segment images (of which 895 had angle closure as classified by clinicians). The system achieved an AUC of 0.96 with a sensitivity of 0.90 and specificity of 0.92. Xu et al. [85] applied deep learning methods on 4036 OCT anterior segment images (of which 2093 had closed angles) in the Chinese–American Eye Study and found that the ResNet-18 classifier achieved an AUC of 0.952.
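
The performance metrics quoted throughout this section, sensitivity, specificity, and AUC, can be computed from a classifier's scores as follows. The labels and scores below are a small made-up example, not data from the cited studies; the AUC is calculated here in its rank-based interpretation (the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case).

```python
def sensitivity_specificity(labels, preds):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for l, p in zip(labels, preds) if l and p)
    fn = sum(1 for l, p in zip(labels, preds) if l and not p)
    tn = sum(1 for l, p in zip(labels, preds) if not l and not p)
    fp = sum(1 for l, p in zip(labels, preds) if not l and p)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """Probability that a random positive outscores a random negative."""
    pos = [s for l, s in zip(labels, scores) if l]
    neg = [s for l, s in zip(labels, scores) if not l]
    wins = sum(1 for p in pos for n in neg if p > n)
    ties = sum(1 for p in pos for n in neg if p == n)
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical scores: 1 = angle closure present, 0 = open angle
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2, 0.1]
sens, spec = sensitivity_specificity(labels, [s >= 0.5 for s in scores])
area = auc(labels, scores)
```

Sensitivity and specificity depend on the chosen score cutoff (0.5 here), whereas the AUC summarizes discrimination across all possible cutoffs, which is why it is the figure most often reported in these studies.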

Machine learning of visual field data may have utility in diagnosing pre-perimetric glaucoma. Asaoka et al. [86] reported that a deep feed-forward neural network classifier had an AUC of 92.6% in diagnosing pre-perimetric glaucoma based on Humphrey Visual Field 30-2 data. In addition to diagnosis, deep learning has been shown to predict future visual field progression. Wen et al. [87] applied various deep learning algorithms to more than 30,000 Humphrey Visual Field 24-2 reports and found that CascadeNet-5 performed the best in forecasting visual fields up to 5 years later, with a pointwise mean absolute error of 2.47 dB, significantly lower than that of the rate-of-progression linear models (3.77–3.96 dB) and the pointwise linear regression model (3.29 dB). Yousefi et al. [88] reported that machine learning analysis detected visual field progression earlier (at 3.5 years) than global (at 5.2 years), region-wise (at 4.5 years), and point-wise (at 3.9 years) linear regression analyses.
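
The pointwise linear regression baseline that the deep learning forecasts were compared against can be sketched as follows: for each test point, an ordinary least-squares line is fit to the sensitivities over past visits and extrapolated to a future date, and the forecast is scored by the pointwise mean absolute error. The three-point visual field and visit schedule below are invented for illustration, not data from the cited studies.

```python
def linear_forecast(times, values, t_future):
    """Least-squares fit of sensitivity (dB) vs. time, extrapolated forward."""
    n = len(times)
    mt, mv = sum(times) / n, sum(values) / n
    slope = (sum((t - mt) * (v - mv) for t, v in zip(times, values))
             / sum((t - mt) ** 2 for t in times))
    return mv + slope * (t_future - mt)

def pointwise_mae(actual, predicted):
    """Mean absolute error across all test points of the field (dB)."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical per-point sensitivities (dB) at yearly visits (years 0, 1, 2)
years = [0, 1, 2]
history = [[30, 29, 28],   # point 1: losing ~1 dB/year
           [25, 24, 22],   # point 2: losing ~1.5 dB/year
           [32, 32, 31]]   # point 3: nearly stable

forecast_y5 = [linear_forecast(years, vals, 5) for vals in history]
actual_y5 = [26.0, 18.0, 30.0]  # hypothetical measured field at year 5
mae = pointwise_mae(actual_y5, forecast_y5)
```

A deep learning forecaster is judged the same way: its predicted field at the future date is compared point by point against the measured field, and a lower pointwise MAE means a better forecast.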
