Article

Exploring the Educational Value and Impact of Vision-Impairment Simulations on Sympathy and Empathy with XREye

1 VRVis Zentrum für Virtual Reality und Visualisierung, 1220 Vienna, Austria
2 Department of Computer Science, Columbia University, New York, NY 10027, USA
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2023, 7(7), 70; https://doi.org/10.3390/mti7070070
Submission received: 24 May 2023 / Revised: 17 June 2023 / Accepted: 24 June 2023 / Published: 6 July 2023

Abstract:
To create a truly accessible and inclusive society, we need to take the more than 2.2 billion people with vision impairments worldwide into account when we design our cities, buildings, and everyday objects. This requires sympathy and empathy, as well as a certain level of understanding of the impact of vision impairments on perception. In this study, we explore the potential of an extended version of our vision-impairment simulation system XREye to increase sympathy and empathy, and we evaluate its educational value in an expert study with 56 educators and education students. As a baseline for comparison with our data, we include data on sympathy and empathy from a previous study in related work. Our results show increased sympathy and empathy after experiencing XREye and positive feedback regarding its educational value. Hence, we believe that vision-impairment simulations such as XREye merit use for educational purposes to increase awareness of the challenges people with vision impairments face in their everyday lives.

1. Introduction

Across the world, an increasing number of people experience vision impairments or blindness. The WHO estimates that vision impairments affect more than 2.2 billion people worldwide [1]. Some impairments, such as refractive errors, are quite common and are easily corrected with the use of vision aids, such as glasses or contact lenses. However, there are eye diseases, such as age-related macular degeneration (AMD), that have a long-term impact on eyesight and visual function, sometimes even leading to the complete loss of central vision when left untreated [2]. For people with healthy eyes, it may be difficult to discern how the world looks to a person who cannot rely on unimpaired eyesight. Medical explanations in books and articles, descriptions from patients, 2D images of impaired vision, and even 3D computer simulations are often insufficient to communicate the negative effects of vision impairments and to truly make someone understand how the world looks through the eyes of a visually impaired person. Relatives, employers, and even the medical providers of those impacted by eye diseases may benefit from realistic and immersive simulations of vision impairments that increase understanding, sympathy, and empathy. Creating supportive networks for those impacted is critical to ensuring their inclusion and comfort when experiencing distressing and often life-altering conditions. Further, these simulations can help industrial designers, software developers, and educators make their content accessible to those with special visual needs. For architects, these visualizations can support the development of accessible designs. For standardization committees, they can help ensure that standards are written with people with vision impairments in mind. Combined, this can help to increase the accessibility of public spaces and make participation for everyone—regardless of visual ability—truly possible.
Research on digital vision-impairment simulations has increased in the last decade. There is now substantial work across a variety of hardware and software solutions addressing different conditions and impairments. There have been several efforts to simulate different eye diseases and impairments using physical goggles with special lenses [3,4], 2D videos and simulations [5], and dynamic extended reality (XR) systems with immersive representations of conditions [6]. While the artifacts above have been used to recreate the effects of various eye diseases, studies [7] of these technologies have focused on how these impairments affect perception, not on the secondary social impacts of their deployment. Additionally, many of these simulations (1) are simplified depictions of the vision of affected people; only very few (2) use eye tracking for gaze-dependent effects or (3) build their simulations on medical expertise or firsthand experience from affected people. Without addressing these three concerns, the benefits of the technology cannot be realized, especially when weighed against possible reticence by the medical community to endorse inaccurate representations of the conditions.
Over the last few years, we have developed a methodology, and implemented a new system, to simulate medically informed complex eye-disease patterns and conduct quantitative and qualitative user studies on vision impairments in XR [8,9,10,11,12,13]. We introduced an effects pipeline that supports combining multiple simulated effects that can be adjusted individually, a symptom-calibration methodology to calibrate all simulated symptoms to the same level of severity for different users, and a symptom-matching method that facilitates the adjustment of the simulation together with patients who have an eye disease in one eye and clear vision in the other. The most recent version of our demo application, XREye [10], uses eye tracking and efficient post-processing effects that run in real time to create XR simulations of various vision impairments. These simulations were designed based on detailed reports from patients affected by the conditions and were developed in close collaboration with ophthalmologists, informed by their professional experience and expert knowledge. XREye is, in turn, built on the simulation framework developed in our earlier work, which focused on simulating cataracts in virtual and augmented reality (VR and AR) [8,9], but includes a wider range of simulations for common conditions that affect visual perception: refractive errors (myopia, hyperopia, and presbyopia), cornea disease, and AMD (wet and dry). It also features eye-tracking support, enabling the representation of gaze-dependent effects for the simulation of symptoms that only affect certain parts of the visual field.
The severity and characteristics of the symptoms of eye diseases can vary greatly from one person to another. To account for this variability and to be able to adjust the simulations to a wide range of symptom characteristics, XREye offers a set of parameters that can be used to modify each simulated symptom individually at runtime. Furthermore, users wearing a head-worn display (HWD) can experience our simulated vision impairments in VR, video see-through AR, or high-dynamic-range (HDR) 360° images, while an operator adjusts the exposed parameters to control the severity of the simulated symptoms and seamlessly switches between viewing modes at runtime. To the best of our knowledge, our system is still the most comprehensive solution for vision-impairment simulations in XR today.
The main goal of many vision-impairment simulations [14,15,16,17], including XREye, is to mitigate barriers to feeling empathy for those impacted. However, whether a simulation achieves this goal or not is hardly ever evaluated. With this work, we aim to close this gap and present a first step towards exploring the impact of vision-impairment simulations on sympathy and empathy, as well as evaluating the educational value of such simulations. We use our own vision-impairment simulation system XREye as a case study and pose the following research questions:
Q1: Are the vision-impairment simulations provided by XREye able to increase sympathy and empathy for people with vision impairments?
Q2: Are vision-impairment simulations such as XREye useful for educating people about the effects of vision impairments on perception?
We present an evaluation of XREye, a medically informed XR simulation of common eye diseases using an XR HWD, by conducting an expert study with educators and education students. Our work contributes the following:
  • Assessing the educational value of vision-impairment simulations such as XREye;
  • Evaluating the effectiveness of XREye to increase sympathy and empathy towards people with vision impairments;
  • Investigating the possible benefits immersive experiences can provide for these purposes;
  • Presenting our approach to simulate achromatopsia (severe loss of color vision, commonly also referred to as “color blindness”) in XR.
Additionally, we provide a survey of the state-of-the-art in vision impairment simulation using XR technology and present a comprehensive description of all the vision impairments currently simulated by XREye.

2. Related Work

2.1. XR and Vision Impairments

Over the last few decades, there have been a number of research and industrial efforts to accurately simulate vision impairments. This earlier work fits broadly into three categories: physical goggles with special lenses meant to simulate particular conditions, overlays of videos and simulations viewed on 2D displays, and XR systems that present dynamic representations of the impairments. These systems usually targeted specific conditions and were often limited in their realism, immersiveness, and adjustability.
Much of the earliest work, dating to the 1940s, used physical goggles and other obstructions to artificially reduce visual capabilities with the goal of helping people with normal sight better understand the impact of low vision [3,4]. Zimmerman’s Low Vision Simulation Kit [18] used goggles with exchangeable lenses to represent different conditions. Later, Zagar and Baggarly [19] also utilized goggles to help student pharmacists understand how patients with various ocular diseases and vision impairments might interact with medication. Specific goggles were created per condition, including glaucoma, cataracts, macular degeneration, diabetic retinopathy, and retinitis pigmentosa.
Over the last decade, these types of physical instruments have been used to study how different conditions impact a patient’s life. Wood et al. [20] used goggles to help understand the impact of cataracts and refractive blur on night driving, showing the heaviest degradation in capability caused by the former. Hwang et al. [21] built on this and used a Bangerter Occlusion Foil on plano lenses to simulate cataracts, showing how headlight-induced glare reduces pedestrian visibility for those with the condition.
With the introduction of more powerful computing and display technologies, researchers moved to developing a number of static and dynamic 2D simulations. Greenberg [22] recreated the Farnsworth–Munsell 100-hue test using computer monitors to assess color vision deficiency, with a particular focus on presenting images as they would be seen by dichromats. Later, Brettel et al. [23] and Viénot et al. [24] both presented simulations of dichromacy by reprojecting images using LMS color spaces for viewing on monitors. Expanding to additional conditions, Banks and Crindle [5] recreated visual effects of several ocular diseases (including glaucoma and AMD) by creating overlays and filters for 2D images viewed on a desktop display.
These technologies also help application and web developers to better accommodate impaired users. Simulators such as VisionSimulations.com [25], Goodman-Deane et al. [26], and Leventhal [27] allow developers to assess the accessibility of websites, with the latter visualizing visual snow, glare, cataracts, and floaters, among others. Over the last 15 years, a new generation of tools has appeared, taking advantage of the emergence of 3D game engines to create more dynamic and immersive simulations. Lewis et al. [28,29] presented a set of simulations built using post-processing effects rendered in Unreal Engine 3, allowing for the simulation of several conditions in explorable 3D environments. Experts found the simulations educational and useful, even given the limited customizability of the visualizations.
In the last decade, XR technology has become more immersive, powerful, and affordable, running on commodity consumer-grade hardware, with a number of manufacturers integrating eye-tracking technology. This novel technology allows for simulations with the immersiveness of goggles and the fidelity and customizability of 3D desktop systems. Additionally, the introduction of eye tracking facilitates the creation of much more dynamic visualizations that respond to a user’s gaze. For particular conditions (e.g., glaucoma), this is critical to properly representing their appearance.
Werfel et al. [14], Väyrynen et al. [30], and Zavlanou and Lanitis [31] all developed VR simulations of vision impairments. While immersive and representative of the conditions, they were limited in their ability to tune symptoms to virtually change the severity and characteristics of the conditions. Additionally, state-of-the-art hardware has allowed for much more realistic renderings. Jones and Ometto [6] utilized eye tracking to achieve near real-time rendering for their XR simulations of disability glare, blur, spatial distortions, and perceptual filling-in, as well as defects in color vision. In subsequent work [32], they simulated glaucoma with gaze-dependent region blur, generated using perimetric data of a real patient. In a similar vein, Zhang et al. [33] presented an AR system that allowed for the simulation of peripheral vision loss, utilizing monochrome LCDs to create virtual blind spots overlaid on the user’s view of the real world.
Commercially available apps (e.g., the Novartis ViaOpta Simulator [34] and VisionSim [35] from the Braille Institute) simulate vision impairments on a smartphone screen but are, consequently, monoscopic, limited in field of view, and unable to properly represent gaze-dependent effects.
XREye [10] builds on all of these approaches by simulating complex eye diseases, such as AMD, cornea disease, or achromatopsia, comprising multiple symptoms in real time with eye tracking, making it possible to use these simulations in XR HWDs while minimizing the risk of VR sickness. Many of the aforementioned earlier simulations are simplifications of the respective vision impairments they aim to simulate. In an attempt to create more realistic representations, we collaborated closely with ophthalmologists and modeled our simulations on their expertise and on reports from patients. With these additional considerations, XREye achieves more sophisticated visualizations of complex eye-disease patterns.

2.2. XR for Sympathy and Empathy

The immersive nature of XR and the ability to simulate different spatial contexts and viewpoints for users has increased research interest in XR as an empathy-enabling medium over the past decade [36]. In broad terms, empathy can be defined as the capacity to feel another person’s emotions. It is deeper than sympathy, the capacity to feel for another person, i.e., not to share the same feeling as the other, but rather to feel sorrow and care in response to the emotions of others [37]. Sympathy and empathy are interconnected, but it is the possibility of “seeing through the eyes of others” using XR HWDs that makes the technology especially interesting for researching empathetic responses. For example, VR has become a prominent medium for nonfiction storytelling, with many documentaries placing viewers in the position of the subject of the story, aiming for an empathetic connection [38].
Empathy is a key factor in understanding the lives and symptoms of people with disabilities or age-related diseases in order to provide appropriate treatment. Therefore, the impact of using XR to foster empathy in students of medicine, nursing, and health sciences has been a topic of research. Campbell et al. [39] reported statistically significant pre/post survey changes and improvement trends in baccalaureate nursing students’ empathy for patients with Alzheimer’s dementia. Adefila et al. [40] also reported a statistically significant improvement in compassion for dementia patients among students in a pilot study in a faculty of health and life sciences. The study also interviewed the participants three months after the experience and collected testimonials on their interactions with dementia patients. Insights gained from these interviews suggest that a VR experience might change work practices in the long term. Oh et al. [41] discuss the potential limitations of immersive technologies in fostering empathy. In their investigation of how embodied perspective-taking may help mitigate ageism and intergenerational bias, they found a more significant effect for participants who embodied an elderly person in an immersive virtual environment compared to those who engaged in a traditional perspective-taking exercise via mental simulation when a threat was presented without intergroup contact. However, they could not find a substantial difference between the two methods in the participants’ behaviors when they were exposed to a concrete and experiential intergroup threat. Paananen et al. [36] presented a systematic literature survey of recent work on empathy in XR and pointed out that, while current research presents evidence of XR experiences generating empathetic responses in users, this impact tends to be limited, and not much research has been carried out on design guidelines for applications that foster long-term empathy in users.
The work by Guarese et al. [42], which focused on assistive technology, presented an approach for sympathy and empathy evaluations based on the Ad Response Sympathy (ARS) and Ad Response Empathy (ARE) tests from Escalas and Stern [37]. Guarese et al. developed an AR device as an assistive technology for micro-guidance in simple object-placement tasks in a kitchen environment. They evaluated its effect on sympathy and empathy in an experiment with blindfolded individuals. To gather data, they conducted an online survey as a pre-test to collect sighted people’s self-perception of sympathy and empathy for individuals with vision impairments. They also conducted a post-test with another group of sighted people after testing the developed assistive technology solution. Additionally, they assessed blind and visually impaired people’s perception of sympathy and empathy from others and compared these assessments, along with data from the pre-test and post-test. The results showed a significant increase in sympathetic and empathetic responses from the sighted people who participated in the experiment.
In an attempt to allow designers to test the usability and accessibility of their websites and desktop applications, Choo et al. [15] presented Empath-D to enable empathetic user interface design. Using a Samsung Gear VR with a Galaxy Note 4, their application was aimed at helping designers account for motor problems. It could simulate a set of conditions by utilizing impairment profiles, representing the interactions of a person with a certain motor impediment, as well as visual impairments. This was later extended by Kim et al. [16] to work in a virtual environment for the design of smartphone apps. Empath-D was more recently evaluated by Choo et al. [17] using an augmented-virtuality impairment simulation of a mockup of the Instagram app. The goal was to investigate usability challenges for patients with cataracts when using mobile apps. While the intensity of each effect was adjustable at runtime, the included impairment simulations were mostly simplified versions of the respective eye diseases, without eye tracking. Even though this system was developed to facilitate empathetic user interface design, its effects on the empathy levels of users have not been evaluated.
While previous work on simulating vision impairments took large steps towards the goal of increased awareness and understanding for those living with these impairments, an evaluation of the educational value of vision-impairment simulations and their effect on sympathy and empathy is still missing. With the study of XREye [10] presented in this work, we sought to understand whether more realistic, immersive, and tunable visualizations could indeed increase the sympathy and empathy expressed towards those with the conditions, and we build upon the work of Guarese et al. [42] to evaluate our results.

3. Vision-Impairment Simulations

XREye [10] is built with Unreal Engine 4 and enables users to experience different vision-impairment simulations, including refractive errors (myopia, hyperopia, presbyopia), cornea disease, age-related macular degeneration (AMD), and, after our recent extension, also achromatopsia. XREye offers different viewing modes: VR, video see-through AR, and an HDR 360° image viewing mode. Users can experience our simulated vision impairments while exploring a virtual model of a kitchen in VR, including a custom lighting setup and physically plausible lighting simulation, accomplished with the light-planning software HILITE [43]. This setting is especially useful for demonstrating blinding effects caused by some vision impairments when direct light sources are in the field of view of the user. In the AR mode, users can experience their actual surroundings, as recorded by the front-facing cameras of the HWD, modified with our vision-impairment simulations. This enables users to try real-life tasks under simulated impairments, such as attempting to read a magazine or recognize the face of a person standing in front of them. The HDR 360° image viewing mode provides a very realistic-looking environment (albeit without actual 3D geometry) and has significantly better image quality than the video stream provided by the HWD cameras in the AR mode. Hence, it is a suitable mode for comparing different levels of severity of simulated symptoms when parameters are modified at runtime.
To develop our simulations, we used the HTC Vive Pro HWD with a Pupil Labs 200 Hz binocular eye tracker add-on, which allowed us to move gaze-dependent visual effects, such as dark areas in the central vision commonly caused by AMD, along with the user’s gaze. We included the following vision-impairment simulations in the newest demo version of XREye, used in our expert study.

3.1. Refractive Errors: Myopia, Hyperopia, Presbyopia

Myopia (also known as near-sightedness or short-sightedness) is a condition that leads to blurred vision in the distance while leaving near vision intact. This happens because the eye grows too long and the lens focuses light in front of the retina [44]. The opposite happens when the eye grows too short and light is focused behind the retina. This results in vision where far objects appear clear and sharp, but near objects appear blurred. This condition is called hyperopia [2] and is also known as far-sightedness. Presbyopia also affects near vision, but it occurs as part of the aging process and impacts the accommodation abilities of the eye [2].
We simulate the decreased visual acuity (VA) resulting from one of these types of refractive errors as a Gaussian blur applied to the image in video see-through AR and our HDR 360° image viewing mode. This approximates the limited perception of short-sightedness quite well, but it neglects the sharp rendering of very close objects. Our VR viewing mode already considers the distance of objects by using a depth-of-field effect (see Figure 1) provided by Unreal Engine 4, which can be used to simulate all three aforementioned refractive errors.
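To illustrate this stage, the following Python sketch applies the same idea to a single frame. The function name and the sigma parameter are our own for this example; the actual XREye implementation runs as a GPU post-processing effect in Unreal Engine 4.

    # Minimal sketch of the visual-acuity (VA) reduction stage: a Gaussian blur
    # over one frame. This is an illustration, not the shipped implementation.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def reduce_visual_acuity(frame: np.ndarray, sigma: float) -> np.ndarray:
        """frame: H x W x 3 float image in [0, 1]; sigma: blur strength in pixels."""
        # Blur only the two spatial axes; leave the color channels independent.
        return gaussian_filter(frame, sigma=(sigma, sigma, 0))

In the real system, the blur strength would be one of the exposed parameters that an operator adjusts at runtime to match the severity of the simulated refractive error.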

3.2. Cornea Disease

The transparent front layer of the eye, which covers the iris, pupil, and anterior chamber, is known as the cornea. A clear cornea is a prerequisite for sharp vision since it is responsible for the main part of the light refraction for image focusing. Inflammation, allergies, or injuries to the eye can affect the cornea and thereby degrade vision. Many of these conditions can cause severe discomfort due to swelling, irritation, redness, itchiness, burning, pain, or watery eyes. Pink eye (allergic conjunctivitis) or keratitis (an inflammation of the cornea that can be caused by wearing contact lenses) can also lead to increased sensitivity to light and blurry vision. In addition to these effects, other conditions such as corneal dystrophies (e.g., keratoconus, Fuchs’s dystrophy, lattice dystrophy, and map-dot-fingerprint dystrophy) can cause further fogging of the cornea, resulting in clouded vision due to material buildup in the cornea [45]. While each condition affects vision differently [2], common symptoms caused by many corneal diseases include the following:
  • Blurry vision;
  • Sensitivity to light; and
  • Cloudy vision.
Our simulation of cornea disease (see Figure 2) is modeled after the report of a person affected by the condition, describing it as “looking through opal glass”. To achieve this visual impression, we used our effects pipeline [13] to combine the following effects:
  • VA reduction;
  • Contrast reduction;
  • Color shift;
  • Texture blending; and
  • Bloom or glare.
First, we reduced the VA by applying a Gaussian blur to the image. Next, we reduced the contrast by interpolating between the image and a uniform gray image (0.5, 0.5, 0.5 in RGB). Then, we applied a slight color shift by interpolating between the image from the previous step and an adjustable target color (see Krösl et al. [9] for more details). To simulate cloudy vision, we created an Unreal Engine 4 material whose visual impression resembles opal glass. To achieve this, we used a high-frequency noise texture and softened it by interpolation with a white color image. In the next step of our effects pipeline, we used texture blending to blend our two textures (the low-VA, contrast-reduced, and color-shifted image, and our opal-glass material) by interpolating between them. Finally, we added a bloom or glare effect around light sources or very bright areas, simulating the sensitivity to light. The final cornea-disease simulation can be fine-tuned and even adjusted at runtime by simply modifying the parameters that control the effects (e.g., by modifying interpolation weights, colors, blur, or bloom intensity). This allows us to create different depictions of corneal diseases, according to patient descriptions or expert knowledge from ophthalmologists. All parameters are exposed in a graphical user interface (GUI) on the computer and can be controlled by an operator. In the future, we also plan to make the GUI available to the user inside the HWD.
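To make the stage order concrete, the following Python sketch chains the five effects on a single frame. The parameter names, the bloom threshold, and the blur radii are assumptions made for illustration only; the actual effects are implemented as post-processing materials in Unreal Engine 4.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def lerp(a, b, t):
        """Linear interpolation between a and b with weight t in [0, 1]."""
        return (1.0 - t) * a + t * b

    def simulate_cornea_disease(frame, opal_texture, sigma, contrast_t, color_t,
                                target_color, blend_t, bloom_t):
        """frame, opal_texture: H x W x 3 float images in [0, 1]; the rest are scalars."""
        img = gaussian_filter(frame, sigma=(sigma, sigma, 0))  # 1. VA reduction (Section 3.1)
        img = lerp(img, np.full_like(img, 0.5), contrast_t)    # 2. contrast reduction towards uniform gray
        img = lerp(img, np.asarray(target_color), color_t)     # 3. color shift towards an adjustable target
        img = lerp(img, opal_texture, blend_t)                 # 4. texture blending with the opal-glass texture
        bright = np.where(img > 0.8, img, 0.0)                 # 5. bloom: spread bright areas outwards
        return np.clip(img + bloom_t * gaussian_filter(bright, sigma=(8, 8, 0)), 0.0, 1.0)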

3.3. Age-Related Macular Degeneration

The macula is the part of the eye responsible for sharp, straight-ahead vision, as most cones are located in this central area of the retina. Age-related macular degeneration (AMD) has the highest prevalence in the older population and rarely affects people under the age of 50. It is a progressive disease that many people do not notice at first, but that can, in late stages, severely impact visual function and also cause loss of central vision. In early or intermediate stages of this condition, the macula becomes thinner, and fatty deposits (known as drusen) form under the photoreceptors, which causes moderate and slowly progressive vision loss, often only noticed as mild blurriness of the central vision or as trouble seeing in low-lighting conditions. This form of AMD is called dry AMD. In the later stages of dry AMD, photoreceptive cells can be damaged, which leads to even more severe vision loss. A less common type of AMD is called wet AMD. It only evolves as a late stage of AMD and is caused by abnormal blood vessels that grow under the macula, leaking fluid and creating scar tissue [2,46]. As a consequence, AMD can include the following symptoms [2,46,47]:
  • Blurry vision;
  • Distorted vision (where straight lines are often perceived as wavy or crooked);
  • Faded and less bright colors;
  • Reduced contrast sensitivity; and
  • Loss of central vision.
Since mainly central vision is affected, this can make daily-life tasks such as shopping, cooking, cleaning, navigating independently and safely through the city, or even recognizing the faces of friends and relatives difficult.
To simulate AMD, we combined the following effects in our effects pipeline:
  • VA reduction (optional);
  • Distortion;
  • Desaturation;
  • Contrast reduction; and
  • Texture blending.
According to medical experts that we consulted, blurry vision is not one of the predominant symptoms of AMD, which is why we included the reduction of VA (by applying a Gaussian blur) as an optional stage in this effects pipeline. To simulate distorted vision caused by abnormal blood vessel growth and fluid under the retina, we distorted the UV coordinates according to a distortion texture. This distortion texture was created from a combination of a water texture material for small random distortions and some parameterized circular distortions that create inward or outward bulges (see our previous work [13] for implementation details). Next, we calculated the luminosity of the pixels to create a grayscale version of the image. The desaturation effect can then be adjusted at runtime by modifying the parameter controlling the interpolation between the original image and the grayscale version. Then, we reduced the contrast, as described in Section 3.2. Finally, we blended a dark gray circular gradient above the image to simulate reduced visual function of the central vision. Similar to our other simulations, the parameters that control the extent and characteristics of each of these effects used for the simulation of AMD can be adjusted at runtime using the GUI on the PC. Figure 3 shows an example of our simulation of wet AMD in AR.
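Under the same illustrative assumptions as the earlier sketches (the parameter names and the shape of the gradient are ours), the AMD pipeline can be outlined as follows, with the defect centered at the tracked gaze point:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def simulate_amd(frame, distortion_uv, desat_t, contrast_t, gaze_xy, radius, darkness):
        """frame: H x W x 3 in [0, 1]; distortion_uv: H x W x 2 per-pixel offsets in pixels;
        gaze_xy: (x, y) center of the simulated central-vision defect. The optional
        VA-reduction stage (see Section 3.1) is omitted here."""
        h, w, _ = frame.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        # 1. Distortion: warp the image by offsetting its sampling coordinates.
        coords = [yy + distortion_uv[..., 1], xx + distortion_uv[..., 0]]
        img = np.stack([map_coordinates(frame[..., c], coords, order=1, mode='nearest')
                        for c in range(3)], axis=-1)
        # 2. Desaturation: interpolate towards the per-pixel luminosity.
        lum = (0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2])[..., None]
        img = (1.0 - desat_t) * img + desat_t * lum
        # 3. Contrast reduction towards uniform gray, as in Section 3.2.
        img = (1.0 - contrast_t) * img + contrast_t * 0.5
        # 4. Blend a dark circular gradient over the gaze-centered central vision.
        dist = np.sqrt((xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2)
        mask = darkness * np.clip(1.0 - dist / radius, 0.0, 1.0)[..., None]
        return np.clip(img * (1.0 - mask), 0.0, 1.0)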
According to our ophthalmology experts, late-stage dry AMD causes photoreceptors to fail due to geographic atrophy, a symptom in which whole areas of retinal tissue are lost. We originally simulated this effect by blending a black texture with clear edges over the central vision, instead of a gray gradient texture [10]. However, Thévin and Machulla [48] argued that central vision loss might not create visually black areas in the field of view, but rather areas where no information is perceived by the eyes. The brain then fills in this missing information based on the surrounding area of the visual field, similar to the perception of the physiological blind spot in any healthy eye. In future work, this could be simulated with perceptual filling-in [6].

3.4. Achromatopsia

Achromatopsia is a condition that is characterized by a partial or complete inability to see color. Complete achromatopsia, also called rod monochromacy, is commonly known as total color blindness. It is a rather rare condition, compared to other types of color blindness or color vision deficiency, affecting only about 1 in 30,000 people. We added achromatopsia to XREye for the expert study, because it is the type of color blindness that causes the most severe symptoms, which include the following:
  • Blurry vision;
  • Achromatic vision; and
  • Sensitivity to light.
In a healthy eye, the fovea is the area of sharpest vision (located inside the macula) because it has the highest density of photoreceptors. In the fovea, these photoreceptors are cones, which are used for daylight vision and, especially, color vision. People with complete achromatopsia have no working cones, so they have to rely solely on the information provided by their rods. Since rods are distributed over the whole retina (outside the fovea) and not concentrated in one area, the resolution of visual information obtained from these photoreceptors is lower than what is usually received from the cones in the fovea. Hence, vision becomes blurry. Furthermore, rods perceive only light intensity and cannot distinguish colors. Therefore, people who do not have any working cones have only achromatic vision. In a healthy eye, rods are mainly used in low-light conditions; they are more sensitive to light and are, therefore, mostly deactivated in bright light. For people with achromatopsia, rods cannot be deactivated in bright lighting conditions because their whole vision depends on the information these photoreceptors provide. As a consequence of the high light sensitivity of rods, people with achromatopsia are usually also very sensitive to light. We simulated achromatopsia with the following effects:
  • Desaturation;
  • VA reduction; and
  • Bloom or glare.
To desaturate the image, we simply turned it into a grayscale image (for complete achromatopsia), by calculating the luminosity values of all pixels, or used the desaturation effect (for partial achromatopsia) of our AMD simulation (see Section 3.3). Then, we reduced the VA and applied a bloom, as described for our cornea disease simulation (see Section 3.2). Figure 4 shows a combination of these effects to simulate complete achromatopsia in our 360° image viewing mode.
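Since all of these effects were introduced above, a sketch of the complete-achromatopsia case (again with assumed parameter names and thresholds, in the same illustrative Python) is short:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def simulate_achromatopsia(frame, sigma, bloom_t):
        """Complete achromatopsia: achromatic vision, reduced acuity, light sensitivity."""
        lum = 0.2126 * frame[..., 0] + 0.7152 * frame[..., 1] + 0.0722 * frame[..., 2]
        img = np.repeat(lum[..., None], 3, axis=-1)          # rod-only, grayscale vision
        img = gaussian_filter(img, sigma=(sigma, sigma, 0))  # blur from lower rod resolution
        bright = np.where(img > 0.8, img, 0.0)               # bloom around bright areas
        return np.clip(img + bloom_t * gaussian_filter(bright, sigma=(8, 8, 0)), 0.0, 1.0)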

4. Expert Study

The expert study was conducted in April 2023 during an information event on media education, digitization, and technical innovations at the University College of Teacher Education in Vienna, Austria. The event was designed for educators, regardless of their previous experience or knowledge about technology. Educators and education students from multiple Viennese schools were invited, and the organizers expected about 500 people to attend. The attendees could move freely between rooms with various demos and were introduced to different technologies, from VR to artificial intelligence or robotics. The aim of the event was to give teachers an overview of state-of-the-art technologies that have the potential to be used in an educational context. As an example of an XR application, the XREye project was invited to the event for a demonstration and allowed to perform a user study. This study included filling out a questionnaire after either testing the XR experience or watching another person test it. The current view of the user wearing the HWD was projected onto the whiteboard behind them for everyone to see.

4.1. Participants

Overall, 69 participants filled out the questionnaire, but 8 had to be excluded because they did not give written consent, and 5 were excluded because they did not answer whether they had tested the simulation using the HWD or just watched while other participants were testing it. Consequently, we used data from 56 participants for our evaluation. Of these, 25 experienced the simulations while wearing the HWD (referred to as users in our analysis), and the remaining 31 watched the users’ views projected onto the whiteboard (referred to as spectators in our analysis). Among respondents, 43 (76.78%) of our participants described themselves as female and 13 (23.21%) described themselves as male. (This uneven distribution roughly corresponds to the distribution of female- and male-identifying educators in the Austrian school system [49] and is, therefore, representative of our target group for this study.) All participants were between 18 and 64 years old (mean = 38.86, SD = 9.72). Notably, 29 (51.78%) participants had no previous experience with XR applications, 16 (28.57%) had tried XR applications once before, and 10 (17.85%) replied that they had experienced XR applications multiple times already, but no participant considered themselves a “regular user”. One person (1.78%) did not answer this question.

4.1.1. Study Protocol

The demonstration took place in one of the classrooms of the venue where approximately 25 people could sit and observe. Event attendees who entered the demo room were given an information sheet about the project and the user study, as well as the questionnaire, and were invited to test the application (by putting on the HTC Vive Pro Eye). While a participant was in the process of testing the application, their view was projected onto a whiteboard. Figure 5 shows the study setting. While a user was testing the simulations, the application was controlled by a researcher on a notebook computer, switching between viewing modes, adjusting vision-impairment simulations, and always explaining which vision impairment was currently active. Two other researchers were present to assist the participants with the HWD and to answer any questions about the demo and the questionnaire. All simulations presented in Section 3 were shown to the users in random order. Each user experienced the simulations for approximately seven to eight minutes (in total). (Due to time constraints imposed by the study setting, we deactivated the eye tracker, which would otherwise require calibration for each user. Eye tracking for gaze-dependent effects was employed in our previous studies [9,11] but is not central to the expert study presented in this paper). Everyone who tested or watched the demo was encouraged to participate in the study by filling out the questionnaire (including written consent) but was not required to.

4.1.2. Questionnaire

The questionnaire consisted of 16 questions (the study was conducted in German; for the sake of comprehension, this paper contains translations of questionnaire items into English), preceded by a statement for which participants had to actively select the “yes” option to consent to being included in the user study and to having their responses used for our research project. The first question (1) asked if the participant had tested the demo themselves (if they were a user) or just watched the projected view of other participants (if they were a spectator). The next six questions were demographic questions about the participant’s (2) age range, (3) gender, (4) experience with XR applications, (5) teaching subject, (6) whether they had a vision impairment (other than wearing glasses for correction of refractive errors), and (7) whether they frequently interacted with people with vision impairments.
To answer research question Q1, we added questions about sympathy and empathy from Guarese et al. [42] as questionnaire items 8–10 (see Table 1). We chose these particular questions because they did not require participants to complete any specific tasks.
Questionnaire items 11–15 were designed to answer Q2 by evaluating the potential of vision-impairment simulation applications such as XREye for educational purposes. Items 11–13 asked participants to rate the possible pedagogical value of XREye or similar applications on a seven-point Likert scale and asked whether participants would consider using such technologies in a classroom or in a remote-learning setting. Item 14 was an open question to allow participants to suggest improvements to XREye. Item 15 asked participants to rate whether experiencing the demo (testing or watching it) had inspired them to adapt their teaching methods or materials or to use assistive technology in the future for students with vision impairments. Finally, item 16 was another open question, asking participants for their recommendations or ideas for how to make classrooms more accessible for students with vision impairments.
Our questionnaire was designed to evaluate the hypotheses we chose: whether vision-impairment simulations, such as those provided by XREye, can improve sympathy and empathy, and whether education experts perceive pedagogical value in such tools and would like to use them for educational purposes. Future studies can build upon this foundation to assess usability and similar metrics.

5. Results

For our evaluation, we divided our expert-study participants into two groups: users, who tested the simulation themselves (using the HWD), and spectators, who watched live while users were testing the simulation. To be able to compare our results to a baseline (people who have neither tested nor watched our simulation), we decided to compare our results to the results from the “sighted pre-test” conducted by Guarese et al. [42]. The authors conducted an online study with sighted participants prior to their own user study to assess the participants’ sympathy and empathy towards people with vision impairments, based on their everyday experience rather than on any experiment. We included their data (Guarese et al. [42] pre-test participants) as the baseline in our evaluation. Table 1 shows the questionnaire items included in our quantitative evaluation.

5.1. Evaluating the Effects of XREye on Sympathy and Empathy

To answer Q1, we needed to investigate whether XREye is an effective tool for increasing sympathy and empathy for people with vision impairments. Figure 6, Figure 7 and Figure 8 show the ratings from users, spectators, and Guarese et al. [42] pre-test participants for items 8–10 as diverging stacked bar charts. These visualizations allowed us to compare the percentage of positive responses to the percentage of negative and neutral responses. (This is useful when we consider every positive response as an indication that XREye has some positive effect on sympathy and empathy.) For all three items, we can see that users who tested the simulations themselves gave the highest number of positive ratings (compared to negative and neutral), closely followed by ratings from spectators, while ratings from Guarese et al. [42] pre-test participants were predominantly not positive. This is also reflected in the calculated means for all three groups.
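For readers who wish to reproduce this kind of plot, the following matplotlib sketch (with an assumed data layout of raw 1–7 ratings per group) draws a diverging stacked bar chart in which ratings 1–4 extend left of zero and ratings 5–7 extend right:

    import numpy as np
    import matplotlib.pyplot as plt

    def diverging_bars(groups):
        """groups: dict mapping a label (e.g., 'users') to a sequence of ratings in 1..7."""
        colors = plt.cm.RdBu(np.linspace(0.1, 0.9, 7))
        fig, ax = plt.subplots()
        for y, (label, ratings) in enumerate(groups.items()):
            ratings = np.asarray(ratings, dtype=int)
            pct = np.bincount(ratings, minlength=8)[1:8] / ratings.size * 100.0
            left = -pct[:4].sum()   # negative and neutral ratings (1-4) start left of zero
            for level in range(7):
                ax.barh(y, pct[level], left=left, color=colors[level])
                left += pct[level]
        ax.axvline(0.0, color='black', linewidth=1)
        ax.set_yticks(range(len(groups)))
        ax.set_yticklabels(list(groups))
        ax.set_xlabel('% of responses (negative and neutral left, positive right)')
        plt.show()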

Statistical Evaluation

For our statistical evaluation, we decided to test the following hypothesis:
Hypothesis 1 (H1).
Sighted people’s sympathy and empathy with people with vision impairments will increase after experiencing XREye’s vision-impairment simulations.
To test H1, we needed to determine if there were significant differences between the ratings given by our two independent expert-study groups (users and spectators) and the baseline (Guarese et al. [42] pre-test participants) for questionnaire items 8–10. Figure 9 shows boxplots of our data for these items.
To assess the normality of our data, we used the Shapiro–Wilk test because it is well suited for smaller sample sizes (n < 50) and specifically designed for normality testing. The results of this test showed that our data do not follow a normal distribution (for almost all items in our questionnaire). Hence, we used the Mann–Whitney U test (also known as the Mann–Whitney–Wilcoxon test, Wilcoxon rank-sum test, or Wilcoxon–Mann–Whitney test) to compare ratings from each of our expert-study groups to the baseline. We chose this test because it is a nonparametric test (it does not assume or require normality) for analyzing differences between two groups or conditions, works for independent samples (as given by our between-groups design), and can handle groups of different sizes. We performed one-sided Mann–Whitney U tests, with an alternative hypothesis that expected the median of the user or spectator group to be higher than that of the Guarese et al. [42] pre-test participants. We applied Bonferroni correction to account for multiple tests and reduce the likelihood of making a Type I error (incorrectly rejecting the null hypothesis that assumes no differences between groups). Our results in Table 2 show significant results for all six tests, with corresponding medium-to-large effect sizes (calculated with Cliff’s Delta, a nonparametric measure that does not assume normality of the data and captures the direction of the difference in its sign).
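The procedure can be summarized in a few lines of Python with SciPy; the data layout is assumed, and Cliff’s Delta is implemented directly, since SciPy does not provide it:

    import numpy as np
    from scipy.stats import shapiro, mannwhitneyu

    def cliffs_delta(x, y):
        """Nonparametric effect size: P(X > Y) - P(X < Y) over all pairs."""
        diffs = np.subtract.outer(np.asarray(x, float), np.asarray(y, float))
        return (np.sum(diffs > 0) - np.sum(diffs < 0)) / diffs.size

    def compare_to_baseline(group, baseline, n_tests=6, alpha=0.05):
        """One-sided Mann-Whitney U test with Bonferroni correction over n_tests tests."""
        _, p_g = shapiro(group)      # normality checks (both expected to fail here)
        _, p_b = shapiro(baseline)
        u, p = mannwhitneyu(group, baseline, alternative='greater')
        p_corr = min(p * n_tests, 1.0)   # Bonferroni correction
        print(f'U = {u}, corrected p = {p_corr:.4f}, significant = {p_corr < alpha}, '
              f'delta = {cliffs_delta(group, baseline):.2f}')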
We divided our expert-study participants into users and spectators because these two groups technically did not experience the same condition (one group tested the simulation with the HWD, the other group only watched it). However, we believed that watching someone test the simulation live and seeing the user’s XR view projected on a screen (or whiteboard) would be almost as effective as testing the simulation oneself. We assumed that XREye is a powerful enough simulation tool to increase sympathy and empathy even if people do not test the simulation themselves and, therefore, did not expect significant differences between ratings from users and spectators on questionnaire items regarding sympathy and empathy. To test this assumption, we formulated the following hypotheses:
Hypothesis 1b (H1b).
There is no statistically significant difference in reported sympathy between people who tried the XREye demo and people who just observed someone trying the XREye demo;
Hypothesis 1c (H1c).
There is no statistically significant difference in reported empathy between people who tried the XREye demo and people who just observed someone trying the XREye demo.
We performed Mann–Whitney U tests with Bonferroni correction (for our non-normally distributed data) to compare ratings for items 8–10 (regarding sympathy and empathy) from users and spectators. The results in Table 3 show that there is a significant difference (with medium effect size) between ratings from users and spectators for item 10 (empathy), but we did not find evidence for any statistically significant differences for items 8 and 9 (both regarding sympathy). For all our tests, we performed a post hoc analysis of “observed power” through simulation to estimate the probability of correctly rejecting the null hypothesis when it is false, thus avoiding a Type II error (failing to reject the null hypothesis when the alternative hypothesis is actually true). Results are shown in Table 3 in the “Power” column.
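Such a simulation-based power estimate admits several readings; one plausible bootstrap-style sketch, with the replicate count and the two-sided alternative as our assumptions, resamples both groups with replacement, re-runs the test, and reports the fraction of significant replicates:

    import numpy as np
    from scipy.stats import mannwhitneyu

    def observed_power(x, y, alpha=0.05, n_sim=10_000, seed=0):
        """Estimate the probability of rejecting H0 at the given sample sizes."""
        rng = np.random.default_rng(seed)
        x, y = np.asarray(x), np.asarray(y)
        hits = 0
        for _ in range(n_sim):
            xs = rng.choice(x, size=x.size, replace=True)   # bootstrap replicate of group 1
            ys = rng.choice(y, size=y.size, replace=True)   # bootstrap replicate of group 2
            _, p = mannwhitneyu(xs, ys, alternative='two-sided')
            if p < alpha:
                hits += 1
        return hits / n_sim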

5.2. Evaluating Educational Value

To answer Q2 and evaluate the educational value of vision-impairment simulations, such as those provided by XREye, we formulated the following hypotheses:
Hypothesis 2 (H2).
Technologies such as XREye have educational benefits;
Hypothesis 2b (H2b).
Educators want to use XREye or similar technologies to teach about lived experiences of people with vision impairments;
Hypothesis 2c (H2c).
After experiencing XREye, teachers would consider changing their teaching methods or adapting their material to better support students with vision impairments.
To test these hypotheses, we explored the responses to questionnaire items 11 (for H2), 12 and 13 (for H2b), and 15 (for H2c) provided by the participants in our expert study: educators and education students. Figure 10, Figure 11, Figure 12 and Figure 13 show the Likert-scale ratings for these items as diverging stacked bar charts. The bar on the top of each figure shows the responses of all experts who participated in the study (participants), with sample size, mean, and standard deviation. (Sample sizes vary between figures because not all participants responded to all questionnaire items.) The other two bars show ratings from users and spectators separately. As can be seen in these figures, for each questionnaire item, the percentage of positive ratings was larger than the negative and neutral ratings combined, and the percentage of positive ratings given by users who tested the simulation themselves was slightly higher than the percentage of positive ratings given by spectators. Despite this fact, for item 12, the mean rating from users was slightly lower than the mean rating from spectators (even though the number of positive ratings from users was higher).

6. Discussion

Many researchers who work on vision-impairment simulations aim to create tools that can help increase sympathy and empathy for people with vision impairments. However, to the best of our knowledge, there exists very little research [42] actually evaluating the effectiveness of such tools for this purpose. With our work, we set out to close this research gap by running a user study with experts in education to assess the educational value and applicability of vision-impairment simulations to increase sympathy and empathy, using the example of our own solution XREye. The following sections give an interpretation of our results from the perspectives of our hypotheses and answers to our research questions.

6.1. Sympathy and Empathy

Our results in Section 5.1 show that people who tried our simulation (users) or watched someone trying it (spectators) reported significantly higher feelings of sympathy and empathy towards people with vision impairments than people who had neither seen nor tried XREye (Guarese et al. [42] pre-test participants), supporting H1. Figure 6, Figure 7 and Figure 8 show that the percentage of positive responses for the sympathy and empathy items was slightly higher for users than for spectators. However, our statistical analysis found no significant difference between the ratings of these two groups for the two sympathy items (8 and 9). Low statistical power for the respective tests indicates that there is a chance we incorrectly failed to reject the null hypothesis and that a higher number of users would be necessary to make a definitive statement about differences between users and spectators regarding sympathy. Hence, H1b is inconclusive. For item 10 (empathy), on the other hand, we found statistically significant results, showing that users gave higher responses than spectators. Therefore, we reject H1c, which means that our results indicate that an immersive experience can increase empathy more than a presentation of the imagery used in that experience on a 2D screen.
In summary, we can answer Q1 as follows: Our results show that the vision-impairment simulations provided by XREye are able to increase sympathy and empathy for people with vision impairments. In addition, we found that a live first-person immersive experience of these simulations has a stronger effect on empathy than just watching the projected XR view of someone else trying the simulation live. This is consistent with the definitions of sympathy and empathy [37] used in our work and with the research trend that “the embodiment of XR experiences can promote empathy” in users [36]. As sympathy is a feeling of caring for the emotions of others, it was already promoted in the spectators just by watching the projection of the view of the users. On the other hand, empathy (feeling the emotions of others) was significantly greater in users who actually “put themselves in the point of view of people who live with vision impairments” by testing the simulations themselves.
One limitation of our expert study is that it analyzes only short-term sympathetic and empathetic reactions to the XREye demo. In future work, a selected group of users should be invited for follow-up interviews to examine how their behaviors and opinions towards blind and visually impaired people evolve after they have tried the XREye simulations. We also note that the baseline (the data from the pre-test participants of Guarese et al. [42]) to which we compare our results covered a different demographic than our own study; however, to the best of our knowledge, no other comparable data exist, and the two demographics do overlap. The inclusion of this baseline is our best-effort attempt to compare our results quantitatively to the literature.

6.2. The Educational Value

To answer Q2 and evaluate the usefulness of vision-impairment simulations such as XREye for educational purposes, we decided to obtain feedback from experts in the field: educators and education students. This is why our study is not just a user study, but an expert study in which we showed the vision-impairment simulations of XREye to educators and education students and asked for their professional opinion and evaluation of the educational value of such solutions. We asked them to judge the pedagogical value of XREye and similar technologies, indicate whether they would use such technologies to teach students in a classroom or remote setting about vision impairments, and state whether they would consider adapting their teaching methods or using assistive technologies to accommodate visually impaired students after experiencing the XREye demo. Figure 10, Figure 11, Figure 12 and Figure 13 show a higher percentage of positive responses than negative or neutral responses to all the respective questionnaire items. To assess the value of the XREye simulations for educational purposes, we interpret these results as follows: A negative rating (1–3 on the seven-point Likert scale) means a participant does not see educational benefits in the system. We also count neutral ratings in this category, since a neutral rating likewise indicates that a participant does not see value in vision-impairment simulation for educational purposes. We consider any positive rating (5–7) as an indication that, depending on the questionnaire item, people see educational value in our simulations, would use such technology in a classroom or remote setting, or found that XREye made them consider adjusting their teaching methods for students with vision impairments. Furthermore, all mean values for questionnaire items 11–13 and 15 are larger than 5 and, therefore, positive as well. These results support our hypotheses that (H2) technologies such as XREye have educational benefits, (H2b) educators want to use them for teaching, and (H2c) after experiencing the demo, educators would consider adapting their teaching methods.
Hence, to answer Q2, we can conclude that, based on the feedback of 56 experts (educators and education students), our study found that vision-impairment simulations such as XREye are useful for educating people about the effects of vision impairments on perception.

6.3. Qualitative Feedback

In addition to questions on demographics and Likert-scale items to collect quantitative data, we also included some open-ended questions to gather qualitative feedback in order to identify possible directions for future work.
We asked participants what improvements could be made to the XREye demo so that it could be integrated into educational activities. Ten expert-study participants answered this open question (Item 14). One user (P13) suggested a more photorealistic representation of the kitchen as an improvement. P9 suggested virtual environments used for the VR viewing mode could be “virtual classrooms, playrooms, outdoors”. Two participants mentioned they would like improvements to the hardware; P10 mentioned that VR goggles are still too heavy, meaning children can only wear the VR goggles for a short period of time, while P30 asked for “more modern equipment”. Participants also suggested organizing workshops (P10 and P66) and specialist training for teachers (P64). Furthermore, the possibility of having more than one HWD per classroom so that multiple students could use the demo at the same time was also mentioned as a possible improvement (P34). Based on these suggestions, we believe an interesting avenue for future work would be to port XREye to smartphones, so the simulations could be used with devices such as Samsung Gear VR or Google Cardboard, or even just in a video–see-through AR mode on a smartphone display. This would increase the reach of our application, allow many students to use the application at the same time, and, potentially, reduce the costs, since most students already own smartphones. In addition, this would be a lightweight alternative to using an HTC Vive Pro Eye, which might be more suitable for children or teenagers.
Related to Item 15 (see Table 1), Item 16 asked participants for any recommendations or ideas they might have, after testing or watching the XREye demo, on how to make the classroom accessible for students with vision impairments. Only four participants answered this question. One user (P9) mentioned that they would consider increasing the scale of their teaching materials and using bright colors for their materials when teaching a blind or visually impaired student in the classroom.
Due to the small number of answers to the open questions, a more in-depth qualitative analysis was not possible. In future work, we suggest showing a demonstration of the simulations followed by semi-structured interviews with teachers and school directors to gain more insights and ideas on how to adapt and improve XREye for educational purposes.

Exploratory Post Hoc Analysis

Not connected to any formal hypothesis, we also conducted an exploratory analysis to investigate potential differences between ratings from users and spectators in their assessment of the educational value of XREye. Figure 14 shows the distributions of our data.
Tests for normality showed that our data were not normally distributed, so we performed Bonferroni-corrected Mann–Whitney U tests for all pairwise comparisons. Even though Figure 10, Figure 11, Figure 12 and Figure 13 show a higher percentage of positive ratings from users than from spectators, our statistical analysis did not yield any evidence of significant differences between the ratings of these two groups (when applying Bonferroni correction), as can be seen in Table 4. A larger user study could give more insight into potential differences and show whether users who try an immersive experience judge the educational value of vision-impairment simulations differently than spectators who only watch the simulations on a screen.
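
For illustration, the following Python sketch shows this analysis pipeline (normality check, Mann–Whitney U test, Bonferroni correction) using SciPy on hypothetical ratings; it is not the code used to produce Table 4:

```python
# Illustrative sketch of the statistical procedure described above.
# Ratings are hypothetical; this is not the code used for our study.
from scipy.stats import mannwhitneyu, shapiro

users      = [6, 7, 5, 6, 7, 6, 4, 7, 6, 5]  # hypothetical seven-point Likert ratings
spectators = [5, 6, 4, 5, 6, 5, 4, 6, 5, 4]

# Normality check: a small p-value argues against normality and thus
# for a non-parametric test such as Mann-Whitney U.
print("Shapiro-Wilk p-values:", shapiro(users).pvalue, shapiro(spectators).pvalue)

# Two-sided Mann-Whitney U test comparing users and spectators.
u_stat, p_raw = mannwhitneyu(users, spectators, alternative="two-sided")

# Bonferroni correction: multiply each raw p-value by the number of
# pairwise comparisons performed (here, 4 questionnaire items), capped at 1.
n_comparisons = 4
p_corrected = min(p_raw * n_comparisons, 1.0)
print(f"U = {u_stat}, raw p = {p_raw:.3f}, Bonferroni-corrected p = {p_corrected:.3f}")
```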

6.4. Future Work

In the future, we plan to add simulations of other vision impairments, such as our cataract simulation [11], to XREye, and to integrate support for additional hardware, including handheld devices such as smartphones, as well as HWDs with depth cameras (or machine learning to estimate depth from RGB images) to enable depth-dependent effects in AR. As evidenced by the spectator condition in our study, there is still a benefit to non-immersive variations of the experience. By providing vision-impairment simulations on smartphones, we could widen the reach of our applications, since many people already own a smartphone and would not need to buy expensive XR hardware. This could also make it easier to use applications such as XREye in classrooms, avoiding additional hardware costs for schools. We also believe that stand-alone devices with ever-advancing capabilities, such as Meta Quest Pro, could be interesting platforms for simulating vision impairments. Applications such as XREye could, in particular, benefit from a seamless transition between VR and AR viewing modes. We also plan to run additional studies with educators, affected people, and ophthalmology students to refine our simulations and investigate the applicability of XREye as a training tool for medical personnel.

7. Conclusions

We evaluated the effects that vision-impairment simulations can have on sympathy and empathy towards people with vision impairments, as well as the educational value of such tools. As a case study, we used XREye, which provides a set of real-time, eye-tracked XR simulations of vision impairments, previously developed based on reports from affected people and the medical expertise of ophthalmologists. We described the system components used to create the simulations and presented a newly added simulation of achromatopsia. We conducted a user study with 56 experts in education and compared our results to data from related work, which we included in our evaluation as a baseline. Our study found that the vision-impairment simulations provided by XREye were able to increase sympathy and empathy for people with vision impairments. Furthermore, our results indicate that vision-impairment simulations, such as XREye, are useful for educating people about the effects of vision impairments on perception and that educators would be interested in using such tools for educational purposes. Based on the qualitative feedback obtained from our expert study, we believe porting XREye to smartphones could improve its usefulness in educational settings. For future studies, we plan to include tasks in the simulated environments and conduct semi-structured interviews with experts (after they test the experience) to improve XREye as an asset for accessible education.

Author Contributions

Conceptualization, K.K., M.L.M., S.F. and C.E.; methodology, K.K. and C.E.; software, K.K.; validation, K.K. and M.L.M.; formal analysis, K.K., M.L.M. and C.E.; investigation, K.K., M.L.M. and M.H.; resources, K.K. and S.F.; data curation, M.L.M. and M.H.; writing—original draft preparation, K.K., M.L.M., M.H., S.F. and C.E.; writing—review and editing, K.K., S.F. and C.E.; visualization, K.K. and M.L.M.; supervision, S.F. and C.E.; project administration, K.K. All authors have read and agreed to the published version of the manuscript.

Funding

VRVis is funded by BMK, BMAW, Styria, SFG, Tyrol, and Vienna Business Agency, in the scope of COMET—Competence Centers for Excellent Technologies (879730), which is managed by FFG. Elvezio and Feiner were funded by National Science Foundation Grant IIS-1514429.

Informed Consent Statement

Written informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data presented in this study are not publicly available, because the expert-study participants were informed that the raw data from their submitted questionnaires would not be shared outside the research team.

Acknowledgments

The authors would like to thank Roman Haas and BiM—Bildung im Mittelpunkt GmbH for inviting us to demo XREye at their information event on media education, digitization, and technical innovations for educators, thus making this expert study possible. Furthermore, we would like to thank all educators and education students who participated in our study. We also want to thank Matthias Hürbe for his help with the implementation of XREye, Sonja Karst for providing her expertise on ophthalmology, and Michael Wimmer for supervising the earlier phases of the project. Finally, we thank Renan Guarese for providing the pre-test data as the baseline for our study.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
XR: Extended reality
VR: Virtual reality
AR: Augmented reality
HDR: High-dynamic range
HWD: Head-worn display
VA: Visual acuity
AMD: Age-related macular degeneration

References

  1. World Health Organization. World Report on Vision; World Health Organization: Geneva, Switzerland, 2019.
  2. National Eye Institute (NEI). Eye Conditions and Diseases. Available online: https://www.nei.nih.gov/learn-about-eye-health/eye-conditions-and-diseases (accessed on 5 May 2023).
  3. Apple, L.; Apple, M.; Blasch, D. The artificial reduction of visual cues as a means of preparing training programs for low vision clients. Low Vis. Abstr. 1976, 2, 4–6.
  4. Bozeman, L.A.; Chang, C.H.S. Central field loss in object-system and low-vision simulation. Bull. Spec. Educ. 2006, 31, 207–220.
  5. Banks, D.; McCrindle, R. Visual eye disease simulator. In Proceedings of the 7th ICDVRAT with ArtAbilitation, Maia, Portugal, 8–10 September 2008.
  6. Jones, P.R.; Ometto, G. Degraded Reality: Using VR/AR to simulate visual impairments. In Proceedings of the 2018 IEEE Workshop on Augmented and Virtual Realities for Good, Reutlingen, Germany, 18 March 2018; pp. 1–4.
  7. Aballéa, S.; Tsuchiya, A. Seeing for yourself: Feasibility study towards valuing visual impairment using simulation spectacles. Health Econ. 2007, 16, 537–543.
  8. Krösl, K.; Bauer, D.; Schwärzler, M.; Fuchs, H.; Suter, G.; Wimmer, M. A VR-based user study on the effects of vision impairments on recognition distances of escape-route signs in buildings. Vis. Comput. 2018, 34, 911–923.
  9. Krösl, K.; Elvezio, C.; Hürbe, M.; Karst, S.; Wimmer, M.; Feiner, S. ICthroughVR: Illuminating cataracts through virtual reality. In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 655–663.
  10. Krösl, K.; Elvezio, C.; Hürbe, M.; Karst, S.; Feiner, S.; Wimmer, M. XREye: Simulating visual impairments in eye-tracked XR. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, GA, USA, 22–26 March 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 830–831.
  11. Krösl, K.; Elvezio, C.; Luidolt, L.R.; Hürbe, M.; Karst, S.; Feiner, S.; Wimmer, M. CatARact: Simulating cataracts in augmented reality. In Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Porto de Galinhas, Brazil, 9–13 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 682–693.
  12. Krösl, K. Simulating Cataracts in Virtual Reality. In Digital Anatomy: Applications of Virtual, Mixed and Augmented Reality; Springer: Berlin/Heidelberg, Germany, 2021; pp. 257–283.
  13. Krösl, K. Simulating Vision Impairments in Virtual and Augmented Reality. Ph.D. Thesis, TU Wien, Vienna, Austria, 2020.
  14. Werfel, F.; Wiche, R.; Feitsch, J.; Geiger, C. Empathizing audiovisual sense impairments: Interactive real-time illustration of diminished sense perception. In Proceedings of the 7th Augmented Human International Conference, Geneva, Switzerland, 25–27 February 2016; ACM: New York, NY, USA, 2016; p. 15.
  15. Choo, K.T.W.; Balan, R.K.; Wee, T.K.; Chauhan, J.; Misra, A.; Lee, Y. Empath-D: Empathetic design for accessibility. In Proceedings of the 18th International Workshop on Mobile Computing Systems and Applications, Sonoma, CA, USA, 21–22 February 2017; pp. 55–60.
  16. Kim, W.; Choo, K.T.W.; Lee, Y.; Misra, A.; Balan, R.K. Empath-D: VR-based empathetic app design for accessibility. In Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services, Munich, Germany, 10–15 June 2018; pp. 123–135.
  17. Choo, K.T.W.; Balan, R.K.; Lee, Y. Examining Augmented Virtuality Impairment Simulation for Mobile App Accessibility Design. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–11.
  18. Zimmerman Low Vision Simulation Kit. 2013. Available online: https://www.lowvisionsimulationkit.com (accessed on 30 October 2020).
  19. Zagar, M.; Baggarly, S. Low vision simulator goggles in pharmacy education. Am. J. Pharm. Educ. 2010, 74, 83.
  20. Wood, J.; Chaparro, A.; Carberry, T.; Chu, B.S. Effect of simulated visual impairment on nighttime driving performance. Optom. Vis. Sci. 2010, 87, 379–386.
  21. Hwang, A.D.; Tuccar-Burak, M.; Goldstein, R.; Peli, E. Impact of oncoming headlight glare with cataracts: A pilot study. Front. Psychol. 2018, 9, 164.
  22. Meyer, G.W.; Greenberg, D.P. Color-defective vision and computer graphics displays. IEEE Comput. Graph. Appl. 1988, 8, 28–40.
  23. Brettel, H.; Viénot, F.; Mollon, J.D. Computerized simulation of color appearance for dichromats. JOSA A 1997, 14, 2647–2655.
  24. Viénot, F.; Brettel, H.; Mollon, J.D. Digital video colourmaps for checking the legibility of displays by dichromats. Color Res. Appl. 1999, 24, 243–252.
  25. Simulate Your Vision for Family, Friends, and Eye Doctors. Available online: https://visionsimulations.com (accessed on 30 October 2019).
  26. Goodman-Deane, J.; Langdon, P.M.; Clarkson, P.J.; Caldwell, N.H.; Sarhan, A.M. Equipping designers by simulating the effects of visual and hearing impairments. In Proceedings of the 9th International ACM SIGACCESS Conference on Computers and Accessibility, Tempe, AZ, USA, 15–17 October 2007; pp. 241–242.
  27. Leventhal, A. NoCoffee—Vision Simulator for Chrome. 2013. Available online: https://accessgarage.wordpress.com/ (accessed on 30 October 2019).
  28. Lewis, J.; Brown, D.; Cranton, W.; Mason, R. Simulating visual impairments using the Unreal Engine 3 game engine. In Proceedings of the Serious Games and Applications for Health, Braga, Portugal, 16–18 November 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1–8.
  29. Lewis, J.; Shires, L.; Brown, D. Development of a visual impairment simulator using the Microsoft XNA Framework. In Proceedings of the 9th International Conference on Disability, Virtual Reality & Associated Technologies, Laval, France, 10–12 September 2012.
  30. Väyrynen, J.; Colley, A.; Häkkilä, J. Head mounted display design tool for simulating visual disabilities. In Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia, Rovaniemi, Finland, 12–15 December 2016; ACM: New York, NY, USA, 2016; pp. 69–73.
  31. Zavlanou, C.; Lanitis, A. Virtual Reality-Based Simulation of Age-Related Visual Deficiencies: Implementation and Evaluation in the Design Process. In Proceedings of the International Conference on Human Interaction and Emerging Technologies, Nice, France, 22–24 August 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 262–267.
  32. Jones, P.R.; Somoskeöy, T.; Chow-Wing-Bom, H.; Crabb, D.P. Seeing other perspectives: Evaluating the use of virtual and augmented reality to simulate visual impairments (OpenVisSim). npj Digit. Med. 2020, 3, 1–9.
  33. Zhang, Q.; Barbareschi, G.; Huang, Y.; Li, J.; Pai, Y.S.; Ward, J.; Kunze, K. Seeing our Blind Spots: Smart Glasses-based Simulation to Increase Design Students’ Awareness of Visual Impairment. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, Bend, OR, USA, 29 October–2 November 2022; pp. 1–14.
  34. Novartis Pharma AG. ViaOpta Simulator. 2018. Available online: https://www.viaopta-apps.com/ViaOpta-Simulator.html (accessed on 19 September 2019).
  35. Braille Institute of America. VisionSim by Braille Institute. Available online: https://apps.apple.com/us/app/visionsim-by-braille-institute/id525114829 (accessed on 30 October 2019).
  36. Paananen, V.; Kiarostami, M.S.; Lee, L.H.; Braud, T.; Hosio, S. From Digital Media to Empathic Reality: A Systematic Review of Empathy Research in Extended Reality Environments. arXiv 2022, arXiv:2203.01375.
  37. Escalas, J.E.; Stern, B.B. Sympathy and Empathy: Emotional Responses to Advertising Dramas. J. Consum. Res. 2003, 29, 566–578.
  38. Bevan, C.; Green, D.P.; Farmer, H.; Rose, M.; Cater, K.; Fraser, D.S.; Brown, H. Behind the Curtain of the “Ultimate Empathy Machine”: On the Composition of Virtual Reality Nonfiction Experiences; Association for Computing Machinery: New York, NY, USA, 2019.
  39. Campbell, D.; Lugger, S.; Sigler, G.S.; Turkelson, C. Increasing awareness, sensitivity, and empathy for Alzheimer’s dementia patients using simulation. Nurse Educ. Today 2021, 98, 104764.
  40. Adefila, A.; Graham, S.; Clouder, L.; Bluteau, P.; Ball, S. myShoes—The future of experiential dementia training? J. Ment. Health Train. Educ. Pract. 2016, 11, 91–101.
  41. Oh, S.Y.; Bailenson, J.; Weisz, E.; Zaki, J. Virtually old: Embodied perspective taking and the reduction of ageism under threat. Comput. Hum. Behav. 2016, 60, 398–410.
  42. Guarese, R.; Pretty, E.; Fayek, H.; Zambetta, F.; van Schyndel, R. Evoking empathy with visually impaired people through an augmented reality embodiment experience. In Proceedings of the 2023 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Shanghai, China, 25–29 March 2023.
  43. VRVis Research Center. HILITE. Available online: http://www.vrvis.at/projects/hilite (accessed on 10 May 2019).
  44. National Eye Institute (NEI). Refractive Errors. Available online: https://www.nei.nih.gov/learn-about-eye-health/eye-conditions-and-diseases/refractive-errors (accessed on 18 December 2019).
  45. National Eye Institute (NEI). Corneal Conditions. Available online: https://www.nei.nih.gov/learn-about-eye-health/eye-conditions-and-diseases/corneal-conditions (accessed on 4 May 2019).
  46. Shoemaker, J.A. Vision Problems in the US: Prevalence of Adult Vision Impairment and Age-Related Eye Disease in America; National Eye Institute: Bethesda, MD, USA, 2002.
  47. Chung, S.T.; Legge, G.E. Comparing the shape of contrast sensitivity functions for normal and low vision. Investig. Ophthalmol. Vis. Sci. 2016, 57, 198–207.
  48. Thévin, L.; Machulla, T. Three Common Misconceptions about Visual Impairments. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, GA, USA, 22–26 March 2020.
  49. Statistik Austria. Teaching Staff. 2022. Available online: https://www.statistik.at/en/statistics/population-and-society/education/teaching-staff (accessed on 10 May 2019).
Figure 1. Side-by-side view of the unmodified VR view (left eye) and simulated myopia (right eye).
Figure 2. Side-by-side view of the unmodified 360° image view (left eye) and simulated cornea disease (right eye).
Figure 3. Side-by-side view of the unmodified AR view (left eye) and simulated wet AMD (right eye).
Figure 4. Side-by-side view of the unmodified 360° image view (left eye) and simulated complete achromatopsia (right eye).
Figure 5. Setting of the user study.
Figure 6. Comparison of responses of Guarese et al. [42] pre-test participants, spectators, and users to adapted sympathy statement ARS-2 (Item 8 in the expert-study questionnaire, see Table 1): “[] I understood what is bothering blind and visually impaired people in their day-to-day tasks”. Results are aligned by positive ratings and include the number of respondents (N), median (M), and standard deviation (ST).
Figure 7. Comparison of responses of Guarese et al. [42] pre-test participants, spectators, and users to adapted sympathy statement ARS-5 (Item 9 in the expert-study questionnaire, see Table 1): “I was able to recognize the problems that blind and visually impaired people have [].” Results are aligned by positive ratings and include the number of respondents (N), median (M), and standard deviation (ST).
Figure 8. Comparison of responses of Guarese et al. [42] pre-test participants, spectators, and users to adapted empathy statement ARE-3 (Item 10 in the expert-study questionnaire, see Table 1): “[] I felt as though I had a visual impairment.” Results are aligned by positive ratings and include the number of respondents (N), median (M), and standard deviation (ST).
Figure 9. Boxplots of distributions of Likert-scale ratings from users, spectators, and Guarese et al. [42] pre-test participants for questionnaire items 8–10. (8: “[] I understood what is bothering blind and visually impaired people in their day-to-day tasks.”, 9: “I was able to recognize the problems that blind and visually impaired people have []”, 10: “[] I felt as though I had a visual impairment.”; see Table 1 for full statements.)
Figure 10. Comparison of responses from all expert-study participants, spectators, and users to Item 11 (regarding pedagogical value, see Table 1), aligned by positive ratings, including the number of responses (N), median (M), and standard deviation (ST).
Figure 11. Comparison of responses from all expert-study participants, spectators, and users to Item 12 (regarding classroom use, see Table 1), aligned by positive ratings, including the number of responses (N), median (M), and standard deviation (ST).
Figure 12. Comparison of responses from all expert-study participants, spectators, and users to Item 13 (regarding remote use, see Table 1), aligned by positive ratings, including the number of responses (N), median (M), and standard deviation (ST).
Figure 13. Comparison of responses from all expert-study participants, spectators, and users to Item 15 (regarding adapting teaching methods, see Table 1), aligned by positive ratings, including the number of responses (N), median (M), and standard deviation (ST).
Figure 14. Boxplots of distributions of Likert-scale ratings from users and spectators for questionnaire items 11–13 (11: pedagogical value, 12: classroom use, 13: remote use) and 15 (adapting teaching methods).
Table 1. Questionnaire items used for quantitative analyses in our expert study. Sympathy and empathy items are taken from Guarese et al. [42], adapted from Escalas and Stern [37]. Items 11–15 are our own questionnaire items, used in our expert study.

Item 8, Sympathy (ARS-2):
  Expert study (Adapted ARS III): Based on what was happening in the experiment, I understood what is bothering blind and visually impaired people in their day-to-day tasks.
  Guarese et al. [42] pre-test (Adapted ARS II): Based on what was happening in my experiences, I understood what was bothering blind and visually impaired people in their day-to-day tasks.
Item 9, Sympathy (ARS-5):
  Expert study (Adapted ARS III): I was able to recognize the problems that blind and visually impaired people have in their day-to-day tasks.
  Guarese et al. [42] pre-test (Adapted ARS II): I was able to recognize the problems that blind and visually impaired people had in my experiences with them.
Item 10, Empathy (ARE-3):
  Expert study (Adapted ARS III): While experiencing the simulation, I felt as though I had a visual impairment.
  Guarese et al. [42] pre-test (Adapted ARS II): While perceiving the day-to-day tasks of blind and visually impaired people, I felt as though those events were happening to me.
New expert-study items:
Item 11, Pedagogical value: I think that there is pedagogical value in the XREye simulation and similar technologies.
Item 12, Classroom use: I would use XREye or similar technologies to help teach students in a classroom setting about the lived experiences of individuals with vision impairments.
Item 13, Remote use: I would use XREye or similar technologies to help teach students in a remote-learning setting (students using it at home) about the lived experiences of individuals with vision impairments.
Item 15, Adapt teaching: After the experiment, in the case of having a blind or visually impaired student, would you consider adapting your teaching methods or using assistive technologies?
Table 2. Results of Mann–Whitney U tests (using Bonferroni correction), comparing seven-point Likert-scale ratings from users or spectators to ratings from Guarese et al. [42] pre-test participants (PCPs) for questionnaire items 8–10. (8: “[] I understood what is bothering blind and visually impaired people in their day-to-day tasks,” 9: “I was able to recognize the problems that blind and visually impaired people have [],” 10: “[] I felt as though I had a visual impairment.” See Table 1 for full statements.) All values are rounded to 3 decimal places.

Item 8 (Sympathy, ARS-2): users vs. pre-test PCPs: p < 0.001, effect size 0.690, power 0.999; spectators vs. pre-test PCPs: p < 0.001, effect size 0.560, power 0.992.
Item 9 (Sympathy, ARS-5): users vs. pre-test PCPs: p < 0.001, effect size 0.465, power 0.878; spectators vs. pre-test PCPs: p = 0.001, effect size 0.400, power 0.824.
Item 10 (Empathy, ARE-3): users vs. pre-test PCPs: p < 0.001, effect size 0.801, power 1.000; spectators vs. pre-test PCPs: p = 0.001, effect size 0.416, power 0.811.
Table 3. Results of Mann–Whitney U tests (using Bonferroni correction), comparing seven-point Likert-scale ratings from users to ratings from spectators for questionnaire items 8–10. (8: “[] I understood what is bothering blind and visually impaired people in their day-to-day tasks,” 9: “I was able to recognize the problems that blind and visually impaired people have [],” 10: “[] I felt as though I had a visual impairment.” See Table 1 for full statements.) All values are rounded to 3 decimal places.

Item 8 (Sympathy, ARS-2): p = 0.180, effect size 0.203, power 0.095.
Item 9 (Sympathy, ARS-5): p = 0.537, effect size 0.094, power 0.021.
Item 10 (Empathy, ARE-3): p < 0.001, effect size 0.602, power 0.923.
Table 4. Results of Mann–Whitney U tests (using Bonferroni correction), comparing seven-point Likert-scale ratings from users to ratings from spectators for questionnaire items 11–13 (11: pedagogical value, 12: classroom use, 13: remote use) and 15 (adapting teaching methods). All values are rounded to 3 decimal places.

Item 11 (Pedagogical value): p = 0.034, effect size 0.332, power 0.301.
Item 12 (Classroom use): p = 0.961, effect size 0.009, power 0.009.
Item 13 (Remote use): p = 0.243, effect size 0.178, power 0.069.
Item 15 (Adapt teaching): p = 0.250, effect size 0.191, power 0.072.
