Article

Comparing Performance and Preference of Visually Impaired Individuals in Object Localization: Tactile, Verbal, and Sonification Cueing Modalities

Shatha Abu Rass, Omer Cohen, Eliav Bareli and Sigal Portnoy
1 Department of Occupational Therapy, Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
2 Department of Electrical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv 6997801, Israel
* Author to whom correspondence should be addressed.
Technologies 2023, 11(5), 127; https://doi.org/10.3390/technologies11050127
Submission received: 17 August 2023 / Revised: 29 August 2023 / Accepted: 15 September 2023 / Published: 16 September 2023
(This article belongs to the Topic Smart Healthcare: Technologies and Applications)

Abstract

Audio guidance is a common means of helping visually impaired individuals to navigate, thereby increasing their independence. However, the differences between guidance modalities for locating objects in 3D space have yet to be investigated. The aim of this study was to compare the time, the hand’s path length, and the satisfaction levels of visually impaired individuals using three automatic cueing modalities: pitch sonification, verbal, and vibration. We recruited 30 visually impaired individuals (11 women, average age 39.6 ± 15.0), who were asked to locate a small cube, guided in turn by each of three cueing modalities: sonification (a continuous beep that increases in frequency as the hand approaches the cube), verbal prompting (“right”, “forward”, etc.), and vibration (via five motors, attached to different locations on the hand). The three cueing modalities were automatically activated by computerized motion capture systems. The subjects separately answered satisfaction questions for each cueing modality. The main finding was that the time to find the cube was significantly longer with the sonification cueing than with the verbal cueing (p = 0.016). There were no significant differences in the hand path length or the subjects’ satisfaction. It can be concluded that verbal guidance may be the most effective for guiding people with visual impairment to locate an object in a 3D space.

1. Introduction

In our modern era of advanced technology, the integration of intelligent features within domestic and professional settings, as exemplified by initiatives like the GiraffPlus project [1], has ushered in capabilities that extend beyond mere image capture. These capabilities encompass fall detection, gesture recognition [2], and activity identification (standing, sitting, walking, and so on) [3]. This technological advancement holds promise for enhancing the functional independence of individuals with disabilities [4], a prospect that assumes a heightened significance, given the escalating prevalence of moderate-to-severe visual impairments and blindness [5]. Visual impairment predicts both accelerated deterioration in physical functioning and an increased mortality risk, particularly among severely visually impaired adults [6]. This demographic reality underscores the pressing need for home and/or office interventions tailored to task facilitation and safety provisioning. Within this context, the concept of object localization within smart environments emerges as a pivotal avenue to amplify the self-sufficiency of visually impaired individuals. This entails the use of contemporary technology to orchestrate tactile or auditory cues that guide individuals toward objects of interest such as a television remote control, a mobile device, or house keys. This technological capability also extends to addressing safety concerns such as locating a wandering toddler. Although strides have been made in developing object-detection algorithms, by utilizing both regular and 3D cameras [7,8], the optimal modality for facilitating such guidance remains an open question. In instances where visual cues are not feasible, auditory or tactile cues, integrated within cognitive learning paradigms, emerge as a viable alternative [9,10]. While prior research demonstrates the proficiency of visually impaired individuals in localizing auditory cues [11], the practicality of incorporating a distinct auditory source within every household item remains a challenge. Consequently, there exists an imperative to explore an auditory feedback modality that embodies not only intuitiveness but also efficacy in guiding the localization of objects within a three-dimensional spatial environment.
The existing literature underscores the role of feedback mechanisms—encompassing verbal instructions, sonification (conveying information through sound), and tactile cues—in aiding navigation for individuals with visual impairments. For instance, in the study by Bharadwaj et al. [12], a waist-worn vibratory interface was compared against conventional auditory directives, revealing that tactile cues are particularly effective in noisy environments. Delogu et al. [13] extended this understanding by employing sonification based on geographical locations, highlighting that spatial representation is not confined to the visual modality. Similarly, another study [14] examined real-time scene sonification for individuals with visual impairment, comparing modes such as image sonification, obstacle sonification, and path sonification. The findings underscore the value of high-level scene information for effective navigation and learning efficiency, while acknowledging the challenge of reconciling comprehensive scene details with navigational speed. Despite these insights into the potential of tactile and auditory cues to enhance navigational proficiency, the efficacy of these cueing modalities for localizing objects within a three-dimensional context remains unexplored.
The present study endeavors to address this gap by undertaking a comprehensive investigation into the temporal and spatial dimensions of performance among visually impaired participants. Specifically, we aim to compare the efficacy of three automatic cueing modalities—verbal instructions, pitch-based sonification, and tactile vibrations—during an object localization task within a three-dimensional spatial domain. Our analysis focuses on the following factors: the time to locate an object, the path traversed by the hand until reaching the object, and the user satisfaction from each of the three cueing modalities. The results of this study help to shed light on the most effective mode of guidance for object localization in a three-dimensional environment. This initiative aims to encourage the development of cutting-edge technologies designed to assist visually impaired individuals in their daily navigation and location-based tasks.

2. Materials and Methods

2.1. Participants

We recruited 30 adults with visual impairment, using a convenience and snowball sampling approach. Inclusion criteria were based on the Ministry of Welfare’s guidelines for visual impairment, which encompass individuals with total blindness, visual acuity of 3/60 or worse in the better eye even with corrective eyewear, and/or a visual field of less than 20°. Participants were also required to possess normal or corrected hearing. Exclusion criteria included the presence of neurological or orthopedic conditions that could impact the movement of the dominant hand. Ethical approval for the study was obtained prior to commencement from the Ethics Committee of Tel Aviv University.

2.2. Tools

The cueing modalities (verbal, pitch sonification, and vibration) were provided using the following tools:
  • For the verbal and pitch sonification cueing, a motion capture system with six infrared cameras (Qualisys Medical AB, Göteborg, Sweden) was calibrated according to the manufacturer’s manual. The motion tracking system automatically identified 4 passive reflective markers, placed on a small 3D-printed box (2.5 cm in length, width, and height, with 4 markers attached to a base below it; Figure 1a), and 4 additional markers, placed on a cluster attached to the back of the subject’s hand (Figure 1a). The system streamed the 3D coordinates of these markers in real time, at 100 Hz, to custom LabVIEW software (V2019, National Instruments, Austin, TX, USA). This code calculated the position of the box relative to the hand in real time and provided the auditory feedback. The distance used for the feedback was the minimal distance between any marker on the hand cluster and any marker on the box. Two auditory cues were configured: verbal cueing, consisting of the words “left”, “right”, “up”, “down”, “forward”, and “back”, in English, which was the second language of all of our participants; and pitch sonification, a continuous tone whose pitch increased as the subject’s hand moved closer to the box and decreased as it moved away (a minimal sketch of this distance-to-pitch mapping is shown after this list).
  • For the vibration feedback, a Leap Motion sensor (Motion Control, San Francisco, CA, USA) was used to track the right hand of the subject. The hand’s coordinates were streamed to a custom processing code, which extracted the coordinates of the distal section of the 3rd finger. The coordinates of the box were pre-entered into this code, which calculated the distance between the hand and the box and decided which vibration motor should be activated (a sketch of one possible motor-selection scheme also follows this list). The command to activate a given motor was sent via Bluetooth to an Arduino Micro (with a Bluetooth shield), which was placed in a 3D-printed box strapped to the subject’s forearm (Figure 1b). Five vibration motors (shaftless vibration motor, 10 × 2.0 mm; Pololu, Las Vegas, NV, USA) were connected to the Arduino and taped to the skin of the subject’s hand and wrist, according to the locations depicted in Figure 1b.
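To make the auditory mapping concrete, the following Python sketch illustrates the kind of computation described above: for each streamed frame, the minimal pairwise distance between the hand-cluster markers and the box markers is found and mapped to a tone frequency that rises as the hand approaches the target. The function names, the frequency range, and the linear mapping are illustrative assumptions of ours; the actual implementation was custom LabVIEW code.

```python
import numpy as np

def min_marker_distance(hand_markers, box_markers):
    """Smallest Euclidean distance between any hand-cluster marker
    and any box marker (both inputs are N x 3 arrays, in meters)."""
    hand = np.asarray(hand_markers)[:, None, :]   # shape (H, 1, 3)
    box = np.asarray(box_markers)[None, :, :]     # shape (1, B, 3)
    return float(np.linalg.norm(hand - box, axis=-1).min())

def distance_to_pitch(distance_m, max_distance_m=0.5,
                      f_min_hz=200.0, f_max_hz=2000.0):
    """Map hand-box distance to a tone frequency: closer hand -> higher pitch.
    The frequency range and the linear mapping are assumptions, not the
    study's actual parameters."""
    d = np.clip(distance_m, 0.0, max_distance_m)
    closeness = 1.0 - d / max_distance_m          # 0 when far, 1 at the box
    return f_min_hz + closeness * (f_max_hz - f_min_hz)

# Example: one streamed frame (100 Hz) of 4 hand markers and 4 box markers
hand = [[0.10, 0.02, 0.05], [0.12, 0.02, 0.05], [0.10, 0.04, 0.05], [0.12, 0.04, 0.05]]
box = [[0.40, 0.30, 0.10], [0.42, 0.30, 0.10], [0.40, 0.32, 0.10], [0.42, 0.32, 0.10]]
d = min_marker_distance(hand, box)
print(f"distance = {d:.3f} m -> pitch = {distance_to_pitch(d):.0f} Hz")
```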
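Similarly, for the vibration modality, the sketch below shows one plausible way a motor could be chosen from the fingertip-to-box displacement: the dominant axis of the remaining displacement selects the motor (or the two-motor “upward” pattern) according to the numbering in Figure 1b. The axis conventions, the threshold, and the decision rule are our own assumptions; the study’s custom processing code is not reproduced here.

```python
def select_vibration_cue(finger_xyz, box_xyz, reach_threshold_m=0.03):
    """Choose a vibration cue from the fingertip-to-box displacement.
    Assumed axes: x left(-)/right(+), y back(-)/forward(+), z down(-)/up(+).
    Motor numbering follows Figure 1b; the rule itself is illustrative."""
    dx = box_xyz[0] - finger_xyz[0]
    dy = box_xyz[1] - finger_xyz[1]
    dz = box_xyz[2] - finger_xyz[2]
    if max(abs(dx), abs(dy), abs(dz)) < reach_threshold_m:
        return "stop"                                   # hand has reached the box
    # The axis with the largest remaining displacement drives the cue
    axis = max(("x", abs(dx)), ("y", abs(dy)), ("z", abs(dz)), key=lambda t: t[1])[0]
    if axis == "x":
        return "motor 2 (right)" if dx > 0 else "motor 1 (left)"
    if axis == "y":
        return "motor 3 (forward)" if dy > 0 else "motor 4 (backward)"
    return "motors 1+2 (upward)" if dz > 0 else "motor 5 (downward)"

# Example: box 35 cm forward and slightly to the right of the fingertip
print(select_vibration_cue((0.0, 0.0, 0.0), (0.10, 0.35, 0.05)))   # motor 3 (forward)
```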
For each cueing modality, the box was positioned at varying locations, situated 50 cm away from the initial hand placement (Figure 2).
A post-experience subjective questionnaire, evaluating user satisfaction with each cueing modality, was administered using a Likert scale. Immediately following exposure to each cueing modality, participants were prompted to rate two specific aspects: firstly, the effectiveness of the cueing modality in aiding them to track the target box and, secondly, their overall satisfaction with the assistance provided by the cueing modality. Responses were rated on a scale spanning from “1”—indicating “not at all”—to “5”—denoting “very much so”. Additionally, an avenue for qualitative commentary was provided to allow participants to convey any additional insights or feedback.

2.3. Procedure

The participants were randomly assigned to three groups (N = 10 per group). Each group experienced the cueing modalities in a different order, to control for order effects. Seated comfortably on a chair, every participant faced a table where the box was situated. A comprehensive demonstration of the cueing modalities was conducted by the researcher. This entailed the researcher guiding the subject’s hand toward and away from the target box, placed at different locations and heights (but maintaining the 50 cm aerial distance from the initial position of the hand), while the auditory or tactile cues were concurrently activated. During this demonstration phase, the researcher explained the significance of the cues and their purpose in relation to the hand’s motion. This step was taken to foster familiarity and understanding among the participants. Following the demonstration, the participants were instructed to place their right hand at the designated starting point, defined by three distinct stickers (Figure 2). Subsequently, prompted by a verbal “go” from the researcher, the participants embarked on the task of locating the box. The trial was repeated three times, once with each cueing modality. For each modality, the box’s location was altered among the three positions depicted in Figure 2 (maintaining 50 cm from the starting position of the hand). After each trial, the participants rated their satisfaction with the respective cueing modality.

2.4. Post Processing

The time to find the box and the hand’s travel path length were analyzed for each of the three cueing modalities. The Friedman test was used to compare the outcome measures between the three cueing modalities. The Wilcoxon signed-rank test was used for post hoc comparisons. The effect size, r, was calculated using the following equation [15]:
r = Z/√N
Statistical significance was set at p < 0.05. Unfortunately, we encountered technical problems saving the coordinates of the hand during the vibration cueing trials, so, for this modality, only the time to complete the task was analyzed.
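As an illustration of the analysis described above, the following Python sketch runs a Friedman test across the three modalities, follows up with Wilcoxon signed-rank post hoc comparisons, and converts each comparison to the effect size r = Z/√N. The timing data are synthetic, and the recovery of Z from the two-sided p-value via the normal approximation is our own assumption; the study itself did not use this code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30                                          # paired observations per modality
# Synthetic task-completion times in seconds; NOT the study's data
verbal = rng.normal(8.0, 2.0, n)
sonification = rng.normal(10.0, 2.5, n)
vibration = rng.normal(9.0, 2.5, n)

# Omnibus comparison of the three related samples
chi2, p_friedman = stats.friedmanchisquare(verbal, sonification, vibration)
print(f"Friedman: chi2 = {chi2:.2f}, p = {p_friedman:.3f}")

def wilcoxon_with_effect_size(a, b):
    """Wilcoxon signed-rank test plus r = Z / sqrt(N).
    Z is recovered from the two-sided p-value (normal approximation);
    N is taken here as the number of paired observations."""
    _, p = stats.wilcoxon(a, b)
    z = stats.norm.isf(p / 2.0)
    return p, z / np.sqrt(len(a))

for name, other in [("sonification", sonification), ("vibration", vibration)]:
    p, r = wilcoxon_with_effect_size(verbal, other)
    print(f"verbal vs {name}: p = {p:.3f}, r = {r:.2f}")
```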

3. Results

Thirty participants were recruited (nineteen males and eleven females; mean ± SD age of 39.6 ± 15.0 years). Thirteen (43.3%) participants in the study population had full blindness, thirteen (43.3%) had vision below 3/60, one (3.3%) had vision below 3/61, one (3.3%) had vision below 3/62, and two (6.6%) were blind in one eye with severely reduced vision in the other.
Statistically significant differences were observed in the time it took the subjects to complete the task among the three cueing modalities (p = 0.034; Figure 3a). A subsequent post hoc analysis revealed a statistically significant prolonged duration for locating the box using pitch sonification compared to verbal cueing (p = 0.016; r = −0.323). In contrast, no statistically significant differences were found in the hand path lengths between verbal and pitch sonification cueing (p = 0.082; r = −0.317), although a trend toward a lengthier path was noted with pitch sonification (Figure 3b).
No statistically significant differences were detected in the users’ satisfaction questionnaires. The level of assistance provided by the cueing modalities received median (interquartile range) ratings of 4.5 (1) for pitch sonification, 5.0 (1) for verbal cues, and 4.5 (2) for vibration cues (p = 0.928). Similarly, satisfaction with the cueing modalities received median (interquartile range) ratings of 4.0 (2) for pitch sonification, 4.0 (2) for verbal cues, and 3.5 (2) for vibration cues (p = 0.302). Regarding pitch sonification, participants primarily expressed concern that it lacked directional guidance toward the object, focusing solely on the hand–box distance. In the case of verbal cueing, the predominant complaint pertained to its delivery in a non-native language for the subjects. Lastly, concerning vibration cueing, participants found it challenging to discern the active motor while their hand was in motion, necessitating high concentration levels.
Additional statistical analyses were conducted to explore potential performance differences between the sexes. No statistically significant differences were observed in task completion times or hand travel path lengths for any cueing modality between the 11 female and 19 male participants. However, a trend toward shorter hand travel path lengths for male participants compared to females was noted when guided by the verbal cueing modality (p = 0.072; r = −0.328; Figure 4). Also worth noting, females showed greater variability in their hand travel path lengths compared to males (Figure 4).
We conducted further correlation analyses between participants’ age and their performance, revealing no significant correlations (p-values ranging from 0.102 to 0.831).

4. Discussion

In this study, we conducted a comparative analysis of time, hand path length, and user satisfaction during cube localization using three distinct cueing methods (sonification, verbal, and vibration). While no notable differences emerged in hand path length or satisfaction levels, a significant difference was found in localization time: the verbal cueing modality yielded a shorter localization time, underscoring its potential role in designing navigation aids for the visually impaired.
Verbal guidance was found to be the most effective cueing modality in terms of the time to locate the object. The time difference when locating objects is a crucial factor to consider when designing navigation aids for visually impaired individuals for several key reasons. First, locating commonly used objects, e.g., the air conditioner’s remote control or the house keys, is an essential aspect of daily living. Minimizing the time it takes to find these items, through effective cueing modalities, directly contributes to the convenience and efficiency of visually impaired individuals’ everyday routines. Furthermore, swift object localization streamlines routine tasks such as turning on the air conditioner or unlocking a door. Reduced search times enhance the speed and efficiency with which visually impaired individuals can complete these tasks, ultimately improving their overall quality of life by enabling them to independently and quickly carry out daily activities. This also helps in reducing their reliance on external assistance [16]. Since prolonged search times for frequently used items may lead to frustration and stress, navigation aids that minimize search times help mitigate these negative emotions, contributing to a more positive and satisfying user experience as well as improved mental well-being [17].
The shorter time it took to find the box using the verbal cueing modality might be explained by various factors such as the cognitive load, auditory processing, and familiarity of the language to visually impaired individuals. We have a natural ability to interpret and follow verbal instructions. These factors might make a larger contribution to the successful use of auditory cues in the visually impaired population. There exists a body of empirical evidence indicating that signal perception and processing mechanisms in visually impaired individuals, particularly those who experienced early-onset blindness, exhibit discernible deviations from those of their sighted counterparts [18]. Markedly, these individuals, besides manifesting an elevated capacity for perceptual auditory processing, were observed to demonstrate notable competencies in higher-order cognitive functions, encompassing domains such as musical aptitude, linguistic proficiency, and memory skills [18]. A behavioral–electrophysiological study that compared auditory memory in congenitally blind adults and matched sighted controls concluded that the former group more efficiently encodes auditory verbal material [19].
In contrast to the effectiveness of verbal cueing, sonification poses a few challenges, mainly its lack of information about the direction to the target. Also, the continuous beeping might be annoying, and the cognitive workload required to convert the beeping sound into spatial information might contribute to the observed delay, as suggested by [20]. While target sonification might prove advantageous for sighted individuals, particularly in scenarios involving intricate visual guidance such as surgery [21], for the visually impaired population, sonification may present challenges due to the inherent need for comprehensive auditory cues and efficient cognitive processing. Hence, pitch sonification is the least advisable cueing modality for object localization among the three assessed in this study.
The third cueing modality introduced in this study was tactile vibration. In scenarios where auditory cues might be hindered by a noisy environment or compete for auditory attention among visually impaired individuals, tactile cues offer a viable alternative. Vibration, as a tactile cueing mechanism, can be engaged via a single motor, as found in mobile devices (used, for example, by Google Maps to alert the pedestrian that a turn is imminent), or it can be implemented through multiple motors distributed across the body or limb, as demonstrated in the present study. Moreover, the activation of vibration cues can encompass diverse patterns that require user differentiation. However, it is important to acknowledge that this complexity in cue discrimination may potentially augment the cognitive workload, as observed in related studies, e.g., [22]. Another concern is the attachment of the vibration motors to the body, which might have negatively affected the participants’ ability to accurately interpret the cues. It is possible that adjustments in the placement of the motors could enhance the effectiveness of vibration cues, for example, placing them in different locations on the body (on different limbs or a hip-worn belt, as in [12]). In summary, the utilization of vibration-based cueing exhibits both advantageous and disadvantageous aspects. However, with respect to our empirical observations, its superiority over verbal cueing remains inconclusive, owing to the varying levels of proficiency exhibited by different participants.
Although there were no statistically significant differences between men and women, we found a trend toward shorter hand travel path lengths for men compared to women when assisted by the verbal cueing modality. Sex-related differences are complex and multifaceted, often arising from a combination of biological, psychological, and sociocultural factors. In our study, possible differences may be attributed to physical characteristics, e.g., arm length, and to differences in motor control and coordination during reaching [23,24]. Additionally, men and women might exhibit variations in responsiveness to different types of cues, related to factors such as attention or reaction time [25,26].
While this study offers valuable insights, it is important to acknowledge certain limitations. Notably, the sample size was relatively small, which may influence the generalizability of the findings. However, it is worth noting that the moderate effect size associated with the primary outcome highlights a discernible distinction among the modalities, suggesting potential practical relevance despite the sample-size constraint. Additionally, the experimental tests were conducted within a controlled laboratory setting. Consequently, the ecological validity of the findings in real-world scenarios might be subject to variation. Furthermore, the placement of the target box at different positions within the subjects’ reachable area could introduce a degree of variability that may impact the robustness of the results. Lastly, our results should not be applied to cues provided for the localization of moving objects, as the perception of speed in individuals with visual impairments might be compromised [27]. While these limitations warrant consideration, the study’s outcomes remain instructive and pave the way for future investigations to expand upon these findings in more diverse and ecologically valid contexts.

5. Conclusions and Future Directions

We compared the efficacy of three automatic cueing modalities—verbal instructions, pitch-based sonification, and tactile vibrations—during an object localization task within a three-dimensional spatial domain. Our results suggest that verbal cueing is the optimal modality that reduces the localization time of an object. We believe that when designing navigation aids for visually impaired individuals to locate specific objects, the time difference in object localization remains a crucial factor. Swift and efficient object localization directly contributes to everyday convenience, task efficiency, user autonomy, and overall well-being. By prioritizing minimized search times, designers can create navigation aids that empower visually impaired individuals to efficiently manage their environment, interact with others, and complete tasks with greater ease and independence. Future studies might consider the potential benefits of combining audio guidance and vibration feedback, albeit with awareness of the intricacies of sensory integration among visually impaired individuals. Individuals with visual impairments rely on their other senses, e.g., hearing and touch, to gather information about their environment. However, it is crucial to be aware that these senses might not work the same way in everyone, and individuals may have different abilities to effectively integrate sensory information. Notably, prior research suggests that there is attenuated multisensory spatial integration in this population, underscoring the need for a nuanced approach when investigating the synergies among these modalities [28].
Anticipating a proximate future, we envision an interconnected environment encompassing residential, occupational, and public spaces that is capable of discerning and acknowledging multiple objects within its domain. This envisioned environment would be responsive to vocalized inquiries, exemplified by interactions such as “Greetings, abode. Could you assist me in locating my domicile keys?” Through a comprehensive framework, the system would adeptly assimilate the geographical coordinates of the querent alongside the designated whereabouts of the object, thereby initiating preliminary guidance. For instance, an informative prompt such as “The keys have been identified within the kitchen precinct, on the counter” would be disseminated. Subsequently, the system would engage in continuous monitoring of the individual’s locomotion. Upon the individual’s convergence with the spatial vicinity harboring the sought-after item, a refined and contextually tailored cueing mechanism would ensue. This specialized guidance would culminate in the orchestration of precise manual movements, orchestrating the seamless alignment of the user’s hand with the discreet location of the designated object.

Author Contributions

Conceptualization, S.P.; methodology, O.C., E.B. and S.P.; software, O.C., E.B. and S.P.; validation, O.C., E.B. and S.P.; formal analysis, O.C., E.B. and S.P.; investigation, S.P.; resources, S.P.; data curation, S.A.R., O.C. and E.B.; writing—original draft preparation, S.P.; writing—review and editing, S.P.; visualization, S.P.; supervision, S.P.; project administration, S.A.R.; funding acquisition, S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was fully funded by the Lior Sima Medical Research Fund.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Tel Aviv University.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The SPSS file with the data used in this study can be found at Portnoy, Sigal (2023), “Visually impaired paper”, Mendeley Data, V1, doi: 10.17632/75cz4xmzdv.1.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Palumbo, F.; Ullberg, J.; Štimec, A.; Furfari, F.; Karlsson, L.; Coradeschi, S. Sensor Network Infrastructure for a Home Care Monitoring System. Sensors 2014, 14, 3833–3860. [Google Scholar] [CrossRef] [PubMed]
  2. Choi, E.; Kwon, S.; Lee, D.; Lee, H.; Chung, M.K. Towards Successful User Interaction with Systems: Focusing on User-Derived Gestures for Smart Home Systems. Appl. Ergon. 2014, 45, 1196–1207. [Google Scholar] [CrossRef] [PubMed]
  3. Krishnan, N.C.; Cook, D.J. Activity Recognition on Streaming Sensor Data. Pervasive Mob. Comput. 2014, 10, 138–154. [Google Scholar] [CrossRef] [PubMed]
  4. Ocepek, J.; Roberts, A.E.; Vidmar, G. Evaluation of Treatment in the Smart Home IRIS in Terms of Functional Independence and Occupational Performance and Satisfaction. Comput. Math. Methods Med. 2013, 2013, 926858. [Google Scholar] [CrossRef] [PubMed]
  5. Stevens, G.A.; White, R.A.; Flaxman, S.R.; Price, H.; Jonas, J.B.; Keeffe, J.; Leasher, J.; Naidoo, K.; Pesudovs, K.; Resnikoff, S.; et al. Global Prevalence of Vision Impairment and Blindness: Magnitude and Temporal Trends, 1990–2010. Ophthalmology 2013, 120, 2377–2384. [Google Scholar] [CrossRef]
  6. Verbeek, E.; Drewes, Y.; Gussekloo, J. Visual Impairment as a Predictor for Deterioration in Functioning: The Leiden 85-plus Study. BMC Geriatr. 2022, 22, 397. [Google Scholar] [CrossRef]
  7. Göncz, L.; Majdik, A.L. Object-Based Change Detection Algorithm with a Spatial AI Stereo Camera. Sensors 2022, 22, 6342. [Google Scholar] [CrossRef]
  8. Ryu, H.W.; Tai, J.H. Object Detection and Tracking Using a High-Performance Artificial Intelligence-Based 3D Depth Camera: Towards Early Detection of African Swine Fever. J. Vet. Sci. 2022, 23, e17. [Google Scholar] [CrossRef]
  9. Oscari, F.; Secoli, R.; Avanzini, F.; Rosati, G.; Reinkensmeyer, D.J. Substituting Auditory for Visual Feedback to Adapt to Altered Dynamic and Kinematic Environments during Reaching. Exp. Brain Res. 2012, 221, 33–41. [Google Scholar] [CrossRef]
  10. Lombera, E.N.; Etchemendy, P.E.; Spiousas, I.; Vergara, R.O. Auditory Distance Perception by Blind and Sighted Participants for Both Within- and beyond-Reach Sources. J. Exp. Psychol. Hum. Percept. Perform. 2022, 48, 467–480. [Google Scholar] [CrossRef]
  11. Finocchietti, S.; Esposito, D.; Gori, M. Monaural Auditory Spatial Abilities in Early Blind Individuals. Iperception 2023, 14, 20416695221149638. [Google Scholar] [CrossRef] [PubMed]
  12. Bharadwaj, A.; Shaw, S.B.; Goldreich, D. Comparing Tactile to Auditory Guidance for Blind Individuals. Front. Hum. Neurosci. 2019, 13, 443. [Google Scholar] [CrossRef] [PubMed]
  13. Delogu, F.; Palmiero, M.; Federici, S.; Plaisant, C.; Zhao, H.; Belardinelli, O. Non-Visual Exploration of Geographic Maps: Does Sonification Help? Disabil. Rehabil. Assist. Technol. 2010, 5, 164–174. [Google Scholar] [CrossRef]
  14. Hu, W.; Wang, K.; Yang, K.; Cheng, R.; Ye, Y.; Sun, L.; Xu, Z. A Comparative Study in Real-Time Scene Sonification for Visually Impaired People. Sensors 2020, 20, 3222. [Google Scholar] [CrossRef]
  15. Fritz, C.O.; Morris, P.E.; Richler, J.J. Effect Size Estimates: Current Use, Calculations, and Interpretation. J. Exp. Psychol. Gen. 2012, 141, 2–18. [Google Scholar] [CrossRef]
  16. Bilal Salih, H.E.; Takeda, K.; Kobayashi, H.; Kakizawa, T.; Kawamoto, M.; Zempo, K. Use of Auditory Cues and Other Strategies as Sources of Spatial Information for People with Visual Impairment When Navigating Unfamiliar Environments. Int. J. Environ. Res. Public Health 2022, 19, 3151. [Google Scholar] [CrossRef]
  17. Ribeiro, M.V.M.R.; Hasten-Reiter, H.N.; Ribeiro, E.A.N.; Jucá, M.J.; Barbosa, F.T.; de Sousa-Rodrigues, C.F. Association between Visual Impairment and Depression in the Elderly: A Systematic Review. Arq. Bras. Oftalmol. 2015, 78, 197–201. [Google Scholar] [CrossRef]
  18. Sabourin, C.J.; Merrikhi, Y.; Lomber, S.G. Do Blind People Hear Better? Trends Cogn. Sci. 2022, 26, 999–1012. [Google Scholar] [CrossRef]
  19. Röder, B.; Rösler, F.; Neville, H.J. Auditory Memory in Congenitally Blind Adults: A Behavioral-Electrophysiological Investigation. Cogn. Brain Res. 2001, 11, 289–303. [Google Scholar] [CrossRef] [PubMed]
  20. Bilalpur, M.; Kankanhalli, M.; Winkler, S.; Subramanian, R. EEG-Based Evaluation of Cognitive Workload Induced by Acoustic Parameters for Data Sonification. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, USA, 16–20 October 2018; pp. 315–323. [Google Scholar] [CrossRef]
  21. Matinfar, S.; Salehi, M.; Suter, D.; Seibold, M.; Dehghani, S.; Navab, N.; Wanivenhaus, F.; Fürnstahl, P.; Farshad, M.; Navab, N. Sonification as a Reliable Alternative to Conventional Visual Surgical Navigation. Sci. Rep. 2023, 13, 5930. [Google Scholar] [CrossRef] [PubMed]
  22. Khusro, S.; Shah, B.; Khan, I.; Rahman, S. Haptic Feedback to Assist Blind People in Indoor Environment Using Vibration Patterns. Sensors 2022, 22, 361. [Google Scholar] [CrossRef] [PubMed]
  23. Moreno-Briseño, P.; Díaz, R.; Campos-Romo, A.; Fernandez-Ruiz, J. Sex-Related Differences in Motor Learning and Performance. Behav. Brain Funct. 2010, 6, 74. [Google Scholar] [CrossRef] [PubMed]
  24. Cid, M.M.; Oliveira, A.B.; Januario, L.B.; Côté, J.N.; de Fátima Carreira Moreira, R.; Madeleine, P. Are There Sex Differences in Muscle Coordination of the Upper Girdle during a Sustained Motor Task? J. Electromyogr. Kinesiol. 2019, 45, 1–10. [Google Scholar] [CrossRef]
  25. Stoet, G. Sex Differences in the Simon Task Help to Interpret Sex Differences in Selective Attention. Psychol. Res. 2017, 81, 571–581. [Google Scholar] [CrossRef] [PubMed]
  26. Casamento-Moran, A.; Patel, P.; Zablocki, V.; Christou, E.A.; Lodha, N. Sex Differences in Cognitive-Motor Components of Braking in Older Adults. Exp. Brain Res. 2022, 240, 1045–1055. [Google Scholar] [CrossRef]
  27. Bertonati, G.; Amadeo, M.B.; Campus, C.; Gori, M. Auditory Speed Processing in Sighted and Blind Individuals. PLoS ONE 2021, 16, e0257676. [Google Scholar] [CrossRef] [PubMed]
  28. Gori, M.; Campus, C.; Signorini, S.; Rivara, E.; Bremner, A.J. Multisensory Spatial Perception in Visually Impaired Infants. Curr. Biol. 2021, 31, 5093–5101.e5. [Google Scholar] [CrossRef]
Figure 1. For the auditory feedback, we used (a) a 4-marker cluster, donned on the dorsal side of the reaching hand. A 3D-printed box (2.5 cm in length, width, and height), with 4 markers attached to a base below it, was the target for reaching. For the vibration feedback, we used (b) five vibration motors connected to an Arduino with Bluetooth strapped to the forearm. The motors were attached to the right hand of the subject to signal “left” (motor #1), “right” (motor #2), “forward” (motor #3), “backward” (motor #4), “downward” (motor #5), and “upward” (motors #1 and #2 simultaneously).
Figure 2. The subjects were requested to place the fingers of their right hand on three stickers, placed side-by-side on the rim of the table. The box was placed on one of the marked locations, set 50 cm (aerial measurement) from this starting position, as depicted herein.
Figure 3. (a) The time (in seconds) it took to locate the box for each of the three cueing modalities and (b) the path length of the hand (in meters) for the auditory cueing modalities. Outliers and extreme values are represented by circles and asterisks, respectively.
Figure 4. Hand travel path lengths for men (N = 19) compared to women (N = 11) when assisted by the verbal cueing modality. Outliers and extreme values are represented by circles and asterisks, respectively.


