Article
Peer-Review Record

Smart Map Augmented: Exploring and Learning Maritime Audio-Tactile Maps without Vision: The Issue of Finger or Marker Tracking

Multimodal Technol. Interact. 2022, 6(8), 66; https://doi.org/10.3390/mti6080066
by Mathieu Simonnet
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 4 July 2022 / Revised: 20 July 2022 / Accepted: 30 July 2022 / Published: 3 August 2022

Round 1

Reviewer 1 Report

The manuscript titled “Smart Map Augmented: Exploring and Learning Martime Audio-Tactile Maps Without Vision: The Issue of Finger or Marker Tracking?” proposes a comparison of two spatial exploration modalities of a maritime map without vision. Twelve blindfolded persons were enrolled in the study and completed two spatial tasks, i.e., localization and estimation of distance and direction, in two conditions, i.e., finger and marker tracking, on a 3D-printed audio-tactile nautical chart. Results showed that participants were faster at finding geographic seamarks in the finger condition in the localization task. No differences between conditions were found in distance and direction estimations. Spatial reasoning took less time in the marker condition than in the finger condition. The author discussed his results in light of previous literature and gave some hints for future research as well as for practical use of the tool presented.

 

I carefully read the manuscript, and I think it may be of interest to the readers of Multimodal Technologies and Interaction. I also think that some minor points need to be addressed before the manuscript can be published as a research article. Below are my comments and suggestions.

 

Overall, the manuscript is very well written and properly addresses the interesting issue of spatial cognition abilities in persons with visual impairments. It is also relevant that this research employs a simple but effective technology and is directed at sailors. I found the aim of the study and the methodology employed to be clear and detailed, as well as the explanations provided in the discussion section. I only have a few remarks:

 

Title: I think that “martime” should be “maritime”

 

Page 2 subsection “1.3. Spatial cognition and tactile exploration”: I think that this subsection would benefit from an in-depth discussion of haptic encoding in blind or visually impaired people. There is an interesting literature on this topic from the last ten years. Please find some suggestions below:

Ruggiero, G., Ruotolo, F., & Iachini, T. (2012). Egocentric/allocentric and coordinate/categorical haptic encoding in blind people. Cognitive Processing, 13(1 SUPPL), S313-S317. doi:10.1007/s10339-012-0504-6

Iachini, T., Ruggiero, G., & Ruotolo, F. (2014). Does blindness affect egocentric and allocentric frames of reference in small and large scale spaces? Behavioural Brain Research, 273, 73-81. doi:10.1016/j.bbr.2014.07.032

Ruggiero, G., Ruotolo, F., & Iachini, T. (2021). How ageing and blindness affect egocentric and allocentric spatial memory. Quarterly Journal of Experimental Psychology, doi:10.1177/17470218211056772

 

Page 3 subsection “1.5. Research question”: Can you advance some hypotheses given findings from previous literature?

 

Page 3 subsection “2.1. Participants”: Please provide more information about sociodemographic data of participants, e.g., Were they matched for age? Had they a comparable experience of navigation and/or nautical maps?

Page 4 line 138: “Direction estimations” should be “Distance estimation”

Page 6 line 179: “though” should be “thoughts”

 

I think that the discussion section could be integrated following the suggestions given above, especially referring to studies investigating coordinate/categorical haptic encoding in blind people, as this is properly the case.

Author Response

Dear reviewer,

I am grateful for all of your valuable remarks.

Although I was already familiar with egocentric and allocentric spatial frames of reference, I had not studied categorical and coordinate spatial relations from Kosslyn (1994). Thank you for this advice; I will take the time to read this very interesting work more deeply. For this article, I mainly read and included Ruggiero et al. (2012) in a new paragraph in the subsection “1.3. Spatial cognition and tactile exploration”. I think that this explanation of non-visual spatial encoding improves the article.

These notions were then used to propose a hypothesis in the research question subsection: "Taking into account previous works, we hypothesize that the marker condition improves allocentric and coordinate spatial representation. Indeed, positioning the green piece at different seamark places should require memorizing the seamark positions more deeply and thus foster object-to-object representation."

To provide more information about the participants, I reported their average age and their answers to the following three simple questions: Are you familiar with paper charts? With maps on screen? With numerical cardinal directions? This gives an idea of how homogeneous this sample was.

The discussion took advantage of the notions of allocentric and coordinate spatial relationships to connect with the previously cited works. However, following up on the findings of Ruggiero would require more significant differences and more questions to the participants, including categorical vs. coordinate spatial relations and egocentric vs. allocentric spatial frames of reference. This will be my future work. Thank you very much.

Kind regards, 

Reviewer 2 Report

In this paper, the accuracy and time required for multimodal tasks involving auditory, tactile, and kinaesthetic senses were examined using a voice-assisted tactile map. Participants were blindfolded to prevent the use of visual information. The voice assist was triggered by moving a finger or a marker close to a target on the map, and the name of the target was played back by text-to-speech software. In the single-target localization task, the voice assist was effective in reducing the time required. On the other hand, for the task of localizing the relative position (distance and direction) of two targets, the voice assist was effective up to the localization of each target, but it sometimes interfered thereafter. As a result, the time required after completing the localization of each target was shorter for the marker condition, which could trigger the voice assist at any time even when participants used the tactile senses of both hands.

 

As summarized above, this paper includes content worthy of publication in Multimodal Technologies and Interaction, but the experimental conditions and tasks are a bit confusing and require the following additional explanation.

 

(1) How was the voice assist presented? Was it delivered by loudspeakers or headphones?

 

(2) It is difficult to discern the difference between the finger and marker conditions. Reading the discussion, it seems that the subjects' behavior was hardly restricted. On the other hand, the description in Lines 77-80 reads as if there is a difference in the manner of movement: in the finger condition, the finger is moved while touching the map; in the marker condition, the marker is lifted away from the map and then moved. It is necessary to add how the subjects were instructed about the difference between the finger and marker experimental conditions. Please also add any other behaviors that were restricted by the instructions.

 

(3) Line 138: Direction → Distance

 

(4) Figure 6 is missing, or the figure numbering is incorrect.

 

(5) Figures 5, 7, 8: Add explanations of what data (median, quartiles, standard deviation, 95% confidence interval, max/min, etc.) are being displayed.

Author Response

Dear reviewer,

Thank you for these precious remarks. My responses are in the attached article and some explanations are in the text below.

(1) A description of the voice assist presentation has been added in the subsection "2.2 Equipment". Readers can now see that participants used loudspeakers to listen clearly to the vocal announcements.

(2) To help readers better understand how the finger vs. marker conditions naturally influenced participants' behaviors, the subsection 1.4 "Interaction Design" has been modified. A short paragraph has been added in section 3 "Experimental tasks and data collection" to clarify that participants' exploration was not restricted. The first paragraph of the discussion was also modified to make it clearer. I hope this is enough to help the reader understand that the finger and marker conditions respectively imply different participant behaviors (i.e., exploring first and then using vocal information in the marker condition).

(3) The mix-up between the words "direction" and "distance" on line 138 has been corrected.

(4) The figure numbering has been fixed.

(5) A description of the box plots in Figures 5 to 8 has been added at the beginning of the results section to help readers understand the presented data.

I hope this answers your questions.

Kind regards, 

 
