Article
Peer-Review Record

Impact of Temporary Browsing Restrictions on Drivers’ Situation Awareness When Interacting with In-Vehicle Infotainment Systems

by Jason Meyer 1,*, Eddy Llaneras 1 and Gregory M. Fitch 2
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Reviewer 5:
Submission received: 11 July 2022 / Revised: 17 November 2022 / Accepted: 2 December 2022 / Published: 7 December 2022
(This article belongs to the Special Issue Human Factors in Road Safety and Mobility)

Round 1

Reviewer 1 Report

Review Safety 1789194

Impact of Temporary Browsing Restrictions on Drivers’ Situation Awareness when Interacting with In-Vehicle Infotainment Systems

 

Unfortunately, my first, brilliant, review was lost when my computer crashed, so this is my second attempt. Maybe not as detailed as before, but hopefully somewhat more nuanced.

 

I am somewhat curious about the concept of ‘model driving’ referred to in the introduction, in conjunction with the use of near crashes as part of the dependent variable in the studies referred to (which is not mentioned in the text). These two methods will probably inflate the effect sizes, as compared to other studies (af Wahlberg, 2009). The numbers may therefore sound impressive, but are less so when compared to those of other studies.

This is mentioned in rows 66–67, but somewhat vaguely, as it does not say in which direction the effect goes.

 

The Purpose section could be somewhat more extended. I find that it lacks something about how this is supposed to work. Apparently, it is about having more of an updated mental awareness of the surroundings, so more cognitive capacity can be used to handle the unexpected event. Section 2.8 could be moved to here.

 

“…more measured responses”

This is an interesting statement, which can be associated with the lack of discussion about the validity of the outcome measures, and the VTTI studies referred to in the introduction.

Apparently, it is here argued that a bit less acceleration is a positive feature, without any reference to cut-off criteria values, such as are routinely used in VTTI studies to identify ‘safety-critical events’. Now, these two approaches are not really compatible (Khorram, af Wåhlberg & Tavakoli Kashani, 2020).

 

I do not understand how the No pause drivers could see the hazard lights but refrained from looking at them directly. I am also wondering about whether their peripheral vision might have given them some information about the surroundings.

 

The design of the study is somewhat doubtful, as shown by the non-reactions from many of the drivers. I would guess that drivers who actually spotted the obstacle in many cases made the decision to drive over it because they judged that risk to be less than the alternatives of braking hard and/or swerving. It could be argued that this was the right decision, but either way, the possibility of making different choices that could each be correct or incorrect makes the study impossible to evaluate. What does it mean?

I believe it should be re-run with a tighter design, where there is only one possible correct choice/reaction. This would be very difficult to achieve in a safe way, but still…

This shortcoming highlights the general problem of finding valid outcome variables in traffic safety studies. Face validity is not enough.

 

“In the long run, DASS feedback can coach drivers on how to manage their attention to and away from the road.”

Now, this, and other statements about autonomous features in this paper, are typical of the technological approach to problems caused by technology: build more technology. I would, on the other hand, predict the opposite of this coaching hypothesis. Adding a DASS feature would only lead to more behavioural adaptation and no increase in safety.

 

References

af Wåhlberg, A. E. (2009). Driver Behaviour and Accident Research Methodology: Unresolved Problems. Farnham: Ashgate.

Khorram, B., af Wåhlberg, A. E., & Tavakoli Kashani, A. (2020). Longitudinal jerk and celeration as measures of safety in bus rapid transit drivers in Tehran. Theoretical Issues in Ergonomics Science.

Author Response

Thank you for the review.  The Purpose section, as well as others, has been expanded based on your comments.

Reviewer 2 Report

 

 

Comments for author File: Comments.pdf

Author Response

Thank you for your review.  The introduction as well as other areas have been expanded in order to provide sufficient background.

Reviewer 3 Report

The article seems attractive at first. However, the further one reads, the more one sees simple hypotheses and analyses. The findings of this article do not seem very innovative for researchers, especially because, despite the valuable equipment and experiments, the findings are basic and presented with simple analyses.

Author Response

Thank you for your review.  The methods and conclusions have been described in further detail, and highlighted in the updated manuscript.

Reviewer 4 Report

This is an interesting study about drivers' distraction

Author Response

Responses to comments in PDF file.

Author Response File: Author Response.pdf

Reviewer 5 Report

Thank you for asking me to review MS Impact of temporary browsing restrictions on drivers’ situation awareness when interacting with in-vehicle infotainment systems. This is an interesting and timely paper given the increasing complexity of systems and their potential distractibility.

The version of the MS that I am reviewing has blocks of highlighted text, which suggests to me that the paper has already undergone review? I hope for your sakes I don’t bring up too many new issues.

Introduction

I found the MS slightly USA-centric; a nod to some global statistics would be appreciated. Similarly, please change “cell-phone” to “mobile-phone”. As an Australian, reading “cell” phone throughout the paper made me twitch (although that could be the coffee talking).

Coming to the ‘purpose’ section of the paper, this is a story about situation awareness. I think the authors have made the right decision about not turning it into an ‘SA paper’; nonetheless, I would like to see a little more around SA. Indeed, there is a lot of research, particularly recent research, in this space, and I would just like to see a little more around SA and scene analysis/reconstruction.

Otherwise I found the introduction to be clear and straightforward, leading nicely into the purpose of the study.

Materials and methods

I realise from the authorship that the decision to use Android Auto was probably a pragmatic one, but for the sake of the MS I suggest a sentence or two justifying why that particular platform was used, instead of, say, just using the car’s own infotainment system. Was it because the use of a platform such as this one allowed for the introduction of the ‘pause’? Also, a sentence explaining what Android Auto actually is would be useful (as a 50-something dinosaur, I had to get a PhD student to explain it to me).

Sorry, what is the Smart Road Highway?

So, looking at the DVs, you don’t actually measure SA. You are measuring responsivity to the environment as a proxy for SA. That’s OK, but again, perhaps back in the intro, I would suggest indicating that this is what you are doing, with a few justifications that it is a reasonable approach. While I agree that responsivity is dependent on accurately reconstructing the environment, which is itself dependent on SA (or, depending on who you read, the same thing as SA), SA purists could (and probably will) get upset that you are presenting this as an SA study but not actually measuring SA as such.

I note that there are few participant details – who did the study? Age? Gender? Driving experience? Technology experience? Android Auto experience? Etc. Who the participants are can go a long way to explaining the results.

Did the participants in the ‘pause’ condition also practice the pause-version of the task? Or did the forced pause come as a surprise to drivers?

So, all the interesting stuff happens about 15–20 min into the drive. This is quite a long time; I assume that this was to lull the drivers into a sense of ‘normality’? Did you have any measures of fatigue? Did the drivers talk to the co-driver? Basically, what I am getting at is whether variability in monotony over 20 min of track driving at 35 km/h might have impacted the results. Were some people better than others at maintaining arousal?

Could you please make the coloured text bigger in Fig 4? I couldn’t read it when I printed it out.

Otherwise I really like this design – I particularly like the ‘heads-up’ afforded with a hazard warning light signal. One wonders what might have been different without this pre-warning signal.

Results

With the drop-outs, how many people ended up in the two groups?

I’m trying to get a handle on the timing: the hazard lights were activated ~4.8 s after the start of the secondary task. Was this during the pause phase of that condition? I.e., did the ‘pause’ participants detect the hazard light because they were actually looking at the road during the pause event?

Please add figure axes

Not a comment for response; this is just to make sure I have understood the results on the proportion of drivers engaging in any precautionary response: most people (14 out of 15) in the ‘pause’ condition saw the hazard but only 1/3 responded, while 50% of people in the no-pause condition didn’t see the hazard, and only 20% of those 50% responded (assuming that you can only respond if you have seen it?). Are the percentages relative to the number of people who saw the light, or to the people overall in each condition?

OK, so here is the reason I am asking: 5 people in the pause condition responded, and presumably it was 5 of the 14 who reported seeing it – this is about 36%. Similarly, 8 people in the no-pause condition reported seeing it, and 3 people responded. This is 37.5%. This doesn’t change your argument (I think it just makes it slightly more accurate) in that this would still be NS, but the trend is in the opposite direction: slightly more of the people in the no-pause condition who saw the hazard responded… I think that this is interesting. Given that it is NS, there is not a lot you can say about it, but it is interesting nonetheless.

 

I agree that it is interesting that 20% of ‘pause’ drivers failed to respond… presumably they had more lead time to assimilate the information – ah, now that I have read on, this is also reflected in the response magnitude. Related to this and my point above: did everyone actually see the muffler in order to avoid it? It would be an interesting case of inattentional blindness if people didn’t even see the muffler. Then the percentages could be calculated relative to the people who detected the muffler, as I suggest above.

                Ditto for response latencies

 

I don’t quite follow Fig 11 – what is the x-axis? I am also going back to try and get my head around the numbers… So, did 7 people in the ‘pause’ condition avoid the muffler out of the 14 who saw it, or out of the 14 in the condition (if the latter, why now 14?)? Ditto, did 6 people in the no-pause condition avoid it out of the 11 who saw it, or who were in the condition (again, if the latter, why not 15?)? So now, resolving this with Fig 11, it only has 9 people in the no-pause condition and 8 in the pause condition. Does this only reflect people who decelerated? Shouldn’t the numbers in Fig 11 marry up with the numbers in Fig 7?

 

Discussion

In general this is pretty good, although I would still like to see a more nuanced breakdown in Table 1 relative to the people who actually saw the muffler fall. If everyone saw it, then that’s OK, but you just need to say so.

                OK, reading on, I am assuming that everyone saw it, but that 21% saw it and didn’t respond.

Otherwise this section is fine, a nice discussion of the results and study overall.

Author Response

Response to comments attached in PDF file.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

I am satisfied with the amendments made by the authors further to my recommendations and believe that the paper has been improved accordingly.

Author Response

Responses are uploaded as PDF file.

Author Response File: Author Response.pdf

Reviewer 3 Report

Not suitable for publication. 

Author Response

Responses to comments in PDF file

Author Response File: Author Response.pdf

Reviewer 5 Report

Thanks for the revised paper. I am happy to accept in current form
