Article
Peer-Review Record

Supporting Drivers of Partially Automated Cars through an Adaptive Digital In-Car Tutor

Information 2020, 11(4), 185; https://doi.org/10.3390/info11040185
by Anika Boelhouwer 1,*, Arie Paul van den Beukel 2, Mascha C. van der Voort 2, Willem B. Verwey 3 and Marieke H. Martens 4,5
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 28 February 2020 / Revised: 26 March 2020 / Accepted: 27 March 2020 / Published: 30 March 2020

Round 1

Reviewer 1 Report

1. The contribution of this manuscript should be highlighted to clearly convey the advantages of the proposed method.

2. The reference section contains several severe formatting problems, which the authors should carefully correct according to the template of Information.

3. Authors should consider citing the following works:

(a) A Hardware Platform Framework for an Intelligent Vehicle Based on a Driving Brain.
(b) Car-following method based on inverse reinforcement learning for autonomous vehicle decision-making.
(c) Hardware and software architecture of intelligent vehicles and road verification in typical traffic scenarios.
(d) Object classification using CNN-based fusion of vision and LIDAR in autonomous vehicle environment.
(e) Vehicle Trajectory Prediction by Integrating Physics- and Maneuver-based Approaches Using Interactive Multiple Model.
(f) Research of Intelligent Vehicle Variable Granularity Evaluation Based on Cloud Model.
(g) Semantic segmentation–aided visual odometry for urban autonomous driving.

Author Response

Please see attachment. 

Author Response File: Author Response.pdf

Reviewer 2 Report

This article presents a study examining the benefits of a “digital in-car tutor” (DIT) for teaching individuals about advanced driver assistance systems (ADAS). The authors describe the design of a tutoring system that can provide real-time instruction and feedback to a driver about the functionality and limitations of ADAS features based on contextual information about the surrounding environment and the driver’s workload, which the authors call adaptive communication. The authors propose that this instruction will help drivers learn the situations in which the automation would or would not work, allowing for more appropriate reliance behavior.

The authors then evaluated the DIT system in a simulator study and compared its effectiveness against the use of an information brochure (IB). Participants were assigned to one of the training conditions and then asked to drive in an initial “training drive”, session one. During this drive, participants were exposed to 10 scenarios corresponding to 5 different ADAS systems (adaptive cruise control, lane centering, obstacle detection, traffic light and priority sign detection, and priority road markings detection), with each ADAS feature having one scenario where it was safe to use the automation and one scenario where the automation should be turned off. The DIT system was active for the DIT group and provided guidance over the 10 scenarios, while the IB group was given 10 minutes to read the brochure before the training drive, during which they were not given further guidance. After this training drive, the participants drove through a second session in which they were exposed to 8 traffic situation scenarios, 4 of which required the automation system to be turned off due to automation limitations. Finally, after two weeks some participants were invited back for a third session, where they drove through 8 traffic scenarios similar to those in session two, to examine whether the training effects persisted.

The authors measured reliance behavior, take-over quality, and user acceptance of the DIT system. Overall, the study found that the DIT resulted in better reliance and take-over quality during the initial training session (session one), but the differences were no longer significant for sessions two or three. Finally, the authors reported that participants in the DIT group had favorable acceptance ratings of the tutor. The authors concluded that the DIT had promising results and may specifically help reduce over-reliance behavior (even though this finding was mostly observed during the training session, while the DIT system was active). Furthermore, the authors point out that the DIT system may lead to more under-reliance behavior, with participants more likely to take over when it was not required compared to the IB condition.

Overall, I thought this was a well-written paper with clear communication. It also addresses a very interesting and important topic, as training for ADAS remains a major challenge that requires the development of new methods of delivery and new methods for evaluating their effectiveness. In this paper, the authors propose the use of a digital in-car tutor that can provide feedback and guidance while participants experience relevant driving situations, with the type of guidance tailored to reduce the cognitive load of learning the functionality and limitations of the system. While this may be a promising method of training delivery, the results of the study do not provide strong evidence that it is more effective than the information brochure. Furthermore, the paper only evaluates performance outcomes such as reliance behavior and take-over quality, not the specific items the tutor was trying to teach (e.g., knowledge about the types of limitations and capabilities, information about specific ADAS features). While performance outcomes are desirable, it may be difficult to understand the benefits of a tutoring system through those measures alone in a simulator study.

The major differences between the DIT and IB conditions were found in session one, which was the training session for the DIT group. The description of the DIT presentation does not explicitly state when the prompts and guidance were provided, but it is not surprising that reliance behavior would be directly impacted when guidance about a specific situation is provided just before that situation occurs. Because of this, I do not believe that the results from session one are a measure of training performance; rather, they show the participants’ performance when guided by an information-cueing automation that helps direct their visual attention to relevant external cues. I think this is clearly represented in the scenarios that require the detection of specific visual cues in the world that may be easy to miss (e.g., obstacle detection, road markings, traffic signs, etc.), where the DIT group vastly outperformed the IB group during session one. Hopefully the participants would gain some benefit from this automation, but the results in the subsequent sessions show a large drop-off in performance, to the point where it is not statistically different from the IB condition.

In fact, I was impressed by the performance of the IB group after the initial “training session”. It looks like this group benefited greatly from simply experiencing the automation in different situations and almost achieved the same performance as the DIT group. The true differences, if any, between the IB and DIT groups are likely much smaller and subtler and may require other methods of assessment. It would have been very useful to assess how well participants understood the different ADAS systems with a knowledge test after the training, or to check whether they could identify which system failed during the subsequent sessions. These methods would help us understand whether the tutoring was helping the participants gain a better understanding and mental model of the automation. The introduction also provides background on the adaptive communication and the type of guidance provided, but the study never evaluates whether these goals were achieved.

Instead, the outcome variables used do not provide strong evidence that the DIT training resulted in better performance than the IB condition. The authors do point out that most drivers do not even spend 10 minutes reading a manual, but 10 minutes is a very short duration, and there was no assessment of whether participants actually read or understood the material after this initial 10-minute period. Overall, the current results weaken the contributions of the paper, especially since the authors make some overly strong claims in the body of the paper about the benefits of the DIT system, particularly as a method of training.

The observation that under-reliance and over-reliance behavior may differ between the two training methods was interesting (though not strongly supported by the inferential statistics). The consequences of these two categories of outcomes would be very different. This was interesting because participants in the IB group would have received clear feedback when they failed to take over (due to the crashes) but may not have understood why they crashed, while those in the DIT group would have been primed with information about a specific ADAS feature prior to the crash (or near-crash). This point is worth discussing further. Also, why wasn’t a generalized linear mixed model built that separated correct reliance from correct take-over behavior? Right now, the model uses “correct automation use” as the dependent variable. The results make it seem like participants are being more conservative with their take-over judgments after the DIT training, and signal detection theory may be a good framework to explore these changes in sensitivity and judgment criterion, as sketched below.
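For concreteness, here is a minimal sketch of the signal detection analysis suggested above, under the assumption that scenarios requiring a take-over are treated as “signal” trials. The function name and all counts are illustrative, not taken from the paper:

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) for take-over decisions.

    A "signal" trial is one where a take-over was required:
      hits               = correct take-overs,
      misses             = failures to take over (over-reliance),
      false_alarms       = unnecessary take-overs (under-reliance),
      correct_rejections = correct reliance on the automation.
    The log-linear correction (+0.5 / +1) avoids infinite z-scores
    when a rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

# Illustrative counts for one participant over 8 session-two scenarios
# (4 take-over scenarios, 4 reliance scenarios); numbers are made up.
d, c = sdt_measures(hits=3, misses=1, false_alarms=2, correct_rejections=2)
print(f"d' = {d:.2f}, c = {c:.2f}")  # negative c = liberal take-over bias
```

Computing d' and c per participant and comparing them across the DIT and IB groups would separate a genuine sensitivity difference from a mere shift toward more conservative take-over judgments.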

The acceptance results are difficult to interpret without a comparison group; was the IB group asked to rate the usefulness of its training method?

Why was the sample for session 3 smaller than that for session 2? Was it a convenience sample, or were certain individuals dropped on purpose?

Was the order of the scenarios in sessions 2 and 3 randomized? It seems like the order was not randomized in session 1; was there a reason for this choice?

A couple of issues that could improve the clarity of the paper:

The driving scenarios in sessions 2 and 3 are not tied to specific ADAS features the way the training scenarios were. It would help the reader understand the benefits of the training if you discussed which ADAS feature was operating outside of its operational design domain in each scenario. I had a tendency to compare the scenarios in Table 3 based on their position (e.g., ACC1 vs. T1), so it would be easier for the reader if similar types of scenarios were grouped together.

Throughout the introduction, it was not made clear whether the DIT system was meant to be deployed as part of a simulator-based training program or in actual vehicles. Currently, the study presents it as the former.

Examples of the types of theory and reflections/elaborations used by the DIT would have been very helpful, particularly in the paragraph at the top of page 5.

Some smaller wording and typo issues:

Line 41: “a quarter of all drivers do not receive…”

Line 181: replace the word stimulated with “told”

Line 286: “, starting directly after the participant turned…”

Line 296: the exponent on your chi was not correctly typeset (see the LaTeX note after this list)

Line 313: the two Ns have the same subscript (session2)

Line 434: the line “Furthermore, showed intent to use the DIT” is missing a word
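On the chi typesetting point: in LaTeX the statistic is normally set with a superscript two on the chi, along the lines of the snippet below (the degrees of freedom, N, and values here are placeholders, not those from the paper):

```latex
$\chi^2(1, N = 28) = 4.32,\; p = .04$
```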

Author Response

Please see attachment. 

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Thank you for making the suggested changes; it appears that many of the comments were due to confusion about the nature of the DIT training. After the rewording and your comments, it became much clearer why the differences in session 1 are meaningful. It may be useful to explicitly state that session 1 represents the first time a driver uses the automation on the road: in the DIT condition, this is when the tutor provides just-in-time training on the automation, and for the IB group, this is the first time they drive after learning about the automation from the IB.

In terms of the multinomial logistic regression, was the training x session interaction non-significant? And were the random effects the same as before? (A sketch of such a model follows this report.)

For the “priming” issue discussed in the limitations, my concern wasn’t necessarily about the modality of the presentation (visual vs. auditory), but about what type of information is provided during the session 1 scenarios. Since there are two scenarios for each ADAS system, did the DIT provide the same instructions during both scenarios? Or did it discuss capabilities during the reliance scenarios and limitations during the take-over scenarios (i.e., were participants prompted with the correct response prior to it occurring)? I know that this is the purpose of the system (to provide contextually relevant training), but there should be a statement that the DIT is not meant to be another layer of automation that can always prompt drivers about upcoming issues; rather, it identifies specific scenarios that it can use as teaching moments.

Finally, there were a few small errors in the newly added text and figures, for example:
- The first usefulness question in Figure 6 is cut off.
- Line 514: you can remove the “a” in “In a simulator training”.
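To make the modeling question above concrete, here is a minimal sketch of a mixed logistic regression containing the training x session interaction and a per-participant random intercept. This is a simplified binomial stand-in for the authors’ multinomial model (statsmodels does not offer a multinomial mixed model), and the data file and column names are assumptions for illustration:

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Assumed layout: one row per scenario per participant, with columns
# correct (0/1), training ("DIT"/"IB"), session (1/2/3), participant (id).
df = pd.read_csv("trials.csv")  # hypothetical file name

# Fixed effects include the training x session interaction asked about
# above; vc_formulas adds a random intercept for each participant.
model = BinomialBayesMixedGLM.from_formula(
    "correct ~ C(training) * C(session)",
    {"participant": "0 + C(participant)"},
    df,
)
result = model.fit_vb()  # variational Bayes approximation
print(result.summary())  # inspect the interaction coefficients
```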

Author Response

Please see the attachment.

Author Response File: Author Response.docx
