Article
Peer-Review Record

Principles for External Human–Machine Interfaces

Information 2023, 14(8), 463; https://doi.org/10.3390/info14080463
by Marc Wilbrink 1,*, Stephan Cieler 2, Sebastian L. Weiß 2, Matthias Beggiato 3, Philip Joisten 4, Alexander Feierle 5 and Michael Oehl 1
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 29 June 2023 / Revised: 27 July 2023 / Accepted: 5 August 2023 / Published: 17 August 2023

Round 1

Reviewer 1 Report

Overall comments:

This is an interesting and thought-provoking list of principles for eHMI and dHMI usage in CAVs. I believe it’s of value to the Information readership and well-sourced, but its lack of original experimentation and its reliance on restating existing conclusions from the literature reduce its originality and overall impact.

Typos and minor edits:

Line 117: “dHMI” should be defined here, at its first use (presumably “dynamic HMI”; since HMI has been defined earlier in the paper, spelling out the full initialism isn’t required).

Specific comments:

Abstract / Intro: Overall, there’s quite an emphasis on dHMI in addition to eHMI (including as its own principle, A6), and I think the abstract and introduction should reflect this.

Line 157: What does “controllability” mean in this context? It’s unclear to me how it specifically relates to unintended consequences.

Line 183: Could you give a more specific example of how the CAV provides prosocial communication via eHMI during a prosocial maneuver (e.g., allowing another vehicle to merge / turn in front of it)? A more concrete example here would help readers understand why eHMI offers advantages beyond the dHMI of the maneuver itself.

Line 241: Principle A6 seems to impose a heavy limitation on the development of CAV dynamics—in short, CAVs will be limited to performing maneuvers in “established and familiar” ways, even if automakers are able to find better ways to conduct maneuvers (i.e., safer, more efficient), or if they are required by law to comply with road rules even when most human drivers do not. For example, many of the crashes observed with CAVs in California (which requires reporting of all CAV crashes) seem to occur when the CAV is stopped, as at an intersection / stop sign / merge, where it appears to be following the letter of the law while many or most human drivers do not. In fact, a Tesla recall in the United States was implemented after Teslas were observed not coming to complete stops at stop signs but instead behaving like human drivers (see: https://www.npr.org/2023/02/17/1157773594/nearly-363-000-cars-are-recalled-by-tesla-to-fix-self-driving-flaws). Is it possible that eHMI can be used to signal to ORUs why a vehicle is making a maneuver in the way it is? Can you address this point in this section—that, while the principle of dHMI resembling human drivers where possible is sound, there may be legal or safety reasons why exceptions are required? (This seems to have some overlap with Principle B5.)

Line 404: Should this include the intention to continue an already ongoing behavior (e.g., the intention NOT to yield at a crosswalk, or NOT to slow down to allow a merging vehicle)? CAVs are continuously updating their plans and thus continuously updating their intent. Since this principle (B2) falls under the category of “situations,” it’s unclear what situations are being defined here. Does it mean continuously broadcasting future intent, or doing so only whenever an ORU is present? I may be misunderstanding this principle, in which case some clarification would be helpful.

The English is mostly acceptable, but there are a few places where minor language edits may be needed.

Author Response

Dear Reviewer,
thank you so much for your time and effort in reviewing the contribution.
We appreciate your work and have tried to address all your points.

Best regards, 

Marc Wilbrink 

Author Response File: Author Response.docx

Reviewer 2 Report

The paper is interesting and provides useful ideas. Furthermore, its statements are supported by references. The main limitation is that there is no validation (nor a clear possibility of validating the principles). Please discuss this issue. How can we be sure that the statements presented in the paper are true, useful, and complete?

Furthermore, other works related to the ethical issues of autonomous vehicles take a similar approach. Highlight the similarities and differences, as well as the main contributions.

And one minor comment: the paper must finish with a conclusion section.

Author Response

Dear Reviewer,
thank you so much for your time and effort in reviewing the contribution.
We appreciate your work and have tried to address all your points.

Please see the attachment.

Best regards, 

Marc Wilbrink 

Author Response File: Author Response.docx
