Article

Enhancing User Engagement in Shared Autonomous Vehicles: An Innovative Gesture-Based Windshield Interaction System

Pierstefano Bellani, Andrea Picardi, Federica Caruso, Flora Gaetani, Fausto Brevi, Venanzio Arquilla and Giandomenico Caruso
1 Department of Mechanical Engineering, Politecnico di Milano, Via La Masa 1, 20156 Milano, Italy
2 Department of Design, Politecnico di Milano, Via Durando 10, 20158 Milano, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(17), 9901; https://doi.org/10.3390/app13179901
Submission received: 27 July 2023 / Revised: 23 August 2023 / Accepted: 30 August 2023 / Published: 1 September 2023

Abstract

With the rapid advancement of autonomous vehicles, a transformative transportation paradigm is emerging in the automotive industry, necessitating a re-evaluation of how users engage with and utilize these evolving settings. This research paper introduces an innovative interaction system tailored for shared autonomous vehicles, focusing on its development and comprehensive evaluation. The proposed system uses the car’s windshield as an interactive display surface, enabling infotainment and real-time information about the surrounding environment. The integration of two gesture-based interfaces forms a central component of the system. Through a study involving twenty subjects, we analyzed and compared the user experience facilitated by these interfaces. The study outcomes demonstrated that the subjects exhibited similar behaviors and responses across both interfaces, thus validating the potential of these interaction systems for future autonomous vehicles. These findings collectively emphasize the transformative nature of the proposed system and its ability to enhance user engagement and interaction within the context of autonomous transportation.

1. Introduction

The automotive industry is undergoing rapid and radical changes driven by technological innovations, cultural shifts, and socio-economic factors, leading to new mobility solutions. These changes have influenced urban mobility and transformed how people interact with cities, introducing a new urban dynamism. Smart mobility, a concept closely associated with the broader notion of smart cities, is essential for enhancing city efficiency and reducing the environmental impact of transportation systems. In this scenario, fifth-generation (5G) communication technology plays a significant role in developing smart cities with low latency and reduced energy consumption.
The changes reshaping the future of automotive mobility can be summarized in four main trends: (1) electrification to reduce reliance on fossil fuels, (2) autonomous driving technology, (3) connectivity of cars to the online world, and (4) shared mobility. Cars are becoming part of a complex ecosystem in which an intelligent and connected vehicle can simplify the driver’s life, increase road safety, improve efficiency, and minimize environmental impact. The research project presented in this paper focuses on leveraging 5G technology to develop an interface for an electric and autonomous vehicle. The aim is to create a connected and safe driving experience, integrating the potential of 5G connectivity to enhance the driver experience and road safety. However, managing different and concurrent streams of information makes it essential to implement new and advanced interaction systems.
To reduce driver distraction, the automotive industry has started using the windshield to convey information about the vehicle’s functioning, and so-called Head-Up Displays (HUDs) have been installed in some vehicles. This technology originated in aeronautical applications, allowing flight information to be displayed on the same visual plane as the surrounding objects [1]. With a HUD, drivers can respond more quickly to emergency alerts and maintain more consistent speed control. As demonstrated in [2], a HUD can reduce mental stress for drivers and enhance their driving performance, even for those using it for the first time. On the other hand, some researchers have claimed that a HUD can be a disadvantage while driving [3] because the displayed information sits between the driver and the external environment. Although a HUD can provide critical information such as directional indicators and speed, it may obstruct a portion of the driver’s view of the road. In addition, existing HUDs may create confusion because of inconsistent installation locations, shapes, and information offered, which could increase the risk of accidents due to the driver’s lack of visual focus [4].
Considering all these issues, HUD development continues, aiming to blend the real and digital worlds ever more seamlessly. In 2020, at the Las Vegas CES, Panasonic presented a new type of augmented reality (AR) HUD [5], as shown in Figure 1. This is the first HUD to react to the external environment in real time, and it occupies a large portion of the windshield, even though the interaction modalities are still delegated to standard solutions.
Drivers usually interact with these kinds of HUDs through knobs, levers, and buttons on the dashboard, often in fairly basic configurations. However, the upcoming advanced functions of autonomous vehicles could turn the windshield into the primary medium for visualizing content during the journey. For this reason, it will be essential to design new interaction modalities to make the most of a HUD’s advanced functionalities.
Studies point out that touchless gestures are faster and more comfortable than physical interfaces [6]. Mid-air gestures are usually more intuitive and easier to remember because they are part of human communication [7]. For this reason, there has been an increasing number of mid-air gesture applications in the automotive field. Rümelin et al. [8] have shown that pointing, as a lightweight form of gestural interaction, is reliable, achieving a recognition rate of 96% in the lab. Ohn-Bar et al. [9] presented hand gestures to steer infotainment systems. May et al. [10] have shown that multimodal air gestures have advantages over conventional touch-based systems for navigating in-vehicle menus in terms of safety, but at the cost of longer task completion times and higher mental workload. Riener et al. [11] provided a first standardization of the in-car gesture interaction space. Brand et al. [12] presented a HUD that was manipulated indirectly through pointing gestures. Although several studies have been conducted on natural interaction between drivers and their vehicles, they have identified a few drawbacks with the new interfaces. These may stem from people not being ready to use such advanced touchless interfaces without specific force feedback, or from graphic user interfaces (GUIs) not being designed according to guidelines suited to the interaction device.
This paper presents an interaction system designed for shared autonomous vehicles. The proposed system uses the car’s windshield as a display surface for interactive infotainment and real-time data about the surrounding environment. A crucial aspect of the system is the integration of two gesture-based interfaces, both providing force feedback to the user’s hand: one simulates feedback via ultrasonic waves, while the other provides a physical surface for interaction. Through a study involving twenty subjects, the user experience facilitated by these interfaces was analyzed and compared. The research objective is to assess whether notable differences exist in the ease of use and effectiveness of standard and advanced gesture-based devices.
The research delved into the difficulties of managing the advanced features of self-driving cars, which may rely on the windshield as a primary display. This requires interactive systems that can seamlessly navigate diverse types of information, which we address by identifying potential limitations and understanding how people interact with such systems when encountering them for the first time. The results reveal the potential of the proposed system to transform user engagement and interaction in autonomous transportation and demonstrate how it can significantly enhance the overall experience.
The structure of the paper is as follows: Section 2 outlines the GUI development, the implemented interaction gestures, and the driving simulator utilized for testing; Section 3 provides a comprehensive overview of the test campaign procedure; Section 4 presents the preliminary insights gained from the test results. Finally, Section 5 and Section 6 discuss and draw conclusions based on the findings.

2. Materials and Methods

This research assumes that the combination of 5G connectivity and the IoT will significantly impact the automotive industry, turning cars into comprehensive digital platforms rather than mere modes of transportation [13]. With the increasing convergence of digital and physical elements, cars are changing their configuration and how users interact with them. The user’s mobile devices could be integrated and interconnected with the car’s system, enabling a cohesive and synchronized experience [14], and cars could become an immersive and personal space supporting various activities, such as work or leisure.
As a result, a novel interaction system was created that enables the smooth sharing of real-time information between people, vehicles, and road infrastructure. The system uses the car’s windshield as a display surface and a gesture-based touchless interface providing mid-air haptic feedback, or alternatively a trackpad, for the interaction. This solution should reduce the user’s mental effort when interacting with the interface, as discussed in [15].

2.1. Graphical User Interface

The GUI of a car’s interaction system plays a fundamental role in providing the correct information to the passengers of an autonomous vehicle. Consequently, the GUI design followed an iterative process of continuous experimentation, in which the interface was tested against the gesture interactions and the results were fed back into the design. One of the initial design challenges was to consolidate information that would typically be spread across multiple screens in a car into a single front area of the windshield. Various information architectures and GUI layouts were tested to determine the optimal placement of information and a cohesive visual style. This approach, capitalizing on the 5G connection, enabled the interface to interact seamlessly with the surrounding environment and provide more accurate information.
For the implementation of the GUI, four main steps were followed, which are summarized below:
  • Conducting stylistic research on existing interface systems in cars to establish the UI’s tone of voice, finalize the interface’s color scheme, and select appropriate icons [15];
  • Defining and organizing valuable information for the user during the driving experience, ensuring it aligns with the service provided;
  • Simultaneously working on GUI design and implementing the haptic gesture system to ensure interface alignment and consistent user experience;
  • Iteratively testing design proposals in a virtual environment to simulate and assess the effectiveness of the interface on a HUD.
As a result, the GUI balanced minimalism with the need to provide all the information necessary to orient and reassure the user, considering the atypical situation they will encounter during the driving experience. In the initial GUI testing, a conservative approach was taken, aligned with the stylistic dimension of existing car applications. In the first version, integration with the external environment was not emphasized, and the GUI was an opaque band that grouped information at the bottom of the windshield (Figure 2). In the final version, by contrast, the integration with the outside environment was emphasized: after the initial tests, a new direction was chosen that drew inspiration from video games [15], highlighting the transparent capabilities of the HUD.
Drawing inspiration from video games provided a significant stylistic advantage, surpassing initial assumptions. This fusion of automotive trends with the visually striking aesthetics of video games contributed to the distinctive character of the interface. Moreover, this approach facilitated the reduction of information overload by adopting a proactive strategy, where relevant information is selectively displayed based on specific conditions and user needs.
The design of the GUI follows the main gesture interaction models (discussed in the next section). It is essential that the elements of the GUI are coherent with human gestures to make the interaction easier. For this reason, the main menu was designed with five items to match the finger interaction. The layout of the items in the sub-menus follows the same logic, placing them in a grid or presenting them as a scrolling list to favor the swipe interaction (see Figure 3 and Figure 4).
The final layout of the screen was organized into three distinct areas:
  • The left side is dedicated to the driving information area, including speed and driving mode details;
  • Navigation Info is positioned in the middle to reassure the user by displaying the route and road situation, including an interactive map and traffic updates. A 3D representation of the vehicle is projected onto the interface, allowing the user to proactively monitor the car’s state in the external environment;
  • The third area on the right is dedicated to the navigation menu, which has been limited to five main items (navigation, calls, music, documents, points of interest) to accommodate finger interaction.
Furthermore, to enrich the immersive experience and the interaction between the car and the smart city, the interface proactively integrates with the outside world, providing the user with information about the surrounding buildings while the vehicle moves. An algorithm was developed to handle the parallax error typical of projection onto transparent screens. This allows images to be displayed on the windshield coherently with the elements to be highlighted in the external environment and makes it possible to show augmented information about the surroundings. Thus, the layout of the interface changes according to the driving situation, either exploiting the full size of the windshield to immerse the user in a personalized environment or minimizing the information at the bottom of the windshield to let the user look at the road.
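The paper does not detail this algorithm; as a rough illustration of the underlying geometry, the sketch below computes where an overlay should be drawn so that it stays aligned with a real-world object: the anchor point is the intersection of the line of sight from the viewer’s eye to the target with the windshield plane. All names, coordinates, and the planar windshield approximation are assumptions made for illustration only.

```python
import numpy as np

def windshield_anchor(eye_pos, target_pos, plane_point, plane_normal):
    """Project a world-space target onto the windshield plane as seen from the
    viewer's eye, so that the overlay stays aligned with the real object.

    eye_pos, target_pos: 3D points (viewer's eye and the building/POI to highlight).
    plane_point, plane_normal: any point on the windshield plane and its unit normal.
    Returns the 3D point on the plane where the overlay should be drawn,
    or None if the target is not visible through the windshield.
    """
    eye_pos, target_pos = np.asarray(eye_pos, float), np.asarray(target_pos, float)
    plane_point, plane_normal = np.asarray(plane_point, float), np.asarray(plane_normal, float)

    ray = target_pos - eye_pos                      # line of sight from eye to target
    denom = ray @ plane_normal
    if abs(denom) < 1e-9:                           # line of sight parallel to the plane
        return None
    t = ((plane_point - eye_pos) @ plane_normal) / denom
    if t <= 0:                                      # intersection behind the viewer
        return None
    return eye_pos + t * ray                        # anchor point on the windshield


# Example: a building 30 m ahead, viewed from the driver's seat, with the
# windshield approximated as a vertical plane 1 m in front of the eye.
print(windshield_anchor(eye_pos=[0.0, 1.2, 0.0],
                        target_pos=[5.0, 6.0, 30.0],
                        plane_point=[0.0, 1.2, 1.0],
                        plane_normal=[0.0, 0.0, 1.0]))
```

In practice, the eye position would come from head or eye tracking and the windshield would be modeled as a curved surface, but the planar case conveys the parallax-compensation idea.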

2.2. Interaction Devices and Gestures

The current trend in the automotive industry is to have multiple screens that provide information to drivers and passengers, usually reacting to human touch. Touching a car windshield directly is not feasible, and using remote buttons, levers, or knobs increases the number of steps needed to access the various functions. Consequently, we selected two gesture-based interfaces to compare their effectiveness in controlling different functionalities displayed on the windshield. The former is the UltraLeap Stratos [16]. Using two infrared cameras, this device can track the hand’s position in 3D space and provide haptic feedback in mid-air through an array of ultrasonic emitters. It enables mid-air gestures with force feedback, without requiring the user to touch any surface. The latter device is a standard capacitive trackpad, which receives touch input through an array of capacitive sensors that detect the change in capacitance when a finger is placed on the pad’s surface. For the development, we used a tablet with an opaque screen cover. We considered this second interaction a reference for the test because of its widespread use, even outside the automotive field.
To interact with the information shown on the windshield and to evaluate the different input techniques, six different interaction gestures (see Figure 5) were developed for the Stratos and the trackpad:
  • Finger interaction: By keeping the palm and fingers parallel to the floor, the users can activate a specific menu when a single finger is bent. For the trackpad, since it is impossible to determine which specific finger performs the interaction, a different methodology was developed: the main sections are activated according to the number of fingers placed on the trackpad;
  • Swipe interaction: The hand movement in the 3D space from right to left gets detected and interpreted to return a direction; this movement is used to swipe between different menu elements on the windscreen. The same interaction is used but performed in the 2D space for the trackpad;
  • Grid interaction: The information on the windshield is selected by moving the hand through a virtual grid positioned parallel to the floor and perpendicular to the windscreen. The interaction is the same for the trackpad, but fingers should swipe in the direction of the element to select it;
  • Confirm: To confirm the selected menu elements, the users should perform a grab gesture by bending all the fingers from an open position to a closed one (“fist”). For the trackpad, they only need to touch and release the surface of the pad with a finger;
  • Back: To return from a menu to a previous one, the user should rapidly swipe up, returning to the initial position. For the trackpad, they should perform a quick swipe with their fingers from the center of the pad to the bottom part;
  • Volume: To modify the music volume, the users should bring two fingers close together (“pinch”) and then move their hand along the axis perpendicular to the floor to turn the volume up or down. For the trackpad, they should put two fingers on the pad and then drag them up or down to increase or decrease the volume.
Figure 5. Different gesture interactions for the Stratos (left) and trackpad interfaces (right). (a) finger interaction, (b) swipe interaction, (c) grid interaction, (d) confirm, (e) back, (f) volume.
Every time the Stratos registers an interaction, mid-air haptic feedback notifies the user; this notification consists of a brief (200 ms) push of air on the center of the user’s palm. For all the interactions except the grid and volume, the feedback was given as soon as the gesture was considered complete. For the grid interaction, the feedback was given each time the hand changed quadrant, and for the volume gesture, it was given continuously for the duration of the gesture. In the case of the trackpad interface, no additional haptic feedback was provided.
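As an illustration only (not the authors’ implementation, which relies on the device SDKs), the following sketch shows how two of the gestures could be recognized from per-frame hand samples and how the 200 ms haptic pulse could be dispatched on completion; the data structure, field names, and thresholds are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical per-frame hand sample; field names and thresholds are illustrative,
# not those of the UltraLeap SDK or the authors' implementation.
@dataclass
class HandFrame:
    t: float                      # timestamp in seconds
    palm_x: float                 # lateral palm position in metres (right positive)
    fingers_extended: List[bool]  # one flag per finger, thumb to pinky

SWIPE_DISTANCE = 0.12   # minimum right-to-left travel in metres (assumed)
SWIPE_WINDOW = 0.50     # maximum gesture duration in seconds (assumed)

def detect_swipe_left(frames: List[HandFrame]) -> bool:
    """Swipe: the palm travels right to left far enough within the time window."""
    if not frames:
        return False
    recent = [f for f in frames if frames[-1].t - f.t <= SWIPE_WINDOW]
    return len(recent) > 1 and recent[0].palm_x - recent[-1].palm_x >= SWIPE_DISTANCE

def detect_confirm(frames: List[HandFrame]) -> bool:
    """Confirm ("fist"): all fingers go from extended to bent between samples."""
    if len(frames) < 2:
        return False
    was_open = all(frames[0].fingers_extended)
    is_closed = not any(frames[-1].fingers_extended)
    return was_open and is_closed

def on_gesture_complete(pulse: Callable[[int], None]) -> None:
    """Notify the user once a gesture is recognised, e.g., a 200 ms mid-air pulse."""
    pulse(200)  # pulse duration in milliseconds
```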

2.3. Driving Simulator

To validate the proposed interaction system, a virtual simulation was implemented on the iDrive driving simulator [17] (Figure 6) of Politecnico di Milano to assess the effectiveness of the GUI and the overall user experience (UX). The simulator reproduces the windshield using three screens and communicates with the UltraLeap Stratos and the trackpad. The simulation was developed with the Unity 3D game engine [18] and consists of a virtual autonomous car moving in a city. The GUI of the proposed interaction system was developed with the UI features of Unity 3D, and the input and output interactions were implemented with the corresponding Software Development Kits (SDKs).

3. Test Campaign

During the journey in the autonomous vehicle, users were required to complete five tasks. The test vehicle traveled on a two-lane road within the city, in an area with no traffic or pedestrians. The interaction between the vehicle and its surroundings is the central focus of this scenario because it provides the information for the interaction system. Upon completion of a task, a sound notified the users that the task was done; all open menus then closed, and the next task started automatically after 10 s. If a user did not finish a task within 5 min of its start, the task was automatically marked as failed; the system then closed all open menus and proceeded to the next task within 10 s, as if the task had been completed (a code sketch of this task flow follows the task list). The five tasks proposed to the users, in the order given, were the following:
  • Start: Set the car’s destination and confirm the trip to start the car; the interactions used were finger interaction, swipe, and confirm;
  • Music: Select a song and then adjust the music volume; the interactions used were finger interaction, swipe, and confirm, with the volume adjusted via the pinch gesture;
  • Call: Take a call and close it with two swipe interactions;
  • File Explorer: Browse the file menu to find, open, and skim a presentation file using the finger interaction, swipe, grid, and confirm;
  • POI: Select a Point Of Interest from the specific menu using the finger gesture, the grid interaction, and the confirm gesture.
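The sketch below illustrates the task-flow timing described above (a 5 min timeout per task, an audible cue, a menu reset, and a 10 s gap before the next task); the callables are hypothetical stand-ins for the simulator hooks, not part of the actual test software.

```python
import time

TASKS = ["Start", "Music", "Call", "File Explorer", "POI"]
TASK_TIMEOUT_S = 5 * 60     # a task is failed if not completed within 5 minutes
INTER_TASK_GAP_S = 10       # pause before the next task starts automatically

def run_session(wait_for_completion, close_menus, play_sound, log):
    """Drive the five-task protocol described above.

    The four callables are stand-ins for simulator hooks (hypothetical names):
    wait_for_completion(task, timeout) blocks until the task is done or the
    timeout elapses and returns True/False; the others are self-explanatory.
    """
    results = {}
    for task in TASKS:
        start = time.monotonic()
        completed = wait_for_completion(task, TASK_TIMEOUT_S)
        elapsed = time.monotonic() - start
        results[task] = {"completed": completed, "time_s": elapsed}
        play_sound()          # audible cue marks the end of the task
        close_menus()         # reset the GUI regardless of success or failure
        log(task, completed, elapsed)
        time.sleep(INTER_TASK_GAP_S)
    return results
```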
After the first task, the car drove autonomously around the virtual city. During the test, the time taken and the errors made by the users in completing each task were collected. The Pupil Core eye-tracking device [19] tracked the instances when users glanced at the input device rather than at the windscreen interface. The goal was to observe how individuals interact with the devices and determine whether frequent glances at the input lead to decreased attention. To do this, two areas of interest (AOI) were defined using printed markers placed around the simulator’s central monitor and the interaction area (Stratos or trackpad). To simplify the calibration of the eye tracker, we opted for a nine-point calibration technique using natural features, i.e., specific points identified on the iDrive simulator, as depicted in Figure 7.
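As an illustration of how such glance counts can be derived, the sketch below maps gaze samples to axis-aligned AOIs and counts each entry into the input-device AOI as one glance; the AOI bounds and coordinate convention are illustrative, not those of the actual Pupil Core surfaces.

```python
from typing import List, Tuple

# Axis-aligned AOIs in normalised coordinates (0..1); the bounds are invented
# for illustration and do not reflect the experimental setup.
AOIS = {
    "windshield": (0.0, 0.35, 1.0, 1.0),    # x_min, y_min, x_max, y_max
    "input_device": (0.30, 0.0, 0.70, 0.30),
}

def classify(gaze: Tuple[float, float]) -> str:
    """Return the AOI a gaze sample falls into, or 'other'."""
    x, y = gaze
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

def count_glances_at_input(gaze_samples: List[Tuple[float, float]]) -> int:
    """Count how many times the gaze *enters* the input-device AOI,
    i.e., one glance per transition into it, not one per sample."""
    glances, previous = 0, None
    for sample in gaze_samples:
        current = classify(sample)
        if current == "input_device" and previous != "input_device":
            glances += 1
        previous = current
    return glances
```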
For the test, a between-subject design was planned. The subjects were divided into two groups. One group used the Stratos, the other the trackpad. Both groups had access to the same windscreen display GUI. Figure 8 shows a subject during the execution of the test on the iDrive simulator with the Stratos device. The entire test took around 50 min and consisted of the following steps:
  • The participants were introduced to the research and its objectives. They were then requested to complete an anonymous form, which included general details such as age range and nationality and specific information on their driving practices and familiarity with the interaction mode;
  • The subjects performed warm-up activities to become familiar with the system. During the warm-up, the users worked in a blank scenario containing only the GUI and were guided by a moderator on how to control the interface. Once all the gestures had been explained, the users had five more minutes to explore the system freely and gain confidence, as proposed in similar studies [20,21];
  • The subjects were asked to wear the eye-tracking device; then, its calibration was performed;
  • The actual test started; the subjects were asked to follow the instructions given by a pre-recorded neutral voice, repeated only once at the beginning of each task. Eye-tracking data, completion times, and errors were recorded during the test;
  • After completing all the tasks, the participants were instructed to complete the two questionnaires (Raw NASA-TLX and AttrakDiff). Afterward, a short and spontaneous discussion was held to debrief.
Figure 8. A subject during the execution of the test on the iDrive simulator with the Stratos device.
Finally, we selected two standardized questionnaires to compare the touchless gestural interface to the trackpad in terms of perceived workload, performance, and interaction experience: the Raw NASA-TLX [22] and the AttrakDiff [23], as proposed in [24].
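For reference, the Raw NASA-TLX differs from the original TLX in that it skips the pairwise weighting step and scores overall workload as the unweighted mean of the six subscale ratings; the sketch below shows this scoring with invented example ratings, not data from this study.

```python
SUBSCALES = ["Mental Demand", "Physical Demand", "Temporal Demand",
             "Performance", "Effort", "Frustration"]

def raw_tlx(ratings: dict) -> float:
    """Raw NASA-TLX overall score: the unweighted mean of the six subscale
    ratings (typically on a 0-100 scale), without the pairwise weighting
    used in the original TLX."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

# Example with illustrative ratings:
print(raw_tlx({"Mental Demand": 55, "Physical Demand": 40, "Temporal Demand": 30,
               "Performance": 25, "Effort": 50, "Frustration": 35}))
```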

Participants

The test involved 20 users, divided into two groups: one performing with the mid-air haptic interface (Stratos) and the other with the trackpad (see Table 1). Thirteen subjects were male (65%) and seven were female (35%); all were aged between 20 and 29 (mean: 23.7, SD: 3.06), and all had a driving license. In the first group of 10 subjects, 3 participants (30%) had prior experience with hand-tracking interfaces similar to the one used by the Stratos device. In the second group, 9 out of 10 participants (90%) commonly rely on trackpads in their daily lives, confirming the assumption that the trackpad can serve as a reference point for this test. Table 1 summarizes the descriptive statistics of the sample.

4. Test Results and Insights

This section presents test results and insights from analyzing raw data and statistics. The broader implications of these valuable insights and potential applications will also be explored.

4.1. Task Success and Time

Overall, most of the participants were able to complete the experiment. A task was automatically marked as failed (negative) when a subject could not finish it within the time limit. In the fourth task (file explorer), two Stratos users exceeded the time limit because of incorrect inputs. In the first task (start), one Stratos user and four trackpad users selected the wrong menu item, resulting in task failure. The completion status of each task is presented in Table 2.
The average task completion times of the two groups are similar (Figure 9), although the statistical analysis can only show the absence of a significant difference rather than confirm equivalence.
Due to the limited number of subjects and the non-normal distribution of the sample, a Mann–Whitney U test with significance level α = 0.05 was performed to test the null hypothesis (H0) of no difference between the groups. As shown in Table 3, the p-value is always higher than α, so the null hypothesis cannot be rejected. It should be noted that, for the file explorer task, we examined only the times of successful users, excluding the two participants who could not finish the task within 5 min.
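For readers who wish to reproduce this kind of analysis, the sketch below runs a two-sided Mann–Whitney U test with SciPy; the completion times are invented placeholders, not the study data.

```python
from scipy.stats import mannwhitneyu

ALPHA = 0.05

# Illustrative completion times in seconds (invented numbers):
stratos_times  = [42.1, 55.3, 48.0, 61.7, 39.5, 50.2, 58.8, 47.3, 44.9, 53.0]
trackpad_times = [40.8, 52.6, 45.1, 66.0, 43.2, 49.7, 57.1, 46.8, 51.4, 48.9]

# Two-sided Mann-Whitney U test, chosen because the samples are small and
# not normally distributed.
stat, p_value = mannwhitneyu(stratos_times, trackpad_times, alternative="two-sided")
print(f"W = {stat:.1f}, p = {p_value:.3f}")
if p_value > ALPHA:
    print("No significant difference detected; the null hypothesis is not rejected.")
else:
    print("Significant difference between the two groups.")
```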

4.2. Number of Distractions Caused by the Interface

The eye-tracking data show how often the users looked at the input device (Stratos or trackpad) and thus away from the windshield interface (and the road). On average, the Stratos was looked at less often than the trackpad (see Figure 10). However, this difference is not statistically significant (W = 39.50, p = 0.442, Mann–Whitney U test).

4.3. Perceived Physical and Cognitive Load

Figure 11 shows the results of the Raw NASA-TLX, used to assess the perceived physical and mental demand of the different interfaces. With the Stratos, users reported more frustration and effort in completing the tasks than with the touch counterpart, as well as a higher physical load. Observing the users revealed that they had to raise their hands about 20 cm above the device to interact with it, unlike the trackpad, where users could keep their hands on the armrest. The users confirmed this in the interviews after the test. Although the Stratos had a slightly higher mental demand, users perceived their own performance as better. However, no significant difference (α = 0.05) between the two groups could be confirmed for any of the scales (see Table 4). Here too, the Mann–Whitney U test was used instead of Student’s independent-samples t-test due to the non-normality of the data.

4.4. Attractiveness

To assess the attractiveness and overall user experience of both systems, we used the AttrakDiff survey. Figure 12a shows that both systems fall in the desirable region, with one difference: the Stratos is perceived as more self-oriented, while the trackpad is more task-oriented. The Stratos has a higher hedonic quality than the trackpad, while the pragmatic quality (PQ) and attractiveness (ATT) are essentially equal, although within PQ the Stratos rates as more direct and within ATT as more pleasant, as seen in Figure 12b. In hedonic stimulation quality (HQ-S), the Stratos scores higher, with a gap of almost one point compared to the trackpad. The Stratos also turns out to be more challenging than the trackpad, although this value is of limited relevance since it is very close to zero. The only note of interest in hedonic identification quality (HQ-I) is that the Stratos is perceived as more professional and premium.
The minimal discrepancy observed between the two interfaces can likely be attributed to the novelty effect that users encountered while interacting with the trackpad. Despite this interface being commonplace now, it was perceived as a fresh and innovative method of interaction within the car, as reported by users during the post-test interview. Indeed, when asked which interfaces they had used at least once inside a vehicle, only one participant reported having used a trackpad; the others had never used this technology inside a car.

4.5. Post-Test Insights: Unstructured Interviews

After the test, unstructured interviews were conducted to gather qualitative impressions from users regarding their experience. The issues raised by the users can be summarized as follows:
  • Lack of feedback: Users requested more precise feedback from the interface as they often had a perception gap regarding whether they had acted correctly. Particularly for gestures, users emphasized the need for more timely feedback, such as sound or visual references, to help them be more efficient during interaction with the interface;
  • UI/UX issues: Several users were confused by the menu icons, mistaking the POI icon for the destination icon. Inconsistencies in the interface, such as the visual indicators and the transparency effects for scrolling, also confused users and disrupted their familiarity with the interface;
  • Responsiveness and complexity of gestures: Certain gestures were easier to execute than others. The “grid interaction” proved the most challenging and unnatural to master, compared to the “swipe interaction”, which is more intuitive and aligns with users’ familiarity with digital devices. On the other hand, the “pinch interaction” for music control received positive feedback, mainly thanks to the immediate feedback that follows the gesture;
  • Hand position and finger usage: Some users were unsure whether they had to maintain a specific hand position for the device to detect their hand or whether they could relax it. Additionally, some users found the prolonged haptic feedback bothersome. Moreover, gestures that required all five fingers posed an accessibility challenge.
The positive aspects can be summarized as follows:
  • Appreciation for the look and feel: Users found the interface unobtrusive, taking up minimal space and conveniently placed on the windshield, allowing them to focus on the road without distraction. They also appreciated the minimalist design, color scheme, and overall sense of calm it provided;
  • Perception of innovation: The proposed interaction system has received positive feedback from users, particularly the gestures that have been implemented. The novelty effect was also apparent with the trackpad, even though it is a more commonly used device;
  • Haptic feedback: Users appreciated the haptic feedback, which often guided them during gesture performance, providing a sense of touch and confirming that they had successfully executed the requested actions.

5. Discussion

Based on the results obtained from the tests and the data collected, as well as the current state of the art, we observed that some users encountered challenges while using both proposed interfaces. However, it is essential to note that a significant level of user engagement accompanied these difficulties. As anticipated in the introduction, one of the main concerns was that certain aspects of the graphical user interface confused users, leading them to access parts of the interface that were not essential for completing the tasks. Nonetheless, the impact of this issue on overall performance was not substantial. The efficiency and intuitiveness of the proposed interaction compensated for the additional menu navigation, a finding previously corroborated by several studies [9,15,20]. These findings further reinforce the notion that touchless and non-touchless interactions are comparable in terms of usability and distraction.
However, a notable distinction was the difference in perceived physical fatigue between the two interaction methods. The touchless interaction was more tiring, likely due to the specific prototype employed in the experiment: practical and developmental constraints compelled users to maintain a particular hand and wrist position throughout the study, a factor also highlighted by May et al. [10]. Despite the increased physical strain, this did not adversely affect the attractiveness of the technology. Indeed, users found the touchless interface more user-friendly and perceived it as a premium feature with a better user experience.
Participants evaluated the workload, performance, and interaction experience of both devices similarly. Even though they had varying prior experiences in different fields, only one participant had used a trackpad in a vehicle. Therefore, it can be concluded that both devices are new to the automotive sector and that previous experience does not affect first-time interaction in this context. As a result, both devices can be effectively used in the automotive field.
These findings underscore the significance of considering usability and physical implications when designing touchless interfaces. While there may be a trade-off regarding physical fatigue, the enhanced user-friendliness and perceived value of touchless interactions contribute to an overall positive user experience. These considerations may be limited by the small sample size, which could affect their applicability to a broader population. A within-subject test design could have increased the reliability of the results; however, in preliminary pilot tests, participants found the overall test duration of about 2 h too long, making this option unviable.
To optimize this technology, further research and development efforts should prioritize refining ergonomics and addressing the physical demands of touchless interaction. Including more eye movement indicators or participant takeover performance could improve the meaningfulness of the results. Striving for a balance between usability and user comfort will ultimately enhance the acceptance and desirability of these interfaces.

6. Conclusions

This study presents the development and evaluation of an innovative interface design for shared autonomous vehicles. This interface utilizes the car windshield as a display surface to provide infotainment and real-time information about the surrounding environment. A gestural touchless input system was proposed to interact with the graphic user interface. All the interfaces and user experiences were designed and developed with an iterative process considering the gesture interaction models.
The input interface was compared to a familiar trackpad interface to validate the system. A virtual simulation was employed for the test to compensate for the current limitations of autonomous driving applications and windshield technology. The study involved twenty subjects and addressed the user experience of the input interfaces with a specific focus on usability, engagement, physical fatigue, and perception. The findings indicated that, while some users faced challenges with both interfaces, they remained highly engaged, and performance was comparable between the two interaction interfaces. Notably, there was a discernible difference in perceived physical fatigue, with the touchless interaction being more tiring. This was primarily due to the prototype used in the experiment, which required users to maintain a specific hand and wrist position throughout the study, leading to increased physical strain. Nevertheless, users found the touchless interface to be more user-friendly and engaging.
These findings underscore the importance of considering usability and physical implications when designing touchless interfaces. To optimize this technology, further research and development efforts should focus on refining ergonomics and addressing the physical demands associated with touchless interaction. Striving for a balance between usability and user comfort will ultimately enhance the acceptance and desirability of these interfaces.
Finally, we believe that touchless interfaces have the potential to provide a positive user experience and to be used successfully in the automotive field, despite the challenges and physical implications involved, since the perceived value of touchless interactions and their comparable performance with established technologies outweigh the drawbacks. By addressing the ergonomic aspects and optimizing the physical demands, touchless interfaces can become even more widely accepted and widespread as an input interface.

Author Contributions

Conceptualization, F.G., F.B., V.A. and G.C.; Methodology, F.B., V.A. and G.C.; Software, P.B. and A.P.; Investigation, P.B., A.P. and F.C.; Resources, F.C. and F.G.; Data curation, P.B. and A.P.; Writing—original draft, P.B., A.P. and F.C.; Writing—review & editing, V.A. and G.C.; Visualization, F.C. and F.G.; Supervision, F.B., V.A. and G.C.; Funding acquisition, G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research is part of the project BASE5G (https://www.base5g.polimi.it/, accessed on 14 July 2023) funded by Regione Lombardia POR FESR 2014–2020.

Institutional Review Board Statement

All subjects gave informed consent for inclusion before participating in the study. The study followed the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Politecnico di Milano (protocol code 35/2020, 2 December 2020).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

This research was supported by the i.Drive Lab (http://www.idrive.polimi.it/, accessed on 14 July 2023). The authors thank all the survey participants for investing their time in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nichol, R.J. Airline Head-Up Display Systems: Human Factors Considerations. Int. J. Econ. Manag. Sci. 2015, 4, 248. [Google Scholar] [CrossRef]
  2. Tonnis, M.; Lange, C.; Klinker, G. Visual Longitudinal and Lateral Driving Assistance in the Head-Up Display of Cars. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; pp. 91–94. [Google Scholar]
  3. Halin, A.; Verly, J.G.; Van Droogenbroeck, M. Survey and Synthesis of State of the Art in Driver Monitoring. Sensors 2021, 21, 5558. [Google Scholar] [CrossRef] [PubMed]
  4. Guo, H.; Zhao, F.; Wang, W.; Jiang, X. Analyzing Drivers’ Attitude towards HUD System Using a Stated Preference Survey. Adv. Mech. Eng. 2015, 6, 380647. [Google Scholar] [CrossRef]
  5. Panasonic Drives You. Available online: https://na.panasonic.com/us/news/panasonic-automotive-brings-expansive-artificial-intelligence-enhanced-situational-awareness-driver (accessed on 14 July 2023).
  6. Stecher, M.; Michel, B.; Zimmermann, A. The Benefit of Touchless Gesture Control: An Empirical Evaluation of Commercial Vehicle-Related Use Cases. In Advances in Human Aspects of Transportation. AHFE 2017. Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2018; pp. 383–394. [Google Scholar]
  7. Survey on 3D Hand Gesture Recognition|IEEE Journals & Magazine|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/7208833 (accessed on 23 May 2023).
  8. Häuslschmid, R.; Osterwald, S.; Lang, M.; Butz, A. Augmenting the Driver’s View with Peripheral Information on a Windshield Display. In Proceedings of the 20th International Conference on Intelligent User Interfaces, Atlanta, GA, USA, 29 March–1 April 2015. [Google Scholar]
  9. Ohn-Bar, E.; Tran, C.; Trivedi, M. Hand Gesture-Based Visual User Interface for Infotainment. In Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Portsmouth, UK, 17–19 October 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 111–115. [Google Scholar]
  10. May, K.; Gable, T.; Walker, B. A Multimodal Air Gesture Interface for In Vehicle Menu Navigation. In Proceedings of the Adjunct Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Seattle, WA, USA, 17–19 September 2014; pp. 1–6. [Google Scholar]
  11. Ferscha, A.; Riener, A. Pervasive Adaptation in Car Crowds. In Mobile Wireless Middleware, Operating Systems, and Applications-Workshops. MOBILWARE 2009. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering; Springer: Berlin/Heidelberg, Germany, 2009; Volume 12, pp. 111–117. [Google Scholar]
  12. Brand, D.; Büchele, K.; Meschtscherjakov, A. Pointing at the HUD: Gesture Interaction Using a Leap Motion. In Proceedings of the Adjunct Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ann Arbor, MI, USA, 24–26 October 2016; pp. 167–172. [Google Scholar]
  13. Lindgren, T. Experiencing Electric Vehicles: The Car as a Digital Platform. In Proceedings of the 55th Hawaii International Conference on System Sciences, Hawaii, HI, USA, 4–7 January 2022. [Google Scholar]
  14. Deng, J.; Wu, X.; Wang, F.; Li, S.; Wang, H. Analysis and Classification of Vehicle-Road Collaboration Application Scenarios. Procedia Comput. Sci. 2022, 208, 111–117. [Google Scholar] [CrossRef]
  15. Liu, A.; Tan, H. Research on the Trend of Automotive User Experience. In Proceedings of the Cross-Cultural Design. Product and Service Design, Mobility and Automotive Design, Cities, Urban Areas, and Intelligent Environments Design, Virtual, 26 June–1 July 2022; Rau, P.-L.P., Ed.; Springer International Publishing: Cham, Switzerland, 2022; pp. 180–201. [Google Scholar]
  16. Haptics|Ultraleap. Available online: https://www.ultraleap.com/haptics/ (accessed on 23 May 2023).
  17. Idrive|Interaction between Driver, Road Infrastructure, Vehicle, and Environment. Available online: https://www.idrive.polimi.it/ (accessed on 27 June 2023).
  18. Unity Real-Time Development Platform|3D, 2D, VR & AR Engine. Available online: https://unity.com (accessed on 27 June 2023).
  19. Pupil Core—Open Source Eye Tracking Platform—Pupil Labs. Available online: https://pupil-labs.com/products/core/ (accessed on 26 July 2023).
  20. Trojaniello, D.; Cristiano, A.; Sanna, A.; Musteata, S. Evaluating Real-Time Hand Gesture Recognition for Automotive Applications in Elderly Population: Cognitive Load, User Experience and Usability Degree. In Proceedings of the Third International Conference on Informatics and Assistive Technologies for Health-Care, Medical Support and Wellbeing HEALTHINFO 2018, Nice, France, 14–18 October 2018; pp. 36–41. [Google Scholar]
  21. Szczerba, J.; Hersberger, R.; Mathieu, R. A Wearable Vibrotactile Display for Automotive Route Guidance: Evaluating Usability, Workload, Performance and Preference. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2015, 59, 1027–1031. [Google Scholar] [CrossRef]
  22. Said, S.; Gozdzik, M.; Roche, T.R.; Braun, J.; Rössler, J.; Kaserer, A.; Spahn, D.R.; Nöthiger, C.B.; Tscholl, D.W. Validation of the Raw National Aeronautics and Space Administration Task Load Index (NASA-TLX) Questionnaire to Assess Perceived Workload in Patient Monitoring Tasks: Pooled Analysis Study Using Mixed Models. J. Med. Internet Res. 2020, 22, e19472. [Google Scholar] [CrossRef] [PubMed]
  23. Hassenzahl, M. Mit Dem AttrakDiff Die Attraktivität Interaktiver Produkte Messen. In Proceedings of the Usability Professionals UP04, Stuttgart, Germany, 2004; pp. 96–102. [Google Scholar]
  24. Rümelin, S.; Butz, A. How to Make Large Touch Screens Usable While Driving. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Eindhoven, The Netherlands, 28–30 October 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 48–55. [Google Scholar]
Figure 1. Panasonic AR HUD showcased at the Las Vegas 2020 CES [5].
Figure 2. Comparison of two GUI versions.
Figure 3. Finger interaction influences the design of the menu, which features five items.
Figure 4. Grid matches with the layout of the items inside the menu.
Figure 6. The graphical user interface is integrated into a virtual reality simulation on the iDrive simulator.
Figure 7. Order and position of the calibration points for the eye-tracking device and the printed marker used to create the area of interest.
Figure 9. Mean task completion time (in seconds); * the failed attempts are not used to calculate the mean.
Figure 10. Boxplot of the occurrences where the users looked directly at the input interface.
Figure 11. Results of the Raw NASA-TLX.
Figure 12. Results of AttrakDiff survey (www.attrakdiff.de, accessed on 30 July 2023): Stratos in blue, Trackpad in orange. (a) Portfolio presentation matrix between the Hedonic Quality (HQ) and the Pragmatic Quality (PQ), (b) Description of word pairs subdivided in the main sub-categories: Pragmatic Quality (PQ), Hedonic Identification Quality (HQ-I), Hedonic Stimulation Quality (HQ-S) and Attractiveness (ATT).
Table 1. Sample descriptive statistics.

Characteristic | Stratos | Trackpad | Total
Sample size | 10 | 10 | 20
Gender: Female | 4 (40%) | 3 (30%) | 7 (35%)
Gender: Male | 6 (60%) | 7 (70%) | 13 (65%)
Dominant hand: Right | 9 (90%) | 9 (90%) | 18 (90%)
Dominant hand: Left | 1 (10%) | - | 1 (5%)
Dominant hand: Both | - | 1 (10%) | 1 (5%)
Has a car driving license: Yes | 10 (100%) | 10 (100%) | 20 (100%)
Has a car driving license: No | - | - | -
How often do you drive: Daily | 1 (10%) | 6 (60%) | 7 (35%)
How often do you drive: Weekly | 5 (50%) | 2 (20%) | 7 (35%)
How often do you drive: Monthly | - | 1 (10%) | 1 (5%)
How often do you drive: Sometimes | 4 (40%) | 1 (10%) | 5 (25%)
Have you already used this type of interface: Yes | 3 (30%) | 9 (90%) | 12 (60%)
Have you already used this type of interface: No | 7 (70%) | 1 (10%) | 8 (40%)
Table 2. Completion status per task.

Input Modality | Task | Positive | Negative
Stratos | Start | 9 (90%) | 1 (10%)
Stratos | Music | 10 (100%) | -
Stratos | Call | 10 (100%) | -
Stratos | File Explorer | 8 (80%) | 2 * (20%)
Stratos | POI | 10 (100%) | -
Trackpad | Start | 6 (60%) | 4 (40%)
Trackpad | Music | 10 (100%) | -
Trackpad | Call | 10 (100%) | -
Trackpad | File Explorer | 10 (100%) | -
Trackpad | POI | 10 (100%) | -
* These were automatically set as negative because the users took more than five minutes to complete the task.
Table 3. Mann–Whitney U test for the task time.

Task | W | p
Start | 52.5 | 0.821
Music | 45.5 | 0.762
Call | 52.5 | 0.880
File Explorer * | 29.5 | 0.374
POI | 49.0 | 0.971
No value is significant (p < 0.05). * Failed attempts are not used in the calculation.
Table 4. Mann–Whitney U test for the Raw NASA-TLX.

NASA-TLX Subscale | W | p
Mental Demand | 70 | 0.138
Physical Demand | 64 | 0.291
Temporal Demand | 38 | 0.378
Performance | 51.5 | 0.939
Effort | 66 | 0.237
Frustration | 73.5 | 0.080
No value is significant (p < 0.05).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
