Technical Note

V-Cockpit: A Platform for the Design, Testing, and Validation of Car Infotainment Systems through Virtual Reality

by Michela Papandrea 1,*, Achille Peternier 1, Diego Frei 1, Nicolò La Porta 1,2, Mirko Gelsomini 1, Daniele Allegri 3 and Tiziano Leidi 1,*
1 Institute of Information Systems and Networking (ISIN), University of Applied Sciences and Arts of Southern Switzerland (SUPSI), 6962 Lugano, Switzerland
2 Faculty of Informatics, Università della Svizzera Italiana (USI), 6900 Lugano, Switzerland
3 Institute of Systems and Applied Electronics (ISEA), University of Applied Sciences and Arts of Southern Switzerland (SUPSI), 6900 Lugano, Switzerland
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(18), 8160; https://doi.org/10.3390/app14188160
Submission received: 30 July 2024 / Revised: 4 September 2024 / Accepted: 6 September 2024 / Published: 11 September 2024
(This article belongs to the Special Issue Human–Computer Interaction and Virtual Environments)

Abstract
The V-Cockpit platform aims to transform the design, testing, and validation of car infotainment systems from the physical realm to virtual reality. It uniquely integrates various aspects of the creative phases—from conceptualization to evaluation—streamlining the process and reducing time and costs compared to traditional methods that focus on individual activities and rely heavily on physical prototyping. This technical note provides a comprehensive overview of the platform's main aspects, highlighting the integration of hardware and behavioral analysis algorithms to improve user experience and detect potential design flaws early on. The V-Cockpit platform, composed of six key components, leverages virtual reality and digital twins, promising significant cost savings and enhanced design efficiency. This work details the system architecture, implementation, and initial benefits of this innovative approach through the analysis of three use cases.

1. Introduction

In the rapidly evolving automotive industry, the design and validation of car infotainment systems pose significant challenges [1,2]. Traditional methods rely heavily on physical prototypes, which are time-consuming and costly [3], often requiring an initial concept to pass through multiple stages (such as mockup creation and real hardware installation) and departments (including 3D printing, quality control, and field testing) to gather feedback and identify possible issues. To address these challenges, the V-Cockpit project aims to improve this process by reducing dependencies and constraints connected to physical equipment and the real world through the use of virtual reality (VR) technology. By integrating real and simulated hardware along with behavioral analysis algorithms, V-Cockpit seeks to enhance the productivity of the design and prototyping phases associated with developing the car console and its interface with the in-vehicle infotainment (IVI) system.
The primary objective of the V-Cockpit project is to develop a VR system using digital twins to shorten the design validation time and reduce the associated costs. Although precise figures are challenging to predict [4], the project advances the sector by improving decision-making and reducing the non-recurring expenses (NREs) related to bringing a new IVI system to the market.
By running the IVI software on its native platform within the VR environment, V-Cockpit ensures compatibility with embedded hardware constraints, thereby enhancing the flexibility and applicability of the design process.
In this technical note, we provide an overview of the V-Cockpit platform’s system architecture, implementation, and anticipated benefits.
This note’s main contributions are as follows:
  • We introduce the V-Cockpit project, offering a high-level perspective on the platform as a whole. Future publications will explore specific aspects in greater detail and reference this note for context.
  • We demonstrate how a typical cockpit design loop can be executed using our platform by splitting it into three representative use cases.
  • We present an initial analysis of the benefits and potential of our approach as evidenced by its application across these three use cases.
The following sections will examine the context and state of the art related to the topic, detail the materials and methods used to develop the V-Cockpit platform, present the results achieved, and discuss their implications for the future of car infotainment system design, before concluding with recommendations for potential future research.

1.1. Context and State of the Art

Designing car infotainment systems typically involves multiple iterations of physical prototypes to ensure that the final product meets the expected user experience and usability standards [5]. This process is not only expensive but also extends the time required to bring new products to market. Additionally, the iterative nature of physical prototyping can lead to inefficiencies and the delayed identification of design flaws. The need for a more efficient and cost-effective solution is evident.
Virtual reality offers a promising alternative to traditional prototyping methods. By creating a virtual environment where designers can interact with digital twins of various components, VR allows for continuous, immersive interaction with the design. Such an environment facilitates immediate feedback and adjustments, streamlining the design process [6]. Furthermore, VR can integrate sensors and human physiological monitoring to provide automatic emotional feedback, enabling early assessment of the user experience and safety [7].
The advent of industrial metaverses [8] has ushered in a new era where the boundaries between physical and virtual spaces blur, fostering collaborative innovation across diverse industries [9]. These metaverses leverage immersive technologies to create dynamic interactive environments, becoming incubators for groundbreaking applications [10,11]. In automotive design, industrial metaverses offer a playground for reimagining traditional processes, introducing efficiencies, and promoting cross-disciplinary collaboration [12].
Within the automotive sector, the integration of immersive technologies has become increasingly prevalent [13,14]. Extended reality (XR) is shaping the way vehicles are conceptualized, tested, and validated [15], while designers and engineers now leverage these technologies to visualize, iterate, and refine vehicle components in virtual environments before transitioning to physical prototypes [16,17].
As a focal point within the immersive automotive landscape, the concept of virtual cockpits has gained prominence. Virtual cockpits represent a shift from conventional design methodologies, allowing for the creation of digital twins that replicate physical components in virtual spaces. This evolution facilitates rapid prototyping, reducing time to market and providing designers with immediate, interactive feedback on the ergonomics, safety, and aesthetics of IVI systems [18].
In the pursuit of refining automotive design, behavioral analysis algorithms and digital twins have emerged as critical components [19]. Behavioral analysis adds a layer of human-centric evaluation, capturing user interactions within virtual cockpits. Digital twins, representing exact replicas of physical components, enable a seamless transition between virtual and physical realms, fostering iterative design processes.
Despite the strides made in the convergence of industrial metaverses and automotive design, challenges persist [20]. Ensuring seamless integration between real and virtual components, addressing ethical considerations, and establishing industry-wide standards remain focal points. However, these challenges present opportunities for innovation, collaboration, and the creation of a new design paradigm. In this dynamic landscape, the V-Cockpit project redefines the traditional boundaries of car cockpit design. This project not only addresses current challenges but sets the stage for a future where immersive technologies help push the automotive industry into new levels of efficiency, creativity, and collaborative design.

2. Materials and Methods

The V-Cockpit platform is built upon a modular and scalable system architecture (Figure 1) designed to facilitate the seamless integration of various components, both virtual and physical. The result is a cloud-supported ecosystem that pivots around VR and covers aspects such as 3D asset migration and configuration, the testing of the current car cockpit design through different simulated scenarios, and user-based data collection and analysis to derive high-level indicators about the effectiveness of the prototype, complemented by a web dashboard for comfortably interacting with the generated output through a browser.
All of this functionality is organized into six core components, each dedicated to a specific subset of the platform’s overall capabilities.
These components are a series of 3D/CAD software plugins (1), the core VR system (2), the hardware interface for communicating with a real IVI board (3), the cloud-based infrastructure (4), the data analytics instruments (5), and the web dashboard for presenting results (6).
In the remainder of this section, we detail these six core components and then focus on how they are utilized to implement three main features (presented as use cases) that our system offers to its users.

2.1. System Core Components

The V-Cockpit system architecture is divided into six core components, described below:
  • Third-Party CAD/3D Editing Software Plugins: To streamline the seamless progression from the design phase to the virtual environment, a series of specialized instruments serve as connectors, bridging third-party CAD/3D editing systems and the IVI development software. Such instruments are implemented as plugins developed for popular CAD/3D editing software such as Dassault Systèmes SolidWorks® 2021 SP2.0 and Blender 3.5.0 to facilitate the direct migration of 3D models. An ad hoc file exchange format is used to store the relevant information and to optimize 3D data for real-time performance within a VR context. This integration significantly improves the overall workflow, fostering a more efficient and cohesive transition throughout the development process.
    Moreover, the seamless integration facilitates a dynamic exchange of data and insights between the design and testing phases. Engineers and designers can interact with virtual prototypes in a more immersive and intuitive manner, gaining valuable perspectives that may not be evident in traditional design workflows. This iterative feedback loop, empowered by the interconnected software tools, promotes a collaborative and innovative approach to refining designs with V-Cockpit-related instruments natively integrated into the working environment that users are already familiar with.
  • VR Client: The heart of the system lies in the VR simulation engine, responsible for rendering realistic 3D environments and tracking the user experience and performance [21]. This is possible thanks to Unity, a cross-platform game engine used to create 3D real-time applications, that also provides support for VR development [22]. Leveraging its advanced graphics and physics engines, this component immerses users in a dynamic and lifelike representation of the car cockpit. It enables real-time interaction, customization, and testing of various design elements, including the ones authored in CAD/3D editing software and imported through our ad hoc exchange format. Key to reducing time-to-market is a digital twins module, which creates virtual replicas of multiple console components within the car cockpit. This includes intricate details of infotainment surfaces, knobs, levers, controllers, and other relevant elements. These digital twins serve as the foundation for rapid prototyping and iterative design processes.
    The VR client also implements a basic car simulation and scripting system for the creation and replication of different scenarios. All of this is made possible thanks to the Unity XR Interaction Toolkit, which is employed to manage user interactions, including gaze, gesture, and controller inputs. Scripts are developed to handle lifecycle events, input/output events, and interaction behaviors, ensuring a smooth and responsive user experience.
  • Hardware Integration: To ensure a user experience as close as possible to the real world, it is essential that the IVI system is run directly on the hardware foreseen for the real cockpit. For this purpose, a dedicated component, which allows the IVI software to run seamlessly on its native platform, is integrated into V-Cockpit. Custom drivers and middleware are provided to facilitate communication between the VR client and embedded hardware. This integration is tested extensively to ensure reliability and accuracy.
  • Cloud Backend/Middleware: The cloud backend manages data storage, processing, and analytics. It utilizes REST APIs for communication with various system components, ensuring data integrity and availability. The middleware handles tasks such as model conversion, data synchronization, and real-time updates. It serves as a centralized repository for VR assets, simulation and scenario results, and processed biometrics and analytics. This cloud infrastructure enables the efficient storage, retrieval, and sharing of data.
  • Analytics: Data collected during simulations and scenarization are automatically processed by AI-driven pipelines that extract a series of high-level metrics about user behavior, performance, and interaction with the current cockpit design [23]. The extracted metrics provide valuable insights into the usability, comfort, safety, and efficiency of the current cockpit and infotainment system design and configuration. The analytics component relies on the cloud backend for the acquisition and storage of its data and for executing its processing pipelines. Analytics runs as a post-processing step triggered after each VR experience and/or on demand, and its output can be navigated through the dashboard component.
  • Dashboard: The web dashboard is an interactive interface that provides users with a consolidated view of complex information, presented through visualizations like charts, graphs, and high-level metrics. Central to V-Cockpit’s methodology is the human-in-the-loop model, which orchestrates the design and testing phases around user interaction. Divided into four main steps (namely, car cockpit design, IVI software customization, scenarization, and simulation), this model ensures a collaborative and iterative approach involving both engineers and end users: the dashboard is the instrument where the information gathered and analyzed through these four steps is made available. It helps in identifying trends, potential issues, and areas for improvement.

2.2. System Communication

The six core components are interconnected modules that communicate with each other and with external elements using different interfaces and standards (see Figure 2).
The cloud backend is a pivotal component of the V-Cockpit system, responsible for data management and processing. The backend infrastructure is built on scalable cloud services, providing robust storage and computational capabilities. REST APIs are used for seamless communication between the VR client (2), third-party plugins (1), dashboard (6), and the cloud backend (4). The system supports structuring different activities into projects with various user types to handle administrative roles and design tasks and to manage customer testing, with access to backend services granted through user logins and JWT authorization. This approach ensures secure and controlled interactions tailored to the specific roles and responsibilities of each user.
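As an illustration of this communication pattern, the short sketch below logs into the backend and uploads an asset using a JWT bearer token. The endpoint paths and field names are hypothetical placeholders, not the actual V-Cockpit API.

```python
# Minimal client-side sketch of the REST/JWT interaction described above.
# NOTE: base URL, endpoint paths, and field names are illustrative assumptions.
import requests

BASE = "https://vcockpit.example.com/api"

def login(user: str, password: str) -> str:
    """Authenticate and return the JWT used for all subsequent calls."""
    r = requests.post(f"{BASE}/auth/login", json={"user": user, "password": password})
    r.raise_for_status()
    return r.json()["token"]

def upload_asset(token: str, project_id: str, path: str) -> dict:
    """Upload a 3D asset into a project; access is scoped by the user's role."""
    headers = {"Authorization": f"Bearer {token}"}
    with open(path, "rb") as fh:
        r = requests.post(f"{BASE}/projects/{project_id}/assets",
                          headers=headers, files={"asset": fh})
    r.raise_for_status()
    return r.json()
```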
Data synchronization between the VR client (2) and cloud backend (4) ensures that all components are up to date with the latest changes. This synchronization is crucial for collaborative design processes, where multiple stakeholders might be working on the same project simultaneously. Immediate feedback and iteration enhance the overall efficiency of the design process.
A communication layer (h), consisting of custom drivers and middleware, facilitates communication between the VR client (2) and both physical (b) and emulated (a) hardware. This layer ensures that inputs from physical controls are accurately reflected in the virtual environment and vice versa. Custom-developed hardware boards (3) convert automotive FPD-Link III video streams, generated by the embedded board (b) running the native IVI software, to HDMI signals which are captured by frame grabbers (f) and transferred to the VR client (2). Interaction with the embedded board is supported over Ethernet or CAN Bus (g), depending on the target embedded system. This bidirectional communication is essential for creating a realistic and interactive user experience. In the case of emulated hardware (a), the communication layer also supports video streaming (c) and manages event streams (d) to enable interaction.
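To make the event path more concrete, the following sketch uses the python-can library to forward a virtual knob rotation from the VR client to the IVI board and to poll for events coming back. The arbitration ID and payload layout are invented for illustration; the actual mapping depends on the target embedded system.

```python
# Sketch of bidirectional CAN traffic between the VR client and the IVI board.
# NOTE: the arbitration ID and one-byte payload are illustrative assumptions.
import can

VOLUME_KNOB_ID = 0x2F1  # hypothetical ID for a steering-wheel volume knob

def forward_knob_event(bus: can.BusABC, delta: int) -> None:
    # Encode a virtual knob rotation as a single-byte signed delta.
    msg = can.Message(arbitration_id=VOLUME_KNOB_ID,
                      data=[delta & 0xFF], is_extended_id=False)
    bus.send(msg)

def poll_ivi_events(bus: can.BusABC, timeout: float = 0.01):
    # Receive frames emitted by the IVI software and hand them to the VR client.
    msg = bus.recv(timeout)
    return None if msg is None else (msg.arbitration_id, bytes(msg.data))

# Example setup on Linux with SocketCAN:
# bus = can.interface.Bus(channel="can0", interface="socketcan")
```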
External sensors (e) play a crucial role in supporting analytics. Devices such as the Bitalino use Bluetooth to transmit physiological signals to the VR client during simulations. Additionally, if available, gaze sensors embedded in VR headsets such as the HTC Vive Pro Eye or Meta Quest Pro are supported, providing precise tracking of the user's eye movements to further enrich analytics.
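A minimal acquisition loop for such a sensor, assuming the official BITalino Python API, could look as follows; the MAC address, sampling rate, and channel index are placeholders that depend on the actual device and sensor wiring.

```python
# Sketch: streaming blood-volume-pulse samples from a Bitalino over Bluetooth.
# NOTE: MAC address, sampling rate, and channel index are placeholders.
from bitalino import BITalino

MAC = "XX:XX:XX:XX:XX:XX"   # device Bluetooth address (placeholder)
FS = 100                    # sampling rate in Hz
CHANNELS = [2]              # analog channel assumed to carry the BVP sensor

device = BITalino(MAC)
device.start(FS, CHANNELS)
try:
    for _ in range(60):              # ~one minute of acquisition
        block = device.read(FS)      # one second of samples
        bvp = block[:, -1]           # last column holds the analog value
        # ...forward `bvp` to the VR client / analytics ingestion here...
finally:
    device.stop()
    device.close()
```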

2.3. System Features and Selected Use Cases

The six core components and the communication layers are combined to provide the required functionality, addressing one or more of the aspects mentioned in the introduction. In the remainder of this technical note, we focus on three of them to highlight the contributions the V-Cockpit platform brings in this regard: we first describe them, then show the results we obtained, and finally discuss their impact and relevance. The three main aspects we selected are (1) the creation of a new cockpit design, (2) the testing and validation of such a design, and (3) the automatic feedback the system provides by gathering and analyzing data collected during experiments. These three aspects are representative steps closing the loop that progresses from the initial conceptual idea for a new design to the collection of the information required to further refine it in a successive iteration.

2.3.1. Creation of a New Design

The creation of a new cockpit design starts within the different departments responsible for sketching and modeling the various car components, ranging from the design of levers, knobs, panels, and other parts in CAD software to the drafting of the car dashboard. These steps are performed using standard third-party CAD/3D editing software and, thanks to our plugins (core component 1), contents are exported and made available for usage within the V-Cockpit platform (a sketch of such an export step is given below). If the new content can be easily mapped to an existing template (e.g., a lever or a flat display, by using specific naming conventions and settings), the operation is completed in just a few mouse clicks, as the binding of the different meshes to their physical and semantic properties is carried out automatically. If the new component is non-standard and/or particularly complex (e.g., a steering wheel with many thumb-operated knobs), the output of the plugins can be edited within Unity 3D through a specific SDK (also part of V-Cockpit) that simplifies the application of dynamic properties to a new asset.
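The sketch below illustrates the kind of export step such a plugin performs inside Blender: geometry leaves as glTF while a small manifest records the template bindings inferred from naming conventions. The "VC_" prefix convention and the manifest layout are hypothetical illustrations, not the actual V-Cockpit exchange format.

```python
# Illustrative Blender-side export step (run inside Blender's Python runtime).
# NOTE: the naming convention and manifest schema are assumptions for the example.
import json
import bpy

def export_vcockpit_asset(filepath: str) -> None:
    manifest = {"components": []}
    for obj in bpy.data.objects:
        if obj.type != 'MESH':
            continue
        # Objects following the convention (e.g., "VC_LEVER_turn") are bound
        # to a known interactive template automatically; others need the SDK.
        parts = obj.name.split('_')
        template = parts[1].lower() if obj.name.startswith('VC_') and len(parts) > 2 else None
        manifest["components"].append({
            "name": obj.name,
            "template": template,        # None => manual binding in the Unity SDK
            "location": list(obj.location),
        })
    # Geometry travels as glTF; the manifest carries semantic/physical bindings.
    bpy.ops.export_scene.gltf(filepath=filepath + ".glb")
    with open(filepath + ".json", "w") as fh:
        json.dump(manifest, fh, indent=2)
```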
Assets created in such a way are then automatically managed by the cloud backend (core component 4) and can immediately be loaded in the virtual environment (through core component 2) for setting up a new cockpit, by integrating elements coming from different software and designers. Such elements can also be manipulated in terms of their location, proportion, material, and functional properties directly within the VR client itself.
Once the physical layout of the car interior is defined, the different displays and functions are wired to the IVI system application through a series of events that are translated and dispatched transparently by the communication layer. The virtual cockpit prototype is then ready for the next step.

2.3.2. Testing and Validation of a New Design

The design prototype can be tested through a series of simulations involving static (parked car) and dynamic (simplified driving) scenarios, which are also part of core component 2. The user is tasked with interacting with the cockpit in different ways and contexts to assess various performance, security, and comfort metrics through the different scenarios (needed by core components 5 and 6). In this phase, the IVI software is run either on an emulator or on the real hardware (via core component 3). The integration of real-world hardware components with the VR environment is a critical aspect of the V-Cockpit project, as it allows designers to test and validate the functionality of physical controls within the virtual prototype. Sensors and physiological monitoring devices are incorporated to capture user feedback, providing valuable data on user comfort and experience. Such data are at the core of the analysis and feedback provided through the following step.

2.3.3. Data Collection and Analysis of a New Design

Data collection is performed during the execution of experiments running specific scenarios (in core component 2).
Physiological monitoring devices integrated into the VR system provide the input data for automatic emotional feedback by measuring parameters such as heart rate, blood volume pulse, skin conductivity, muscle contraction (electrical activity), and respiration activity. The physiological signals are analyzed to retrieve performance metrics such as involvement, focus, stress, and arousal (i.e., the user's level of activation). If available, the user's gaze is used to quantitatively measure the time spent looking at the road, traffic, and potential obstacles, and the time spent looking at the different cockpit elements while interacting with them (to complete the tasks required during the simulation scenarios). This allows the precise measurement of the reaction and execution times associated with specific tasks, triggers, and events (a minimal sketch of this computation follows).
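As an illustration, reaction and execution times can be derived from the recorded event log along these lines; the event names are invented for the example.

```python
# Derive reaction and execution times per task from a time-ordered event log.
# NOTE: event names ("task_triggered", ...) are illustrative assumptions.
def task_timings(events):
    """events: list of (timestamp_s, kind, task_id) tuples sorted by time."""
    started, reacted, timings = {}, {}, {}
    for t, kind, task in events:
        if kind == "task_triggered":
            started[task] = t
        elif kind == "first_interaction" and task in started:
            reacted[task] = t - started[task]            # reaction time
        elif kind == "task_completed" and task in started:
            timings[task] = {"reaction_s": reacted.get(task),
                             "execution_s": t - started[task]}
    return timings

log = [(12.0, "task_triggered", "dial_number"),
       (13.5, "first_interaction", "dial_number"),
       (21.5, "task_completed", "dial_number")]
print(task_timings(log))  # {'dial_number': {'reaction_s': 1.5, 'execution_s': 9.5}}
```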
Driving performance analytics are calculated in core component 5 by exploiting the collected physiological data, the gaze, the logs of interactive activities performed by the user in the cockpit, and the related simulation tasks, events, and triggers. These analytics results support the assessment of the emotional impact and overall user experience during the different scenarios.
Results are presented through the web dashboard (core component 6) in a simplified and aggregated way, which enables visualizing and navigating the measured data and calculated metrics, offering detailed insights about user and system performance in terms of interaction frequencies, user engagement, and physiological responses. Thanks to this feedback, designers and engineers can make informed decisions on how to refine the cockpit prototype under evaluation.
The output of the analysis can be accessed on a dashboard through a timeline directly associated with the video recording of the simulation (more details in Section 3.3).

3. Results

In this section, we iterate over the three selected use cases introduced above to show how the different features have been implemented.

3.1. Creation of a New Design

Designers and engineers can create 3D assets compatible with the V-Cockpit platform in a few clicks, directly from within the CAD/3D editing software they are accustomed to working with. This also ensures that each asset is automatically stored within the project it belongs to and made visible to the different users that are part of the same group. As an example, we use here a Honda E car and its cockpit (see Figure 3).
This link between the CAD/3D editing world and VR ensures a smooth and efficient flow of design information from its conceptualization in the modeling software to its practical application within the V-Cockpit platform. Through this seamless connectivity, designers and developers can easily transfer 3D models into the virtual environment, where they can be further refined, tested, and optimized for real-world scenarios.
Within the VR realm, users can customize the car cockpit by adding functional elements and customizing/parametrizing existing ones. As depicted in Figure 4, the shape and properties of the different displays can be dynamically changed, as can the color and material a given car component is made of. This procedure is made more intuitive in VR thanks to a touch-and-change approach that allows users to modify properties by holding a (virtual) tablet in their hands and selecting which object to manipulate by touching or pointing at it. The parametrization also applies to functional aspects, telling the platform what kind of events each component can send, receive, and process. In this way, functionalities such as changing the radio volume using knobs on the steering wheel or activating the turn blinkers are replicated in VR and notified to the IVI system software being used, contributing to a behavior closer to reality and a more immersive overall experience.
During this phase, early feedback can be provided to designers and engineers by taking screenshots of specific details and/or attaching sticky notes with remarks to selected car components (see Figure 5). Such images and notes are linked to the asset imported into the platform and are reported to the original author of the CAD/3D model directly within our custom plugins, where they can be visualized. In this way, feedback is promptly delivered and always put in the right context, improving efficiency during the design cycle.

3.2. Testing and Validation of a New Design

Once a new cockpit design has been assembled and parametrized in the previous steps, it can be tested and validated through additional functionality also made available within the VR client. Testing and validation operate in two main modes: static and dynamic scenarios.
In the static scenario mode (Figure 6, left image), users are placed within the cockpit of a parked car. The cockpit is brought to life by (virtually) wiring the many console components to the IVI system software. The static scenario is useful for making sure that the different cockpit parts have been correctly configured in terms of their position/orientation, functional properties, and aesthetic appearance. Users can take advantage of the static scenario to navigate through the different menus of the IVI system and change parameters using knobs, levers, and other control surfaces, verifying that everything works as expected. Different outdoor conditions can also be selected during the static scenario simulation to provide different environmental conditions and to see how the current design materials and colors react to ambient light (e.g., day or night light).
In the dynamic scenario mode, the same cockpit is brought to life within a simplified driving simulation, where users have to simultaneously drive the car and interact with the console and IVI system. During this phase, users are asked to perform a series of predefined tasks to simulate common situations that involve using the infotainment system while also keeping track of the traffic conditions. Common tasks involve, for example, picking up a call while driving, dialing a phone number, or changing the radio station. During such simulations, the user's performance is constantly monitored through the available sensors to log detailed information about the state of the virtual environment (surrounding vehicles, driving speed, and collisions), the time required to fulfill each task, and a set of user-related metrics such as heartbeat and gaze. All these pieces of information are the core of the data collected and analyzed in the third use case.
While these scenarios are running, the IVI system software is executed either using an emulator or the real hardware. The bridging of the virtual experience with the IVI system running on the real hardware is enabled thanks to the ad hoc board we designed specifically for this task (see Figure 7).
By running the IVI system on its native platform and mirroring it in the VR environment, the simulation ensures that the IVI will perform as if in a real car running on a street. Consequently, it is possible to evaluate not only the user experience in terms of visual feedback but also the workload that the infotainment system generates on the hardware platform targeted for production. This capability is essential for optimizing/tuning the software features of the IVI system and for creating realistic and functional prototypes that accurately reflect the final product's performance.

3.3. Data Collection and Analysis of a New Design

The analytics component of the V-Cockpit solution analyzes the physiological and behavioral data collected during the infotainment experience and driving simulations to extract performance indices and emotional information. Analytics outputs are calculated both as indexed time series and as cumulative metrics. The data streams recorded and ingested as input to the analytics component are the gaze (measured directly by the VR headset) and the blood volume pulse (measured by a dedicated device called Bitalino, a non-invasive physiological data measurement system). Figure 8 shows a high-level representation of the data recorded and processed to generate the mentioned output data.
The two main emotional metrics calculated are involvement and focus. These metrics are aligned with the simulation timeline to derive the user distraction level and the safety and awareness metrics (a minimal sketch of such an alignment is given below).
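For illustration, the timeline alignment can be expressed with pandas as follows; the column names and sample values are invented for the example.

```python
# Sketch: attach to each simulation event the most recent involvement sample.
import pandas as pd

involvement = pd.DataFrame({"t": [0.0, 1.0, 2.0, 3.0],
                            "involvement": [0.2, 0.7, 0.8, 0.3]})
tasks = pd.DataFrame({"t": [0.9, 2.6],
                      "task": ["pick_up_call", "change_station"]})

# merge_asof performs a backward-looking join on the timeline column,
# so each task trigger picks up the latest involvement value before it.
aligned = pd.merge_asof(tasks.sort_values("t"),
                        involvement.sort_values("t"), on="t")
print(aligned)   # pick_up_call -> 0.2, change_station -> 0.8
```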

3.3.1. Involvement

The analytics pipeline includes an ad hoc processing component specific to the raw PPG and ECG signals: it extracts the inter-beat interval (IBI) signal, which is used to calculate the heart rate (HR) and the heart rate variability (HRV), both significant for estimating the driver's emotions.
Activities performed during the driving session elicit changes in HRV. In particular, there is a gradual decrement in a subject's HRV as a function of the time spent on a certain task [24]. It is therefore reasonable to expect that subjects performing multiple tasks during a driving session experience a stabilization of HRV while they carry out a given activity, and that their HRV is more prone to change when they switch from one task to another. For these reasons, we decided to discretize the signal into states of low HRV and states of higher HRV. This allowed us to generate an involvement index, discretized such that the driver shows higher involvement when the HRV is low (a minimal sketch of this pipeline follows).
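A minimal sketch of this pipeline, assuming a clean PPG stream and using simple peak detection with a windowed RMSSD as the HRV estimate; the thresholds and window length are illustrative, not the tuned production values.

```python
# Sketch: PPG -> IBI -> windowed HRV (RMSSD) -> discretized involvement index.
import numpy as np
from scipy.signal import find_peaks

def involvement_index(ppg: np.ndarray, fs: float, win_s: float = 30.0):
    # Beat detection: systolic peaks at least 0.4 s apart (max ~150 bpm).
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))
    beat_times = peaks / fs
    ibi = np.diff(beat_times)            # inter-beat intervals in seconds

    # HRV per window as RMSSD over the IBIs falling inside the window.
    rmssd = []
    for start in np.arange(0.0, beat_times[-1] - win_s, win_s):
        sel = ibi[(beat_times[1:] >= start) & (beat_times[1:] < start + win_s)]
        rmssd.append(np.sqrt(np.mean(np.diff(sel) ** 2)) if len(sel) > 2 else np.nan)
    rmssd = np.array(rmssd)

    # Low HRV => high involvement: discretize against the session median.
    involvement = (rmssd < np.nanmedian(rmssd)).astype(int)
    return involvement, rmssd
```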

3.3.2. Gaze Stability

The analytics component also measures the gaze to infer the driver's level of attention/distraction. The first task of the gaze analytics pipeline is cleaning the gaze signal of saccades (rapid eye movements in which the gaze shifts from one fixation point to another [25]). We exploit the saccade-free gaze signal to build an indicator quantifying how stable the driver's gaze is during the driving session. Gazed objects are classified as inside or outside the car, and the time intervals of unstable gaze fixation are identified. To quantify fixation stability, the gaze signals are analyzed with a sliding-window approach: a 3-second-long sliding window is used to calculate the percentage of stable fixation time over each window, and a fixation stability signal is derived from this approach (a minimal sketch is given below). By knowing the gaze stability and the tracked car speed (from the simulation recording), and by deriving the driver's involvement in the scenario, it is possible to build an indicator of the driver's focus on the task. In fact, the greater the driver's focus on a certain task, the more stable their fixation and the more regular their HRV will be. The car speed is used to verify whether the subject is involved in the driving session or focused on something else.
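A minimal sketch of the stability measure, using a simple velocity threshold to drop saccades; the 30 deg/s threshold is an assumption, and the actual pipeline may use a different saccade detector.

```python
# Sketch: saccade removal (velocity threshold) + 3 s sliding-window stability.
import numpy as np

def fixation_stability(gaze_deg: np.ndarray, fs: float,
                       vel_thresh: float = 30.0, win_s: float = 3.0):
    """gaze_deg: (N, 2) gaze angles in degrees; returns stability in [0, 1]."""
    # Angular velocity between consecutive samples, in deg/s.
    vel = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * fs
    stable = (vel < vel_thresh).astype(float)   # 1 where not a saccade
    win = int(win_s * fs)
    kernel = np.ones(win) / win
    # Fraction of stable (fixation) time over each 3 s window.
    return np.convolve(stable, kernel, mode="same")
```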

3.3.3. Distraction

The main goal of this part of the analytics component is to identify when a cockpit interaction distracts the driver during the driving session. To achieve this goal, we exploit the object observation tracking data.
The objects gazed at by the driver are mapped into categories that allow identifying distractive gazing. We define a categorization of potentially distractive objects: in our scenario settings, the only non-distractive object is the vehicle windscreen, while the tracked distractive objects are all of the vehicle's mirrors and the objects placed inside the car. The objects gazed at by the driver are associated with their category, and the distraction signal is built considering the following assumptions (a minimal sketch combining them follows the list):
  • Gaze track w/category mapping: the only source of non-distraction is the objects observed through the windscreen.
  • Focus (fixation stability): when the driver is highly focused on something (either source of distraction or not), the distraction probability increases.
  • Car speed: the relevant experiment session intervals for the analysis of distraction are related to the vehicle in motion.
  • Involvement: when the driver is highly involved in a task (either source of distraction or not), the distraction probability increases.
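A minimal sketch combining the four factors into a per-sample distraction signal; the category map and weights are illustrative assumptions, not the tuned model.

```python
# Sketch: per-sample distraction from gaze category, focus, involvement, speed.
import numpy as np

NON_DISTRACTIVE = {"windscreen"}   # the only non-distractive gaze target

def distraction_signal(gazed_objects, focus, involvement, speed, speed_eps=1.0):
    """All inputs are equal-length, per-sample sequences."""
    gaze_distractive = np.array(
        [0.0 if obj in NON_DISTRACTIVE else 1.0 for obj in gazed_objects])
    moving = (np.asarray(speed) > speed_eps).astype(float)
    # High focus or involvement on a distractive target amplifies distraction.
    raw = gaze_distractive * (0.5 + 0.25 * np.asarray(focus)
                              + 0.25 * np.asarray(involvement))
    return raw * moving   # only intervals with the vehicle in motion count
```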
An example of a smoothed distraction signal (sliding window) is shown in Figure 9. As is visible in the plot, toward the end of the experiment (interval 80–100 s), the user experiences a highly distractive situation, looking inside the car with high involvement (represented by a high distraction percentage).

3.3.4. Safety and Awareness

Finally, the analytics component aims to determine the driver's level of awareness during the driving session and to assess overall driving safety as defined by the Euro NCAP Assessment Protocol [26]. These metrics are founded on the concept of visual awareness, which involves the driver's ability to scan the road and its surroundings effectively, identifying potential hazards, traffic signs, pedestrians, and vehicles. The awareness level can then be categorized as follows:
  • Fully Aware: high level of visual attention.
  • Impaired Driving: reduced ability to perceive and respond to road conditions.
  • Unresponsive: extremely low awareness, often due to fatigue or distraction.
The safety index is designed to provide an overall assessment of driving safety by combining the awareness index with the vehicle's speed. A higher awareness index and a lower vehicle speed contribute to a higher safety index, indicating a safer driving environment. The safety index S is calculated using the following formula (a small numeric sketch follows the symbol list):
S = k × A × (1 − speed / max_speed)
where
  • A is the awareness index mentioned above;
  • speed is the car speed (measured in units like km/h);
  • max_speed is the maximum expected car speed for the given road;
  • k is a scaling coefficient.
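A direct numeric sketch of the formula; the value of k is arbitrary here, as the platform's actual scaling is configuration-dependent.

```python
# Direct implementation of the safety index S = k * A * (1 - speed / max_speed).
def safety_index(awareness: float, speed: float,
                 max_speed: float, k: float = 100.0) -> float:
    return k * awareness * (1.0 - speed / max_speed)

# Example: fully aware driver (A = 1.0) at 30 km/h on a 50 km/h road.
print(safety_index(1.0, 30.0, 50.0))  # 40.0
```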
Figure 10 shows an aggregated summary of the simulation detailing the overall measured indices, while Figure 11 shows another view of the dashboard reporting an extended version of the measured indices, aligned with the simulation timeline. The two figures present different aspects of the same dashboard. Specifically, the summary in Figure 10 provides a comprehensive overview of the session, including session information, statistics, the awareness and safety indices, and two maps showing the gazed areas from the driver's perspective—one from a top view and the other from a front view. These maps help the evaluator understand which assets in the scene were watched most by the driver during the session. Figure 11, on the other hand, presents a detailed breakdown of the session. The session information and video stream are shown at the top, while the timeline at the bottom illustrates session-specific and user-specific data. This includes driver–cockpit interactions as well as physiological signals throughout the session. Additionally, a task-wise summary is displayed in the top-right corner, providing a snapshot of the driver's behavior during the assigned tasks.

4. Discussion

In this section, we iterate one last time over the three selected use cases presented above and discuss the results we obtained.

4.1. Creation of a New Design

Thanks to the shift from traditional physical prototyping [3,5] to a VR-based approach, the V-Cockpit platform significantly accelerates and streamlines the initial phases of designing a new car cockpit. One of the most notable outcomes of our approach is the reduction in both the time and the cost associated with the design process, as traditional methods often involve extensive effort for creating and refining physical prototypes, which can be prohibitively expensive. The integration of V-Cockpit-specific plugins directly into the third-party CAD/3D editing software used by designers and engineers provides a more natural, user-friendly migration of content to virtual prototyping. These plugins improve the efficiency of this procedure by automating most of the time-consuming and error-prone tasks, such as adapting 3D assets for usage within a real-time context and keeping track of the different files, versions, and projects they are associated with through the cloud backend.
In a first round with potential early adopters, the feedback was overwhelmingly positive regarding the usability and effectiveness of the V-Cockpit platform. Designers and engineers reported that the VR environment provided a more intuitive and engaging way to interact with prototypes compared to traditional methods. The ability to visualize and manipulate the design in a 3D space enhanced their understanding and enabled more informed decision-making. The VR-based approach also allowed rapid iteration and immediate feedback between users in the virtual environment and operators at the CAD/3D editing level.
Despite these advantages, an extensive platform such as V-Cockpit requires specialized skills to develop and maintain its many core components. Keeping the different plugins up to date, integrating real hardware, and ensuring seamless operation within the VR system require expertise in both software development and hardware engineering. This can necessitate additional training or the hiring of specialized personnel.

4.2. Testing and Validation of a New Design

As emphasized in the literature, the immersive nature of VR encourages more creative design solutions in the automotive context [16,17]. Particularly in the V-Cockpit scenario, VR enables users to experiment with different configurations and immediately view the results. The possibility of breathing life into a new cockpit design within hours (if not minutes) of its initial conception is a strong argument in favor of our platform. Thanks to the different simulation scenarios, it is possible to identify flaws and inconsistencies in the current design early on. For example, it is possible to discover that some specific interactions with the IVI system are particularly distracting, diverting the user's attention from driving to interacting with the car equipment for too long, or that the visual look and feel adopted by the software's graphical user interface clashes with the colors and materials surrounding the many IVI displays.
User feedback from designers and engineers highlights the superior experience provided by the VR-based approach. The immersive nature of VR and the possibility to interact with a living system allows for a more intuitive and engaging interaction with prototypes, surpassing the limitations of 2D drawings and static physical models.
The V-Cockpit platform’s ability to run IVI system software on its native platform within the VR environment ensures compatibility with embedded hardware constraints. This feature is crucial for accurately simulating the final product’s performance and ensuring that all design elements function as intended. The flexibility of the system to adapt to different car models and manufacturer-specific requirements enhances its applicability across the automotive industry, which is particularly valuable in an industry characterized by rapid technological advancements and mutable consumer demands.

4.3. Data Collection and Analysis of a New Design

The integration of sensors and physiological monitoring devices into the V-Cockpit system provides an innovative approach to assessing user experience and safety. Traditional methods often rely on subjective feedback and limited testing, which can miss subtle but critical issues [6,18,19]. The real-time physiological feedback captured in the VR environment allows for the early detection and correction of potential problems.
The early detection of issues ensures that user comfort and safety are prioritized. This proactive approach can prevent costly redesigns and modifications later in the development process, enhancing the overall quality and usability of the final product. The ability to monitor user involvement, focus, and distraction provides an additional layer of insight, ensuring that the infotainment system meets high standards of user satisfaction and safety.
The collaborative potential of the VR environment is another significant advantage. Multiple stakeholders can interact with the virtual prototype, providing real-time feedback and suggestions. This collaborative approach not only improves the quality of the design but also streamlines the review and approval process, further accelerating the development timeline.

5. Conclusions

In this technical note, we presented the V-Cockpit platform as an innovative digital solution for designing, testing, and validating car infotainment systems in a virtual reality environment. Our work highlights the integration of real and simulated hardware with behavioral analysis algorithms to improve user experience and detect potential design glitches early on.
Traditional methods of designing and validating infotainment systems are often time-consuming and costly, with physical prototypes requiring substantial resources for each iteration. The VR-based approach addresses these challenges by providing a more efficient, cost-effective, and flexible solution.
The ability to rapidly iterate and receive immediate feedback within the VR environment accelerates the overall product development timeline, allowing manufacturers to bring new systems to market more quickly. This acceleration is particularly valuable in an industry characterized by rapid technological advancements and shifting consumer demands. As the technology continues to evolve, it is expected that VR-based approaches will become increasingly integral to the design and validation process. The V-Cockpit platform serves as an example of how VR can be harnessed to create more efficient, cost-effective, and user-friendly automotive systems. Future advancements and broader adoption of this technology will further enhance its impact, driving continued innovation and improvement in the industry.
The success of the V-Cockpit project opens several avenues for future research and development. One potential area of exploration is the integration of artificial intelligence (AI) and machine learning (ML) algorithms to further enhance the design and validation process. AI and ML could be used to predict user behavior and preferences, optimize design elements, and provide more sophisticated ergonomic assessments.
Additionally, expanding the scope of physiological monitoring to include more parameters, such as respiration and electro-dermal activity, and cognitive load assessment, could provide even deeper insights into user interactions. These advancements would allow for a more comprehensive evaluation of user experience and further enhance the design process.
Finally, exploring the application of the V-Cockpit platform in other areas of automotive design, such as autonomous vehicle interfaces and advanced driver-assistance systems (ADAS), could extend its benefits beyond infotainment systems. The principles and technologies developed for the V-Cockpit project have the potential to revolutionize various other aspects of automotive design and validation.

Author Contributions

Conceptualization, M.P., A.P. and T.L.; writing—original draft preparation, M.P. and A.P.; writing—review and editing, D.F., N.L.P., M.G., D.A. and T.L.; supervision, T.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Innosuisse—Schweizerische Agentur für Innovationsförderung (50807.1 IP-ICT).

Data Availability Statement

Not applicable.

Acknowledgments

Special thanks go to Sergio Ghirardelli and Roberta Martusciello from Connecta Automotive Solutions Sagl (industrial partner of the described work) for their fundamental support in providing all the necessary information about automotive infotainment market requirements and needs, and for helping keep the work aligned with market standards. We appreciate all researchers from the Department of Innovative Technologies (DTI) of the University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Institute of Information Systems and Networking (ISIN) and Institute of Systems and Applied Electronics (ISEA), who contributed to the development of this project. The Honda E car model shown in the figures was authored by Alex Murias and is licensed under Creative Commons BY 4.0 (accessed on 5 September 2024).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
CAD   Computer-Aided Design
CAN   Controller Area Network
ECG   Electrocardiogram
FPD   Flat Panel Display
HDMI  High-Definition Multimedia Interface
HR    Heart Rate
HRV   Heart Rate Variability
IBI   Inter-Beat Interval
IVI   In-Vehicle Infotainment
PPG   Photoplethysmogram
SDK   Software Development Kit
VR    Virtual Reality
XR    eXtended Reality

References

  1. Drabek, C.; Paulic, A.; Weiss, G. Reducing the Verification Effort for Interfaces of Automotive Infotainment Software. In Proceedings of the SAE 2015 World Congress & Exhibition, Detroit, MI, USA, 21–23 April 2015. [Google Scholar] [CrossRef]
  2. Galarza, M.A.; Bayona, T.; Paradells, J. Integration of an Adaptive Infotainment System in a Vehicle and Validation in Real Driving Scenarios. Int. J. Veh. Technol. 2017, 2017, 4531780. [Google Scholar] [CrossRef]
  3. Huang, Y.; Mouzakitis, A.; McMurran, R.; Dhadyalla, G.; Jones, R.P. Design validation testing of vehicle instrument cluster using machine vision and hardware-in-the-loop. In Proceedings of the 2008 IEEE International Conference on Vehicular Electronics and Safety, Columbus, OH, USA, 22–24 September 2008; pp. 265–270. [Google Scholar] [CrossRef]
  4. Singh, M.; Srivastava, R.; Fuenmayor, E.; Kuts, V.; Qiao, Y.; Murray, N.; Devine, D. Applications of Digital Twin across Industries: A Review. Appl. Sci. 2022, 12, 5727. [Google Scholar] [CrossRef]
  5. Alarcón, J.; Balcázar, I.; Collazos, C.A.; Luna, H.; Moreira, F. User Interface Design Patterns for Infotainment Systems Based on Driver Distraction: A Colombian Case Study. Sustainability 2022, 14, 8186. [Google Scholar] [CrossRef]
  6. Sen, G.; Sener, B. Design for Luxury Front-Seat Passenger Infotainment Systems with Experience Prototyping through VR. Int. J. Hum. Comput. Interact. 2020, 36, 1714–1733. [Google Scholar] [CrossRef]
  7. Zhou, S.; Lan, R.; Sun, X.; Bai, J.; Zhang, Y.; Jiang, X. Emotional Design for In-Vehicle Infotainment Systems: An Exploratory Co-design Study. In Proceedings of the HCI in Mobility, Transport, and Automotive Systems, Virtual Event, 26 June–1 July 2022; Krömker, H., Ed.; Springer: Cham, Switzerland, 2022; pp. 326–336. [Google Scholar]
  8. Zheng, Z.; Li, T.; Li, B.; Chai, X.; Song, W.; Chen, N.; Zhou, Y.; Lin, Y.; Li, R. Industrial Metaverse: Connotation, Features, Technologies, Applications and Challenges. In Proceedings of the Methods and Applications for Modeling and Simulation of Complex Systems, 21st Asia Simulation Conference, AsiaSim 2022, Changsha, China, 9–11 December 2022; Fan, W., Zhang, L., Li, N., Song, X., Eds.; Springer: Singapore, 2022; pp. 239–263. [Google Scholar]
  9. Gonsher, I.; Rapoport, D.; Marbach, A.; Kurniawan, D.; Eiseman, S.; Zhang, E.; Qu, A.; Abela, M.; Li, X.; Sheth, A.M.; et al. Designing the Metaverse: A Study of Design Research and Creative Practice from Speculative Fictions to Functioning Prototypes. In Proceedings of the Future Technologies Conference (FTC) 2022, Vancouver, BC, Canada, 20–21 October 2022; Arai, K., Ed.; Springer: Cham, Switzerland, 2023; Volume 2, pp. 561–573. [Google Scholar]
  10. Wang, H.; Ning, H.; Lin, Y.; Wang, W.; Dhelim, S.; Farha, F.; Ding, J.; Daneshmand, M. A Survey on the Metaverse: The State-of-the-Art, Technologies, Applications, and Challenges. IEEE Internet Things J. 2023, 10, 14671–14688. [Google Scholar] [CrossRef]
  11. Anwar, M.S.; Choi, A.; Ahmad, S.; Aurangzeb, K.; Laghari, A.A.; Gadekallu, T.R.; Hines, A. A Moving Metaverse: QoE challenges and standards requirements for immersive media consumption in autonomous vehicles. Appl. Soft Comput. 2024, 159, 111577. [Google Scholar] [CrossRef]
  12. Manuri, F.; Gravina, N.; Sanna, A.; Brizzi, P. Prototyping industrial workstation in the Metaverse: A Low Cost Automation assembly use case. In Proceedings of the 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Rome, Italy, 26–28 October 2022; pp. 133–138. [Google Scholar] [CrossRef]
  13. Liu, R.; Peng, C.; Zhang, Y.; Husarek, H.; Yu, Q. A survey of immersive technologies and applications for industrial product development. Comput. Graph. 2021, 100, 137–151. [Google Scholar] [CrossRef]
  14. Charissis, V.; Falah, J.; Lagoo, R.; Alfalah, S.F.M.; Khan, S.; Wang, S.; Altarteer, S.; Larbi, K.B.; Drikakis, D. Employing Emerging Technologies to Develop and Evaluate In-Vehicle Intelligent Systems for Driver Support: Infotainment AR HUD Case Study. Appl. Sci. 2021, 11, 1397. [Google Scholar] [CrossRef]
  15. Bolder, A.; Grünvogel, S.M.; Angelescu, E. Comparison of the usability of a car infotainment system in a mixed reality environment and in a real car. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, Tokyo, Japan, 28 November–1 December 2018; Association for Computing Machinery: New York, NY, USA, 2018. [Google Scholar] [CrossRef]
  16. Schwarz, S.; Regal, G.; Kempf, M.; Schatz, R. Learning Success in Immersive Virtual Reality Training Environments: Practical Evidence from Automotive Assembly. In Proceedings of the 11th Nordic Conference on Human–Computer Interaction: Shaping Experiences, Shaping Society, Tallinn, Estonia, 25–29 October 2020; Association for Computing Machinery: New York, NY, USA, 2020. [Google Scholar] [CrossRef]
  17. Khastgir, S.; Birrell, S.; Dhadyalla, G.; Jennings, P. Development of a Drive-in Driver-in-the-Loop Fully Immersive Driving Simulator for Virtual Validation of Automotive Systems. In Proceedings of the 2015 IEEE 81st Vehicular Technology Conference (VTC Spring), Glasgow, UK, 11–14 May 2015; pp. 1–4. [Google Scholar] [CrossRef]
  18. Wang, B.; Xue, Q.; Yang, X.; Wan, X.; Wang, Y.; Qian, C. Driving Distraction Evaluation Model of In-Vehicle Infotainment Systems Based on Driving Performance and Visual Characteristics. Transp. Res. Rec. 2024, 2678, 1088–1103. [Google Scholar] [CrossRef]
  19. Iaquinandi, M.; Fontana, C.; Fiorillo, I.; Naddeo, A.; Cappetti, N. Performance Evaluation of an Immersive Measurement Instrument for Automotive Field Applications. In Proceedings of the Advances on Mechanics, Design Engineering and Manufacturing IV, Ischia, Italy, 1–3 June 2022; Gerbino, S., Lanzotti, A., Martorelli, M., Mirálbes Buil, R., Rizzi, C., Roucoules, L., Eds.; Springer: Cham, Switzerland, 2023; pp. 1426–1435. [Google Scholar]
  20. Osio, G.; Ángel, M. Proposal of an Adaptive Infotainment System Depending on Driving Scenario Complexity. Ph.D. Thesis, UPC—Departament d’Enginyeria Telematica, Barcelona, Spain, 2020. [Google Scholar] [CrossRef]
  21. Mousavi, S.M.H.; Besenzoni, M.; Andreoletti, D.; Peternier, A.; Giordano, S. The Magic XRoom: A Flexible VR Platform for Controlled Emotion Elicitation and Recognition. In Proceedings of the 25th International Conference on Mobile Human–Computer Interaction, Athens, Greece, 26–29 September 2023; Association for Computing Machinery: New York, NY, USA, 2023. [Google Scholar] [CrossRef]
  22. Gabajová, G.; Krajčovič, M.; Matys, M.; Furmannová, B.; Burganová, N. Designing virtual workplace using unity 3D game engine. Acta Tecnol. 2021, 7, 35–39. [Google Scholar] [CrossRef]
  23. Andreoletti, D.; Luceri, L.; Peternier, A.; Leidi, T.; Giordano, S. A Framework for Emotion-Driven Product Design Through Virtual Reality. In Proceedings of the Information Technology for Management: Business and Social Issues, Virtual Event, 2–5 September 2021; Springer: Cham, Switzerland, 2022; pp. 42–61. [Google Scholar] [CrossRef]
  24. Luque-Casado, A.; Zabala, M.; Morales, E.; Mateo-March, M.; Sanabria, D. Cognitive performance and heart rate variability: The influence of fitness level. PLoS ONE 2013, 8, e56935. [Google Scholar] [CrossRef] [PubMed]
  25. Salvucci, D.D.; Goldberg, J.H. Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the 2000 Symposium on Eye Tracking Research & Applications, Palm Beach Gardens, FL, USA, 6–8 November 2000; pp. 71–78. [Google Scholar]
  26. EuroNCAP. European New Car Assessment Programme—Assessment Protocol—Safety Assist Safe Driving. 2023. Available online: https://cdn.euroncap.com/media/77301/euro-ncap-assessment-protocol-sa-safe-driving-v102.pdf (accessed on 29 July 2024).
Figure 1. V-Cockpit system architecture. Numbers indicate the core components that implement specific functionalities, while unnumbered boxes represent functionalities provided by common external tools (such as databases and Android emulators).
Figure 2. System components communication.
Figure 3. On the left: the cockpit ready for export from the 3D editing software (Blender 3.5.0) directly into the cloud backend through our V-Cockpit custom plugin. On the right: the same asset loaded and visualized in real-time within the VR client.
Figure 4. VR users can customize the different cockpit elements by clicking on them and changing their properties through a tablet-like interface. On the left: screen adjustments such as size and aspect ratio. On the right: material and color customization.
Figure 5. On the left: VR users can provide immediate feedback by attaching sticky notes (as numbered bullets) to specific elements of the cockpit. On the right: 3D designers and engineers can review notes directly from within their editing software.
Figure 6. On the left: static simulation scenario, with the car parked and surrounded by a panoramic image also used for lighting. On the right: dynamic simulation, with the user asked to perform specific operations while driving.
Figure 7. Bridgeboard to connect the VR experience with the IVI software natively executed on the real hardware.
Figure 8. V-Cockpit analytics: metrics calculated on the PPG and ECG signal, gaze data, driver actions stream, and simulation data to infer the safety and awareness metrics.
Figure 9. Example of distraction signal calculated over a scenario simulation.
Figure 10. V-Cockpit web dashboard: analytics session aggregated overview.
Figure 11. V-Cockpit dashboard: simulation timeline details associated with the calculated analytical metrics signals.
