4.1. Cybersecurity’s Impact on the Project Development Process
Establishing cybersecurity within the product development process for electric vehicles introduces entirely new challenges. Because of its dynamic nature (new threats appear throughout the entire product life cycle), the standard product development process must be extended. The extension reaches beyond the production ramp-up into the post-production and decommissioning phases. These processes are paramount for product cybersecurity, as they establish the mechanisms for cybersecurity incident response and allow software updates to be introduced once vulnerabilities have been detected. Moreover, once a product is withdrawn from the market or its cybersecurity support ends, established procedures are applied to run these processes safely and responsibly. The changes in the project development process are presented in Figure 5.
However, cybersecurity also affects other areas. It goes hand in hand with the FS processes, which are further extended to autonomous systems. Table 3 provides a mapping between cybersecurity and the product development processes for electric vehicles, and Table 4 provides a mapping between cybersecurity and the supporting processes. Both tables also show which cybersecurity processes have an impact on the functional/autonomous system safety processes.
Appropriate planning is an essential component of a successful project. In this area, project leaders prepare timelines for all engineering disciplines based on state-of-the-art knowledge, lessons learned, and internal company standards. Cybersecurity adds another layer of abstraction to this task: areas that were not previously considered must be incorporated into the overall project planning. As a result, a new managerial position, the Cybersecurity Manager, is required to handle the cybersecurity objectives and to ensure the correctness of the cybersecurity process throughout the entire life cycle of the designed system. The Cybersecurity Manager is, for example, responsible for working with the penetration assessment team on the scheduling and execution of penetration assessments. During the execution of these tests, the vulnerabilities of safety/autonomous system features should be verified. The evaluation should therefore be planned together with the Functional Safety Manager so that a mitigation plan for the safety/autonomous feature vulnerabilities can be coordinated.
Key management also adds a new layer of complexity. To handle security artefacts, an entire IT infrastructure must be established. This includes, for instance, a Public Key Infrastructure (PKI) used for creating digital certificates and managing public key encryption. Procedures must also exist for distributing key material to vehicles during production and maintenance. Any vulnerability in these areas could lead to safety losses if, for instance, an attacker is able to compromise the binary of a safety-critical/autonomous ECU.
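The role of such a PKI can be illustrated with a minimal sketch. The snippet below is only an illustration, not part of the analyzed infrastructure: it uses the Python "cryptography" package to issue a device certificate for an ECU key pair. The names, the validity period, and the single-level hierarchy are assumptions made for brevity; certificate chains, revocation, and HSM-backed key storage are omitted.

```python
# Illustrative PKI sketch: an issuing CA signs a device certificate for an ECU
# key pair. All identifiers and lifetimes below are assumed example values.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

ca_key = ec.generate_private_key(ec.SECP256R1())    # issuing CA key
ecu_key = ec.generate_private_key(ec.SECP256R1())   # key pair provisioned into the ECU

now = datetime.datetime.now(datetime.timezone.utc)
certificate = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "V2G-Gateway-ECU-0001")]))
    .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example OEM Issuing CA")]))
    .public_key(ecu_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365 * 15))  # assumed vehicle lifetime
    .sign(ca_key, hashes.SHA256())
)

# The certificate (plus its CA chain) is what is distributed to the vehicle during
# production or maintenance; the private key itself never leaves the secure element.
print(certificate.subject.rfc4514_string())
```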
The cybersecurity areas mentioned in the third column of Table 3 have an impact on the functional/autonomous system safety processes because they necessitate the additional consideration of safety-critical systems in order to cover all system use cases.
Concept development forms the foundation of a system design. For cybersecurity, it requires an extensive analysis of potential threats to the defined cybersecurity items, together with a risk assessment and mitigation plans, and it concludes with the definition of the cybersecurity concept. This phase encapsulates the essence of holistic engineering: all hazard scenarios arising from unreasonable risk, as well as all autonomous use cases, should be considered in order to obtain a complete security concept. As a result, all of these areas have an impact on the functional/autonomous system safety processes.
To complete the design activities, a system design is needed, which consists of the system boundaries, actors, and use cases. Only after all safety, security, and autonomous functions have been considered can the overall system architecture be created.
According to the V-model, the detail design activities come next. They include the definition of the developed solution's hardware (HW) and software (SW) architectures. To fit the cybersecurity concept, appropriate hardware measures must first be considered, for example, selecting the microcontroller, choosing secure hardware elements (a hardware security module (HSM) or a trusted platform module (TPM)), obscuring electronic paths, or physically preventing the product's cover from being opened. From a software standpoint, the selection of secure libraries, memory and process isolation, and secure coding practices must be ensured. To complete the analysis, all of these activities must also be considered for the sake of safety and autonomy. All potential vulnerabilities must be investigated from both a cybersecurity and a safety standpoint, especially in vehicles with a high level of autonomy. The selection of appropriate electronic components has a direct impact on both safety and cybersecurity.
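As one small illustration of the secure-coding practices mentioned above, the following sketch validates an incoming diagnostic request strictly before parsing it, instead of trusting attacker-controllable fields. The message layout, the length limit, and the whitelisted service identifiers are assumptions chosen for the example and are not taken from the analyzed ECU.

```python
# Illustrative secure-coding sketch: strict input validation of a diagnostic
# request. Field layout and limits are assumed example values.
MAX_PAYLOAD = 64

def parse_diagnostic_request(frame: bytes) -> dict:
    if len(frame) < 3:
        raise ValueError("frame too short")
    service_id, declared_len = frame[0], frame[1]
    payload = frame[2:]
    # Do not trust the length field carried inside the frame itself.
    if declared_len != len(payload) or declared_len > MAX_PAYLOAD:
        raise ValueError("inconsistent or oversized payload")
    # Only whitelisted services are accepted (e.g., UDS read/write data by identifier).
    if service_id not in {0x22, 0x2E}:
        raise ValueError("unsupported service")
    return {"service": service_id, "payload": payload}
```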
The final step is to verify the implemented solution. Cybersecurity requirements must be validated at all levels, including software testing, integration testing, and system testing, and this covers both hardware and software. Here, all cybersecurity testing and refinement must be performed concurrently with the testing of the safety and autonomous functions. These cannot be treated in isolation: if a cybersecurity function (e.g., the protection of a safety-critical signal) that is supposed to protect a safety function (e.g., emergency braking) is not implemented correctly, the end user's safety is affected.
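To make the link between cybersecurity testing and safety concrete, the sketch below shows, under simplified assumptions (an HMAC-SHA256 tag truncated to eight bytes, a four-byte freshness counter, and a hard-coded development key), two unit-test-style checks confirming that a tampered or replayed "emergency braking" frame is rejected. The frame layout and key handling are illustrative only and do not reflect the analyzed project.

```python
# Illustrative verification sketch: a SecOC-style check on a safety-critical frame.
import hmac
import hashlib

KEY = b"development-only-key"   # a real ECU would keep this in the HSM
MAC_LEN = 8                     # truncated MAC, as in SecOC-style schemes

def protect(payload: bytes, freshness: int) -> bytes:
    data = freshness.to_bytes(4, "big") + payload
    mac = hmac.new(KEY, data, hashlib.sha256).digest()[:MAC_LEN]
    return data + mac

def verify(frame: bytes, last_freshness: int) -> bool:
    data, mac = frame[:-MAC_LEN], frame[-MAC_LEN:]
    freshness = int.from_bytes(data[:4], "big")
    expected = hmac.new(KEY, data, hashlib.sha256).digest()[:MAC_LEN]
    return hmac.compare_digest(mac, expected) and freshness > last_freshness

def test_tampered_frame_is_rejected():
    frame = bytearray(protect(b"EMERGENCY_BRAKE=1", freshness=10))
    frame[4] ^= 0xFF                                  # flip a payload bit
    assert not verify(bytes(frame), last_freshness=9)

def test_replayed_frame_is_rejected():
    frame = protect(b"EMERGENCY_BRAKE=1", freshness=10)
    assert verify(frame, last_freshness=9)            # fresh frame accepted
    assert not verify(frame, last_freshness=10)       # same counter replayed

test_tampered_frame_is_rejected()
test_replayed_frame_is_rejected()
```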
Finally, the product must be manufactured. Cybersecurity is also involved here, when security artefacts (such as symmetric/asymmetric keys, certificates, etc.) are generated and provided in the ECU and when confidential information is exchanged between OEMs and Tier 1 suppliers. If an ASIL-rated product is considered, extensive testing during the manufacturing process is required to ensure that each produced unit is correctly assembled. The challenge is to carry out the security functions while ensuring that they do not interfere with the final product's functionality.
Post-production and decommissioning are two additional product development processes that have not previously been considered extensively. If a severe malfunction is detected after production, a software update is required. This can be performed in a workshop or, if possible, over the air (OTA). OTA updates pose a new threat to vehicles, necessitating proper cybersecurity measures such as digital software signing. Furthermore, adequate safety measures must be implemented to avoid potentially hazardous situations during and after software updates. For a safety-critical autonomous system, an extensive rollback scenario must be considered.
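A minimal sketch of these two measures (digital software signing and a rollback path) is given below. It uses the Python "cryptography" package for ECDSA signature verification; the two-slot image handling and the self-test hook are simplifying assumptions rather than a description of a real updater.

```python
# Illustrative OTA sketch: accept a new image only if its signature verifies,
# and keep the previous image so the ECU can roll back after a failed self-test.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

oem_key = ec.generate_private_key(ec.SECP256R1())   # stands in for the OEM signing key
OEM_PUBLIC_KEY = oem_key.public_key()               # provisioned into the ECU

def signature_valid(image: bytes, signature: bytes) -> bool:
    try:
        OEM_PUBLIC_KEY.verify(signature, image, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

def apply_update(slots: dict, image: bytes, signature: bytes, self_test) -> dict:
    """Install a signed image; roll back to the previous slot if the self-test fails."""
    if not signature_valid(image, signature):
        return slots                                           # reject tampered/unsigned image
    previous = slots["active"]
    slots = {"active": image, "fallback": previous}
    if not self_test(image):
        slots = {"active": previous, "fallback": previous}     # rollback to known-good image
    return slots

# Example: a tampered image is rejected, and a failing self-test triggers rollback.
old, new = b"fw-v1", b"fw-v2"
sig = oem_key.sign(new, ec.ECDSA(hashes.SHA256()))
slots = {"active": old, "fallback": old}
assert apply_update(slots, new + b"x", sig, lambda i: True)["active"] == old
assert apply_update(slots, new, sig, lambda i: False)["active"] == old
assert apply_update(slots, new, sig, lambda i: True)["active"] == new
```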
The functional/autonomous system safety process is also impacted across the supporting areas, starting with the managers and moving on to the IT systems, information security, and, finally, external audits, because all of these components must work in a secure environment, which is provided by cybersecurity measures.
By analyzing the impact of cybersecurity on electric vehicle product development and the supporting processes in Table 3 and Table 4, we can see that only two of the 40 security areas mentioned (cybersecurity responsibilities and cybersecurity cases) do not have a direct impact on the safety process for autonomous systems.
Furthermore, cybersecurity affects areas such as post-production and decommissioning, which have previously received little attention. As a result, collaboration between cybersecurity and other areas must be established at each process level. This is especially critical during the concept phase, in which decisions that affect the entire product development strategy are made.
Risk assessment is a key aspect of the development of new products; therefore, we analyzed risk assessment more closely. However, the cybersecurity and functional/autonomous system safety processes approach this activity from different perspectives.
With cybersecurity, the TARA process is performed once the cybersecurity items are defined. According to ISO 21434 [4], the process consists of asset definition, threat scenario identification, impact rating, attack path analysis, attack feasibility rating, risk value determination, and the risk treatment decision. The impact rating is evaluated for each identified asset; it is used to assess whether a particular threat scenario can lead to financial, confidentiality, operational, or safety damage. Afterwards, the analysis proceeds through the remaining steps, which lead to the assignment of a cybersecurity assurance level (CAL), the definition of the cybersecurity goals, and the definition of the security requirements.
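The final steps of TARA can be illustrated with a short sketch. Note that the concrete mapping from the impact and attack feasibility ratings to a risk value is organization-specific, so the matrix below is an assumed example only and is not taken from ISO 21434 or from the analyzed project.

```python
# Illustrative sketch of the risk value determination step in TARA: combine an
# impact rating and an attack feasibility rating via an assumed risk matrix.
IMPACT = ["negligible", "moderate", "major", "severe"]
FEASIBILITY = ["very low", "low", "medium", "high"]

# Rows: impact; columns: attack feasibility -> risk value 1 (lowest) .. 5 (highest).
RISK_MATRIX = [
    [1, 1, 1, 1],   # negligible impact
    [1, 2, 2, 3],   # moderate impact
    [1, 2, 3, 4],   # major impact
    [2, 3, 4, 5],   # severe impact
]

def risk_value(impact: str, feasibility: str) -> int:
    return RISK_MATRIX[IMPACT.index(impact)][FEASIBILITY.index(feasibility)]

# e.g., a threat scenario with severe safety impact but an attack path that is
# only feasible with physical access might be rated:
print(risk_value("severe", "low"))   # -> 3
```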
Similarly, the functional/autonomous system safety processes start with the definition of an item; then, a hazard analysis and risk assessment is performed, which leads to the assignment of an automotive safety integrity level (ASIL). After this activity, the safety goals are defined and the safety requirements are prepared. Annex F of ISO 21434 [4] offers guidance on how safety damage can be rated. However, the given example does not cover multiple road users in a single damage scenario, which leaves room for improvement when rating damage that affects several road users. Based on these guidelines, we propose, as an example, a more detailed rating scheme, which is presented in Table 5. These ratings should be specific to the organization and the system. They differentiate between a single road user and multiple road users affected by safety damage resulting from a potential cybersecurity threat. The values assigned to safety damage can be adjusted depending on the organization's specific approach.
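To illustrate the idea behind Table 5 without reproducing its values, the hedged sketch below escalates an assumed ISO 26262-style severity class by one step when more than one road user is affected. The classes, the escalation rule, and the resulting labels are placeholders that an organization would replace with its own calibration.

```python
# Illustrative sketch only: differentiate safety-damage ratings for single vs.
# multiple affected road users. Values are placeholders, not those of Table 5.
def safety_impact(severity: str, road_users_affected: int) -> str:
    base = {"S0": 0, "S1": 1, "S2": 2, "S3": 3}[severity]   # assumed severity classes
    if road_users_affected > 1 and base < 3:
        base += 1   # escalate the rating when several road users are harmed
    return ["negligible", "moderate", "major", "severe"][base]

print(safety_impact("S2", road_users_affected=1))   # -> major
print(safety_impact("S2", road_users_affected=5))   # -> severe
```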
This interplay results in a connection between TARA and HARA, which is shown in Figure 6. Once an impact rating for safety damage is defined in TARA for a threat scenario, the hazard identification during the HARA analysis must be refined: the hazard arising from the possible system vulnerability must be considered (Green Arrow 2). However, this is not a one-way connection. For safety-critical/autonomous systems, safety cannot be guaranteed without cybersecurity measures. Therefore, all identified hazards must be considered during threat scenario identification (Blue Arrow 1). In Figure 6, we do not describe HARA in detail because we concentrate on TARA due to its dependency on particular ECU use cases. The entire process results in a combination of cybersecurity and safety goals after incorporating the impacts of both TARA and HARA. Consequently, the FS/autonomous and cybersecurity requirements better reflect the system's needs.
For a V2G gateway, an example of the cooperation between TARA and HARA is the inlet temperature sensor, which monitors a vehicle's inlet temperature during charging. The temperature sensor is a safety-critical item because a malfunction in this sensor may lead to the vehicle catching fire. Therefore, FS measures are taken to protect the user when a defect occurs in the sensor (e.g., a signal plausibility check, sensor redundancy, end-to-end protection, or a safe state). However, this is not enough. Information from the temperature sensor is sent to the battery management system, and the ECU must be protected by cybersecurity measures against manipulation (e.g., spoofing, tampering, etc.). Any vulnerability found in this case can cause the same risks as those in the FS case. This type of analysis must be carried out at an early stage of product development; any deficiencies found later in development can lead to significant architectural modifications, which can involve not only software but also hardware changes, increasing the development time and costs.
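A simplified sketch of the FS-side measures mentioned for this signal is shown below: a range and gradient plausibility check that forces charging into a safe state when the inlet temperature reading cannot be trusted. The thresholds are assumed example values and are not taken from the analyzed V2G gateway; the complementary cybersecurity measure would be authentication of the same signal, as sketched earlier for the emergency braking example.

```python
# Illustrative plausibility check for the inlet temperature signal.
# Thresholds are assumed example values only.
TEMP_MIN_C, TEMP_MAX_C = -40.0, 150.0
MAX_GRADIENT_C_PER_S = 10.0

def plausible(current_c: float, previous_c: float, dt_s: float) -> bool:
    in_range = TEMP_MIN_C <= current_c <= TEMP_MAX_C
    gradient_ok = abs(current_c - previous_c) / dt_s <= MAX_GRADIENT_C_PER_S
    return in_range and gradient_ok

def charging_allowed(current_c: float, previous_c: float, dt_s: float) -> bool:
    # Safe state: stop charging if the sensor value cannot be trusted or is too high.
    return plausible(current_c, previous_c, dt_s) and current_c < 90.0

print(charging_allowed(35.0, 34.5, 0.1))    # normal charging -> True
print(charging_allowed(200.0, 34.5, 0.1))   # implausible jump -> False (safe state)
```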
In addition, if any vulnerabilities are identified in the field, the consequences may be even greater if the entire fleet has to be updated. As described in [44], the ADAS system of a U.S. OEM was not prepared for phantom attacks (in which fake objects are treated as real). While the OEM refused to acknowledge the error, the software responsible for data identification was deleted shortly after publication, which in fact required additional cost and time for a redesign and reimplementation.
After interviewing the planning teams of one of the leading Tier 1 automotive suppliers, we were able to gather information on how the cybersecurity activities were planned for the development of a single V2G (vehicle-to-grid) gateway ECU for a premium German OEM. The specifications for the ECU included approximately 8000 requirements, in addition to more than 1000 requirements related to the cybersecurity component. The project is still in the development phase, and the V2G ECU will be mounted in an electric vehicle. For the needs of this paper, we examined only the design process.
For the project analyzed here, the initial planning assumed that the cybersecurity design activities would be taken over by the systems engineering (SE) department (a total of four or five systems engineers, including one required engineer, were involved in the project). The effort was estimated to require half of the systems engineering resources until the pre-production phase, i.e., six quarters in total: quarters 1 to 4 of the first year of the product's development and quarters 1 and 2 of the second year. Moreover, support amounting to a fraction of 0.2 of the systems engineering resources was planned for the next three quarters, until the start of production. No estimates were made for the maintenance or decommissioning phases. The overall effort needed for the cybersecurity design was calculated as 0.02% of the estimated effort for the entire project. The initial project resource estimates are presented in Table 6, where the systems engineering effort is given as man-effort.
After only one year of development, the hours reported for the cybersecurity activities had reached 4% of the hours of the overall project design activities. Moreover, two resources in total were involved in the cybersecurity design: one from the systems engineering team and one from the software (SW) team. The effort reported by the systems engineering team reached 1.5% of the entire project effort, and that of the software engineering team reached 2.5%. The activities are not yet finished (the project has advanced to approximately 70% completion). The current assumption is that one more resource should be added for each competency, i.e., SYS and SW, due to the new regulations and customer requirements.
Table 7 presents the re-estimated engineering effort for the cybersecurity design, including the forecasted effort for the growth of features. As in Table 6, the systems engineering effort in Table 7 is given as man-effort.