1. Introduction
In recent years, the advancement of Intelligent Transport Systems (ITS) has led to significant research in the field of Vehicular Ad hoc Networks (VANETs). The U.S. Department of Transportation (U.S. DOT) highlights the need to ensure that connected vehicles operate in secure, safe, and privacy-protective networks [
1]. A security system is required as vehicles exchange critical information with other vehicles, wireless mobile devices, traffic management centers, and roadway infrastructure. VANETs, as shown in
Figure 1, play a vital role in enabling seamless real-time information exchange between vehicles to improve safety, optimize traffic flow, and enhance the efficiency of the overall driving experience.
However, the reliability of information sharing between vehicles becomes increasingly challenged in disconnected areas characterized by sparse infrastructure and intermittent connectivity. In these scenarios, the probability of malicious actions going undetected increases, leading to hazardous and undesirable situations. Thus, detecting and reporting misbehavior in such conditions is crucial and requires an accurate scheme that works offline without relying on infrastructure. Vehicles need a reliable way to autonomously assess the trustworthiness of information based on factors like reputation scores.
A further difficulty is differentiating between genuine system errors, such as those resulting from imperfect GPS data, and intentional misbehavior, a task that necessitates highly precise validation processes. Given the real-time requirements of vehicular networks, balancing data accuracy and computational efficiency poses a dilemma. In addition, the emphasis on privacy protection using certificates to maintain anonymity complicates the process of misbehavior detection.
Existing standards for misbehavior detection and reporting rely on the Misbehavior Authority (MA) in the Security Credential Management System (SCMS) [
3]. In this system, if enough misbehavior is reported for a certain vehicle, the vehicle certificates will be revoked and added to the Certification Revocation List (CRL). This will be updated and distributed to other vehicles in the environment. Once other vehicles identify that a message came from a vehicle on the CRL, it will no longer be considered a trusted node for sending and receiving messages [
4]. However, in the implementation of SCMS in disconnected vehicular networks, two primary challenges arise, as depicted in
Figure 2.
Firstly, maintaining and synchronizing the CRL is crucial for identifying the misbehaving vehicles. The CRL must be constantly updated and shared with all vehicles, a process that requires regular access to network infrastructure, typically via Roadside Units (RSUs). This becomes problematic in areas with limited connectivity as the CRL grows with the number of misbehaving vehicles, necessitating frequent online updates.
Secondly, CRL size and retrospective unlinkability: The SCMS is responsible for revoking the Pseudonym Certificates (PCs) of misbehaving or malfunctioning vehicles. However, placing all valid vehicle certificates on the CRL would make it very large. The system needs an efficient scheme to perform the revocation without revealing the PCs used by the vehicle before it started misbehaving.
As a result, vehicles could be using expired or invalid certificates that might call into question the security, trustworthiness, and accuracy of the misbehavior reports they transmit. To address these challenges, we need a system that ensures vehicles detect and report misbehavior accurately and maintain privacy while establishing trustworthiness, even without constant internet connectivity.
Reputation systems have been proposed as a means to strike a balance between anonymity, security, and trust, especially in offline settings. In this paper, we propose a Distributed Reputation mechanism for Accurate vehicle Misbehavior Reporting (DRAMBR). This mechanism accurately classifies misbehaviors and distinguishes between honest, erroneous, and malicious vehicles, ensuring timely and reliable reputation updates. We introduce a new entity, the Reputation Server (RS), which provides the Reputation Value (RV). The RS will be linked to the SCMS. Reputation considers the behaving subject through accrued observations and interactions; therefore, providing a vehicle with an indication of the likelihood that its peer is well-behaving or malicious through an RV. DRAMBR becomes particularly relevant in contexts where direct authority oversight is limited or when vehicles cannot operate in a fully connected manner.
DRAMBR improves the accuracy of vehicle misbehavior reporting using two processes of assessment:
Offline Misbehavior Detection: During offline communication, a vehicle detects misbehavior, collects observations from neighbors, and generates a Misbehavior Report (MR).
Online RS Processing: Upon reconnection, the MR is sent to the RS, which performs validation steps, aggregates data, and takes appropriate action on the reporter and target vehicles.
DRAMBR processes and evaluates MRs through a multi-stage aggregation process integrating advanced classification techniques such as DBSCAN, Isolation Forest, and Gaussian Mixture Model. We analyze vehicle communications under different conditions in disconnected environments using Simulation of Urban MObility (SUMO). DRAMBR’s accuracy performance is then evaluated using Random Forest and XGBoost, suggesting that it provides an accurate reputation management approach under challenging conditions.
The remainder of this paper is organized into seven further sections. Basic assumptions and background about the technologies and standards adopted in our system are introduced in
Section 2. Related work is discussed in
Section 3 by presenting previous work related to misbehavior detection and reputation schemes in vehicular contexts. The system model is introduced in detail in
Section 4, leading to
Section 5, which explains the technical implementations of the proposed DRAMBR.
Section 6 then outlines the experiment description and the simulated setup, and
Section 7 discusses the results and shows the evaluation of our scheme. Finally, we draw conclusions in
Section 8.
3. Related Work
This section analyzes the use of reputation systems in misbehavior reports and feedback mechanisms in recent studies to maintain network trustworthiness and security in both connected and disconnected areas. In recent years, there has been significant research interest in using reputation systems to minimize malicious behaviors for trustworthy V2V communications. Reputation-based malicious vehicle detection systems have been devised and are gaining popularity as an effective way to deal with the threat of malicious vehicles in vehicular networks: the source vehicle determines whether a vehicle is dangerous based on its reputation score and then finds a trusted communication path. V2V reputation research is broadly categorized into centralized and decentralized models.
The centralized approach, pioneered by [
18], revolves around a scheme that centrally distributes, updates, and stores vehicles’ reputation scores. That study introduces a reputation announcement scheme for VANETs using Time Threshold to assess message reliability. The researchers in [
19] recently proposed a centralized system for highways and urban roads, relying on a central Trusted Authority to calculate feedback scores from various vehicles and update the target’s reputation. Moreover, Ref. [
20] proposed an incentive provision method in which the RSU updates the sender’s reputation score based on observed actions validated by vehicles.
In contrast, distributed reputation systems operate without dependence on infrastructure. In this model, vehicles autonomously collect, maintain, and update reputation scores in an ad hoc manner. The authors in [
21] developed a node reputation system to evaluate the reliability of vehicles and their messages. They grouped vehicles with similar mobility patterns that are close to each other into platoons to minimize propagation overhead. In 2021, Kudva et al. [
22] introduced a framework for self-organized vehicles that filters out malicious vehicles based on standard scores. In 2014, Cao et al. presented a multi-hop version that utilized the carry-and-forward method [
23]. Both ideas aim to assess message reliability and aggregate reputation scores. In contrast, Katiyar et al. proposed a scheme that employs single-hop reputation announcements [
24]. However, because messages and feedback are linked and not anonymous, these schemes do not provide enough privacy protection; an attacker can carry out a traceability attack and learn the path of a target vehicle. By taking the vehicle's reporting history into account, these reputation systems improve the accuracy of accident detection.
The work by Jesudoss introduced a dynamic event-based reputation scheme that assigns reputation values according to the vehicle's behavior [
25]. Reputation values are awarded to each vehicle whose messages corroborate those of other vehicles, increasing the vehicle's trust degree with each verified message. However, this scheme is insufficient to mitigate the threat posed by Sybil attacks, where a malicious vehicle may use multiple fake identities to send false messages.
The authors of [
26] proposed a blockchain-based traffic event validation scheme that includes two-pass validation and a two-phase transaction. They employed a consensus mechanism referred to as Proof of Event, which applies two criteria. Vehicles send alerts to the RSU, which only accepts them for a set amount of time. Once the number of alerts exceeds the first threshold, the RSU enters the notification phase: with the assistance of approaching vehicles, it validates the alert, adds it to the local blockchain, and uses multi-hop transmissions to notify neighboring vehicles about the incident. Once all RSUs have reviewed the supporting evidence and reports and agreed that the incident is correct, the RSU notifies the other RSUs in the same zone, and the event is appended to the global blockchain, which includes all local events. Vehicles can access the public global blockchain for event verification. Unlike previous consensus methods, Proof of Event uses timestamps to select the block submitter, which reduces power usage. However, under their proposal the global blockchain becomes enormous, since it may encompass all the events in a very large geographic area, meaning that a significant number of events are likely to be added every day. Furthermore, their work lacks mutual authentication details, and when 40% of alert submitters are internal attackers, false events succeed at a rate of 100%.
In 2023, the work by [
9] presented a trust model that employed an ID-based signature, a Hash Message Authentication Code (HMAC), and an RSA-based method to detect malicious and corrupted messages. The authors in [
27] suggested an elaborate reputation approach that considers the message's reliability and the sending vehicle's participation. Every vehicle is tracked, and a trust score is assigned based on the conduct of the watched vehicle. However, these solutions still have several shortcomings, such as being effective against only limited types of attacks, high computational complexity, and a lack of scalability.
The proximity-based approach prioritizes accident messages from the vehicle closest to the accident location [
28]. This method improves accident detection accuracy and successfully reduces the influence of conflicting messages. However, the approach relies mostly on proximity as the deciding factor in dispute resolution rather than considering other factors. In 2021, Vaiana et al. [
29] introduced a hybrid approach that combines reputation values, proximity analysis, and severity assessment for evaluating accident reports. Their study shows the effectiveness of considering severity in reconciling conflicting messages and improving accident detection accuracy.
In 2023, the work by [
30] proposed a cooperative scheme to detect and prevent false emergency messages, improving V2V reliability with minimal computational overhead. In 2024, Yang et al. [
31] suggested the use of a reputation server as a reputation authority to generate reputation certificates; this approach is referred to as certified reputation, which was first proposed in [
32]. Samara and Alsalihy [
33] proposed a new reputation mechanism to identify malicious vehicles in V2V communication. This mechanism issues a Valid Certificate (VC) or Invalid Certificate (IC) status for each vehicle and allows a vehicle to make decisions based on the sender's certificate status. The authors address the delays, processing overhead, and channel interference associated with the CRL. However, the study ignored the traditional SCMS, which is an essential part of V2V communication.
In the context of misbehavior detection in vehicular networks, various machine learning techniques have been researched and explored to improve misbehavior detection and reputation evaluation. DBSCAN, as a density-based clustering algorithm, has been one of the practical approaches in detecting anomalies and setting the ground truth by identifying dense clusters against outliers. The work by [
34] demonstrated the effectiveness of DBSCAN in black hole attack detection by integrating it with decision trees, showing its capability to isolate malicious behavior. Similarly, the Isolation Forest (iForest) has been an efficient anomaly detection technique; for example, in 2020, Ripan et al. [
35] applied iForest for cyber anomaly classification by leveraging its ability to isolate outliers through recursive partitioning. The work in [
36] further extended this concept with Deep Isolation Forest, incorporating deep learning to refine anomaly detection accuracy. This combination of techniques shares several similarities with DRAMBR. In our proposed system, DBSCAN is implemented to cluster misbehavior reports and set the ground truth based on the reporters' reputations, while iForest is used to identify malicious versus genuine reporters by highlighting outlier patterns.
In addition to anomaly detection, classification models such as GMM, XGBoost, and Random Forest further make reputation evaluation more accurate. The authors in [
37] proposed a GMM-based classification approach that effectively models uncertain and probabilistic distributions—a valuable tool to differentiate misbehavior patterns in DRAMBR. Moreover, the XGBoost classifier has been widely recognized due to its powerful predictive capabilities, as proven by [
38], where the author applied XGBoost across multiple datasets for classification and decision-making. In DRAMBR, the XGBoost algorithm is proposed to improve the accuracy of misbehavior classification from past reputation data. In [
39], Random Forest is integrated with XGBoost to provide ensemble learning that makes decisions more robust by reducing bias and variance in classification, ensuring proper evaluation of a misbehavior report.
In response to ongoing research efforts to develop a reliable and trustworthy V2V communication system that utilizes reputation from neighboring vehicles, this study addresses the challenge of detecting dishonest vehicles within disconnected environments. We have presented the relevant literature on centralized and decentralized reputation systems in relation to the proposed DRAMBR framework. Centralized systems, such as those proposed in [
18,
19], suffer from scalability and connectivity issues but can implement reliable aggregation with consistent threshold management. In contrast, decentralized systems such as [
22,
25,
33] scale better but lag in consistency of trust. Recognizing the limitations imposed by network disconnectivity, where existing standards may be insufficient, the proposed DRAMBR framework, designed in alignment with SCMS standards [
17], addresses these challenges by integrating dynamic reputation updates and advanced clustering techniques to assess the credibility of received messages before feedback reports are submitted to the centralized server, thereby enhancing the accuracy and reliability of the reporting system.
4. System Model
The proposed DRAMBR represents a fully integrated reputation solution designed to manage the reporting process effectively under challenging conditions in V2V communication. The aim is to enhance the trustworthiness of V2V communications by monitoring vehicle platoon behavior, detecting any misbehavior, and reporting it back to a central RS that assigns and periodically updates their RVs to demonstrate their trustworthiness.
The system operates in two primary phases: Offline Evaluation (OE) and Online Reporting (OR). In the OE phase, vehicles assess each other's behavior and store MRs locally without central connectivity, enabling continuous trust management. In the OR phase, upon reconnecting to the Internet, the vehicles send the accumulated MRs to the RS. The RS aggregates, classifies, and analyzes the MRs, reducing processing overhead.
The system can be implemented in both urban (online) and rural (offline) areas. In urban areas, it reduces computational complexity on the RS by decentralizing the RV, thereby improving scalability and efficiency. In rural areas, vehicles benefit from sharing their RVs without relying on a central authority, ensuring continued reliability even in disconnected scenarios.
In this section, we first outline the main system assumptions. Next, we explain the vehicle behaviors highlighting the threat model relevant in this study. Following that, we introduce the DRAMBR framework. These explanations set the stage for discussing the main phases of DRAMBR.
4.1. DRAMBR Assumptions
To ensure a realistic communication network, our study relies on assumptions about vehicle capabilities, behaviors, and system conditions. These assumptions reflect practical and feasible conditions in real-world vehicular networks. The key assumptions are outlined below:
Pseudonymity: Vehicles interact via pseudonyms and communicate over an anonymous network.
Connectivity setup: In the OE phase, vehicles communicate out of range of network connectivity; hence, periodic synchronization with the network to obtain the latest CRLs and RVs is limited.
Reputation setup: Each vehicle is initialized with an RV. The RV will increase with positive behavior and decrease with misbehavior.
Independent behavior: Each vehicle can behave either honestly or maliciously. A vehicle's behavior is independent of the others, such that the actions of one vehicle do not influence the behavior of another.
Communication range: Vehicles communicate using Dedicated Short-Range Communication (DSRC) technology, developed specifically to provide reliable and effective communication in a range of 300–900 m, ensuring that only nearby vehicles can interact even without connectivity.
OBUs detection: Each vehicle is equipped with On-Board Units (OBUs) that detect any abnormal activity or irregular behavior.
Messages checking: Each vehicle has a mechanism to check its outgoing messages and detect any misbehavior before they are transmitted.
Limited misbehavior reporting: Not all observed activities of misbehavior result in generating and transmitting the MR. The decision-making process regarding whether a vehicle generates an MR based on observed misbehavior is specific to the vehicle’s implementation.
4.2. Vehicle Behavior in OE and OR
Vehicles can act very differently on a network depending on their intentions and strategy. While an honest vehicle always follows protocols and reports truthfully, malicious ones may exhibit dynamic behavior, for instance, deliberately causing harm to the system by sending misleading information, or behaving strategically honestly to gain confidence and remain undetected; such actions are challenging to anticipate and control. Generally, trust and reputation-based systems are exposed to the two main behaviors listed below:
Honest:
1—In V2V, create, broadcast, or forward correct messages (OE Phase).
2—In V2I, create and submit correct MRs (OR Phase).
Malicious:
1—In V2V, ignore correct messages or create and broadcast false ones (OE Phase).
2—In V2I, create and submit false MRs (OR Phase).
The system makes a distinction between intentional and unintentional misbehavior, see
Figure 6, with the latter encompassing all vehicle faults and error scenarios. While we take all these considerations into account, the primary focus is on the accuracy of MR submission and the behavior of the reporters. Specifically, we analyze the causes of false MRs and how the reporter's reliability impacts the overall trustworthiness of the system.
A significant threat scenario arises with an MR Attack, in which an attacker with a high RV manipulates the reporting system by withholding reports of actual misbehavior and generating a false MR against an honest vehicle, as illustrated in
Figure 7. This type of activity is similar to the badmouthing attack [
40], which might result in assigning a high RV to a vehicle that deserves a lower one and a low RV to one that deserves a higher one. This tactic inflates the reputation of certain vehicles, making them seem more trustworthy while deflating the reputation of others and harming their credibility. Such rating manipulation distorts the accurate feedback, misleads other vehicles, and undermines trust in the platform’s rating system.
Algorithm 2 shows the attack activity considered in this study. Assuming the target vehicle is behaving honestly, the attacker targets it by generating a false misbehavior report. The false MR is generated only if the status of the target vehicle is not already flagged as misbehaving. However, if the target vehicle is already flagged as misbehaving, no MR is sent, and the attack halts.
To mitigate such an attack, our system follows a thorough process of comparison and aggregation, ensuring a more accurate evaluation as illustrated in the following sections.
Algorithm 2 False MR Attack
Input: target vehicle, target status. Output: false MR or Null.
1: if target status is honest then ▹ Target is honest
2:  generate a false MR against the target
3:  send the false MR to the RS
4: else
5:  return Null ▹ No MR sent for misbehaving vehicle
6: end if
4.3. DRAMBR Framework
The DRAMBR framework is shown in
Figure 8. We introduce a new entity, the RS, which provides the RV. The RS is linked to the MA in the SCMS. During the reputation retrieval process, the RS will pre-sign the RV using the
Pre-Signature scheme proposed in [
2]. The RV is then sent to the requested vehicle to complete the signature and attach it to the message with the PC.
All MRs are submitted to the RS, which checks and evaluates every received MR. In this context, we set a reputation threshold for accepting an MR, and we denote the vehicle reporting the misbehavior as the reporter and the suspected vehicle as the target. Our reputation system considers an MR only if the reporter's RV meets the threshold and ignores any MR from a reporter whose RV falls below it.
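To make this acceptance rule concrete, the following minimal Python sketch filters incoming MRs by the reporter's RV; the threshold value and report fields are illustrative assumptions, not the system's normative specification.

```python
# Minimal sketch of the RS acceptance rule: an MR is considered only if the
# reporter's reputation value (RV) meets the acceptance threshold.
from dataclasses import dataclass

RV_THRESHOLD = 0.5  # hypothetical acceptance threshold

@dataclass
class Report:
    reporter_id: str
    target_id: str
    reporter_rv: float  # reporter's RV at submission time

def accept_reports(reports):
    """Keep MRs whose reporter meets the RV threshold; ignore the rest."""
    return [mr for mr in reports if mr.reporter_rv >= RV_THRESHOLD]
```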
The system's efficiency is highlighted by its ability to handle contradictions between MRs accurately. If most MRs about the same misbehavior contradict a specific MR, the RS does not immediately classify the outlier as malicious reporting. Instead, it evaluates the context to determine whether the discrepancy is due to an error or a potential attack, making a fair and accurate decision while maintaining system reliability.
To set the stage for the DRAMBR valuation process, we first outline the main entities involved, as shown in
Table 2.
4.4. Workflow and DRAMBR Phases
As shown in
Figure 8, DRAMBR begins with the registration phase, the initial phase. In this phase, the vehicle will be connected to the infrastructure (RSU or RS) to establish a unique credential and upload the PCs as well as retrieve the RV. We divide the overall process into two broad stages, which are OE and OR, each further divided into subphases to present an integrated security and trust management for V2V communication.
4.4.1. Process 1: Offline Evaluation (OE)
This phase occurs offline, where vehicles evaluate the accuracy of road status information within the disconnected network. Vehicles monitor each other's behavior and trustworthiness in an ad hoc manner, without relying on trusted authorities to identify potential misbehavior. Before diving into the main steps of the OE process, we first explain the critical events that trigger the local detection and evaluation mechanisms. The trigger for identifying misbehavior occurs when the target vehicle transmits information that contradicts observable conditions; this is the basic event that drives the system's responses. A message could indicate misbehavior through two possible activities:
Failure Alert Transmission: When the target vehicle detects an incident (e.g., an accident) but does not transmit it, nearby observing vehicles may detect this omission through their OBUs or reports from other vehicles.
False Alert Transmission: When the target vehicle transmits an emergency alert although no accident or hazard exists, observing vehicles compare this false claim with their OBU data and messages from others.
These misbehavior activities are referred to as conflicting messages, where multiple messages provide inconsistent or contradictory information. While the work by [
41,
42,
43] addresses the issue of receiving conflicting messages, our focus in this OE process is to accurately generate MRs based on observations in offline mode.
Various possibilities exist for reporting abnormal behavior under emergencies as stated in
Table 3. Outlining these actions is essential to analyze and evaluate vehicle behavior in various scenarios involving emergency events. The table shows the different actions vehicles can take based on whether they observe a misbehavior and whether they intend to generate an accurate or false MR.
The observation outcomes O(R), O(NR), O(NMR), and O(NNMR) capture different aspects of reporting behavior, including truthful reporting, non-reporting, and malicious reporting. These action estimations become part of the system process to aid decision-making, wherein the RS assesses incoming MRs and verifies them through RVs, cross-verification, and the likelihood of correct reporting behavior.
We propose a multi-step process called Local Misbehavior Detection Mechanism (LMDM) in order to cope with insider attackers; see
Figure 9. LMDM represents the OE process operating at the local level to detect misbehaviors by analyzing reports and interactions within a localized scope. It is a component of DRAMBR that detects misbehavior locally by directly observing malicious activities (e.g., directly observing a situation incompatible with a received message) or indirectly by receiving conflicting messages, at least one of which must be false. The remainder of DRAMBR concerns storing, reporting, aggregating and integrating the observations. The detection criteria in this phase are as follows:
Message Integrity: Ensure that received messages are not altered.
Communication Frequency: Detect flooding attacks if a vehicle sends an excessive number of messages.
Message Validity: Verify that the content of the message (e.g., location or speed) matches observed reality.
In this stage, preliminary misbehavior reports are generated based on immediate surroundings (direct observation) or V2V communications (indirect observation). The LMDM outputs serve as inputs to the DRAMBR OR process. LMDM includes five main steps, explained below and sketched in code after the list:
Detection: The OBUs of the evaluator actively detect and identify irregularities and potential misbehavior within the network.
Evaluation: The evaluator assesses the misbehavior to decide whether or not to generate an MR for the observed event. To confirm the misbehavior through collective evaluation, it communicates with nearby vehicles if any are present. If no other vehicles are available in the area for verification, the evaluator proceeds to make an independent decision based on the available evidence. This approach has been discussed in [
44], where vehicles collaborate to validate suspicious activities.
Decision: In this phase, the evaluator creates the MR based on the information provided by the local misbehavior detection service and, optionally, other evidence obtained from other vehicles.
Storage: The evaluator stores the MR either to share it with other nearby vehicles or to submit it later to the RS upon connectivity.
Transmission: After MR creation, the evaluator decides whether to share the MR with other nearby vehicles or to store it. If multiple MRs are available to send, it has to decide which ones to send and in which order.
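The following compact Python sketch walks through the five LMDM steps under simplified assumptions; the Evaluator class, its fields, and the majority-vote confirmation rule are hypothetical stand-ins for the OBU services described above.

```python
# Hypothetical sketch of one LMDM cycle: detect, evaluate, decide, store,
# transmit. Event fields and the confirmation rule are illustrative only.
class Evaluator:
    def __init__(self):
        self.stored_mrs = []

    def lmdm_cycle(self, event, neighbor_votes, connected_peers):
        # 1. Detection: the OBU flags an irregularity (modeled as a boolean).
        if not event.get("irregular", False):
            return None
        # 2. Evaluation: confirm with neighbors if present, else decide alone.
        if neighbor_votes:
            confirmed = sum(neighbor_votes) > len(neighbor_votes) / 2
        else:
            confirmed = event.get("local_evidence", 0) > 0
        if not confirmed:
            return None
        # 3. Decision: create the MR from the gathered information.
        mr = {"target": event["target"], "evidence": event.get("local_evidence")}
        # 4. Storage: keep the MR for later submission to the RS.
        self.stored_mrs.append(mr)
        # 5. Transmission: share with nearby peers now, or hold for the RS.
        return mr if connected_peers else None

# Example: misbehavior confirmed by 2 of 3 neighbors, shared with peers.
ev = Evaluator()
ev.lmdm_cycle({"irregular": True, "target": "V42", "local_evidence": 3},
              [True, True, False], connected_peers=True)
```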
It is worth noting that during the LMDM process, we add the key functional component State. As outlined in the ITS standards [
17], this is responsible for storing and managing information used by other parts of the system. It manages three key factors:
MR Creation: Allocates processor time and signing resources amid competing demands.
Storage: Ensures MRs fit within available storage, prioritizing critical ones.
Transmission: Manages limited connectivity, prioritizing essential MRs for timely transmission.
In the current state of the ITS standards [
17], every false message flagged as misbehaving is reported. However, not every false message should be separately reported, as this would cause significant network overhead, particularly when the misbehavior results from a faulty component in the sender's system. Consequently, the MR format allows for omitted MRs, meaning that the reporter temporarily stops generating repeated MRs about the same misbehavior by the same target after detecting it. Instead, it continues collecting relevant evidence over time. Once enough proof is gathered, the reporter generates a single, detailed MR to the RS. The protocol assumes that the RS is capable of prioritizing the quality and significance of the MR's content rather than simply counting how many MRs it receives. This method increases reporting efficiency and decreases redundant communications.
4.4.2. Process 2: Online Reporting (OR)
To send an MR to the RS, the reporter may use different communication channels. In line with the TS 103 759 V2.1.1 standard [
16], the following communication protocols are considered for establishing the V2I connection:
DSRC via an RSU, or a cellular network link (3G, 4G, or 5G).
A wireless or wired connection at an electric vehicle charging station.
A Wi-Fi hotspot that offers Internet access, such as in a parking lot or a private hotspot at home.
Using the vehicle's On-Board Diagnostics (OBD) port and a diagnostic system at an inspection workshop or service garage.
Upon connectivity, the following subphases must be performed to complete the OR process.
RS Communication: Vehicles establish a secure connection with the RS, either via RSUs or directly. To secure communication with the RS during the MR submission process, the ISO 15118 standard mandates Transport Layer Security (TLS) between vehicle and infrastructure. Through the TLS handshake, vehicles and the RS are securely authenticated, and cryptographic algorithms (cipher suite parameters) are negotiated. These steps are used to generate the TLS master key that encrypts and decrypts the messages exchanged between vehicles and the RS, achieving secure communication (authentication, integrity, and confidentiality). A minimal connection sketch follows.
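The sketch below shows such a TLS-protected V2I channel using Python's standard ssl module; the RS endpoint, CA bundle, and message payload are placeholders, and a real deployment would additionally use SCMS-provisioned credentials with mutual (client) authentication.

```python
# Minimal sketch of a TLS-protected V2I channel for MR submission. Host, port,
# and file names are hypothetical; server-auth-only shown for brevity.
import socket
import ssl

RS_HOST, RS_PORT = "rs.example.org", 8443  # hypothetical RS endpoint

context = ssl.create_default_context(cafile="rs_ca.pem")  # trusted CA bundle
# context.load_cert_chain("vehicle_cert.pem", "vehicle_key.pem")  # mutual TLS

with socket.create_connection((RS_HOST, RS_PORT)) as raw_sock:
    # The TLS handshake authenticates the RS and negotiates the cipher suite;
    # the derived session keys protect all subsequent MR traffic.
    with context.wrap_socket(raw_sock, server_hostname=RS_HOST) as tls_sock:
        tls_sock.sendall(b'{"type": "MR", "payload": "..."}')
```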
MR Reporting: Once the communication is verified, the reporter submits the stored MRs. This phase ensures that local evaluations feed into the global trust framework by enabling the RS to update reputations and notify the rest of the network about detected misbehaving vehicles. Based on the offline observations discussed in
Table 3, we consider four reporting conditions, as outlined in
Table 4.
We analyze the main containers of the MR format, as previously explained and illustrated in
Figure 5. The MR is composed of three containers:
- (a)
Header: Includes the fundamental data that an MR should have, such as the MR generation time, the reporter and target pseudonym IDs, and the MR type.
- (b)
Source: Includes the misbehavior results, where the reporter flags the target if the received message shows implausibilities.
- (c)
Evidence: Contains the suspicious messages and related observations, or any evidence from neighboring vehicles if believed helpful. The evidence could also include other supporting information such as a Local Dynamic Map (LDM) or direct sensor data from the reporter's OBUs. The detailed evidence for misbehaving vehicles required by the MA is further explained in [
13].
Having outlined the main containers of the MR format, we now turn our attention to the expected three versions of the received MR by the RS based on this format:
Base MR: This basic version includes only the header and the source containers without any evidence.
Beacon MR: In this version, the reporter includes a base MR and the suspicious beacon message as evidence.
Evidence MR: This version contains a detailed report with more complete misbehavior information, depending on the type of plausibility check failure. For example, if the target's message failed the speed consistency check, the reporter includes all related inconsistent messages in the "Evidence Container" for deeper investigation.
These MR versions support an efficient reporting process by allowing the RS to aggregate evidence and send a single comprehensive MR to the MA, reducing network overhead and ensuring accurate incident tracking.
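To illustrate the three MR versions built from these containers, the sketch below models them with simple Python dataclasses; the field names are readability-oriented assumptions, not the normative message structure of the standard.

```python
# Illustrative layout of the header/source/evidence containers and the three
# MR versions described above. Field names are assumptions for readability.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MRHeader:
    generation_time: int
    reporter_id: str
    target_id: str
    mr_type: str

@dataclass
class MisbehaviorReport:
    header: MRHeader
    source: str                                 # flagged implausibility result
    evidence: List[str] = field(default_factory=list)

def base_mr(header, source):
    return MisbehaviorReport(header, source)                  # no evidence

def beacon_mr(header, source, suspicious_beacon):
    return MisbehaviorReport(header, source, [suspicious_beacon])

def evidence_mr(header, source, inconsistent_msgs):
    return MisbehaviorReport(header, source, list(inconsistent_msgs))
```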
RS Process: The RS in our system is a centralized entity that collects MRs from reporters, evaluates their credibility to determine reputation scores, decides on the suitable reaction, and forwards verified MRs to the MA for further action;
Figure 10 illustrates the OR process.
We defined three main functions of the RS:
MR Grouping and Structure: The RS collects all the MRs and then adds them to its database. This step would enable access to MRs using specific criteria. For instance, the RS can get all the MRs associated with a certain pseudonym or all the MRs from a specific area. Those requests could be helpful in the analysis phase. Additionally, the RS filtering system aggregates similar MRs, such as those showing speed inconsistencies.
MR Analysis and RS Actions: The RS analyzes the MRs to determine the correct reaction. Correct MRs mostly align with a target's past behavior and match the majority consensus, while false MRs often deviate from or contradict these patterns. Since establishing the event ground truth in offline communication is challenging without infrastructure to confirm the event, the RS establishes the event baseline using Density-Based Spatial Clustering of Applications with Noise (DBSCAN) in its initial classification, based on dense regions of consistent MRs, with confidence weighted by the reporters' RVs to ensure reliability. Once the ground truth is established and the DBSCAN clusters are formed, the RS applies the iForest to detect any anomalies within the DBSCAN outputs. This step ensures the flagging of potential false MRs. Furthermore, the RS applies GMM predictions to accurately classify honest, malicious, and erroneous reporters, refining the categorization of MRs and enhancing classification accuracy. The technical aspects of these steps are discussed in
Section 5. A condensed sketch of this pipeline appears after Table 5.
Table 5 outlines the classification of how incoming MRs are evaluated for potential misbehavior
based on various inconsistencies.
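The condensed sketch below expresses the RS analysis pipeline with scikit-learn; the feature representation and parameters (eps, min_samples, contamination, number of GMM components) are illustrative assumptions, not the tuning used in our experiments.

```python
# Sketch of the RS pipeline: DBSCAN groups consistent MRs to set the event
# baseline, iForest re-examines the DBSCAN noise points, and a GMM clusters
# reporters into three behavior types. Parameters are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import IsolationForest
from sklearn.mixture import GaussianMixture

def analyze_mrs(features: np.ndarray):
    # Step 1: cluster MRs; label -1 marks noise (conflicting reports).
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(features)
    noise = features[labels == -1]

    # Step 2: refine noise; -1 marks serious anomalies (potential false MRs).
    outlier_flags = (IsolationForest(contamination=0.4, random_state=0)
                     .fit_predict(noise) if len(noise) else np.array([]))

    # Step 3: three-component GMM; cluster indices are subsequently mapped
    # to honest, malicious, and erroneous reporter types.
    reporter_types = GaussianMixture(n_components=3,
                                     random_state=0).fit_predict(features)
    return labels, outlier_flags, reporter_types
```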
RS Decision: We propose a simple threshold method for the RS to trigger reaction levels based on a flexible MR count. While an accurate misbehavior reaction is still a debated subject in ITS, we propose three levels of RS reaction for both the reporter and the target, as shown in
Table 6.
Blacklist and Penalty Enforcement: Potentially misbehaving vehicles (both reporters and targets) will be reported to the MA. If the RV drops below the pre-set threshold, the RS compiles a detailed MR and sends it to the MA within the SCMS. The MA decides on the seriousness of the low RV, utilizes misbehavior detection algorithms to assess the situation, and notifies all participants of the detected misbehavior in two stages of reaction:
- (a)
Passive revocation: The vehicle is blocked from requesting new certificates and is temporarily suspended from reporting privileges.
- (b)
Active revocation: The vehicle's certificates are revoked and added to the CRL, and the vehicle is permanently banned from submitting MRs.
4.4.3. DRAMBR Privacy
Integrating our reputation system with the SCMS ensures privacy by employing PCs that hide actual identities. However, the question of when and how a PC change occurs remains unresolved. Numerous techniques have been proposed to determine where and how often PCs should change [
45]. Mechanisms that focus on node-centric misbehavior detection (MBD) require a consistent identity to effectively monitor and evaluate the behavior of the target vehicle. However, privacy-preserving strategies based on PCs introduce identity changes, complicating accurate tracking. Therefore, considering an appropriate PC change strategy, as outlined below and sketched in code after the list, becomes essential to balancing privacy and accountability within our system.
Random: The PC has a predefined possibility of changing with each outgoing message.
Disposable: The PC is used for a predetermined number of messages, such as beacons and warnings.
Periodical: After a predetermined amount of time, the vehicle changes its PCs.
Distance: After a predetermined number of kilometers, the vehicle changes its PCs.
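As a simple illustration, the sketch below encodes the four PC change strategies as a single decision function; the probability, message-count, time, and distance thresholds are placeholder values, not recommended settings.

```python
# Hedged sketch of the four PC change strategies listed above; all thresholds
# and the random change probability are illustrative placeholders.
import random

def should_change_pc(strategy, state):
    if strategy == "random":      # predefined probability per outgoing message
        return random.random() < state.get("p_change", 0.1)
    if strategy == "disposable":  # fixed number of messages per PC
        return state["messages_sent"] >= state.get("max_messages", 100)
    if strategy == "periodical":  # fixed lifetime per PC (seconds)
        return state["pc_age_s"] >= state.get("max_age_s", 300)
    if strategy == "distance":    # fixed distance per PC (kilometers)
        return state["km_driven"] >= state.get("max_km", 5.0)
    raise ValueError(f"unknown strategy: {strategy}")
```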
7. Results and Discussion
Based on the experiments described above, this section evaluates the performance and accuracy of the proposed DRAMBR in identifying false MRs under various scenarios.
First, we present the results obtained from applying DBSCAN. The outputs from this stage are essential for the subsequent stage, where we implement the iForest methodology to evaluate the DBSCAN findings. Following this, we analyze the aggregated results using GMM to finalize the reporter’s behaviors. After that, we implement the combined Random Forest and XGBoost to estimate the effectiveness and the accuracy of the results before moving to the final stage of updating the reputation.
Our findings provide insights into the system’s ability to differentiate between honest and malicious behaviors, showcasing the impact of reputation-based decision-making compared with the existing certificate-based system.
7.1. Results from Applying DBSCAN
DBSCAN serves as the main step when the RS receives a large number of MRs regarding misbehavior in offline scenarios. Its role is to analyze these MRs and group them into clusters that represent consistent patterns.
Figure 14 shows the DBSCAN results of clustering the 100 MRs received in each scenario. In ACD-OCC, multiple distinct clusters (blue and green) represent consistent groups of MRs agreeing on the accident's occurrence. Noise points (yellow) are conflicting MRs that cannot fit into any cluster.
In ACD-ABS and ACD-RSL, clustering is less distinctive, with a higher number of noise points and smaller cluster sizes, indicating higher variability or disagreement among the reporters. This underscores the difficulty of establishing the ground truth when reporting is more inconsistent. Below is a detailed explanation of each scenario:
DBSCAN ACD-OCC
In ACD-OCC (Top Figure), DBSCAN identified five clusters; the largest, Cluster 0, contained 65 MRs. In total, it flagged 17 of the 100 MRs as noise, accounting for 17% of all MRs. This represents moderate consistency among the reporters, since 83% of the MRs fell into meaningful clusters.
DBSCAN ACD-ABS
In the second scenario, ACD-ABS (Middle Figure), DBSCAN grouped the MRs into four clusters, with Cluster 0 dominating at 70 MRs. However, this scenario also shows a high level of noise, with 20 MRs flagged as outliers, accounting for 20% of the total.
DBSCAN ACD-RSL
In the final scenario, ACD-RSL (Bottom Figure), DBSCAN identified three clusters, with Cluster 0 containing 80 reports. Only 8 of the 100 MRs were classified as noise, i.e., 8% of the total. This scenario shows the highest reliability in reporting, as 92% of the received MRs fall within the identified clusters, showing strong agreement among the reporters.
Generally, DBSCAN effectively segregates consistent MRs into clusters while identifying noise points for further refinement using Isolation Forest to distinguish between serious anomalies and erroneous reports.
7.2. Results from Applying Isolation Forest
Figure 15 displays the outcomes of the iForest applied specifically to the noise points identified in the DBSCAN step across the three scenarios: ACD-OCC with 17 points, ACD-ABS with 20 points, and ACD-RSL with 8 points. Each subfigure illustrates the latitude and longitude of the reports, overlaid with the iForest's classification results, represented by the outlier scores (color gradient). Below is a detailed explanation of each scenario:
iForest ACD-OCC
For the first scenario, ACD-OCC (Top Figure), the iForest refined the 17 noise points from DBSCAN by classifying a subset of 7 MRs as outliers (blue) and the remaining 10 MRs as inliers (red). The outlier percentage in this step is 41.18%, which demonstrates how the iForest narrows down potentially serious anomalies within the initially noisy MRs, helping identify MRs that significantly deviate from expected behavior.
iForest ACD-ABS
For the second scenario, ACD-ABS (Middle Figure), the iForest refined the 20 noise points from DBSCAN by classifying 12 MRs as consistent inliers (red) and 8 MRs as anomalies (blue). The outlier percentage in this step is 40%.
iForest ACD-RSL
The third scenario, ACD-RSL (Bottom Figure), exhibits a moderate balance of 5 inlier MRs (red) and 3 outlier MRs (blue), a 37.5% outlier rate. Here the iForest aids in detecting misleading or erroneous MRs in a scenario where there might be conflicting observations about the resolution of an event.
The iForest results illustrate the efficacy of further examining noise points to distinguish serious anomalies from less critical deviations. This layered approach enhances the RS reliability of misbehavior detection and decision-making regarding the received MRs.
7.3. Gaussian Mixture Model (GMM)
To further distinguish between honest, malicious, and erroneous reporters, this section discusses the results generated from the Gaussian Mixture Model (GMM), which focuses on classifying and identifying reporter types.
Figure 16 shows the GMM classification results for all three scenarios: ACD-OCC, ACD-ABS, and ACD-RSL. The results categorize reporters into three types: Honest, Malicious, and Erroneous.
Honest reporters: In all three scenarios, honest reporters (blue bars) represent the majority of cases, with approximately 80 in each scenario, indicating a consistent pattern of correct reporting across scenarios.
Malicious reporters: As shown by the orange bars, a small proportion of reporters are identified as malicious. These reporters intentionally submit false MRs, with a consistent count across scenarios, ranging from approximately 6 to 10.
Erroneous reporters: This is the smallest group, represented by the green bars. These reporters submitted incorrect reports that resulted from unintentional errors rather than deliberate misbehavior; the false content in these MRs is due to sensor faults or environmental errors. The number of reporters with erroneous data differs slightly, with a minimal count observed in each scenario.
This classification highlights our proposed system's efficiency in distinguishing between honest, malicious, and erroneous behavior, helping the RS maintain accuracy and filter out potentially disruptive or inaccurate reports.
7.4. DRAMBR Accuracy
In this ensemble classification, we evaluate DRAMBR's accuracy across the three scenarios and the overall system performance. We combine the Random Forest and XGBoost models to analyze accuracy in two ways (a brief sketch of the ensemble follows the list):
Noise-Based Accuracy: Focused only on the noise points refined by DBSCAN and Isolation Forest. This classification achieves 72% accuracy.
Full System Accuracy: Includes all MRs (honest and noise) across all scenarios. This evaluation achieves 98% accuracy.
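The sketch below illustrates one way to combine Random Forest and XGBoost, using soft voting over the two classifiers; the voting scheme, split ratio, and hyperparameters are assumptions for illustration, not the exact configuration used in our evaluation.

```python
# Sketch of a combined Random Forest + XGBoost evaluation via soft voting
# (averaged class probabilities). Labels are assumed encoded as 0/1/2 for
# honest/malicious/erroneous; features and parameters are placeholders.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

def ensemble_accuracy(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    ensemble = VotingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("xgb", XGBClassifier(eval_metric="logloss"))],
        voting="soft",  # average predicted probabilities from both models
    )
    ensemble.fit(X_tr, y_tr)
    return accuracy_score(y_te, ensemble.predict(X_te))
```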
Figure 17 illustrates the noise-based classification with 72% accuracy from Random Forest and XGBoost, highlighting the performance of the DBSCAN, iForest, and GMM models applied to the noise points.
In
Figure 18, we show the Random Forest and XGBoost results in the three scenarios, achieving a total accuracy of 98%, which measures the effectiveness of the entire system.
The total system accuracy, as shown in
Table 11, reflects the system's overall ability to classify MRs and reporters correctly across the three scenarios, including (1) consistent MRs from DBSCAN (true positives) and (2) refined classifications of noise points through iForest and GMM.
The results in
Figure 19 compare accuracy, precision, recall, and F1-score in each scenario and the overall system, highlighting the system’s robustness in distinguishing between honest, malicious, and erroneous reporters across all scenarios.
The ACD-OCC scenario demonstrates the system's ability to achieve perfect classification. The other scenarios (ACD-ABS and ACD-RSL) show consistently high accuracy of approximately 97%, even under varying conditions. The overall accuracy of around 98% indicates that DRAMBR correctly classified nearly all MRs across the scenarios.
It is worth noting that the system accuracy improves as the number of reporters increases, since the system benefits from larger datasets to make more reliable decisions. Our findings show how the system effectively distinguishes between honest, malicious, and erroneous reporters. This approach ensures accurate reputation updating for both reporters and targets, enhancing trust and accountability in disconnected vehicular networks and enabling reliable decision-making even under constraints.
8. Conclusions
While V2V networks have the potential to improve driving safety, misbehaving vehicles aim to disrupt their communication reliability, particularly in disconnected rural areas with limited infrastructure. Existing systems like SCMS depend on concepts related to MAs and CRLs, which are inadequate in such scenarios. This paper has presented a novel scheme for accurately detecting and classifying misbehavior in V2V networks.
DRAMBR identifies and mitigates misbehavior by using LMDM, which relies on local observations and neighboring feedback in offline settings. These observations are consolidated upon connectivity into reports submitted to a centralized RS. The remainder of DRAMBR concerns storing, reporting, aggregating, and integrating these reports using existing classification techniques. DRAMBR classification is a multi-stage process: DBSCAN first determines whether multiple reports concern the same event, the iForest then analyzes anomalies, and Gaussian Mixture Models provide probabilistic classification of malicious versus honest behavior. Additionally, Random Forest and XGBoost models are combined to improve decision accuracy.
Our findings demonstrate the effectiveness of DRAMBR in reducing false reporting, which improves decision accuracy, ensures reliable detection of misbehavior, and supports the RS's ability to maintain system integrity. Adopting the proposed system enhances V2V communication reliability and ensures a safer network in infrastructure-limited environments.
While DRAMBR demonstrates a reasonable level of reliability in distinguishing between honest, malicious, and erroneous reporters, there remains room for improvement. For example, when up to half of the vehicles are malicious or traffic conditions are highly dynamic, DRAMBR's robustness will be affected. Future work will focus on enhancing feature engineering, addressing class imbalances, and fine-tuning model hyperparameters to further improve classification performance and ensure more precise identification of reporter types.