Article

Distributed Reputation for Accurate Vehicle Misbehavior Reporting (DRAMBR)

by Dimah Almani *, Tim Muller and Steven Furnell
Cyber Security Research Group, School of Computer Science, University of Nottingham, Nottingham NG8 1BB, UK
* Author to whom correspondence should be addressed.
Future Internet 2025, 17(4), 174; https://doi.org/10.3390/fi17040174
Submission received: 31 December 2024 / Revised: 26 March 2025 / Accepted: 11 April 2025 / Published: 15 April 2025

Abstract: Vehicle-to-Vehicle (V2V) communications technology offers enhanced road safety, traffic efficiency, and connectivity. In V2V, vehicles cooperate by broadcasting safety messages to detect and avoid dangerous situations in time and to reduce congestion. However, vehicles might misbehave, creating false information and sharing it with neighboring vehicles, for example by failing to report an observed accident or falsely reporting one when none exists. If other vehicles detect such misbehavior, they can report it; however, false accusations also constitute misbehavior. In disconnected areas with limited infrastructure, the potential for misbehavior increases due to the scarcity of the Roadside Units (RSUs) necessary for verifying the truthfulness of communications, and standard misbehavior management systems become ineffective. This paper presents a novel mechanism, Distributed Reputation for Accurate Misbehavior Reporting (DRAMBR), a fully integrated solution that uses reputation to enhance the accuracy of misbehavior reporting in rural networks. The system operates in two phases: offline, using the Local Misbehavior Detection Mechanism (LMDM), where vehicles detect misbehavior and store reports locally, and online, where these reports are sent to a central reputation server. DRAMBR aggregates the reports and integrates DBSCAN for clustering spatial and temporal misbehavior reports, Isolation Forest for anomaly detection, and Gaussian Mixture Models for probabilistic classification of reports. Additionally, Random Forest and XGBoost models are combined to improve decision accuracy. DRAMBR distinguishes between honest mistakes, intentional deception, and malicious reporting. Using an existing mechanism, the updated reputation is available even in an offline environment. Through simulations, we evaluate the proposed reputation system's performance, demonstrating its effectiveness in achieving a reporting accuracy of approximately 98%. The findings highlight the potential of reputation-based strategies to minimize misbehavior and improve the reliability and security of V2V communications, particularly in rural areas with limited infrastructure, ultimately contributing to safer and more reliable transportation systems.

1. Introduction

In recent years, the advancement of Intelligent Transport Systems (ITS) has led to significant research in the field of Vehicular Ad hoc Networks (VANETs). The U.S. Department of Transportation (U.S. DOT) highlights the need to ensure that connected vehicles operate in secure, safe, and privacy-protective networks [1]. A security system is required as vehicles exchange critical information with other vehicles, wireless mobile devices, traffic management centers, and roadway infrastructure. VANETs, as shown in Figure 1, play a vital role in enabling seamless real-time information exchange between vehicles to improve safety, optimize traffic flow, and enhance the efficiency of the overall driving experience.
However, the reliability of information sharing between vehicles becomes increasingly challenged in disconnected areas characterized by sparse infrastructure and intermittent connectivity. In these scenarios, the probability of malicious actions occurring without detection increases, leading to hazardous and undesirable situations. Thus, detecting and reporting misbehavior in such conditions is crucial and requires an accurate scheme that works offline without relying on infrastructure. Vehicles need a reliable way to autonomously assess the trustworthiness of information based on factors such as reputation scores.
A further difficulty is differentiating between genuine system errors, such as those resulting from imperfect GPS data, and intentional misbehavior, a task that necessitates highly precise validation processes. Given the real-time requirements of vehicular networks, balancing data accuracy and computational efficiency poses a dilemma. In addition, the emphasis on privacy protection using certificates to maintain anonymity complicates the process of misbehavior detection.
Existing standards for misbehavior detection and reporting rely on the Misbehavior Authority (MA) in the Security Credential Management System (SCMS) [3]. In this system, if enough misbehavior is reported for a certain vehicle, the vehicle's certificates are revoked and added to the Certificate Revocation List (CRL), which is updated and distributed to other vehicles in the environment. Once other vehicles identify that a message came from a vehicle on the CRL, that vehicle is no longer considered a trusted node for sending and receiving messages [4]. However, in the implementation of the SCMS in disconnected vehicular networks, two primary challenges arise, as depicted in Figure 2.
  • Firstly, maintaining and synchronizing the CRL is crucial for identifying the misbehaving vehicles. The CRL must be constantly updated and shared with all vehicles, a process that requires regular access to network infrastructure, typically via Roadside Units (RSUs). This becomes problematic in areas with limited connectivity as the CRL grows with the number of misbehaving vehicles, necessitating frequent online updates.
  • CRL size and retrospective unlinkability: The SCMS is responsible for revoking misbehaving or malfunctioning vehicles’ Pseudonym Certificates (PCs). However, putting all valid vehicle certificates on the CRL would make it very large. The system needs an efficient scheme to perform the revocation without revealing the PCs used by the vehicle before it started misbehaving.
As a result, vehicles could be using expired or invalid certificates that might call into question the security, trustworthiness, and accuracy of the misbehavior reports they transmit. To address these challenges, we need a system that ensures vehicles detect and report misbehavior accurately and maintain privacy while establishing trustworthiness, even without constant internet connectivity.
Reputation systems have been proposed as a means to strike a balance between anonymity, security, and trust, especially in offline settings. In this paper, we propose a Distributed Reputation mechanism for Accurate vehicle Misbehavior Reporting (DRAMBR). This mechanism accurately classifies misbehaviors and distinguishes between honest, erroneous, and malicious vehicles, ensuring timely and reliable reputation updates. We introduce a new entity, the Reputation Server (RS), which provides the Reputation Value (RV) and is linked to the SCMS. Reputation captures a subject's behavior through accrued observations and interactions, thereby giving a vehicle an indication of the likelihood that its peer is well-behaving or malicious through an RV. DRAMBR becomes particularly relevant in contexts where direct authority oversight is limited or when vehicles cannot operate in a fully connected manner.
DRAMBR improves the accuracy of vehicle misbehavior reporting using two processes of assessment:
  • Offline Misbehavior Detection: During offline communication, a vehicle detects misbehavior, collects observations from neighbors, and generates a Misbehavior Report (MR).
  • Online RS Processing: Upon reconnection, the MR is sent to the RS, which performs validation steps, aggregates data, and takes appropriate action on the reporter and target vehicles.
DRAMBR processes and evaluates MRs through a multi-stage aggregation process integrating advanced classification techniques such as DBSCAN, Isolation Forest, and Gaussian Mixture Model. We analyze vehicle communications under different conditions in disconnected environments using Simulation of Urban MObility (SUMO). DRAMBR’s accuracy performance is then evaluated using Random Forest and XGBoost, suggesting that it provides an accurate reputation management approach under challenging conditions.
The remainder of this paper is organized into seven further sections. Basic assumptions and background about the technologies and standards adopted in our system are introduced in Section 2. Related work is discussed in Section 3 by presenting previous work on misbehavior detection and reputation schemes in vehicular contexts. The system model is introduced in detail in Section 4, leading to Section 5, which explains the technical implementation of the proposed DRAMBR. Section 6 then outlines the experiment description and the simulation setup, and Section 7 discusses the results and presents the evaluation of our scheme. Finally, we draw conclusions in Section 8.

2. Background and Foundations

This section presents a brief description of concepts connected to our work and offers an overview of the challenges associated with rural areas. Following that, we explain the critical role of the MA in the SCMS. Next, we describe the attack activity adopted in this study. Finally, we conclude with a discussion of the MR standards and requirements.

2.1. Characteristics of Disconnected Areas

Vehicle-to-Vehicle (V2V) communications enable direct interaction between nearby vehicles. In disconnected areas, however, internet connectivity is limited or unavailable, which affects critical communications among vehicles and makes maintaining continuous connectivity between vehicles very difficult. This is commonly the case in non-residential areas and outside urban locations: tunnels, mountains, or remote locations where RSUs are not deployed. To illustrate, while GSM communications technologies are certainly widespread, they are not universally accessible due to geographic limitations. For example, the GSMA website reveals that coverage is not always guaranteed, and it is easy to find areas in different countries where there is road infrastructure but a lack of 2G coverage (let alone coverage from 3G or higher networks). Meanwhile, although satellite communications can offer greater coverage, they do so with limited capacity, higher latency, and at a cost that may not be considered viable.
According to [5,6], about 20% of people live in rural areas in the United States and Canada, compared with 56% in the European Union (EU) and 60% in China. These low-signal areas have structural or geographical attributes that prevent wireless signals from propagating properly and thus cause weak or no connectivity between vehicles and infrastructure [7]. The lack of connectivity in these areas is not just a function of physical obstructions but is also tied to environmental and infrastructural factors [8].
As an example, this study focuses on vehicular communication in the Peak District (Figure 3), a national park in central England, aiming to measure the impact of limited infrastructure in rural regions with low RSU density. The figure indicates various road routes and intersections but a lack of infrastructure. Moreover, the roads are often narrow and winding, increasing the potential for encountering hazards. For example, Monyash Road (A623) connects scattered landmarks like “Inside Out Photography” and nearby farms or fields situated far apart with no visible supporting infrastructure. This highlights the disconnection and the challenges of linking isolated points in the region.
This lack of connection extends beyond the technology itself, as it has real-world ramifications for vehicle safety and operational efficiency. Real-time transmission and reception of safety messages are greatly affected because vehicles cannot communicate with RSUs or trusted authorities on a regular basis, impacting vital functions such as traffic management, emergency response coordination, and navigation updates. The resulting latency and packet loss translate into longer message travel times and a greater potential for traffic accidents on the road, due to the connectivity limitations in these challenging scenarios [9].

2.2. Misbehavior Authority in the SCMS

The SCMS comprises different communication channels due to the interaction between multiple entities, the Registration Authority (RA), the Certificate Authority (CA), the Pseudonym Certificate Authority (PCA), and the Misbehavior Authority (MA), each with a distinct responsibility in managing certificates, such as handling digital certificate issuance, renewal, and revocation. In Europe, the European Telecommunications Standards Institute (ETSI) develops standards for the security of ITS, serving purposes similar to the SCMS, providing certificates for messages and privacy protection for users through a Public Key Infrastructure (PKI). While both SCMS and ETSI ITS-G5 are designed for securing vehicular communications, SCMS has several key advantages compared with the alternative: broad applicability, robust misbehavior reporting, hierarchical trust management, and strong privacy from pseudonym certificates [10,11]. These features make SCMS more suitable for the mitigation of malicious behavior, hence closely aligning with our research objectives in ensuring trust.
In the SCMS, the MA places a strong emphasis on the identification and reporting of misbehavior. The main role of the MA is to collect the misbehavior reports generated locally by vehicles, which provide the SCMS with information that can be used to determine whether a vehicle is not performing at the appropriate level of communication. Misbehavior detection entails the MA being capable of discerning whether multiple misbehavior reports reference the same vehicle [12]. This process also necessitates the MA compiling information for publication in a CRL to invalidate a vehicle's certificates. Furthermore, the MA must provide the RA with the necessary data to enable blacklisting, preventing the misbehaving vehicle from obtaining new certificates. The SCMS architecture mandates that the following components collaborate to support misbehavior detection and introduce a system of checks and balances:
  • Step 1: The MA, PCA, and one of the Linkage Authorities (LA) must collaborate to reconstruct linkage information.
  • Step 2: The MA, PCA, RA, and both LAs must collaborate to generate revocation information for the CRL.
  • Step 3: The MA, PCA, and RA must collaborate to identify the enrollment certificate of the misbehaving vehicle, which the RA will then include in its blacklist.
The MA conducts step 1 during the misbehavior investigation to ascertain whether a vehicle or set of vehicles engaged in misbehavior. Subsequently, after marking a vehicle as misbehaving, the MA proceeds with steps 2 and 3 during revocation to deduce the revocation information for the CRL and the enrollment certificate to be added to RA’s internal blacklist.
As shown in Figure 4, the process starts when a certificate is flagged as compromised or otherwise untrusted. The MA responsible for the SCMS revokes the certificate and publishes its details to the CRL. Linkage IDs and timing information are published on the CRL and distributed to revoke all PCs of a given vehicle over a given period. The list is then broadcast across the entire vehicular network so that all vehicles and infrastructure units know these revoked certificates. Upon connectivity, the updated CRL is periodically downloaded by vehicles from the RSU or other secure distribution points. Every time a vehicle tries to communicate across the network, it validates the certificates of the other parties against this CRL, as sketched below. Communication is blocked if the certificate appears on the list, preventing malicious or fraudulent sources holding compromised or untrusted certificates from communicating with other vehicles.
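As a simple illustration of this validation step, the per-message CRL check reduces to a set-membership test; the identifiers and helper below are hypothetical, not part of the SCMS specification.

# Hypothetical sketch of the per-message CRL check described above:
# communication proceeds only if the peer's certificate is not revoked.
def is_peer_trusted(peer_certificate_id: str, crl: set[str]) -> bool:
    """Return True when the peer's certificate does not appear on the local CRL."""
    return peer_certificate_id not in crl

# Usage: refresh the CRL whenever connectivity allows, then gate each exchange.
crl = {"cert-1234", "cert-5678"}          # revoked certificate IDs (illustrative)
if is_peer_trusted("cert-9999", crl):
    pass  # proceed with V2V message exchange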
The current standard for distributing CRLs involves sending them to each vehicle through various channels such as RSEs, cellular networks, satellite communications, or customer Wi-Fi [13]. An alternative approach is implementing a collaborative distribution model outlined in a preliminary manner by [14]. In this model, specific vehicles are equipped with CRLs (via RSEs, cellular networks, or other methods) and transmitted to neighboring devices as they pass by during their everyday operations. Vehicles that have received the CRLs become distributors, enabling comprehensive coverage of the entire system. This is essential to maintaining the integrity of V2V communications, allowing only trustworthy vehicles that are authenticated and authorized to participate in that network. However, the effectiveness of this process rests on a relatively current and properly circulated CRL; delays in updating or distributing an updated list can lead to security weaknesses that allow adversaries with revoked certificates to exploit the process.

2.3. V2V Attack in Disconnected Areas

In V2V, a malicious vehicle is an attacker that behaves differently from other normal vehicles in the vehicular network. The attacker can misbehave by launching a wide range of attacks to gain unauthorized access to the system for its own interest. The misbehavior becomes dangerous during emergencies in disconnected areas where there is no way to perform verification. In such situations, sensitive information (such as collision avoidance) is expected to be shared among vehicles. Tampering with such communication using false messages prevents legitimate vehicles from receiving the correct information on time, which severely impacts the network. This activity, as explained by [15], is known as a False Alert Attack, where an attacker misleads a vehicle or group of vehicles by broadcasting a false message indicating the opposite of the correct message. For example, the attacker alters a correct message that indicates a serious road condition (e.g., an accident) or creates a false message to disrupt the normal operation of traffic. These types of false alerts lead to undesirable consequences in the network, such as jamming and collisions.
Algorithm 1 presents the high-level pseudocode: whenever a message arrives, the attacker checks it and does one of the following: forwards the false message, ignores the correct message, or creates a false message if the received message is correct. The attacker then broadcasts the false message, which is received by the legitimate vehicles in its vicinity. A runnable sketch of this logic follows the algorithm.
Algorithm 1 False Alert Attack
Input: Received Message $M$
Output: Attacked Message $M_F$
 1: if received message == $M$ then
 2:     if content of $M$ == “correct data” then
 3:         Choose action:
 4:             1. Ignore the correct message $M$
 5:             2. Generate and broadcast a new $M_F$:
 6:                 Create False Message $M_F$
 7:                 Broadcast $M_F$
 8:     else
 9:         Forward the incorrect message $M$
10:     end if
11: else if received message != $M$ then
12:     Generate and broadcast a new $M_F$:
13:         Create False Message $M_F$
14:         Broadcast $M_F$
15: end if
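For illustration, the following is a minimal Python sketch of the attacker logic in Algorithm 1. The message representation and the is_correct() plausibility check are our assumptions; a real attacker would act on decoded V2V safety messages.

# A minimal sketch of the attacker logic in Algorithm 1 (illustrative only;
# the message fields and the is_correct() check are hypothetical).
import random

def false_alert_attack(message: dict) -> dict | None:
    """Return the message the attacker broadcasts, or None if it stays silent."""
    if is_correct(message):
        # The attacker either suppresses the correct message or replaces it
        # with a fabricated alert claiming the opposite road condition.
        if random.random() < 0.5:
            return None                      # ignore the correct message
        return forge_false_message(message)  # create and broadcast M_F
    return message                           # forward the already-false message

def is_correct(message: dict) -> bool:
    # Placeholder plausibility check; a real OBU would compare the claim
    # against its own sensor data.
    return message.get("content") == "correct data"

def forge_false_message(message: dict) -> dict:
    m_f = dict(message)
    m_f["content"] = "false data"  # e.g., deny an observed accident
    return m_f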

2.4. Misbehavior Report (MR)

In vehicular networks, the exact format of an MR is not yet fully defined. However, the ETSI standards provide a structured framework for securely reporting malicious activities. The MR format, as outlined in the guidelines of TS 103 759-V2.1.1-ITS [16], ensures the confidentiality, integrity, and authenticity of the data through encryption and digital signatures. This approach guarantees that reports of misbehaving vehicles are securely transmitted and verified within the network, helping to maintain the trust and reliability of the system.
Once misbehavior is detected, the vehicle has the opportunity to signal it by sending an MR to the RS later. The generated MR contains critical information, including the suspicious and related messages, a linked PC, a description of the misbehavior type, as well as the reporter's PC and the corresponding signature from the time of report creation. Figure 5 shows the structure of the MR format, as specified by ETSI ITS.
Table 1 illustrates the structure of the MR.
The preceding explanation of the MR structure sets the stage for discussing the functional requirements of the MR as defined by the IEEE Std 1609.2™ [17]. These requirements should be achieved to ensure the secure and effective collection of the reports.
  • The MR format shall be flexible to allow for any updates from other vehicles.
  • The identities of both the reporter and the reported (target) vehicles should be included in the MR.
  • Each MR must include evidence, such as the original received message and any relevant observations from other vehicles, which enables the MA to independently verify the MRs.
  • To avoid overloading the transmission channel, each reporter is limited to submitting only one report for each vehicle at a specific time.
In addition, the following security and privacy requirements have to be met:
  • Privacy protection: To avoid linkability, vehicles should rely on the PC so that the MA cannot link the short-term and the long-term identities of the reporters and reported vehicles.
  • Authentication and integrity: To ensure the authenticity and integrity of the exchanged information, each MR should be signed with the private key corresponding to the verification public key of the reporter.
  • Confidentiality: Each MR should be encrypted to protect the confidentiality of the information sent to the MA.
  • Non-repudiation: To allow the MA to verify the misbehavior, the MR should provide sufficient evidence, including the suspicious messages and the associated pseudonyms.
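To make the preceding structure concrete, the following is a hypothetical sketch of an MR as a data structure; the field names are our own illustration of the contents described above, not the ASN.1 schema defined in TS 103 759.

# Hypothetical sketch of an MR as a data structure (field names are illustrative).
from dataclasses import dataclass, field

@dataclass
class MisbehaviorReport:
    generation_time: int          # timestamp of report creation
    reporter_pseudonym: bytes     # reporter's pseudonym certificate (PC)
    reported_pseudonym: bytes     # target vehicle's PC
    misbehavior_type: str         # e.g., "false_alert", "failure_to_alert"
    suspicious_messages: list[bytes] = field(default_factory=list)  # evidence
    observations: list[bytes] = field(default_factory=list)  # neighbors' input
    signature: bytes = b""        # reporter's signature over the report body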

3. Related Work

This section analyzes the use of reputation systems in misbehavior reports and feedback mechanisms conducted in recent studies to maintain network trustworthiness and security in both connected and disconnected areas. In recent years, there has been significant research interest in using reputation systems to minimize malicious behaviors for trustworthy V2V communications. To illustrate, reputation-based malicious vehicle detection systems have been devised and are gaining popularity to effectively deal with the threat of malicious vehicles in vehicular networks. The source vehicle determines whether a vehicle is dangerous based on its reputation score and then finds a trusted communication path. A key point to remember is that V2V reputation research is categorized into centralized and decentralized models.
The centralized approach, pioneered by [18], revolves around a scheme that centrally distributes, updates, and stores vehicles’ reputation scores. That study introduces a reputation announcement scheme for VANETs using Time Threshold to assess message reliability. The researchers in [19] recently proposed a centralized system for highways and urban roads, relying on a central Trusted Authority to calculate feedback scores from various vehicles and update the target’s reputation. Moreover, Ref. [20] proposed an incentive provision method in which the RSU updates the sender’s reputation score based on observed actions validated by vehicles.
In contrast, distributed reputation systems operate without dependence on infrastructure. In this model, vehicles autonomously collect, maintain, and update reputation scores in an ad hoc manner. The authors in [21] developed a node reputation system to evaluate the reliability of vehicles and their messages. They grouped vehicles with similar mobility patterns that are close to each other into platoons to minimize propagation overhead. In 2021, Kudva et al. [22] introduced a framework for self-organized vehicles that filters out malicious vehicles based on standard scores. In 2014, Cao et al. presented a multi-hop version that utilized the carry-and-forward method [23]; the idea is to assess the message's reliability and aggregate reputation scores. In contrast, Katiyar et al. proposed a single-hop version that employs a single-hop reputation announcement [24]. By taking the vehicle's reporting history into account, these reputation systems improve the accuracy of accident detection. However, because messages and feedback are linked and not anonymous, these schemes do not provide enough privacy protection; as a result, an attacker can carry out a traceability attack and learn the path of a target vehicle.
The work by Jesudoss introduced a dynamic reputation scheme based on events that assigns reputation value according to the vehicle’s behavior [25]. Reputation values are awarded to each vehicle that sends messages that corroborate with other vehicles, which increases the vehicle’s trust degree with each verified message. However, this scheme is insufficient in mitigating the threat posed by Sybil attacks, where a malicious vehicle might have multiple fake identities to send false messages.
Two-pass validation and a two-phase transaction were included in the blockchain-based traffic event validation suggested by the authors of [26]. They employed a consensus mechanism referred to as Proof of Event, which uses two criteria. Vehicles send alerts to the RSU, which only accepts them for a set amount of time. The RSU enters the notification phase once the number of alerts exceeds the first threshold. With the assistance of approaching vehicles, it can validate the alert, add it to the local blockchain, and use multi-hop transmissions to notify neighboring vehicles about the incident. Once all RSUs have agreed that the incident is correct and have reviewed the supporting evidence and reports, the RSU notifies the other RSUs in the same zone. The event is then appended to the global blockchain, which includes all local events. Vehicles can access the public global blockchain for event verification. Unlike previous consensus methods, Proof of Event uses timestamps to select the block submitter, which reduces power usage. However, based on their proposal, the size of the global blockchain is enormous, since it might encompass all the events in a very large geographic area, meaning that a significant number of events are likely to be added to the blockchain every day. Furthermore, their work lacks mutual authentication details, and when 40% of alert submitters are internal attackers, the false-event success rate reaches 100%.
In 2023, the work by [9] presented a trust model that employed an ID-based signature, a Hash Message Authentication Code (HMAC), and an RSA-based method to detect malicious and incorrect messages. The authors in [27] suggested an elaborate reputation approach that considers the message's reliability and the sending vehicle's participation. Every vehicle is tracked, and based on the conduct of the watched vehicle, a trust score is assigned. However, these solutions still have several shortcomings and limitations: they are only helpful for limited types of attacks, have high computational complexity, and lack scalability.
The proximity-based approach prioritizes accident messages from the vehicle closest to the accident location [28]. This method improves accident detection accuracy and successfully reduces the influence of conflicting messages. However, it relies mostly on proximity as a factor in dispute resolution rather than taking other factors into account. In 2021, Vaiana et al. [29] introduced a hybrid approach that combines reputation values, proximity analysis, and severity assessment for evaluating accident reports. Their study shows the effectiveness of considering severity in reconciling conflicting messages and improving accident detection accuracy.
In 2023, the work by [30], proposed a cooperative scheme to detect and prevent false emergency messages, improving V2V reliability with minimal computational overhead. In 2024, Yang et al. [31] suggested the use of a reputation server as a reputation authority to generate reputation certificates; this approach is referred to as certified reputation, which was first proposed in [32]. Samara and Alsalihy [33] proposed a new reputation mechanism to identify malicious vehicles in V2V. This mechanism issues Valid Certificates (VCs) or Invalid Certificates (ICs) status for each vehicle and allows the vehicle to make a decision based on the sender’s certificate status. The authors address the delays, overhead processing, and channel interference associated with CRL. However, this study ignored the use of traditional (SCMS), which is an essential part of V2V communication.
In the context of misbehavior detection in vehicular networks, various machine learning techniques have been explored to improve misbehavior detection and reputation evaluation. DBSCAN, a density-based clustering algorithm, has been one of the practical approaches for detecting anomalies and setting the ground truth by identifying dense clusters against outliers. The work by [34] indicated the effectiveness of DBSCAN in black hole attack detection by integrating it with decision trees, showing its capability to isolate malicious behavior. Similarly, the Isolation Forest has been an efficient anomaly detection technique; for example, in 2020, Ripan et al. [35] applied iForest for cyber anomaly classification by leveraging its ability to isolate outliers through recursive partitioning. This concept was further extended with Deep Isolation Forest, which incorporates deep learning to refine anomaly detection accuracy, as shown in [36]. This combined classification approach shares several similarities with DRAMBR. In our proposed system, DBSCAN is implemented to cluster misbehavior reports and set the ground truth based on the reporters' reputations, while iForest is used to identify malicious versus genuine reporters by highlighting outlier patterns.
In addition to anomaly detection, classification models such as GMM, XGBoost, and Random Forest further improve the accuracy of reputation evaluation. The authors in [37] proposed a GMM-based classification approach that effectively models uncertain and probabilistic distributions, a valuable tool for differentiating misbehavior patterns in DRAMBR. Moreover, the XGBoost classifier has been widely recognized for its powerful predictive capabilities, as demonstrated in [38], where the author applied XGBoost across multiple datasets for classification and decision-making. In DRAMBR, the XGBoost algorithm is used to improve the accuracy of misbehavior classification from past reputation data. In [39], Random Forest is integrated with XGBoost, offering ensemble learning that makes decisions more robust by reducing bias and variance in classification, ensuring a proper evaluation of a misbehavior report.
In response to ongoing research efforts to develop a reliable and trustworthy V2V communication system that utilizes reputation from neighboring vehicles, this study addresses the challenge of detecting dishonest vehicles within disconnected environments. We have reviewed the relevant literature on centralized and decentralized reputation systems for the proposed DRAMBR framework. Centralized systems, such as those proposed in [18,19], can implement reliable aggregation with consistent threshold management but suffer from scalability and connectivity issues. In contrast, decentralized systems such as [22,25,33] scale better but lag in trust consistency. Recognizing the limitations imposed by network disconnectivity, where existing standards may be insufficient, the proposed DRAMBR framework, designed in alignment with SCMS standards [17], addresses these challenges by integrating dynamic reputation updates and advanced clustering techniques to assess the credibility of received messages before submitting the feedback reports to the centralized server, thereby enhancing the accuracy and reliability of the reporting system.

4. System Model

The proposed DRAMBR represents a fully integrated reputation solution designed to manage the reporting process effectively under challenging conditions in V2V communication. The aim is to enhance the trustworthiness of V2V communications by monitoring vehicle platoon behavior, detecting any misbehavior, and reporting it back to a central RS that assigns and periodically updates their RVs to demonstrate their trustworthiness.
The system operates in two primary phases: Offline Evaluation (OE) and Online Reporting (OR). In the OE phase, vehicles assess each other's behavior and store MRs locally without central connectivity, enabling continuous trust management. In the OR phase, upon reconnecting to the Internet, the vehicles send the accumulated MRs to the RS. The RS aggregates, classifies, and analyzes the MRs, reducing processing overhead.
The system can be implemented in both urban (online) and rural (offline) areas. In urban areas, it reduces computational complexity on the RS by decentralizing the RV, thereby improving scalability and efficiency. In rural areas, vehicles benefit from sharing their RVs without relying on a central authority, ensuring continued reliability even in disconnected scenarios.
In this section, we first outline the main system assumptions. Next, we explain vehicle behaviors, highlighting the threat model relevant to this study. Following that, we introduce the DRAMBR framework. These explanations set the stage for discussing the main phases of DRAMBR.

4.1. DRAMBR Assumptions

To ensure a realistic communication network, our study relies on assumptions about vehicle capabilities, behaviors, and system conditions. These assumptions reflect practical and feasible conditions in real-world vehicular networks. The key assumptions are outlined below:
  • Pseudonymity: Vehicles interact via pseudonyms and communicate over an anonymous network.
  • Connectivity setup: In the OE phase, vehicles communicate out of range of network connectivity; hence, periodic synchronization with the network to obtain the latest CRLs and RVs is limited.
  • Reputation setup: Each vehicle is initialized with an RV. The RV increases with positive behavior and decreases with misbehavior (a minimal update sketch follows this list).
  • Independent behavior: Each vehicle can behave either honestly or maliciously. The vehicle's behavior is independent of others, such that the actions of vehicle $V_i$ do not influence the behavior of vehicle $V_j$ for $i \neq j$.
  • Communication range: Vehicles communicate using Dedicated Short-Range Communication (DSRC) technology, developed specifically to provide reliable and effective communication in a range of 300–900 m, ensuring only nearby vehicles can interact even without connectivity.
  • OBUs detection: Each vehicle is equipped with On-Board Units (OBUs) that detect any abnormal activity or irregular behavior.
  • Messages checking: Each vehicle has a mechanism to check its outgoing messages and detect any misbehavior before they are transmitted.
  • Limited misbehavior reporting: Not all observed activities of misbehavior result in generating and transmitting the MR. The decision-making process regarding whether a vehicle generates an MR based on observed misbehavior is specific to the vehicle’s implementation.
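To make the reputation setup assumption concrete, below is a minimal sketch of a bounded RV update rule. The asymmetric step sizes and the clamp to [0, 1] are illustrative assumptions; the paper specifies the RV range in Section 5 but not a particular update rule.

def update_rv(rv: float, behaved_well: bool,
              reward: float = 0.05, penalty: float = 0.20) -> float:
    """Bounded reputation update: reward positive behavior, penalize misbehavior.

    The asymmetric step sizes (slow gain, fast loss) are an illustrative choice.
    """
    rv = rv + reward if behaved_well else rv - penalty
    return min(1.0, max(0.0, rv))  # keep RV within [0, 1]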

4.2. Vehicle Behavior in OE and OR

Vehicles can act very differently on a network depending on their intentions and strategy. While an honest vehicle always follows protocols and reports truthfully, malicious ones may exhibit dynamic behavior, for instance by deliberately causing harm in the system, such as sending misleading information, or by behaving strategically honestly with the intent of gaining confidence to remain undetected; their actions are therefore challenging to anticipate and control. Generally, trust and reputation-based systems are exposed to the two main behaviors listed below:
  • Honest:
    1—In V2V, create, broadcast, or forward correct messages ($Msgs$) (OE Phase).
    2—In V2I, create and submit correct MRs (OR Phase).
  • Malicious:
    1—In V2V, ignore the correct $Msgs$ or create and broadcast false $Msgs$ (OE Phase).
    2—In V2I, create and submit false MRs (OR Phase).
The system distinguishes between intentional and unintentional misbehavior (see Figure 6), with the latter encompassing all vehicle faults and error scenarios. While we take all these considerations into account, the primary focus is on the accuracy of MR submission and the behavior of the reporters. Specifically, we analyze the causes of false MRs ($MR_f$) and how the reporter's reliability impacts the overall trustworthiness of the system.
A significant threat scenario arises from an MR attack ($MR_f$ reporter), in which an attacker with a high RV manipulates the reporting system by not reporting observed misbehavior and generating an $MR_f$ against an honest vehicle, as illustrated in Figure 7. This type of activity is similar to the badmouthing attack [40], which might result in assigning a high RV to a vehicle that deserves a lower one and a low RV to one that deserves a higher one. This tactic inflates the reputation of certain vehicles, making them seem more trustworthy, while deflating the reputation of others and harming their credibility. Such rating manipulation distorts accurate feedback, misleads other vehicles, and undermines trust in the platform's rating system.
The MR attack in Algorithm 2 shows the attack activity considered in this study. Assuming $V_j$ is behaving honestly, the attacker targets $V_j$ by generating a false misbehavior report ($MR_f$). The $MR_f$ is generated only if the status of the target vehicle is not already flagged as misbehaving ($S(V_j) \neq$ misbehaving). If the target vehicle is already flagged as misbehaving, no MR is sent and the attack halts.
To mitigate such an attack, our system follows a thorough process of comparison and aggregation, ensuring a more accurate evaluation as illustrated in the following sections.
Algorithm 2 $MR$ Attack
Input: $V_j$, $V_M$, $S(V)$
Output: $MR_f$ or Null
1: if $S(V_j) \neq$ misbehaving then                         ▹ Target is honest
2:     $MR_f \leftarrow$ GenerateMR($V_j$) + AttachCertificate($PC$)
3:     Send $MR_f$ to $RS$
4: else
5:     $MR_f \leftarrow$ Null                    ▹ No MR sent for misbehaving vehicle
6: end if

4.3. DRAMBR Framework

The DRAMBR framework is shown in Figure 8. We introduce a new entity, the RS, which provides the RV. The RS is linked to the MA in the SCMS. During the reputation retrieval process, the RS pre-signs the RV using the Pre-Signature scheme proposed in [2]. The RV is then sent to the requesting vehicle to complete the signature and attach it to the message with the PC.
All MRs are submitted to the RS, which checks and evaluates all received MRs. In this context, we set a threshold for accepting the MR as $\tau_{Rep}$, and we denote the vehicle reporting the misbehavior as $Rep_{Veh}$ and the target vehicle as $Tar_{Veh}$. Our reputation system is designed to consider MRs only if they meet $\tau_{Rep}$ and to ignore any MR from a $Rep_{Veh}$ with $RV < \tau_{Rep}$.
The system’s efficiency is highlighted by its ability to handle contradictions between the MRs accurately. If most MRs contradict a specific MR, indicating the same misbehavior, the RS does not immediately classify this as malicious reporting. Instead, it evaluates the context to determine whether the discrepancy is due to an error or potential attack activity to make a fair and accurate decision while maintaining system reliability.
To set the stage for the DRAMBR evaluation process, we first outline the main entities involved, as shown in Table 2.

4.4. Workflow and DRAMBR Phases

As shown in Figure 8, DRAMBR begins with the registration phase, the initial phase. In this phase, the vehicle will be connected to the infrastructure (RSU or RS) to establish a unique credential and upload the PCs as well as retrieve the RV. We divide the overall process into two broad stages, which are OE and OR, each further divided into subphases to present an integrated security and trust management for V2V communication.

4.4.1. Process 1: Offline Evaluation (OE)

This phase occurs offline, where vehicles evaluate the accuracy of road status information within the offline network. Vehicles monitor each other's behavior and trustworthiness in an ad hoc manner, without relying on trusted authorities to identify potential misbehavior. Before diving into the main steps of the OE process, we first explain the critical events that trigger the local detection and evaluation mechanisms within the system. The trigger event $O(M)$ for identifying misbehavior occurs when the $Tar_{Veh}$ ($V_j$) transmits information that contradicts observable conditions, which is the basic event that drives the system responses. $O(M)$ could indicate misbehavior in two possible activities:
  • Failure Alert Transmission: When $V_j$ detects an incident (e.g., an accident) but does not transmit it, nearby observing vehicles ($Rep_{Vehs}$) may detect this omission through their OBUs or reports from other vehicles.
  • False Alert Transmission: When $V_j$ transmits an emergency alert when no accident or hazard exists, observing vehicles compare this false claim with their OBU data and messages from others.
These misbehavior activities are referred to as conflicting messages, where multiple messages provide inconsistent or contradictory information. While the work by [41,42,43] addresses the issue of receiving conflicting messages, our focus in this OE process is to accurately generate MRs based on observations in offline mode.
Various possibilities exist for reporting abnormal behavior under emergencies as stated in Table 3. Outlining these actions is essential to analyze and evaluate the vehicle’s behavior in various scenarios involving emergency events. The table shows different possibilities that vehicles can take based on whether they observe a misbehavior and whether they intend to generate an accurate or false MR. O(R), O(NR), O(NMR), and O(NNMR) capture different aspects of reporting behavior, including truthful, non-reporting, or malicious reporting. These action estimations become part of the system process to aid in decision-making, wherein the RS assesses incoming MRs and verifies them through RVs, cross-verification, and the likelihood of correct reporting behavior.
We propose a multi-step process called Local Misbehavior Detection Mechanism (LMDM) in order to cope with insider attackers; see Figure 9. LMDM represents the OE process operating at the local level to detect misbehaviors by analyzing reports and interactions within a localized scope. It is a component of DRAMBR that detects misbehavior locally by directly observing malicious activities (e.g., directly observing a situation incompatible with a received message) or indirectly by receiving conflicting messages, at least one of which must be false. The remainder of DRAMBR concerns storing, reporting, aggregating and integrating the observations. The detection criteria in this phase are as follows:
  • Message Integrity: Ensure that received messages are not altered.
  • Communication Frequency: Detect flooding attacks if a vehicle sends an excessive number of messages.
  • Message Validity: Verify that the content of the message (e.g., location or speed) matches observed reality.
In this stage, preliminary misbehavior reports are generated based on the immediate surroundings (direct observation) or V2V communications (indirect observation). The LMDM outputs serve as inputs to the DRAMBR OR process. LMDM includes five main steps, as explained below:
  • Detection: The OBUs of the evaluator ($V_i$) actively detect and identify irregularities and potential misbehavior within the network.
  • Evaluation: $V_i$ evaluates the misbehavior to decide whether or not to generate an MR for the observed misbehavior event. To confirm the misbehavior through collective evaluation, $V_i$ communicates with nearby vehicles if any are present. If no other vehicles are available in the area for verification, $V_i$ proceeds to make an independent decision based on the available evidence. This approach has been discussed in [44], where vehicles collaborate to validate suspicious activities.
  • Decision: In this phase, $V_i$ creates the MR based on the information provided by the local misbehavior detection service and, optionally, other evidence obtained from other vehicles.
  • Storage: $V_i$ stores the MR either to share it with other nearby vehicles or to submit it later to the RS upon connectivity.
  • Transmission: After MR creation, $V_i$ decides whether to share the MR with other nearby vehicles or to store it. If multiple MRs are available to send, $V_i$ has to decide which ones to send and in which order.
It is worth noting that during the LMDM process, we add the key functional component State. As outlined in the ITS standards [17], this is responsible for storing and managing information used by other parts of the system. It manages three key factors:
  • MR Creation: Allocates processor time and signing resources amid competing demands.
  • Storage: Ensures MRs fit within available storage, prioritizing critical ones.
  • Transmission: Manages limited connectivity, prioritizing essential MRs for timely transmission.
In the current state of the ITS standards [17], every false message $Msg$ flagged as misbehaving is reported. However, not every $Msg$ should be separately reported, as this would cause significant network overhead, particularly when the misbehavior results from a faulty component in the vehicle's system. Consequently, the MR format allows for omitted MRs, which means that the $Rep_{Veh}$ temporarily stops generating repeated MRs for the same $Tar_{Veh}$ about the same misbehavior after detecting it. Instead, it continues collecting relevant evidence over time. Once enough proof is gathered, the $Rep_{Veh}$ sends a single, detailed MR to the RS. The protocol assumes that the RS is capable of prioritizing the quality and significance of the MR's content rather than just counting how many MRs it receives. This method increases reporting efficiency and decreases redundant communications.
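A minimal sketch of this omitted-MR behavior on the reporter side is given below; keying by (target, misbehavior type) and the evidence threshold are our illustrative assumptions.

from collections import defaultdict

class OmittedMRCollector:
    """Suppress repeated MRs for the same (target, misbehavior) pair and
    accumulate evidence until one consolidated report is worth sending."""

    def __init__(self, evidence_threshold: int = 5):
        self.evidence = defaultdict(list)
        self.evidence_threshold = evidence_threshold

    def observe(self, target_id: str, misbehavior_type: str, msg: bytes):
        # Record evidence instead of emitting a new MR for every observation.
        self.evidence[(target_id, misbehavior_type)].append(msg)

    def reports_to_send(self):
        """Yield one detailed MR per pair once enough proof is gathered."""
        for (target, mtype), msgs in list(self.evidence.items()):
            if len(msgs) >= self.evidence_threshold:
                yield {"target": target, "type": mtype, "evidence": msgs}
                del self.evidence[(target, mtype)]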

4.4.2. Process 2: Online Reporting (OR)

To send an MR to the RS, the $Rep_{Veh}$ may use different communication channels for reporting the $Tar_{Veh}$. In line with the TS 103 759-V2.1.1 standard [16], the following communication protocols are considered for establishing the V2I connection:
  • A DSRC short-range link via RSU or a cellular network link (3G, 4G, or 5G).
  • A wireless or wired connection at an electric vehicle charging station.
  • A Wi-Fi hotspot that offers Internet access, such as in a parking lot or a private hotspot at home.
  • Running the Vehicle On-Board Diagnostic (OBD) port and a diagnostic system at the inspection workshop or service garage.
Upon connectivity, the following subphases have to be performed to complete the process of the OR.
  • RS Communication: Vehicles establish a secure connection with the RS, either via RSUs or directly. To achieve such communication with the RS during the MR submission process, the ISO 15118 standard mandates Transport Layer Security (TLS) for secure communication between vehicle and infrastructure. Through the TLS handshake, vehicles and the RS are securely authenticated, and cryptographic algorithms (cipher suite parameters) are negotiated. These steps generate the TLS master key used to encrypt and decrypt the communication messages between vehicles and the RS, achieving secure communication (authentication, integrity, and confidentiality).
  • MR Reporting: Once the communication is verified, $Rep_{Vehs}$ submit the stored MRs. This phase ensures that local evaluations feed into the global trust framework by enabling the RS to update the reputation and notify the rest of the network about the detected misbehaving vehicles. Based on the offline observations discussed in Table 3, we consider four reporting conditions, as outlined in Table 4.
    We analyze the main containers of the MR format, as previously explained and illustrated in Figure 5. The MR is composed of three containers:
    (a) Header: Includes the fundamental data that an MR should have, such as the MR generation time, $Rep_{Veh}$ ID, $Tar_{Veh}$ ID, and MR type.
    (b) Source: Includes the misbehavior results, where the $Rep_{Veh}$ flags the $Tar_{Veh}$ if the received $Msg$ shows implausibilities.
    (c) Evidence: Contains the $Tar_{Veh}$ $Msg$ and the $Rep_{Veh}$ $Msg$, or any $Msg$ from neighboring $Rep_{Vehs}$ if believed helpful. The evidence could also include other supporting information, such as a Local Dynamic Map (LDM) or direct sensor data from the $Rep_{Veh}$ OBUs. The detailed evidence for misbehaving vehicles required by the MA is further explained in [13].
    Having outlined the main containers of the MR format, we now turn our attention to the three expected versions of the MR received by the RS based on this format:
    • Base MR: This basic version includes only the header and source containers, without any evidence.
    • Beacon MR: In this version, the $Rep_{Veh}$ includes a base MR and the suspicious $Tar_{Veh}$ $Msg$ as evidence.
    • Evidence MR: This version contains a detailed report with more complete misbehavior information, depending on the type of plausibility check failure. For example, if the $Tar_{Veh}$ failed the speed consistency check, the $Rep_{Veh}$ includes all related inconsistent $Msgs$ in the Evidence container for deeper investigation.
    These MR versions support an efficient reporting process by allowing the RS to aggregate evidence and send a single comprehensive MR to the MA, reducing network overhead and ensuring accurate incident tracking.
  • RS Process: The RS in our system is a centralized entity that collects MRs from reporters, evaluates their credibility to determine reputation scores, decides on the suitable reaction, and forwards verified MRs to the MA for further action; Figure 10 illustrates the OR process.
    We define three main functions of the RS:
    • MR Grouping and Structure: The RS collects all the MRs and adds them to its database. This step enables access to MRs using specific criteria. For instance, the RS can retrieve all the MRs associated with a certain pseudonym or all the MRs from a specific area; such queries are helpful in the analysis phase. Additionally, the RS filtering system aggregates similar MRs, such as those showing speed inconsistencies.
    • MRs Analysis and RS Actions: The RS analyzes the MRs to determine the correct reaction. Correct MRs mostly align with a $Rep_{Veh}$'s past behavior and match the majority consensus, while $MR_f$ often deviate from or contradict these patterns. Since setting the event ground truth in offline communication is challenging without infrastructure to confirm the event, the RS establishes the event baseline using Density-Based Spatial Clustering of Applications with Noise (DBSCAN) in its initial classification, based on dense regions of consistent MRs, with confidence determined by the $Rep_{Vehs}$' RVs to ensure reliability. Once the ground truth is established and the DBSCAN clusters are formed, the RS applies the iForest to detect any anomalies within the DBSCAN outputs. This step ensures the flagging of potential $MR_f$. Furthermore, the RS applies GMM predictions to classify honest, malicious, and erroneous reporters, refining the categorization of MRs and enhancing classification accuracy (a minimal pipeline sketch follows this list). The technical aspects of these steps are discussed in Section 5. Table 5 outlines the classification of how incoming MRs are evaluated for potentially misbehaving $Rep_{Vehs}$ based on various inconsistencies.
    • RS Decision: We propose a simple threshold method for the RS to trigger reaction levels based on a flexible MR count. While an accurate misbehavior reaction is still a debated subject in the ITS, we propose three levels of RS reaction for both the $Rep_{Veh}$ and $Tar_{Veh}$, as shown in Table 6.
  • Blacklist and Penalty Enforcement: Potentially misbehaving vehicles ($Rep_{Veh}$ and $Tar_{Veh}$) will be reported to the MA. If the RV drops below the pre-set threshold, the RS develops a detailed MR and sends it to the MA within the SCMS. The MA decides on the seriousness of the low RV, utilizes misbehavior detection algorithms to assess the situation, and notifies all participants regarding the detected misbehavior in two stages of reaction:
    (a) Passive revocation: The $Tar_{Veh}$ is blocked from requesting new certificates, and the $Rep_{Veh}$ is temporarily suspended from reporting privileges.
    (b) Active revocation: The $Tar_{Veh}$'s certificates are revoked and added to the CRL. The $Rep_{Veh}$ is permanently banned from submitting MRs.
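The pipeline sketch referenced above, using scikit-learn (a library choice we assume for illustration; the feature layout of report position, time, and reporter RV is a simplification of the real MR contents):

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import IsolationForest
from sklearn.mixture import GaussianMixture

def analyze_reports(features: np.ndarray):
    """features: one row per MR, e.g., [x, y, timestamp, reporter_rv]."""
    # Stage 1: cluster spatially/temporally consistent MRs to set the ground truth.
    labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(features)

    # Stage 2: flag anomalous reports within the clustered output.
    anomaly = IsolationForest(random_state=0).fit_predict(features)  # -1 = outlier

    # Stage 3: probabilistic classification of reporters
    # (e.g., honest / erroneous / malicious) via a 3-component GMM.
    gmm = GaussianMixture(n_components=3, random_state=0).fit(features)
    category = gmm.predict(features)

    return labels, anomaly, category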

4.4.3. DRAMBR Privacy

Integrating our reputation system with the SCMS ensures privacy by employing PCs that hide actual identities. However, the question of when and how a PC change occurs remains unresolved. Numerous techniques have been proposed to determine where and how often PCs change [45]. Mechanisms that focus on node-centric misbehavior detection (MBD) require a consistent identity to effectively monitor and evaluate the behavior of the $Tar_{Veh}$. However, privacy-preserving strategies based on PCs introduce identity changes, complicating accurate tracking. Therefore, considering an appropriate PC change strategy, as outlined below, becomes essential to balancing privacy and accountability within our system.
  • Random: The PC has a predefined probability of changing with each outgoing message.
  • Disposable: The PC is used for a predetermined number of messages, such as beacons and warnings.
  • Periodical: After a predetermined amount of time, the vehicle changes its PCs.
  • Distance: After a predetermined number of kilometers, the vehicle changes its PCs.
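These four strategies can be expressed as simple predicates over a vehicle's state, as in the following sketch; the threshold values are illustrative assumptions, not values from the standards.

import random

# Illustrative predicates for the four PC change strategies
# (thresholds are assumptions, not values taken from the standards).
def should_change_pc(strategy: str, state: dict) -> bool:
    if strategy == "random":       # predefined probability per outgoing message
        return random.random() < state.get("p_change", 0.1)
    if strategy == "disposable":   # PC used for a fixed number of messages
        return state["messages_sent_with_pc"] >= state.get("max_messages", 100)
    if strategy == "periodical":   # change after a fixed time interval
        return state["pc_age_seconds"] >= state.get("max_age_seconds", 300)
    if strategy == "distance":     # change after a fixed distance traveled
        return state["km_with_pc"] >= state.get("max_km", 5.0)
    raise ValueError(f"unknown strategy: {strategy}")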

5. DRAMBR Technical Implementation

This section outlines the adopted computational processes across the two stages of DRAMBR to establish a foundation for the experiment design and evaluation. Table 7 defines the notations used throughout the DRAMBR system.

5.1. DRAMBR: Offline Evaluation

We assume that the system consists of N vehicles $\{V_1, V_2, \ldots, V_N\}$, where each vehicle communicates with others within the network. Let $V_j$ be the $Tar_{Veh}$ observed by the $Rep_{Veh}$ $V_i$. $V_i$ evaluates multiple messages, each containing an Action-ID, which uniquely identifies actions (e.g., initiate, update, cancel) as per the ETSI EN 302 637-3 V1.2.1 standard [17]. The system associates each incident with an Incident-ID ($I_{inc}$) that aligns with the Action-ID to ensure consistent tracking and grouping of related messages.
Let $RV_{V_i}$ represent the RV of vehicle $i$, where $i \in \{1, 2, \ldots, N\}$. $RV_{V_i}$ is a continuous variable defined in the range $[0, 1]$. Thus, $RV_{V_i}(t) = 1$ implies that vehicle $V_i$ is fully trusted at a specific time $t$, and $RV_{V_i}(t) = 0$ indicates complete distrust. We set the RV threshold as $\tau_{Rep}$. Each vehicle $V_i$ makes an acceptance decision based on the RV:
$$\text{Msg Acceptance Decision} = \begin{cases} \text{Accept } Msg \text{ from } V_j, & \text{if } RV_{V_j}(t) \geq \tau_{Rep} \\ \text{Ignore } Msg \text{ from } V_j, & \text{if } RV_{V_j}(t) < \tau_{Rep} \end{cases}$$
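Expressed in code, this acceptance rule is a single comparison (function and parameter names are our own):

def accept_message(rv_sender: float, tau_rep: float) -> bool:
    """Accept a Msg only if the sender's current RV meets the threshold."""
    return rv_sender >= tau_rep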
After $Msg$ acceptance, if $V_i$ identifies $V_j$ as a misbehaving vehicle, it generates and encrypts a misbehavior report (MR) as follows:
  • Vehicle $V_i$ generates a misbehavior report $MR$ regarding $V_j$.
  • The $MR$ is symmetrically encrypted using a session key $K_s$, producing the ciphertext
    $$C = \mathrm{Enc}_{K_s}(MR)$$
  • The session key $K_s$ is encrypted with the RS's public key $K_{\mathrm{pub}}^{RS}$, resulting in
    $$K_s^{*} = \mathrm{Enc}_{K_{\mathrm{pub}}^{RS}}(K_s)$$
  • The ciphertext $C$ is signed by $V_i$ using its private key $K_{\mathrm{priv}}^{i}$, yielding the digital signature
    $$\sigma_i = \mathrm{Sign}_{K_{\mathrm{priv}}^{i}}(C)$$
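The sketch below illustrates this hybrid construction with the Python cryptography library. The paper does not fix the primitives, so AES-256-GCM for the symmetric layer, RSA-OAEP for wrapping $K_s$, and ECDSA for the signature are assumptions of this example:

```python
import json
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_and_sign_mr(mr: dict, rs_public_key, reporter_private_key):
    """Hybrid encryption of an MR, mirroring the three formulas above."""
    # C = Enc_{K_s}(MR): fresh 256-bit session key, AES-GCM, nonce prepended
    k_s = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    c = nonce + AESGCM(k_s).encrypt(nonce, json.dumps(mr).encode(), None)
    # K_s* = Enc_{K_pub^RS}(K_s): wrap the session key for the RS (RSA-OAEP assumed)
    k_s_star = rs_public_key.encrypt(
        k_s,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    # sigma_i = Sign_{K_priv^i}(C): ECDSA signature over the ciphertext
    sigma_i = reporter_private_key.sign(c, ec.ECDSA(hashes.SHA256()))
    return c, k_s_star, sigma_i
```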

5.2. DRAMBR: Online Reporting

The reporting process consists of the secure transmission of MRs from the $Rep_{Veh}$ to the RS. The RS verifies the MRs, checks their correctness, and updates the RVs of both the $Rep_{Veh}$ and the $Tar_{Veh}$, while preserving privacy through PCs. Figure 11 illustrates the multi-stage process adopted in our system, which incorporates different classification techniques for efficient and effective MR evaluation.
  • MR Decryption and Verification: Upon receiving the MR, the RS performs the following operations (a companion sketch follows this item):
    (a) The RS decrypts $K_s^{*}$ using its private key $K_{\mathrm{priv}}^{RS}$, retrieving the session key $K_s$:
    $$K_s = \mathrm{Dec}_{K_{\mathrm{priv}}^{RS}}(K_s^{*})$$
    (b) The RS decrypts the ciphertext $C$ to retrieve the MR:
    $$MR = \mathrm{Dec}_{K_s}(C)$$
    (c) Using the public key $K_{\mathrm{pub}}^{i}$ from the reporter's PC, the RS verifies the signature $\sigma_i$, ensuring the integrity of the report:
    $$\mathrm{Verify}_{K_{\mathrm{pub}}^{i}}(C, \sigma_i)$$
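A matching RS-side sketch under the same assumed primitives as the encryption example above; verifying the signature before decrypting is a design choice of this sketch rather than an ordering the paper mandates:

```python
import json

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_and_verify_mr(c, k_s_star, sigma_i, rs_private_key, reporter_public_key):
    """RS-side processing of a received MR."""
    # (c) Verify sigma_i over C first (raises InvalidSignature on failure)
    reporter_public_key.verify(sigma_i, c, ec.ECDSA(hashes.SHA256()))
    # (a) K_s = Dec_{K_priv^RS}(K_s*): unwrap the session key
    k_s = rs_private_key.decrypt(
        k_s_star,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    # (b) MR = Dec_{K_s}(C): the 12-byte nonce was prepended at encryption
    nonce, ciphertext = c[:12], c[12:]
    return json.loads(AESGCM(k_s).decrypt(nonce, ciphertext, None))
```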
  • MR Preprocessing and Acceptance Criteria: In this step, MRs are filtered to discard those with invalid certificates or from low-RV $Rep_{Veh}$s, ensuring that only valid, reliable, and unique MRs are considered in further analysis.
    (a) The RS checks $V_i$'s pseudonym certificate $pc$:
    $$pc = \begin{cases} 1 & \text{if } V_i \text{ has a valid certificate}, \\ 0 & \text{if } V_i \text{ has an invalid certificate}, \end{cases}$$
    and the MR from $V_i$ is accepted only if $pc = 1$.
    (b) The RS checks $V_i$'s RV: the MR from $V_i$ is accepted only if $RV \ge \tau_{Rep}$.
  • MRs Grouping Using DBSCAN
    This step identifies clusters of consistent MRs that likely reflect genuine events. We use the clustering algorithm DBSCAN, which identifies clusters in a dataset based on the density of data points. DBSCAN does not require a predefined number of clusters, which makes it particularly useful in scenarios with unknown cluster sizes or irregularly shaped clusters [34].
    In our scenario, the RS uses this step to establish the ground truth of what occurred during offline communications, without relying on real-time infrastructure support. DBSCAN groups data points into clusters based on two parameters:
    • Epsilon ( ϵ ): The maximum distance between two points for them to be considered neighbors.
    • Minimum Points (minPts): The minimum number of points required to form a dense region (a cluster).
Figure 12 illustrates the DBSCAN idea as follows:
Figure 12. Illustration of DBSCAN. Source: [46].
    • Core Points (red points): These have at least minPts neighbors within ϵ -distance.
    • Border Points (yellow points): These are within ϵ -distance of a core point but have fewer than minPts neighbors.
    • Noise Points (blue points): These do not meet the density criteria and are treated as outliers.
    We adopted the DBSCAN process in our system to validate all the MRs generated in the offline communication. Hence, the RS can establish the ground truth regarding all the events, and the following steps are applied:
    (a) Incident ID: First, the MRs are grouped by their Incident ID ($I_{inc}$), which ensures that only MRs relevant to the same event are processed together:
    $$R_{I_{inc}} = \{ MR_i \mid I_i = I_{inc} \}$$
    Each group $R_{I_{inc}}$ corresponds to a unique incident, isolating the observations related to that incident.
    (b) DBSCAN Inputs: Within each group $R_{I_{inc}}$, DBSCAN is applied to identify consistent clusters of reports.
    • Feature set: $x_i = [L_i, T_i]$, where $L_i$ is the location of the $Rep_{Veh}$ and $T_i$ is the timestamp of the MR.
    • Parameters: $\epsilon_L$, the spatial threshold for proximity; $\epsilon_T$, the temporal threshold for proximity; and minPts, the minimum number of reports required to form a cluster.
    Reports $MR_i$ and $MR_j$ within $R_{I_{inc}}$ are considered part of the same cluster $G_{I_{inc},k}$ if
    $$\|L_i - L_j\| \le \epsilon_L \quad \text{and} \quad |T_i - T_j| \le \epsilon_T.$$
    Reports that do not meet these criteria or do not belong to any cluster are treated as noise ($N_{I_{inc}}$).
    Outputs:
    • Clusters ($G_{I_{inc},k}$): groups of MRs consistent in location and time, representing reliable observations.
    • Noise ($N_{I_{inc}}$): MRs flagged as outliers ($MR_f$ candidates), which may indicate malicious or erroneous behavior.
    (c) Determining Ground Truth: For each cluster $G_{I_{inc},k}$, the following steps are performed (a clustering sketch follows this list):
    1. Majority Voting for Reported Status ($S_{I_{inc},k}$): the dominant reported status (e.g., "accident" or "no accident") is determined using
    $$S_{I_{inc},k} = \arg\max_s \; \mathrm{Count}(S_i = s \text{ in } G_{I_{inc},k}).$$
    2. Confidence Aggregation for Ground Truth: the system assesses the reliability of a cluster by computing the confidence of the majority-reported status $S_{I_{inc},k}$ from the reporters' RVs:
    $$\mathrm{Confidence}(S_{I_{inc},k}) = \frac{\sum_{MR_i \in G_{I_{inc},k},\; S_i = S_{I_{inc},k}} C_i}{\sum_{MR_i \in G_{I_{inc},k}} C_i},$$
    where the numerator aggregates the confidence scores $C_i$ of reports within the cluster that align with the majority-reported status, and the denominator aggregates the confidence scores of all reports in the cluster.
    3. Final Decision: if $\mathrm{Confidence}(S_{I_{inc},k}) \ge \mathrm{Threshold}$, then $S_{I_{inc},k}$ is accepted as the ground truth for incident $I_{inc}$.
    This step ensures that only verified and consistent observations are used to analyze $Rep_{Veh}$ behavior and update RVs.
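A minimal sketch of this grouping and voting with scikit-learn's DBSCAN; scaling each axis by its threshold and using the Chebyshev metric with eps = 1 enforces the per-axis $\epsilon_L$/$\epsilon_T$ conditions (a box approximation of the Euclidean spatial test), and all parameter values are illustrative:

```python
from collections import Counter

import numpy as np
from sklearn.cluster import DBSCAN

def cluster_reports(X, eps_l=50.0, eps_t=30.0, min_pts=3):
    """Cluster the MRs of one incident; X columns are [x_m, y_m, t_s].

    Dividing each axis by its threshold and using Chebyshev distance
    with eps=1 enforces |dx| <= eps_l, |dy| <= eps_l, and |dt| <= eps_t.
    Returns labels: -1 marks noise N_inc, k >= 0 marks cluster G_inc,k.
    """
    scaled = np.column_stack([X[:, 0] / eps_l, X[:, 1] / eps_l, X[:, 2] / eps_t])
    return DBSCAN(eps=1.0, min_samples=min_pts, metric="chebyshev").fit_predict(scaled)

def cluster_ground_truth(statuses, confidences, threshold=0.7):
    """Majority vote by count, then confidence-weighted validation."""
    majority = Counter(statuses).most_common(1)[0][0]
    agree = sum(c for s, c in zip(statuses, confidences) if s == majority)
    confidence = agree / sum(confidences)
    return (majority if confidence >= threshold else None), confidence
```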
4. Outlier Detection Using Isolation Forest for Anomaly Detection
After DBSCAN groups reports into clusters and flags noise points, Isolation Forest (iForest) refines this process by detecting anomalies (e.g., malicious or erroneous reports) within each cluster $G_{I_{inc},k}$ [35,36,47].
This step operates on the output of the DBSCAN clustering and focuses on deeper, feature-based anomaly detection, distinguishing mild anomalies from extreme outliers. The input is each cluster $G_{I_{inc},k}$ from DBSCAN, and iForest computes an anomaly score $s(x, n)$ for each report $MR_i$.
Path Length: define $h(x)$ as the number of splits required to isolate point $x$.
Anomaly Score for $MR_i$:
$$s(x, n) = 2^{-\frac{h(x)}{c(n)}}, \qquad c(n) = 2H(n-1) - \frac{2(n-1)}{n},$$
where:
  • $H(\cdot)$ is the harmonic number (approximately $\ln(\cdot) + 0.5772$);
  • $n$ is the sample size;
  • $c(n)$ normalizes the path length.
Outlier Classification: an MR is classified as an outlier (a potential $MR_f$) if
$$s(x, n) > \tau_{if},$$
where $\tau_{if}$ is the anomaly score threshold that separates outliers from inliers.
Output: each noise report is classified as an outlier (anomaly) or an inlier (consistent); a sketch of this step follows.
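A sketch of this refinement with scikit-learn's IsolationForest, applied only to the DBSCAN noise points; the contamination value is illustrative, and note that sklearn's score_samples returns the negated anomaly score:

```python
from sklearn.ensemble import IsolationForest

def refine_noise_points(X_noise, contamination=0.4, random_state=0):
    """Split DBSCAN noise reports into outliers and inliers.

    X_noise: feature rows (e.g., [lat, lon]) of the noise points.
    fit_predict returns +1 for inliers and -1 for outliers; score_samples
    is negated back here to approximate s(x, n).
    """
    iso = IsolationForest(contamination=contamination, random_state=random_state)
    is_outlier = iso.fit_predict(X_noise) == -1
    scores = -iso.score_samples(X_noise)  # higher = more anomalous
    return is_outlier, scores
```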
5. Gaussian Mixture Model (GMM) for MR Classification
The MR features $x = [RV, \delta_L, \delta_T]$ are modeled using a Gaussian mixture [37]:
$$p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k),$$
where:
  • $\pi_k$ are the mixing weights (summing to 1);
  • $\mu_k$ is the mean vector;
  • $\Sigma_k$ is the covariance matrix.
The posterior probability that $x$ belongs to component $k$ is given by
$$\gamma_k(x) = \frac{\pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j \, \mathcal{N}(x \mid \mu_j, \Sigma_j)}.$$
Clustering Decision:
  • $k = 1$: Honest $Rep_{Veh}$s.
  • $k = 2$: Malicious $Rep_{Veh}$s.
  • $k = 3$: Erroneous $Rep_{Veh}$s.
Output: a label of Honest, Malicious, or Erroneous for each $Rep_{Veh}$; a fitting sketch follows.
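A sketch with scikit-learn's GaussianMixture; predict_proba yields the responsibilities $\gamma_k(x)$. The fitted components carry no semantic labels, so mapping them onto Honest/Malicious/Erroneous by their mean RV, as below, is an assumed heuristic rather than the paper's prescription:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_reporters(X, random_state=0):
    """Fit a 3-component GMM over x = [RV, delta_L, delta_T].

    Mapping components by mean RV (honest = highest, malicious = lowest)
    is a heuristic assumed here.
    """
    gmm = GaussianMixture(n_components=3, covariance_type="full",
                          random_state=random_state).fit(X)
    gamma = gmm.predict_proba(X)              # responsibilities, shape (n, 3)
    by_rv = np.argsort(gmm.means_[:, 0])      # components by ascending mean RV
    name = {by_rv[0]: "Malicious", by_rv[1]: "Erroneous", by_rv[2]: "Honest"}
    return [name[k] for k in gamma.argmax(axis=1)], gamma
```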
6. Ensemble Random Forest and XGBoost
We use Random Forest and XGBoost [38,39] to refine the classification of MRs. This stage processes the features generated during the earlier stages to produce the final classification of each MR.
Previous Steps:
  • DBSCAN: establishes the ground truth for the actual event and clusters MRs based on spatial and temporal proximity.
  • Isolation Forest (iForest): isolates potential anomalies within the identified clusters and flags suspicious reports.
  • GMM: models the probability distribution of misbehavior, separating normal and abnormal patterns.
These steps provide the input feature set for this stage, defined as
$$x = [RV, \delta_L, \delta_T, s(x, n)],$$
where $s(x, n)$ is the iForest anomaly score, capturing relationships between data points (e.g., report similarity).
Classification Models:
  • Random Forest: aggregates predictions from multiple decision trees. Each tree $T_i$ independently classifies the data, and the final decision is made via a majority vote:
    $$f_{\mathrm{RF}}(x) = \mathrm{MajorityVote}\big(T_1(x), T_2(x), \ldots, T_m(x)\big).$$
  • XGBoost: minimizes a regularized loss function:
    $$L = \sum_i l(y_i, \hat{y}_i) + \Omega(f),$$
    where $l(y_i, \hat{y}_i)$ is the loss (e.g., log loss) comparing predictions $\hat{y}_i$ with actual labels $y_i$, and $\Omega(f)$ is a regularization term that penalizes overly complex models and improves generalization.
  • Combined Prediction: predictions from both models are combined using a weighted average:
    $$f_{\mathrm{final}}(x) = w_{\mathrm{RF}} f_{\mathrm{RF}}(x) + w_{\mathrm{XGB}} f_{\mathrm{XGB}}(x),$$
    where $w_{\mathrm{RF}}$ and $w_{\mathrm{XGB}}$ are the respective weights for Random Forest and XGBoost.
Output: the final decision for each MR is one of the following (an ensemble sketch follows this list):
  • Honest: the MR is correct, reflecting honest behavior of the $Rep_{Veh}$ and genuine $Tar_{Veh}$ misbehavior.
  • Malicious: the MR is false and was generated and submitted intentionally, reflecting malicious behavior of the $Rep_{Veh}$ and falsely alleged $Tar_{Veh}$ misbehavior.
  • Erroneous: the MR is incorrect due to faults in the $Rep_{Veh}$'s own system (e.g., a sensor error), not deliberate misbehavior.
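A sketch of the weighted soft vote over class probabilities, assuming xgboost's scikit-learn wrapper is available; the weights and hyperparameters are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

def ensemble_classify(X_train, y_train, X, w_rf=0.5, w_xgb=0.5):
    """Weighted soft vote of RF and XGBoost class probabilities.

    y_train holds integer-encoded labels (0 = Honest, 1 = Malicious,
    2 = Erroneous, an assumed encoding).
    """
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    xgb = XGBClassifier(n_estimators=200, max_depth=4,
                        eval_metric="logloss").fit(X_train, y_train)
    proba = w_rf * rf.predict_proba(X) + w_xgb * xgb.predict_proba(X)
    return proba.argmax(axis=1)
```

sklearn's VotingClassifier with voting="soft" and explicit weights would implement the same combination in a single estimator.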
7. RV Update
Weighted increment/decrement based on the final decision: for each $Rep_{Veh}$ $i$, the RV is updated as
$$RV_{V_i}^{\mathrm{new}} = \begin{cases} RV_{V_i} + \Delta_{\mathrm{pos}} & \text{if Honest } Rep_{Veh} \text{ (true positive)}, \\ RV_{V_i} - \Delta_{\mathrm{neg}} & \text{if Malicious } Rep_{Veh}, \\ RV_{V_i} & \text{if Erroneous report}, \end{cases}$$
where $\Delta_{\mathrm{pos}}$ and $\Delta_{\mathrm{neg}}$ are predefined step sizes.
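A direct transcription of the update rule, with two assumptions of this sketch: illustrative step sizes and clamping the result to the $[0, 1]$ range on which RVs are defined:

```python
def update_rv(rv, decision, delta_pos=0.05, delta_neg=0.10):
    """Apply the piecewise RV update; step sizes here are illustrative."""
    if decision == "Honest":
        rv += delta_pos
    elif decision == "Malicious":
        rv -= delta_neg
    # An erroneous report leaves the RV unchanged
    return max(0.0, min(1.0, rv))  # clamp to the RV range [0, 1]
```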

6. DRAMBR Experimental Evaluation

In this section, we discuss the simulation scenario, implementing the proposed DRAMBR system to measure its accuracy in the misbehavior reporting process. We first introduce the experiment description, highlighting the core concept. Then, an outline of the simulation environment is provided, setting the stage for the experiment setup.

6.1. Experiment Description

In our experiment, the main focus is on analyzing the MRs generated by $Rep_{Veh}$s during communications in offline settings. We designed different scenarios, explained in Table 8, to serve as trigger events for reporting the misbehavior generated by the $Tar_{Veh}$s. These scenarios set the stage for the evaluation process and enable us to identify further misbehavior activities in disconnected areas during emergencies.
Three scenarios are considered to simulate realistic events that could lead to the generation of MRs:
  • Accident Occurs (ACD-OCC);
  • Accident Absent (ACD-ABS);
  • Accident Resolved (ACD-RSL).
The table below summarizes the $Rep_{Veh}$ behaviors in each scenario and the types of MRs they trigger:

6.2. Simulation Setup

This section provides the detailed simulation setup for the three scenarios, including the configurations, parameters, and communication dynamics used to analyze the generated MRs. To simulate realistic vehicular movements, we start our experiment by importing a map and integrating it into the SUMO environment [48,49]. We used OpenStreetMap to import a map of the rural Peak District area [50,51], as shown in Figure 13. In this step, we implemented the three scenarios to observe how vehicles evaluate each other's behaviors and generate MRs based on observed misbehavior without connectivity.
Every vehicle is equipped with the DSRC/WAVE (IEEE 802.11p) protocol, a common wireless communication interface. The SUMO simulation generates floating car data (FCD) output, which captures vehicle-specific metrics such as position, speed, angle, and other details at each timestep. These raw data are collected and saved in an FCD file, which serves as the primary dataset for analyzing vehicular behaviors and identifying potential misbehavior events. The FCD file is subsequently processed to extract relevant information, and the data are stored in a CSV file (a parsing sketch follows this paragraph). This CSV file initially contains information about misbehavior reports, including attributes such as the reporter ID, timestamp, and location. The simulation parameters for this experiment are given in Table 9.
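As an illustration of this processing step, the sketch below streams a SUMO FCD file into CSV rows using xml.etree's iterparse; the column names follow SUMO's FCD schema, while the file names are placeholders:

```python
import csv
import xml.etree.ElementTree as ET

def fcd_to_csv(fcd_path="fcd.xml", out_path="reports.csv"):
    """Flatten SUMO --fcd-output XML into one CSV row per vehicle per step.

    FCD files contain <timestep time="..."> elements, each holding
    <vehicle id=".." x=".." y=".." speed=".." angle=".."/> entries;
    iterparse streams the file so large simulations fit in memory.
    """
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["time", "vehicle_id", "x", "y", "speed", "angle"])
        for _, elem in ET.iterparse(fcd_path, events=("end",)):
            if elem.tag != "timestep":
                continue
            for veh in elem.findall("vehicle"):
                writer.writerow([elem.get("time"), veh.get("id"), veh.get("x"),
                                 veh.get("y"), veh.get("speed"), veh.get("angle")])
            elem.clear()  # free processed elements
    return out_path
```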
In our simulation, the RV threshold is set to 0.5 to align with established reputation standards, balancing the trade-off between maintaining system reliability and minimizing false positives [52]. In vehicular networks, vehicles with trust values above 0.5 may be considered trustworthy when forwarding data, while lower values indicate distrust. Similarly, temporal network studies have employed a 0.5 credibility threshold to evaluate trustworthiness effectively [53]. Although 0.5 is a standard choice, the value may be adjusted based on system requirements or empirical data [54].
The resulting MRs are organized into three files, each associated with a specific event: ACD-OCC, ACD-ABS, and ACD-RSL. We filter the data in each file to remove redundancy, ensuring that only relevant and unique MRs are retained. We then process the MRs for further analysis by implementing the DRAMBR system following the steps illustrated in Table 10.

7. Results and Discussion

Based on the experiments described above, this section evaluates the performance and accuracy of the proposed DRAMBR in identifying false MRs under various scenarios.
First, we present the results obtained from applying DBSCAN. The outputs from this stage are essential for the subsequent stage, where we implement the iForest methodology to evaluate the DBSCAN findings. Following this, we analyze the aggregated results using GMM to finalize the reporter’s behaviors. After that, we implement the combined Random Forest and XGBoost to estimate the effectiveness and the accuracy of the results before moving to the final stage of updating the reputation.
Our findings provide insights into the system’s ability to differentiate between honest and malicious behaviors, showcasing the impact of reputation-based decision-making compared with the existing certificate-based system.

7.1. Results from Applying DBSCAN

DBSCAN serves as the main step when the RS receives a large number of MRs regarding misbehavior in offline scenarios. Its role is to analyze these MRs and group them into clusters that represent consistent patterns.
Figure 14 shows the DBSCAN results of clustering the 100 MRs received in each scenario. In ACD-OCC, multiple distinct clusters (blue and green) represent consistent groups of MRs agreeing on the accident's occurrence. Noise points (yellow) are conflicting MRs that do not fit into any cluster.
In ACD-ABS, clustering is less distinctive, with a higher number of noise points, indicating greater variability or disagreement among the reporters; this underscores the difficulty of establishing the truth when reporting is inconsistent. ACD-RSL, by contrast, produces fewer but larger clusters. Below is a detailed explanation of each scenario:
  • DBSCAN ACD-OCC
    In ACD-OCC (top figure), DBSCAN identified five clusters; the largest, Cluster 0, contained 65 MRs. In total, 17 of the 100 MRs (17%) were flagged as noise. This represents moderate consistency among the $Rep_{Veh}$s, since 83% of the MRs fell into meaningful clusters.
  • DBSCAN ACD-ABS
    In the second scenario, ACD-ABS (middle figure), DBSCAN grouped the MRs into four clusters, with Cluster 0 dominating at 70 MRs. This scenario also shows a high level of noise, with 20 MRs (20%) flagged as outliers.
  • DBSCAN ACD-RSL
    In the final scenario, ACD-RSL (bottom figure), DBSCAN identified three clusters, with Cluster 0 containing 80 reports. Only 8 of the 100 MRs (8%) were classified as noise. This scenario shows the highest reliability in reporting, as 92% of the received MRs fall within the identified clusters, indicating strong agreement among the $Rep_{Veh}$s.
Generally, DBSCAN effectively segregates consistent MRs into clusters while identifying noise points for further refinement using Isolation Forest to distinguish between serious anomalies and erroneous reports.

7.2. Results from Applying Isolation Forest

Figure 15 displays the outcomes of the iForest applied specifically to the noise points identified in the DBSCAN step across the three scenarios: ACD-OCC with 17 points, ACD-ABS with 20 points, and ACD-RSL with 8 points. Each subfigure plots the latitude and longitude of the reports, overlaid with the iForest's classification results, represented by the outlier scores (color gradient). Below is a detailed explanation of each scenario:
  • iForest ACD-OCC
    In the first scenario, ACD-OCC (top figure), the iForest refined the 17 noise points from DBSCAN by classifying 7 MRs as outliers (blue) and the remaining 10 MRs as inliers (red). The outlier percentage of 41.18% demonstrates how the iForest narrows the initially noisy MRs down to potentially serious anomalies, helping identify MRs that deviate significantly from expected behavior.
  • iForest ACD-ABS
    In the second scenario, ACD-ABS (middle figure), the iForest refined the 20 noise points from DBSCAN into 12 consistent inliers (red) and 8 anomalies (blue), an outlier percentage of 40%.
  • iForest ACD-RSL
    The third scenario, ACD-RSL (bottom figure), exhibits a moderate balance of 5 inliers (red) and 3 outliers (blue), an outlier percentage of 37.5%. Here, the iForest aids in detecting misleading or erroneous MRs where observations about the resolution of an event may conflict.
The iForest results illustrate the efficacy of further examining noise points to distinguish serious anomalies from less critical deviations. This layered approach enhances the RS reliability of misbehavior detection and decision-making regarding the received MRs.

7.3. Gaussian Mixture Model (GMM)

To further distinguish between honest, malicious, and erroneous $Rep_{Veh}$s, this section discusses the classification results of the Gaussian Mixture Model (GMM).
Figure 16 shows the GMM classification results identifying the $Rep_{Veh}$ types in all three scenarios: ACD-OCC, ACD-ABS, and ACD-RSL. The results categorize $Rep_{Veh}$s into three types: Honest, Malicious, and Erroneous.
  • Honest $Rep_{Veh}$s: In all three scenarios, honest $Rep_{Veh}$s (blue bars) form the majority of cases, with approximately 80 $Rep_{Veh}$s per scenario, indicating a consistent pattern of correct reporting.
  • Malicious $Rep_{Veh}$s: As shown by the orange bars, a small proportion of $Rep_{Veh}$s are identified as malicious. These $Rep_{Veh}$s intentionally submit $MR_f$, with a fairly consistent count across scenarios, ranging from approximately 6 to 10.
  • Erroneous $Rep_{Veh}$s: This is the smallest group (green bars). These $Rep_{Veh}$s submitted incorrect reports ($MR_f$) resulting from unintentional errors, such as sensor faults or environmental conditions, rather than deliberate misbehavior. Their count varies slightly, remaining minimal in each scenario.
This classification highlights our proposed system's ability to distinguish between honest, malicious, and erroneous behavior, helping the RS maintain accuracy and filter out potentially disruptive or inaccurate reports.

7.4. DRAMBR Accuracy

In this ensemble classification, we evaluate DRAMBR’s accuracy across the three scenarios and the overall system performance. We combine the Random Forest and XGBoost models to analyze the accuracy in two ways:
  • Noise-Based Accuracy: focused only on the noise points refined by DBSCAN and Isolation Forest. This classification achieves 72% accuracy.
  • Full System Accuracy: includes all MRs (honest and noise) across all scenarios. This evaluation achieves 98% accuracy.
Figure 17 illustrates the noise-based classification with 72% accuracy from Random Forest and XGBoost, highlighting the performance of the DBSCAN, iForest, and GMM models applied to the noise points.
In Figure 18, we show the Random Forest and XGBoost results in the three scenarios, achieving a total accuracy of 98%, which measures the effectiveness of the entire system.
The total system accuracy, shown in Table 11, reflects the system's overall ability to classify MRs and $Rep_{Veh}$s correctly across the three scenarios, including (1) consistent MRs from DBSCAN (true positives) and (2) refined classifications of noise points through iForest and GMM.
The results in Figure 19 compare accuracy, precision, recall, and F1-score in each scenario and the overall system, highlighting the system’s robustness in distinguishing between honest, malicious, and erroneous reporters across all scenarios.
The ACD-OCC scenario demonstrates the system's ability to achieve perfect classification. The other scenarios (ACD-ABS and ACD-RSL) show consistently high accuracy of approximately 97%, even under varying conditions. The overall accuracy of around 98% indicates that the DRAMBR system correctly classified the vast majority of MRs across all scenarios.
It is worth noting that accuracy improves as the number of reporters grows, since the system benefits from larger datasets when making decisions. Our findings show how the system effectively distinguishes between honest, malicious, and erroneous reporters. This approach ensures accurate reputation updating for both reporters and targets, enhancing trust and accountability in disconnected vehicular networks and supporting reliable decision-making even under constraints.

8. Conclusions

While V2V networks have the potential to improve driving safety, misbehaving vehicles aim to disrupt their communication reliability, particularly in disconnected rural areas with limited infrastructure. Existing systems like SCMS depend on concepts related to MAs and CRLs, which are inadequate in such scenarios. This paper has presented a novel scheme for accurately detecting and classifying misbehavior in V2V networks.
DRAMBR identifies and mitigates misbehavior using the LMDM, which relies on local observations and neighboring feedback in offline settings. These observations are consolidated as reports and submitted to a centralized RS once connectivity is available. The remainder of DRAMBR concerns storing, aggregating, and integrating these reports using existing classification techniques. DRAMBR classification is a multi-stage process: DBSCAN first determines whether multiple reports concern the same event, iForest then analyzes anomalies, and Gaussian Mixture Models provide probabilistic classification of malicious versus honest behavior. Finally, Random Forest and XGBoost models are combined to improve decision accuracy.
Our findings demonstrate the effectiveness of DRAMBR in reducing false reporting, which improves decision accuracy, ensures reliable detection of misbehavior, and supports the RS's ability to maintain system integrity. Adopting the proposed system enhances V2V communication reliability and helps ensure a safer network in infrastructure-limited environments.
While this demonstrates a reasonable level of reliability in distinguishing between honest, malicious, and erroneous reporters, there remains room for improvement: for example, DRAMBR's robustness degrades when up to half of the vehicles are malicious or when traffic conditions are highly dynamic. Future work will focus on enhancing feature engineering, addressing class imbalances, and fine-tuning model hyperparameters to further improve classification performance and ensure more precise identification of reporter types.

Author Contributions

Conceptualization, D.A., T.M. and S.F.; Methodology, D.A., T.M. and S.F.; Formal analysis, D.A.; Investigation, D.A.; Data curation, D.A.; Writing—original draft, D.A.; Writing—review & editing, T.M. and S.F.; Visualization, D.A.; Supervision, T.M. and S.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data can be shared upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this paper:
CA: Certificate Authority
CRL: Certificate Revocation List
DBSCAN: Density-Based Spatial Clustering of Applications with Noise
DSRC: Dedicated Short-Range Communication
ETSI: European Telecommunications Standards Institute
ITS: Intelligent Transport Systems
LA: Linkage Authorities
LMDM: Local Misbehavior Detection Mechanism
MA: Misbehavior Authority
MR: Misbehavior Report
OBUs: On-Board Units
OE: Offline Evaluation
OR: Online Reporting
PCA: Pseudonym Certificate Authority
PCs: Pseudonym Certificates
RA: Registration Authority
RS: Reputation Server
RSUs: Roadside Units
RV: Reputation Value
SCMS: Security Credential Management System
SUMO: Simulation of Urban MObility
VANETs: Vehicular Ad hoc Networks
V2I: Vehicle-to-Infrastructure
V2V: Vehicle-to-Vehicle

References

  1. DOT U. Department of Transportation. Interim Final Rule [IFR], Enhancing Rail Transportation Safety and Security for Hazardous Materials; U.S. Department of Transportation: Washington, DC, USA, 2020; p. 73. [Google Scholar]
  2. Almani, D.; Muller, T.; Carpent, X.; Yoshizawa, T.; Furnell, S. Enabling vehicle-to-vehicle trust in rural areas: An evaluation of a pre-signature scheme for infrastructure-limited environments. Future Internet 2024, 16, 77. [Google Scholar] [CrossRef]
  3. Brecht, B.; Therriault, D.; Weimerskirch, A.; Whyte, W.; Kumar, V.; Hehn, T.; Goudy, R. A Security Credential Management System for V2X Communications. IEEE Trans. Intell. Transp. Syst. 2018, 19, 3850–3871. [Google Scholar] [CrossRef]
  4. Wang, Q.; Gao, D.; Chen, D. Certificate Revocation Schemes in Vehicular Networks: A Survey. IEEE Access 2020, 8, 26223–26234. [Google Scholar] [CrossRef]
  5. Zhang, M.; Wolff, R.S. A border node based routing protocol for partially connected vehicular ad hoc networks. J. Commun. 2010, 5, 130–143. [Google Scholar] [CrossRef]
  6. Barua, M.; Liang, X.; Lu, R.; Shen, X. RCare: Extending secure health care to rural area using VANETs. Mob. Netw. Appl. 2014, 19, 318–330. [Google Scholar] [CrossRef]
  7. Agrawal, S.; Tyagi, N.; Misra, A.K. Seamless VANET connectivity through heterogeneous wireless network on rural highways. In Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies, Udaipur, India, 4–5 March 2016; pp. 1–5. [Google Scholar]
  8. Mistareehi, H.; Islam, T.; Manivannan, D. A secure and distributed architecture for vehicular cloud. Internet Things 2021, 13, 100355. [Google Scholar] [CrossRef]
  9. Zhang, L.; Wang, L.; Zhang, L.; Zhang, X.; Sun, D. An RSU deployment scheme for vehicle-infrastructure cooperated autonomous driving. Sustainability 2023, 15, 3847. [Google Scholar] [CrossRef]
  10. INTEGRITY Security Services. SCMS V2X and SCMS C-ITS Certificate Management. Available online: https://www.ghsiss.com/vehicle-to-everything-communications/ (accessed on 10 April 2025).
  11. IEEE Xplore. V2X Credential Management System Comparison Based on IEEE and ETSI Standards. Available online: https://ieeexplore.ieee.org/document/10774595 (accessed on 10 April 2025).
  12. Hasrouny, H.; Samhat, A.E.; Bassil, C.; Laouiti, A. Misbehaviour detection and efficient revocation within VANET. J. Inf. Secur. Appl. 2019, 46, 193–209. [Google Scholar]
  13. Kamel, J.; Ansari, M.R.; Petit, J.; Kaiser, A.; Jemaa, I.B.; Urien, P. Simulation Framework for misbehaviour Detection in Vehicular Networks. IEEE Trans. Veh. Technol. 2020, 69, 6631–6643. [Google Scholar] [CrossRef]
  14. Singh, D.; Maurya, A.K.; Ranvijay; Yadav, R.S. CRLMDA: CRL minimisation and distribution algorithm in cluster-based VANETs. Int. J. Commun. Netw. Distrib. Syst. 2023, 29, 239–267. [Google Scholar] [CrossRef]
  15. Ning, P.; Cui, Y.; Reeves, D.S. Constructing attack scenarios through correlation of intrusion alerts. In Proceedings of the 9th ACM Conference on Computer and Communications Security, Washington, DC, USA, 18–22 November 2002; pp. 245–254. [Google Scholar]
  16. TS 103 759-V2.1.1; Intelligent Transport Systems (ITS). European Telecommunications Standards Institute: Sophia Antipolis, France, 2021.
  17. IEEE Std 1609.2TM-2022; IEEE Standard for Wireless Access in Vehicular Environments—Security Services for Applications and Management Messages. IEEE Standards Association: Piscataway, NJ, USA, 2022.
  18. Li, Q.; Malip, A.; Martin, K.; Ng, S.; Zhang, J. A reputation-based announcement scheme for VANETs. IEEE Trans. Veh. Technol. 2012, 61, 4095–4108. [Google Scholar]
  19. Cui, J.; Zhang, X.; Zhong, H.; Zhang, J.; Liu, L. Extensible conditional privacy protection authentication scheme for secure vehicular networks in a multi-cloud environment. IEEE Trans. Inf. Forensics Secur. 2019, 15, 1654–1667. [Google Scholar] [CrossRef]
  20. Khan, S.; Zhu, L.; Yu, X.; Zhang, Z.; Rahim, M.A.; Khan, M.; Du, X.; Guizani, M. Accountable credential management system for vehicular communication. Veh. Commun. 2020, 25, 100279. [Google Scholar] [CrossRef]
  21. El Sayed, H.; Zeadally, S.; Puthal, D. Design and evaluation of a novel hierarchical trust assessment approach for vehicular networks. Veh. Commun. 2020, 24, 100227. [Google Scholar] [CrossRef]
  22. Kudva, S.; Badsha, S.; Sengupta, S.; Khalil, I.; Zomaya, A. Towards secure and practical consensus for blockchain-based VANET. Inf. Sci. 2021, 545, 170–187. [Google Scholar] [CrossRef]
  23. Cao, Z.; Li, Q.; Lim, H.W.; Zhang, J. A multi-hop reputation announcement scheme for VANETs. In Proceedings of the 2014 IEEE International Conference on Service Operations and Logistics, and Informatics, Qingdao, China, 8–10 October 2014; pp. 238–243. [Google Scholar]
  24. Katiyar, A.; Gupta, S.K.; Singh, D.; Yadav, R.S. A dynamic single-hop clustering algorithm (DSCA) in VANET. In Proceedings of the 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, 1–3 July 2020; pp. 1–6. [Google Scholar]
  25. Jesudoss, A.; Raja, S.K.; Sulaiman, A. Stimulating truth-telling and cooperation among nodes in VANETs through payment and punishment scheme. Ad Hoc Netw. 2015, 24, 250–263. [Google Scholar] [CrossRef]
  26. Ahmed, W.; Di, W.; Mukathe, D. A blockchain-enabled incentive trust management with threshold ring signature scheme for traffic event validation in VANETs. Sensors 2022, 22, 6715. [Google Scholar] [CrossRef]
  27. Ke, C.; Xiao, F.; Cao, Y.; Huang, Z. A group-vehicles oriented reputation assessment scheme for edge VANETs. IEEE Trans. Cloud Comput. 2024, 12, 859–875. [Google Scholar] [CrossRef]
  28. Wang, J.; Zhang, Y.; Wang, Y.; Gu, X. RPRep: A robust and privacy-preserving reputation management scheme for pseudonym-enabled VANETs. Int. J. Distrib. Sens. Netw. 2016, 12, 6138251. [Google Scholar] [CrossRef]
  29. Vaiana, R.; Perri, G.; Iuele, T.; Gallelli, V. A comprehensive approach combining regulatory procedures and accident data analysis for road safety management based on the European Directive 2019/1936/EC. Safety 2021, 7, 6. [Google Scholar] [CrossRef]
  30. Kabbur, M.; Murthy, M.V. MVR delay: Cooperative light weight detection and prevention of false emergency message dissemination in VANET. In Proceedings of the International Conference on Cognitive Computing and Information Processing, Mysuru, India, 15–16 December 2023; Springer: Cham, Switzerland, 2023; pp. 25–38. [Google Scholar]
  31. Yang, N.; Tang, C.; Zong, T.; Zeng, Z.; Xiong, Z.; He, D. RIC-SDA: A reputation incentive committee-based secure conditional dual authentication scheme for VANETs. IEEE Trans. Mob. Comput. 2024, 23, 14361–14376. [Google Scholar] [CrossRef]
  32. Huynh, T.D.; Jennings, N.R.; Shadbolt, N.R. Certified reputation—How an agent can trust a stranger. In Proceedings of the 5th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2006), Hakodate, Japan, 8–12 May 2006; pp. 1217–1224. [Google Scholar]
  33. Samara, G.; Alsalihy, W.A.H.A. A new security mechanism for vehicular communication networks. In Proceedings of the 2012 International Conference on Cyber Security, Cyber Warfare and Digital Forensic (CyberSec 2012), Kuala Lumpur, Malaysia, 26–28 June 2012. [Google Scholar]
  34. Hong, S.P. Enhancing Black Hole Attack Detection in VANETs: A Hybrid Approach Integrating DBSCAN Clustering with Decision Trees. Scalable Comput. Pract. Exp. 2024, 25, 4540–4557. [Google Scholar] [CrossRef]
  35. Ripan, R.C.; Sarker, I.H.; Anwar, M.M.; Furhad, M.H.; Rahat, F.; Hoque, M.M.; Sarfraz, M. An isolation forest learning based outlier detection approach for effectively classifying cyber anomalies. In Proceedings of the Hybrid Intelligent Systems: 20th International Conference on Hybrid Intelligent Systems (HIS 2020), Virtual Event, 14–16 December 2020; Springer International Publishing: Cham, Switzerland, 2021; pp. 270–279. [Google Scholar]
  36. Xu, H.; Pang, G.; Wang, Y.; Wang, Y. Deep isolation forest for anomaly detection. IEEE Trans. Knowl. Data Eng. 2023, 35, 12591–12604. [Google Scholar] [CrossRef]
  37. Wan, H.; Wang, H.; Scotney, B.; Liu, J. A novel Gaussian mixture model for classification. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6 October 2019; IEEE: New York, NY, USA, 2019; pp. 3298–3303. [Google Scholar]
  38. Ramraj, S.; Uzir, N.; Sunil, R.; Banerjee, S. Experimenting XGBoost algorithm for prediction and classification of different datasets. Int. J. Control Theory Appl. 2016, 9, 651–662. [Google Scholar]
  39. Joharestani, M.Z.; Cao, C.; Ni, X.; Bashir, B.; Talebiesfandarani, S. PM2.5 prediction based on random forest, XGBoost, and deep learning using multisource remote sensing data. Atmosphere 2019, 10, 373. [Google Scholar] [CrossRef]
  40. Banković, Z.; Vlajic, M.; Puerta, D.S.G.; Gonzalez, D.S.S.; Alcaraz, J. Detecting bad-mouthing attacks on reputation systems using self-organizing maps. In Proceedings of the Computational Intelligence in Security for Information Systems: 4th International Conference, CISIS 2011, Held at IWANN 2011, Torremolinos-Málaga, Spain, 8–10 June 2011; Proceedings. Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  41. Ansariyar, A.; Jeihani, M. Investigating LiDAR Sensor Accuracy for V2V and V2P Conflict Detection at Signalized Intersections. Future Transp. 2024, 4, 834–855. [Google Scholar] [CrossRef]
  42. Ashraf, M.T.; Dey, K. Conflict resolution behaviour of autonomous vehicles at intersections under mixed traffic environment. Accid. Anal. Prev. 2025, 211, 107897. [Google Scholar] [CrossRef]
  43. Huang, Y.; Wang, Y.; Yan, X.; Li, X.; Duan, K.; Xue, Q. Using a V2V- and V2I-based collision warning system to improve vehicle interaction at unsignalized intersections. J. Saf. Res. 2022, 83, 282–293. [Google Scholar] [CrossRef]
  44. Lv, P.; Xie, L.; Xu, J.; Wu, X.; Li, T. Misbehaviour detection in vehicular ad hoc networks based on privacy-preserving federated learning and blockchain. IEEE Trans. Netw. Serv. Manag. 2022, 19, 3936–3948. [Google Scholar] [CrossRef]
  45. Babaghayou, M.; Labraoui, N.; Ari, A.A.; Lagraa, N.; Ferrag, M.A. Pseudonym change-based privacy-preserving schemes in vehicular ad-hoc networks: A survey. J. Inf. Secur. Appl. 2020, 55, 102618. [Google Scholar] [CrossRef]
  46. Wikipedia. DBSCAN—Density-Based Spatial Clustering of Applications with Noise. Available online: https://en.wikipedia.org/wiki/DBSCAN#:~:text=DBSCAN%20is%20one%20of%20the,data%20mining%20conference%2C%20ACM%20SIGKDD (accessed on 20 December 2024).
  47. Vinita, L.J.; Vetriselvi, V. Federated Learning-based Misbehaviour detection on an emergency message dissemination scenario for the 6G-enabled Internet of Vehicles. Ad Hoc Networks 2023, 144, 103153. [Google Scholar] [CrossRef]
  48. SUMO. Simulation of Urban Mobility. Available online: https://eclipse.dev/sumo/ (accessed on 2 April 2017).
  49. Behrisch, M.; Bieker, L.; Erdmann, J.; Krajzewicz, D. SUMO—Simulation of urban mobility: An overview. In Proceedings of the 3rd Int. Conf. Adv. Syst. Simul., Barcelona, Spain, 23–28 October 2011; pp. 55–60. [Google Scholar]
  50. OpenStreetMap [Online]. Available online: https://www.openstreetmap.org (accessed on 16 October 2017).
  51. Haklay, M.; Weber, P. OpenStreetMap: User-generated street maps. IEEE Pervas. Comput. 2008, 7, 12–18. [Google Scholar] [CrossRef]
  52. Eeti, J.; Anurag, S. Trust- and reputation-based opinion dynamics modelling over temporal networks. J. Complex Netw. 2022, 10, cnac019. [Google Scholar] [CrossRef]
  53. Lui, G.L. Threshold detection performance of GMSK signal with BT = 0.5. In Proceedings of the IEEE Military Communications Conference, Proceedings, MILCOM 98 (Cat. No.98CH36201), Boston, MA, USA, 19–21 October 1998; Volume 2, pp. 515–519. [Google Scholar] [CrossRef]
  54. Hancock, J.; Johnson, J.M.; Khoshgoftaar, T.M. A Comparative Approach to Threshold Optimization for Classifying Imbalanced Data. In Proceedings of the 2022 IEEE 8th International Conference on Collaboration and Internet Computing (CIC), Atlanta, GA, USA, 14–16 December 2022; pp. 135–142. [Google Scholar] [CrossRef]
Figure 1. VANET communications infrastructure. Source: [2].
Figure 2. Challenges of SCMS in disconnected vehicular network.
Figure 3. Peak District environment extracted from Google Maps.
Figure 4. SCMS pseudonym certificate revocation. Source: [3].
Figure 5. Misbehavior report format. Source: [16].
Figure 6. False $Msg$/$MR_f$ causes.
Figure 7. $MR$ attack.
Figure 8. Proposed misbehavior reporting system (DRAMBR).
Figure 9. Local Misbehavior Detection Mechanism (LMDM).
Figure 10. Process 2: online reporting illustration.
Figure 11. MR evaluation.
Figure 13. Simulated map of the Peak District area.
Figure 14. DBSCAN clustering results for different scenarios: ACD-OCC, ACD-ABS, ACD-RSL.
Figure 15. Isolation Forest results on DBSCAN noise points.
Figure 16. Reporter classification results.
Figure 17. Confusion matrix representing DRAMBR performance in noise-based accuracy.
Figure 18. Confusion matrices representing DRAMBR performance in ACD-OCC (left), ACD-ABS (middle), ACD-RSL (right).
Figure 19. Accuracy, precision, recall, and F1-score across scenarios and overall system.
Table 1. Misbehavior Report Structure.

| Layer | Description |
| --- | --- |
| Encryption | MR is encrypted using ECIES with the MA public key, ensuring data confidentiality. |
| Digital Signature | MR includes a signature from the reporter's private key to authenticate the source and maintain data integrity. |
| ToBeSignedData | Contains essential information like the report's hash and signature, which are signed by the reporter. |
| Payload | Holds metadata and headers for proper interpretation of the signed data. |
| Core Data | Contains the unencrypted misbehavior data, including the report's hash for integrity verification. |
Table 2. DRAMBR System Units.

| Unit | Purpose | Connectivity Status |
| --- | --- | --- |
| OBU | Detecting misbehavior, generating MRs, and interacting with other vehicles and the RS. | Online/Offline |
| RS | Aggregates MRs and adjusts the RVs of vehicles. | Online |
| DSRC | Facilitates V2V and Vehicle-to-Infrastructure (V2I) communications. | Online/Offline |
| MA | Global misbehavior detection; generating, broadcasting, and storing CRLs. | Online |
Table 3. $Rep_{Veh}$ Observing Misbehavior Action.

| Notation | $Rep_{Veh}$ Action |
| --- | --- |
| O(R) | Observing misbehavior and generating an $MR$. |
| O(NR) | Observing misbehavior but not generating an $MR$. |
| O(NMR) | Not observing misbehavior but generating an $MR_f$. |
| O(NNMR) | Neither observing nor generating. |
Table 4. $Rep_{Veh}$s Reporting Conditions and RS Actions.

| Condition | Definition | RS Action |
| --- | --- | --- |
| O(R) | Observing and reporting an MR. | Considers the MR based on the $Rep_{Veh}$'s RV and corroboration with other MRs. |
| O(NR) | Observing misbehavior but not generating an MR. | Relies on other $Rep_{Veh}$s with high RV to compensate for missed reporting. |
| O(NMR) | Not observing misbehavior but generating an MR. | Evaluates the $Rep_{Veh}$'s RV and checks aggregation from other reporters to detect inconsistencies. |
| O(NNMR) | Neither observing nor generating. | No action is taken, and the $Rep_{Veh}$'s RV remains unaffected. |
Table 5. $MR_f$ Classification.

| Class | Purpose | RS Action |
| --- | --- | --- |
| 1 | Implausible MR values. | Checks if the MR values are realistic. |
| 2 | Consistency checks with previous MRs. | Compares an MR with earlier ones from the same $Rep_{Veh}$. |
| 3 | Validation against local knowledge. | Checks the MR against the vehicle's map or local data. |
| 4 | OBU-based MR validation. | Compares the MR with the vehicle's own sensors. |
| 5 | Cross-MR consistency analysis. | Checks if the MR agrees with other MRs for the same event. |
Table 6. Actions Triggered by the RS for $Tar_{Veh}$ and $Rep_{Veh}$.

| Level | $Tar_{Veh}$ | $Rep_{Veh}$ |
| --- | --- | --- |
| 0 | No action taken. | No action taken. |
| 1 | A warning is sent for misbehaving. | A warning is sent for $MR_f$. |
| 2 | The $Tar_{Veh}$ RV is reduced. | The $Rep_{Veh}$ RV is reduced. |
Table 7. DRAMBR Notations.

| Notation | Definition |
| --- | --- |
| $Rep_{Veh}$ | Reporting vehicle. |
| $Tar_{Veh}$ | Target vehicle. |
| $MR$ | Misbehavior report. |
| $MR_f$ | False misbehavior report. |
| $\tau_{Rep}$ | RV threshold. |
| $RV_{V_i}$ | RV for vehicle $i$. |
| $RV_{V_i}(t)$ | RV at a specific time $t$. |
| $pc$ | Pseudonym certificate. |
| $Msg$ | Received message in OE. |
| $I_{inc}$ | Incident ID. |
| $S_{I_{inc},k}$ | DBSCAN majority-reported status. |
| $C_i$ | Confidence scores of $Rep_{Veh}$s within the cluster in DBSCAN. |
| $G_{I_{inc}}$ | DBSCAN cluster. |
| $\tau_{if}$ | iForest anomaly score threshold. |
| $H(x)$ | Harmonic number used in iForest path-length normalization. |
| $c(n)$ | iForest path length normalization. |
| $RV_{V_i}^{\mathrm{new}}$ | The updated RV. |
Table 8. Accident scenarios and triggered MRs.

| Scenario | $Rep_{Veh}$ | Msg Type | $I_{inc}$ | Message Content | Triggered MR |
| --- | --- | --- | --- | --- | --- |
| ACD-OCC | Honest | Accurate | IR-101 | "Accident detected at location X, proceed cautiously." | Reports malicious vehicles for denying the accident. |
| ACD-OCC | Malicious | Conflicting | IR-101 | "No incident at location X, road is clear." | Reports honest vehicles for claiming an accident. |
| ACD-ABS | Honest | Accurate | IR-201 | "No incident at location X, road clear." | Reports malicious vehicles for claiming an accident. |
| ACD-ABS | Malicious | False Positive | IR-201 | "Accident detected at location X, avoid area." | Reports honest vehicles for denying the accident. |
| ACD-RSL | Honest | Resolution Update | IR-301 | "Incident resolved at X, road clear." | Reports malicious vehicles for claiming the incident persists. |
| ACD-RSL | Malicious | Conflicting | IR-301 | "Incident still active at X, avoid area." | Reports honest vehicles for claiming resolution. |
Table 9. Simulation Details.

| Parameter | Value |
| --- | --- |
| Traffic Simulator | SUMO 1.21 |
| Simulation Area (Rural) | 4 × 4 km² |
| Network Configuration | Realistic layout |
| Communication Standard | DSRC (IEEE 802.11p) |
| Roadside Units (RSUs) | 0 |
| Simulation Time | 1800 s |
| Event Duration | 900 s |
| Number of Vehicles | 100 |
| RV Threshold ($\tau_{Rep}$) | 0.5 |
| MAC Protocol | IEEE 1609.4 |
| Output Files | tripinfo.xml |
Table 10. DRAMBR Pipeline.

| Step | Objective | Methods | Output |
| --- | --- | --- | --- |
| Grouping | Group MRs by time/location | DBSCAN | MR clusters (G1, G2, ...) |
| Detection | Detect anomalous MRs | Isolation Forest | Outliers and inliers |
| $Rep_{Veh}$ Classification | Classify $Rep_{Veh}$s probabilistically | Gaussian Mixture Model (GMM) | Honest, Malicious, Erroneous |
| Final Classification | Robust decision-making | Ensemble (Random Forest + XGBoost) | Final MR labels |
Table 11. Performance Accuracy Across Scenarios.

| Scenario | MRs | Consistent | Noise | Precision | Recall | F1-Score | Accuracy |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ACD-OCC | 100 | 83 | 17 | 1 | 1 | 1 | 1 |
| ACD-ABS | 100 | 80 | 20 | 1 | 0.97 | 0.98 | 0.96667 |
| ACD-RSL | 100 | 92 | 8 | 0.97 | 0.97 | 0.97 | 0.96667 |
| Overall | 300 | - | - | 0.99 | 0.98 | 0.98333 | 0.9778 |