Article

Recommendation-Based Trust Evaluation Model for the Internet of Underwater Things

Cyber Security Research Group, School of Computer Science, University of Nottingham, Nottingham NG8 1BB, UK
*
Author to whom correspondence should be addressed.
Future Internet 2024, 16(9), 346; https://doi.org/10.3390/fi16090346
Submission received: 20 July 2024 / Revised: 11 September 2024 / Accepted: 20 September 2024 / Published: 23 September 2024

Abstract

The Internet of Underwater Things (IoUT) represents an emerging and innovative field with the potential to revolutionize underwater exploration and monitoring. Despite its promise, IoUT faces significant challenges related to reliability and security, which hinder its development and deployment. A particularly critical issue is the establishment of trustworthy communication networks, necessitating the adaptation and enhancement of existing models from terrestrial and marine systems to address the specific requirements of IoUT. This work explores the problem of dishonest recommendations within trust modelling systems, a critical issue that undermines the integrity of trust modelling in IoUT networks. The unique environmental and operational constraints of IoUT exacerbate the severity of this issue, making current detection methods insufficient. To address this issue, a recommendation evaluation method that leverages both filtering and weighting strategies is proposed to enhance the detection of dishonest recommendations. The model introduces a filtering technique that combines outlier detection with deviation analysis to make initial decisions based on both majority outcomes and personal experiences. Additionally, a belief function is developed to weight received recommendations based on multiple criteria, including freshness, similarity, trustworthiness, and the decay of trust over time. This multifaceted weighting strategy ensures that recommendations are evaluated from different perspectives to capture deceptive acts that exploit the complex nature of IoUT to the advantage of dishonest recommenders. To validate the proposed model, extensive comparative analyses with existing trust evaluation methods are conducted. Through a series of simulations, the efficacy of the model in capturing dishonest recommendation attacks and improving the accuracy rate of detecting more sophisticated attack scenarios is demonstrated. These results highlight the potential of the model to significantly enhance the trustworthiness of IoUT deployments.

1. Introduction

Trust and reputation models have long been integral to security disciplines, serving as essential tools for evaluating the trustworthiness of entities within a network [1]. These models operate by aggregating and analyzing various pieces of evidence to derive trust values, which are critical for detecting misbehaviour. The trust evaluation process involves systematically collecting data on entity interactions, formulating a trust score based on these data, and continuously updating this score as new information becomes available [2].
In the context of security systems, recommendations play a pivotal role in shaping personal trust opinions. These recommendations often utilize the concept of indirect trust, which entails forming trust towards an entity based on the opinions of others [3]. Indirect trust enhances the robustness of trust assessments by integrating multiple viewpoints; however, it also introduces significant vulnerabilities. For instance, dishonest recommendation tactics can severely disrupt the processes of trust establishment, propagation, and maintenance. This disruption poses a threat to the integrity of the entire communication system [4]. A specific example of such disruption is the wrongful promotion of a benign node as malicious, aimed at isolating the node from network collaborations, thereby hindering its functionality and undermining the network’s overall security posture. Conversely, a malicious node could be promoted as benign through the manipulation of the trust model, achieved by propagating dishonest recommendations that interfere with the trust scores of these entities.
The Internet of Underwater Things (IoUT) is often characterized by its open and largely unmonitored nature [5]. The lack of a comprehensive security infrastructure in IoUT, coupled with inherent network sparsity, makes it a prime target for attackers who wish to exploit these vulnerabilities. These challenges are exacerbated by the underwater environmental conditions, where signal propagation and node mobility further complicate trust assessments [6]. Effective trust modelling in IoUT must address these limitations, as discrepancies in collected data can significantly impact trust decisions [7]. The challenge is particularly pronounced in scenarios with limited personal opinions, making it difficult to accurately establish trust. This issue becomes critical in environments where the lack of comprehensive and reliable trust data can lead to erroneous trust decisions, ultimately compromising the network’s security and functionality.
This paper proposes a novel recommendation evaluation process aimed at detecting dishonest recommendations within the context of trust establishment among IoUT nodes. The primary contributions of this paper are as follows. Firstly, we explore existing mechanisms for resisting and detecting dishonest recommendations, assessing their readiness and applicability within the unique context of IoUT. This involves a detailed analysis of current methodologies and their potential limitations when applied to underwater communication networks. Secondly, we develop a detection mechanism for dishonest recommendations that incorporates a combination of advanced filtering methods. This mechanism is designed to enhance the accuracy and reliability of detecting anomalies in recommendation data, thereby improving the overall trustworthiness of the network. Thirdly, we introduce a belief function that operates as a weighting method for recommendations. This belief function is based on several factors derived from the social aspects of validation, such as freshness, similarity, trustworthiness, and the decay of trust over time. Lastly, we conduct a comprehensive evaluation of the proposed method, benchmarking it against the current state-of-the-art approaches. This evaluation includes conceptual analysis and simulations to demonstrate the efficacy and robustness of the proposed method.
The remainder of this paper is structured as follows. We begin by providing the necessary background knowledge in Section 2, followed by the related work in Section 3. An overview of the proposed model is presented in Section 4, followed by a more detailed explanation of its components in the subsequent Section 5. Both Section 6 and Section 7 present case studies and the experimental results. Finally, Section 8 concludes the paper and outlines potential future work.

2. Background

This section provides essential background knowledge on the nature of IoUT, trust modelling, and the challenges posed by dishonest recommendations.

2.1. IoUT Network Model

Recently, IoUT has been envisioned as an infrastructure composed of sensing devices designed to detect and collect ambient data, alongside communication components capable of propagating signals underwater and transmitting data to on-shore stations, such as cloud servers, for further processing. Due to its practicality compared to other communication schemes, underwater devices primarily utilise acoustic communication. This method, while advantageous for underwater environments, poses significant challenges. Acoustic signals propagate at approximately 1.5 × 10³ m/s, compared to the 3 × 10⁸ m/s propagation speed of radio waves on land, resulting in high propagation delay. Additionally, acoustic communication suffers from low bandwidth and a high error rate. The aquatic environment significantly affects the performance and reliability of the network. Underwater networks are characterised as complex and dynamic, with high levels of noise, high-delay channels, and sparse and mobile node deployment [8]. Furthermore, underwater communication devices are typically battery-powered, and no efficient methods currently exist for frequently recharging or replacing these batteries. These limitations are evident in various types of underwater devices. For example, Autonomous Underwater Vehicles (AUVs) like the Bluefin-21, designed by General Dynamics, can dive to depths of 4500 m, travel at speeds of around 8.334 km/h, and have a total energy capacity of 7 kWh [9]. Similarly, the MIDAS WLR, a water level recorder developed by Valeport, features 16 MB of memory, consumes 0.3 W of power, and operates at depths of up to 600 m [10]. Existing trust mechanisms face several challenges when applied to the complex and dynamic nature of the undersea environment. The frequent changes in network topology caused by dynamic aquatic regions, influenced by factors such as water currents and surface winds, along with the movement patterns of underwater nodes, make it unreliable to rely solely on successful communication as the primary observation for validating trustworthiness. Additionally, the open nature of IoUT and the high costs associated with regular monitoring and maintenance, combined with the fact that these devices are typically left unattended for long periods, make the network highly vulnerable to undetected hijacking.
This study explores the trust establishment of the IoUT, which comprises randomly distributed nodes that carry out sensing, data collection, and data relay to the surface base station using a hop-by-hop approach communicating over acoustic channels [11]. In trust management, each node evaluates the reliability of its neighbours using its available information. Figure 1 shows the entities involved in the trust establishment process. The evaluating node is the Trustor node (i), depicted in green, and the evaluated node is the Trustee node (j), depicted in grey. Trust evidence is the data used to calculate trustworthiness, resulting in a trust score from 0 to 1, where 1 means complete trust and 0 means none. The trustor is subject to a lack of evidence to judge the trustee, and therefore, it seeks input from neighbouring nodes. These responses, called recommendations, can come from fair (blue), malicious (red), or uninformed (white) nodes.

2.2. Trust Modelling

Trust modelling involves constructing frameworks that evaluate the reliability of internal entities, aiming to identify and mitigate risks from insiders who might misuse their access or knowledge. This approach emphasizes continuous monitoring and analysis of behaviour patterns and interactions within the system, using both direct experiences and insights gathered from neighbours to prevent and respond to security breaches from within the network [12,13,14]. Trust in most IoT and MANET systems exists in the following forms:
  • Direct Trust: This formation of trust is based on direct interaction between two entities. The computed trust value is the result of observation and experience from past interactions. Mathematically, it can be expressed as:
    $T_{ij} = f(X), \quad X \in \{\text{observations, past/current experiences}\}$
  • Indirect Trust: This formation of trust relies on others' beliefs about an entity. A trust value is computed based on third-party opinions about a particular node. This can be carried out through a recommendation from either a mutual one-hop neighbour or a multi-hop node recommendation using trust's transitivity property. Mathematically, this can be represented as:
    $T_{ij} = f(X), \quad X \in \{\text{second-hand neighbour recommendations}\}$
A recommendation-based trust model is a system designed to evaluate the trustworthiness of entities within a network based primarily on the recommendations of others within that network. This concept is particularly useful in environments where direct interactions between entities are limited or non-existent, necessitating reliance on the feedback and experiences shared by others. The indirect aspect of trust refers to the trustworthiness derived not from direct interactions but from the recommendations provided by other entities in the network. Each entity in the network collects and aggregates recommendations about other entities [15].
Ensuring valid recommendations is crucial for maintaining the integrity of the network and enabling entities to make informed decisions. Validating recommendations poses significant challenges, especially given the complex nature of underwater networks, where communication can be intermittent and unreliable. The open nature of IoUT, which lacks a comprehensive security infrastructure, increases the likelihood of attacks. One issue is that the network's sparsity renders most advanced recommendation systems ineffective in this context. This becomes particularly problematic when there is no current, relevant evidence: owing to the decaying nature of trust, large differences arise between the stored trust score and the received recommendations. Additionally, there is a significant risk of mistakenly identifying honest nodes as dishonest due to link quality issues, leading to nodes being misjudged within the network and assigned incorrect trust levels due to communication misrepresentations.

2.3. Recommendation Attacks

In the context of trust-based recommendation systems within IoUT environments, certain malicious activities are aimed at manipulating trust metrics. The following list details the most prevalent types of attacks:
  • Bad-mouthing attack: A malicious node undermines the reputation of another node by providing false negative feedback. This form of attack is particularly challenging to detect when the attacker has a history of providing unbiased, accurate recommendations.
  • Ballot-stuffing attack: In this attack, a malicious node artificially boosts another node’s trust level by giving fake positive feedback, setting the stage for more complex and collaborative attacks.
  • Selfish node behaviour: This involves a node refusing to participate in the network’s trust establishment by not responding to recommendation requests, thereby withholding necessary collaboration with peers.
For the rest of the paper, we use the term dishonest recommendation to refer to recommendations made by malicious entities on the network to promote either bad-mouthing or ballot-stuffing attacks.

2.4. Challenges of Dishonest Recommenders

  • Dishonesty in recommendation is not always characterized by overtly negative behaviour or a generally low level of trustworthiness. Instead, a recommender, or node in this context, may exhibit seemingly normal behaviour in most interactions but engage in deceptive practices by deliberately providing false recommendations about others. This subtlety makes detecting dishonesty challenging because the overall behaviour of the node might not immediately arouse suspicion. For instance, the graph in Figure 2 illustrates the average trust score towards a dishonest recommender who continues to perform attacks as described in Section 2.3, following the model presented in [16]. In cases of trust manipulation, such as bad-mouthing and ballot stuffing, the trust towards the recommender might naively overlook the misbehaviour, resulting in a high trust score over time for the dishonest recommender. On the other hand, a selfish recommender can be detected if the frequency of its recommendation requests and responses significantly deviates from typical communication patterns. If the trust evaluation of the recommender takes into account the frequency of communication, unusual behaviour can be identified. Specifically, a selfish recommender who often refuses to provide recommendations will exhibit a lower frequency of outgoing recommendations compared to the norm. This deviation can be quantified and monitored over time, leading to a decrease in the trust score of the selfish recommender as their non-cooperative behaviour becomes evident.
  • In the field of recommender systems, two predominant methods for detecting dishonest recommenders are the Majority Rule-based [17] and the Deviation from Personal Experience-based approaches [15]. While simple and effective in certain contexts, the Majority Rule-based method falls short in scenarios with collaborative attacks, where a dishonest majority can skew the results. It also risks false positives by mislabelling honest recommenders with unique perspectives. The Deviation from Personal Experience-based method, on the other hand, compares recommendations against individual experience using a defined threshold, isolating unfairly negative and unfairly positive recommendations and thereby offering a more personalized assessment. However, it struggles in situations with limited personal experience and is susceptible to the decay of trust over time, especially when direct interactions with certain nodes are lacking. Both methods, therefore, while useful in their respective contexts, exhibit distinct limitations that can impact their effectiveness in accurately identifying dishonest recommenders. These two methods have been defined in the literature under what is called endogenous discounting [18]. The inherent fallacy of endogenous discounting is the confirmation bias introduced when claims deviating from prior expectations are ignored.

3. Related Work

Unreliable recommendations have been categorised into two types: dishonest recommendations and wrong recommendations [19]. Dishonest recommendations are a form of subjective unreliability where adversaries intentionally offer misleading advice. Wrong recommendations, on the other hand, represent objective unreliability that arises from unavoidable errors in transmission or computation. Wrong recommendations are expected to be prevalent in unpredictable and complex environments, such as the application of IoUT. Therefore, existing techniques for filtering dishonest recommendations should account for these challenges.
Addressing the challenge of mitigating the effects of dishonest recommendations in reputation- and recommendation-based trust systems remains challenging. In efforts to protect the integrity of reputation and recommendation systems from recommendation attacks, certain models have been developed. For instance, the Collaborative Reputation Mechanism (CORE) [20] filters out negative feedback under the assumption that ballot-stuffing attacks are non-existent, allowing only positive reputation data to spread and thus preventing bad-mouthing attacks. In contrast, the CONFIDANT [21] model focuses on circulating only negative reviews about other nodes to thwart ballot stuffing. Both approaches, however, struggle with efficiency in detecting unpredictable behaviours in complex networks as they permanently block either positive or negative feedback.
Khedim et al. outline two primary methods prevalent in the literature for evaluating trust in networks: Deviation Tests and Evaluation using Trust Factors [15]. The deviation test involves comparing individual opinions about an entity against a predefined threshold to identify and isolate disproportionately negative or positive recommendations. This method is exemplified in the Distributed Reputation-based Beacon Trust System (DRBTS), a distributed security protocol that enables beacon nodes to monitor each other and relay trust information [22]. The network is modelled as an undirected graph, where deviation tests are used to ensure consistency between first-hand (personal opinion) and second-hand (recommendation) information, thereby preventing the spread of false data. Similarly, the E-Hermes protocol, introduced by Zouridaki et al., integrates first-hand trust data, which nodes compute independently, with second-hand trust data obtained from other nodes' recommendations [23]. E-Hermes is designed to increase resilience against malicious nodes and recommenders by implementing a recommender's test that validates recommendations against the first-hand trust values computed by the inquiring node. This process ensures that only recommendations closely aligned with independently assessed values are accepted. However, these techniques, especially in dynamic and complex environments like the IoUT, face limitations due to their heavy reliance on personal experiences, which can be problematic given the sparse and mobile nature of these networks. The decay of trust information and the infrequent opportunities for validation in such a rapidly changing network make it difficult to maintain accurate trust assessments. Additionally, the increased overhead and vulnerability to strategic manipulation by nodes adjusting their behaviour to evade detection further challenge the effectiveness of these methods in IoUT environments.
Others approach this issue by introducing so-called Trust Factors to help in judgment on the recommendations. For instance, in [24], a trust-based recommendation model known as TBRS for vehicular cyber–physical system networks is introduced. This model evaluates recommendations using trust factors such as contact intimacy, delivery reliability, and position intimacy. These factors are weighted according to their Grey relational degrees to enhance judgment accuracy. Additionally, the K-Nearest Neighbour (KNN) algorithm is employed as a filtering mechanism to mitigate the impact of selfish or malicious nodes. However, the effectiveness of the KNN algorithm can be limited by its sensitivity to the choice of k and its performance in high-dimensional spaces, which might not adequately reflect the dynamic and complex networks.
Within the same context, two trust evaluation models presented in [25,26] introduced several metrics to weight recommendations, defined as Familiarity (how well a one-hop neighbour is acquainted with the targeted vehicle), Similarity (degree of similar content), and Timeliness (the freshness of any reputation segment). Similarly, Shabut et al. [27] present a recommendation-based trust model designed to address the challenges of maintaining reliable packet delivery through multi-hop intermediate nodes in mobile ad hoc networks (MANETs). The proposed model introduces a defence scheme that employs a clustering technique to dynamically filter out dishonest recommendations from nodes, which may attempt to deceive the trust management system through bad-mouthing, ballot stuffing, and collusion. The core of the model is evaluating the honesty of recommending nodes based on three key factors: confidence based on the interactions, the compatibility of information (assessed through deviation tests), and the physical closeness between nodes.
Adewuyi et al. [28] introduce a belief function within the CTRUST model to assess the credibility of recommendations in collaborative IoT applications. This function is not designed to diminish trust in the recommendations but rather to modulate the influence of their recommendations based on temporal and relational dynamics. The belief function is mathematically derived from several components: the decay of trust over time, the existing trust scores between the recommender and the trustor, and the magnitude of change proposed by new recommendations compared to existing trust levels. The operational principle of this belief function is that it assigns minimal weight to recommendations when there are recent and direct observations that contradict these recommendations, thereby prioritizing empirical experience over hearsay. However, a potential issue arises when the calculated belief value becomes negative due to a large discrepancy between the trust scores provided by the recommender and the existing trust scores of the trustor. This situation is not explicitly addressed in the paper.
Few attempts have been made to address the issue of dishonest recommendations in underwater networks. For instance, the work in [16] introduces a trust model named CATM for IoUT. In this model, a mechanism is proposed to evaluate recommendations based on factors like link stability and node reliability. These measurements fail to capture dishonest recommendation attacks like bad-mouthing or ballot stuffing. In [29], a cluster-based trust model is introduced where recommendations are computed by the cluster head. Each sensor sends sensory data to the cluster head, which computes trust based on the assumption that data follow a normal distribution. Trust assessments then undergo median filtering to remove outliers and employ collaborative filtering to compute recommendations. This model, while robust, can be susceptible to inaccuracies in node importance assessment and deceptive behaviours among neighbouring nodes, potentially undermining the reliability of trust recommendations in dynamic underwater environments. Another study [30] introduced a trust model that utilizes both collaborative filtering and a variable weight fuzzy algorithm to exclude untrustworthy recommendations and pinpoint dishonest nodes effectively. This model applied various filtering methods informed by deviation tests and the precision of link quality, using a preset threshold for each criterion. Because the link quality values are reported by the recommenders themselves, the link quality filter is vulnerable to manipulation by malicious entities who might falsify these data to evade detection.
In light of the current efforts in the literature to establish a trustworthy model of IoUT that relies on recommendations from neighbouring nodes, this paper aims to investigate further the problem of detecting dishonest recommendations within IoUT deployments. We address the issue considering the network sparsity, where personal experience may be lacking, by developing an evaluation method for received recommendations. We offer a comprehensive comparative analysis with existing models, focusing on the accuracy of detecting dishonest recommendation attacks of varying intensity within established networks.

4. Proposed Model

In this paper, we propose a new evaluation mechanism for received recommendations, as illustrated in Figure 3. The process begins with a filtering mechanism that combines two distinct phases. The first phase, outlier detection, identifies recommendations that significantly deviate from the majority opinion. The second phase, deviation exemption, examines the differences between the recommendations and direct trust values, filtering out those that show substantial divergence. This dual-phase approach aims to enhance the accuracy and reliability of the recommendation evaluation process.
Following the work of [28], we then take the issue further by introducing a belief method that weights recommendations. The proposed belief method employs several key metrics: freshness, trustworthiness, similarity, and trust decay in constructing a belief system towards recommenders. Each of these factors contributes uniquely to the evaluation process. The following premises drive the modelling of the belief function.

4.1. Belief Assumptions

  • Let $T_{kj}(t)$ denote the current recommendation on j provided by recommender node k, and let $T_{ij}(t-\epsilon)$ denote the previously observed trust computed by i towards j (direct trust), where $\epsilon$ represents the time elapsed between constructing direct trust and receiving recommendations. The smaller the absolute change $|T_{kj}(t) - T_{ij}(t-\epsilon)|$, the easier it is for i to accept the recommendation. Thus, $B_{ijk} \propto -|T_{kj}(t) - T_{ij}(t-\epsilon)|$.
  • The longer the time that has passed since the last session of interaction between i and j, the more open i will be to accepting k's recommendation. This is because of trust decay: the longer time goes by, the smaller the proportion of historical trust based on the last observation that remains. Thus, $B_{ijk} \propto (1 - d_{T_{ij}})$.
  • The greater the trustworthiness of k, represented by the value of $T_{ik}$, the more likely one is to trust the recommendation received from k. While this assumption alone is not necessarily correct, as explained in Figure 2, more sophisticated attackers might exploit it by intermittently or even consistently behaving correctly to mask their malicious actions. Nevertheless, a lower trust rating of a recommender can still indicate a potentially bad recommendation. Thus, $B_{ijk} \propto T_{ik}$.
  • The opinion computed by k is more valuable if it is recent, as newer opinions are generally more relevant than older ones. The freshness $\Gamma_{kj}$ is a decreasing function of the duration between the present time and the moment when $T_{kj}(t)$ was formed. Thus, $B_{ijk} \propto \Gamma_{kj}$.
  • In dynamic environments, it is often observed that nodes increasingly focus on others exhibiting similar behaviours over time [31]. Trust, drawn from social constructs, is interpreted through the lens of interaction consistency between two entities [13]. A relationship’s strength is presumed to be stronger with more consistent interactions over time. Furthermore, the relative duration or depth of a relationship compared to other peers indicates the level of similarities between entities. This is particularly evident in collaborative networks like IoUT, where all nodes are initially assumed to collaborate equally to achieve the needed coverage and maintain connectivity [32].
    Let $s_{ik}(t)$ represent the degree of similarity between i and k; the greater the similarity, the more likely it is for i to believe the opinion provided by k. Therefore, $B_{ijk} \propto s_{ik}(t)$.
These assumptions permit a better understanding of the complexity of the dishonest recommendation problem, which is particularly challenging in the IoUT context. The long propagation delays and continuous mobility due to water currents render single-assumption solutions ineffective. This issue arises not only when malicious nodes provide dishonest recommendations but also when more sophisticated dishonest recommenders attempt to mask their deceptive behaviour with normal collaboration in other network aspects.
We anticipate establishing a trust management system with periodic updates on the evaluation of trust among nodes. Assuming node i wants to compute the trust score towards node j, since our focus is on examining the problem of dishonest recommendations, we assume the trustor i computes the direct trust of the trustee j following the MATMU model [33]. The workflow of the proposed approach is as follows:
Step 1: Node i sends a recommendation request to its 1-hop neighbours regarding node j. This is carried out by transmitting a request recommendation packet. Node i then sets a timer to receive recommendation responses about j. Nodes that do not cooperate are considered selfish nodes.
Step 2: Upon receiving recommendation responses from all neighbours, node i begins the evaluation process, as outlined in Figure 3, to assign a corresponding weight to all the received recommendations. This step involves analysing the received forwarding response messages to gauge the recommendations.
Step 3: Node i applies the necessary punishments to nodes that misbehave and stores this information for the next trust evaluation process.
Step 4: Node i uses the assigned weights to evaluate the recommendations. In this work, we apply the following equation to update the indirect trust score:
$T_{ij}^{ind}(t) = \sum_{k=1}^{n} B_{ijk} \, T_{kj}, \quad j \neq k,$
where $T_{ij}^{ind}(t)$ represents the indirect trust computed by i towards j based on recommenders k, with the number of recommendations received being $n > 2$.
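As a worked example of Equation (1), the following minimal Python sketch (our own illustration; the function name and the numeric values are hypothetical) aggregates three recommendations whose belief weights have already been normalised to sum to one:

def indirect_trust(recommendations, beliefs):
    # recommendations: {k: T_kj}; beliefs: {k: B_ijk}, assumed normalised.
    assert len(recommendations) > 2, "the model assumes n > 2 recommenders"
    return sum(beliefs[k] * t_kj for k, t_kj in recommendations.items())

# Example: k2 bad-mouths j but carries a low belief weight, so its
# dishonest score is diluted in the aggregate.
recs    = {"k1": 0.80, "k2": 0.15, "k3": 0.85}
beliefs = {"k1": 0.45, "k2": 0.10, "k3": 0.45}
print(indirect_trust(recs, beliefs))  # ~0.7575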
The proposed method is thoroughly explored in Section 5, emphasizing both the rationale and the examination of the aforementioned concepts.

5. Recommendation Evaluation Process

Let $R_j(t) = \{T_{k_1 j}, T_{k_2 j}, \ldots, T_{k_n j}\}$ be the set of recommendations received upon a request from node i to nodes $k_1, \ldots, k_n$ about j, where n represents the number of one-hop neighbours willing to respond to the recommendation request at time t. We define the first subset of $R_j(t)$, $S_{1j} \subseteq R_j$, as the list of recommendations that have been detected as outliers. A second subset, derived from $S_{1j}$, namely $S_{2j} \subseteq S_{1j}$, contains all recommendations that deviate from the trust computed by i. The following subsections highlight the main processes involved in the proposed model.

5.1. Initial Filtering Process

In this work, we introduce a two-step filtering process that integrates the majority rules approach with the deviation technique to ensure thorough scrutiny of recommenders before subjecting them to a penalty process. The output of this process involves recommenders that deviate significantly from a given trust score, $T_{ij}(t)$, and also appear as outliers when compared to the other recommendations received.
We first examined two methods of outlier detection: a simple method based on Median Absolute Deviation (MAD) and a more sophisticated yet computationally intensive method, the Local Outlier Factor (LOF) [34]. Upon isolating $S_{1j}$, which includes potential outliers, each element within this list is compared against personal experience through the deviation test on $S_{2j}$. A noteworthy issue arises when $T_{ij}(t-\epsilon)$ becomes outdated due to an extended period since the last valid interaction between nodes i and j. This outdated value fails to provide a reliable baseline for testing deviation. To address this, we propose a deviation test that includes the temporal decay of trust, defined as follows:
$|T_{ij}(t-\epsilon) - T_{kj}| \cdot (1 - d) > d_h,$
where $T_{ij}(t-\epsilon)$ denotes the trust score stored by node i regarding node j at a previous time instance $(t-\epsilon)$, $T_{kj}$ denotes the trust recommendation received from node k, and $d_h$ is the trust deviation threshold. The variable d is the decay factor, with more recent $T_{ij}$ values resulting in a lower d compared to older values.
Both outlier detection methods exhibit satisfactory accuracy, with the MAD method achieving approximately 90.59% accuracy and the LOF method achieving 92.5% accuracy when tested over 100 trials of dishonest recommendations. The selection of the appropriate outlier detection method is highly contingent upon the computational complexity requirements of the specific application in which it is deployed.
We estimate the computational complexity of each method, applying the outlier detection mechanism first and the deviation test second, as follows. Let n denote the size of the recommendation set $|R_j|$, and let m denote the size of the filtered recommendation set $|S_{1j}|$, where $m \leq n$. The computational complexity of the MAD method is estimated to be $O(n \log n)$, whereas the complexity of the LOF method is approximately $O(n^2 \log n)$ [35]. In the proposed filtering approach, the overall computational complexity is therefore estimated to be $O(m) + O(n \log n)$ for MAD, or $O(m) + O(n^2 \log n)$ for LOF. For the remainder of the paper, we employ initial filtering using MAD due to its simplicity (isOutlier).
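A minimal sketch of the MAD-based isOutlier check is given below (our own illustration; the 0.6745 modified z-score constant and the threshold of 3 are common defaults from the outlier detection literature, not values specified in this paper):

import statistics

def mad_outliers(recommendations, threshold=3.0):
    # recommendations: {k: T_kj}. Returns the suspected-outlier set S_1j.
    values = list(recommendations.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:                       # all recommendations (nearly) identical
        return set()
    outliers = set()
    for k, v in recommendations.items():
        z = 0.6745 * (v - med) / mad   # modified z-score
        if abs(z) > threshold:
            outliers.add(k)
    return outliers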
Following this, we initiate a monitoring system in which the frequency with which each node appears in $S_{2j}$ is tracked. A counter is set up for each node to record the number of times it appears on the outlier list. If a node's appearances in $S_{2j}$ exceed a predefined threshold, indicative of consistently poor or suspicious recommending behaviour, a penalty is then applied to its trust score, $T_{ki}(t)$, as outlined in Section 5.4. Algorithm 1 outlines the process.
Algorithm 1 Recommendation Evaluation Process
Input: List of received recommendations $R_j = \{T_{k_1 j}, T_{k_2 j}, \ldots, T_{k_n j}\}$
Require: $d_h$ (trust deviation threshold)
Require: $v_h$ (violation threshold)
1: Initialize evidence collection variables
2: for each $T_{kj}(t)$ in $R_j(t)$ do
3:     if $T_{kj}(t)$ isOutlier then
4:         $S_{1j} = S_{1j} \cup \{k\}$
5:     end if
6: end for
7: for each $T_{kj}$ in $S_{1j}$ do
8:     if $|T_{ij}(t) - T_{kj}(t)| \cdot (1 - d) > d_h$ then
9:         $S_{2j} = S_{2j} \cup \{k\}$
10:        Initiate counter $c_k(t)$ and increment it by one
11:        if $c_k \geq v_h$ AND $B_{ijk}(t) < \mathrm{Average}(B_{ijk}(t))$ then
12:            Apply penalty on $T_{ki}$, based on Equation (11)
13:        end if
14:    end if
15: end for
16: Evaluate the belief process $B_{ijk}(t)$
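Read end to end, Algorithm 1 can be sketched in a few lines of Python (an illustrative reading under our own naming, reusing the mad_outliers helper above; the penalty callback and the average belief are assumed to be supplied by the surrounding trust model):

def evaluate_recommendations(R_j, T_ij_prev, d, d_h, v_h,
                             counters, beliefs, avg_belief, penalise):
    # R_j: {k: T_kj}; T_ij_prev: stored direct trust T_ij(t - eps);
    # d: decay factor; counters: per-node violation counts (persisted).
    S1 = mad_outliers(R_j)                           # outlier filter
    S2 = set()
    for k in S1:                                     # decay-aware deviation test, Eq. (2)
        if abs(T_ij_prev - R_j[k]) * (1.0 - d) > d_h:
            S2.add(k)
            counters[k] = counters.get(k, 0) + 1
            if counters[k] >= v_h and beliefs[k] < avg_belief:
                penalise(k)                          # Eq. (11) penalty on T_ki
    return S1, S2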

5.2. Definition of Similarity

In this section, we explore the definition of similarity we have established and discuss how it is derived in existing trust models.

5.2.1. Confidence Level as a Measure of Similarity

This method leverages the level of confidence in the trust values assigned to different nodes, built up over the period of communication. High confidence in a node typically suggests reliability, assuming that the source node consistently provides successful interactions. To explore the meaning behind this measurement in evaluating recommendations, we begin by further analysing the confidence in two known trust models.
In the literature, two robust models have been extensively utilized to establish trust: the beta-based reputation model [36] and the subjective logic model [37]. Both models are recognized as foundational for assessing trust and reputation, allowing for the quantification of trust with a degree of uncertainty, hence supporting the development of confidence measures. Figure 4 illustrates the representation of each model measurement to construct the trust.
The beta-based model employs the beta probability distribution to analyse the behaviour of entities within a system through observed actions, such as successful or failed packet deliveries. This model relies on the beta distribution, a type of continuous probability distribution that ranges between 0 and 1, to predict the likelihood of favourable outcomes based on historical data. The parameters α and β define this distribution, representing the counts of positive and negative outcomes, respectively. Figure 4a shows the beta distribution for various p with a fixed good and bad experience factor. To address the problem of evaluating dishonest recommendations, several studies in the literature attempt to estimate the confidence score for recommending nodes [23,27,38,39,40]. Within a beta distribution framework, confidence is deduced from the variance in historical interactions. For instance, if node i engages in both positive and negative interactions with node k at time t, the confidence is calculated by utilizing the deviation from the mean, emphasizing the stability and predictability of their interactions over time such as [40]:
$c_{ik} = 1 - \sqrt{12 \cdot \dfrac{\alpha_{ik}\,\beta_{ik}}{(\alpha_{ik}+\beta_{ik})^2 (\alpha_{ik}+\beta_{ik}+1)}},$
where $\alpha_{ik}$ represents the aggregated positive observations when a node forwards packets, and $\beta_{ik}$ represents the aggregated negative observations when a node drops packets. One notable work, presented in [27], focuses on evaluating the honesty of recommending nodes based on three key factors: the confidence derived from interactions, compatibility of information (assessed through deviation tests), and closeness between nodes. In their model, the confidence value, denoted as $V_{ik}^{conf}$, is derived as $V_{ik}^{conf} = 1 - \sqrt{12\,\sigma_{ik}}$, where $\sigma_{ik}$ is the beta distribution variance between i and k. This formula ensures that the confidence value lies within the interval [0, 1], where 0 indicates no previous interactions (hence, no confidence) and 1 signifies complete confidence based on substantial positive and negative interactions. Similarly, Shabut et al. [40] introduce a trust model that includes several metrics derived from social trust to evaluate recommenders: honesty (the mean of the beta distribution), frequency-based social trust (which represents confidence, similar to the previous work), and intimacy.
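A direct transcription of Equation (3) as a sketch (our own code; the term under the square root is the Beta(α, β) variance, which does not exceed 1/12 when the counts are initialised with a uniform Beta(1, 1) prior, so the result stays within [0, 1]):

import math

def beta_confidence(alpha, beta):
    # alpha/beta: aggregated positive/negative observations between i and k,
    # assumed >= 1 (counts initialised with a uniform Beta(1, 1) prior).
    var = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return 1.0 - math.sqrt(12.0 * var)

print(beta_confidence(1, 1))    # 0.0: no evidence beyond the prior
print(beta_confidence(40, 10))  # ~0.77: many observations, lower variance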
The subjective logic model functions based on subjective perceptions of the world and employs opinions to signify these perceptions. The fundamental idea is to represent trust through belief, disbelief, and uncertainty [12]. The trust value T is given by: T = ( b , d , u ) , in which b stands for belief, d is disbelief, and u accounts for the uncertainty related to the trustworthiness of a node, and b + d + u = 1 . In the case of absence of belief or disbelief, a base rate (r) is defined as the prior probability. The relationship between these variables is shown in Figure 4b, where each vertex of the triangle represents a pure state of either complete belief, complete disbelief, or total uncertainty, with intermediate points indicating varying degrees of each.
Similar to the beta model, several studies have aimed to evaluate recommendations through the measurement of confidence. Within subjective logic models, confidence can be derived from the uncertainty component. In both models proposed in [25,26], the weight for a recommender, $\alpha(v_i, v_j, t)$, takes the uncertainty of the subjective logic into account. While the term 'familiarity' is used in this work, its representation is equivalent to the confidence level using subjective logic. Therefore, we will replace 'familiarity' with 'confidence' in their work to avoid confusion. In both works, the confidence in subjective logic measures how much vehicle i (the rater) is familiar with vehicle j (the ratee) as [25,26]:
$c(v_i, v_j, t) = 1 - u(v_i, v_j, t),$
where $u(v_i, v_j, t)$ represents the uncertainty of the subjective logic. A higher confidence value means that the rater has more prior knowledge about the ratee.
Figure 5 shows the result of applying both Equation (3) and Equation (4) using varying numbers of interactions. Initially, the confidence level is 0, where both entities just initiate the communication. Both graphs increase the level of confidence upon increasing the interactions (both successful and unsuccessful ones). Figure 5a exposes a non-uniform confidence distribution, with colour changes from blue to yellow not solely tied to successful packet exchanges but also occurring with an increase in unsuccessful ones. This points to a complex and unexpected correlation wherein more unsuccessful interactions do not necessarily erode confidence, contrary to typical anticipations.
In contrast, Figure 5b demonstrates a more homogeneous distribution of confidence. Here, the consistent colour gradient from blue to yellow suggests a direct, predictable rise in confidence in tandem with the number of successful packets. Unlike the beta logic, where confidence could alter erratically with unsuccessful interactions, the subjective logic model indicates a steady enhancement of confidence.
Nevertheless, in both models, we can confidently say that the increase in interactions increases confidence, irrespective of their nature, to some extent. This shows that the accumulation of interactions, regardless of outcome, contributes to the confidence metric. However, this could inadvertently elevate the confidence metric in relationships characterized by a high frequency of negative interactions, which might not accurately reflect the intended notion of confidence in the similarity context. This issue is amplified by the nature of underwater communication, with its high packet error rates.
To address this issue, we defined the confidence variable as the expectation of how k would behave, and therefore, it can be derived in both models as:
$c_{ik} = b + u \cdot r \quad \Longleftrightarrow \quad c_{ik} = \dfrac{\alpha_{ik}/(\alpha_{ik}+\beta_{ik})}{\sum_{x=1}^{n} \alpha_{ix}/(\alpha_{ix}+\beta_{ix})}$
The results of both models utilising Equation (5) are illustrated in Figure 6. In the beta-based model (Figure 6a), and unlike the previous observations, there is an increase in confidence with more positive interactions and a decrease with more negative interactions. A significant issue with this method is the initially high confidence level despite the absence of prior interaction data. On the contrary, using the subjective logic method in Figure 6b, the colour gradient demonstrates a more linear decrement in confidence as the number of unsuccessful packets increases while a relatively high number of successful packets is maintained. This characteristic implies a significant sensitivity to the ratio of successes to failures within the given sample space.
Therefore, in this work, we define the confidence from one node to another, based on the consistency of interaction utilising the subjective logic approach as:
$c_{ik} = b + u \cdot r$
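Equation (6) can be sketched using the standard evidence-to-opinion mapping of subjective logic, $b = \alpha/(\alpha+\beta+2)$ and $u = 2/(\alpha+\beta+2)$; note that this mapping comes from Jøsang's framework [37] rather than being restated here, and the base rate of 0.5 is an illustrative default:

def sl_confidence(alpha, beta, base_rate=0.5):
    # Expected behaviour c_ik = b + u * r from positive (alpha) and
    # negative (beta) evidence counts.
    total = alpha + beta + 2.0
    b = alpha / total
    u = 2.0 / total
    return b + u * base_rate

print(sl_confidence(0, 0))   # 0.5: no evidence, falls back to the base rate
print(sl_confidence(50, 2))  # ~0.94: many successes, few failures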

5.2.2. Familiarity as a Measure of Similarity

While confidence measures the degree of consistency by taking into account the uncertainty within a continuous interaction between two entities, it does not capture the depth of interactions compared to other neighbours. In a collaborative network, it is important to expand the concept of similarities to include the behaviour of neighbours as a form of familiarity. Therefore, we further expand the notation and define the concept of “familiarity” as the measure of closeness between nodes, which is determined by the duration of adjacency between the target node and the recommended node. This concept emphasizes that nodes should give more weight to suggestions from neighbours with whom they have a long-term relationship, as opposed to those with a short-term association. This approach to defining relational familiarity highlights the importance of the frequency and duration of interactions in evaluating the closeness between different nodes, as noted in [30]:
$f_{ik} = \dfrac{num_{ik}}{num_{iN}} \cdot \tau^{\frac{1}{num_{ik}}},$
where $num_{ik}$ is the number of successful communications between i and k, and $num_{iN}$ is the number of successful communications i has with all N neighbours. $\tau$ is an adjustment factor for the number of communications, in the range between 0 and 1.

5.2.3. Similarity Computation

In this model, we articulate the concept of similarity based on two aspects: confidence based on subjective logic as well as familiarity among neighbours. Both variables allow for a balanced consideration of both abstract trust (based on subjective logic) and empirical evidence (based on direct interactions), and they are integrated using a weighted sum as follows:
$s_{ik}(t) = w_1 \cdot f_{ik} + w_2 \cdot c_{ik},$
where $s_{ik}(t)$ represents the similarity score between node i and node k at time t, $f_{ik}$ denotes the familiarity score between the nodes, $c_{ik}$ is the confidence score derived from subjective logic, and $w_1$ and $w_2$ are the weights assigned to the familiarity and confidence scores, respectively.
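A sketch combining Equations (6)-(8) follows (our own code; the equal weights and $\tau = 0.5$ are illustrative assumptions, not values fixed by the model):

def familiarity(num_ik, num_iN, tau=0.5):
    # Eq. (7): share of i's successful communications that involve k,
    # damped by tau**(1/num_ik) so that short histories count for less.
    if num_ik == 0 or num_iN == 0:
        return 0.0
    return (num_ik / num_iN) * tau ** (1.0 / num_ik)

def similarity(f_ik, c_ik, w1=0.5, w2=0.5):
    # Eq. (8): weighted blend of familiarity and confidence.
    return w1 * f_ik + w2 * c_ik

# e.g. similarity(familiarity(20, 60), sl_confidence(18, 2)), with
# sl_confidence as in the sketch of Section 5.2.1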

5.3. Belief Process

Based on the assumptions we highlighted in Section 4.1, we introduce the belief function, outlined in Algorithm 2, which evaluates the credibility of recommendations by integrating several key factors: freshness, similarity, trustworthiness, and decay. Each factor contributes to the overall belief score, which is used to assess the belief in recommendations.
The freshness factor, $\Gamma_{kj}$, emphasizes the importance of the recency of the recommendation, ensuring that more recent recommendations are given higher weight. It adheres to a power-law distribution, expressed as [25]:
$\Gamma_{kj} = \eta_{sc} \cdot (t - t_{T_{kj}})^{-\varepsilon},$
where $\eta_{sc}$ is a predetermined scaling constant, $t_{T_{kj}}$ is the moment when $T_{kj}(t)$ was formed, and $\varepsilon$ is the exponent of the power law. This formula underscores the significance of the recency and freshness of the trust evaluation made by a node.
The similarity factor, $s_{ik}(t)$, quantifies the extent of similarity between the trustor's previous interactions with recommenders, as detailed in Section 5.2.3.
The decay factor, d, accounts for the degradation of the trust score over time. This is particularly important to adjust the weight of the trust score based on how recent the interactions are. The decay is computed using the technique provided by MATMU.
The overall belief score, $B_{ijk}$, is computed by integrating the aforementioned factors. The integrated formula is:
$B_{ijk} = \dfrac{s_{ik}(t) \times \Gamma_{kj} \times (1 - d_{T_{ij}}) \times T_{ki}(t)}{\sum_{k'} s_{ik'}(t) \times \Gamma_{k'j} \times (1 - d_{T_{ij}}) \times T_{k'i}(t)},$
where $s_{ik}(t)$ is the similarity factor, $\Gamma_{kj}$ is the freshness factor, $d_{T_{ij}}$ is the decay factor for the trust score that the trustor i holds towards the trustee j, and $T_{ki}(t)$ is the trustworthiness of the recommender k at time t; the denominator normalises the score over all recommenders $k'$. Given the linear computational complexity of the proposed belief function and the relatively small number of metrics involved, the computational cost is often negligible in practical applications.
Algorithm 2 Belief Process
Input: List of received recommendations $R_j(t) = \{T_{k_1 j}, T_{k_2 j}, \ldots, T_{k_n j}\}$
Output: Weight of each recommender
1: Initialize evidence collection variables
2: for each $T_{kj}$ in $R_j(t)$ do
3:     compute $s_{ik}(t)$
4:     compute $\Gamma_{kj}$
5:     compute $d_{T_{ij}}$
6:     find $T_{ik}$
7:     $B_{ijk} = s_{ik}(t) \cdot \Gamma_{kj} \cdot T_{ik} \cdot (1 - d_{T_{ij}})$
8:     scale the results
9: end for
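Algorithm 2 reduces to one product per recommender followed by scaling; a minimal Python sketch is given below (the power-law freshness uses illustrative constants $\eta_{sc} = 1$ and $\varepsilon = 0.5$, which are tunable rather than fixed by the model):

def freshness(t_now, t_formed, eta_sc=1.0, eps=0.5):
    # Eq. (9): power-law freshness of an opinion formed at t_formed;
    # the small floor avoids a division by zero for brand-new opinions.
    return eta_sc * max(t_now - t_formed, 1e-6) ** -eps

def belief_weights(recommenders, t_now, decay_ij):
    # Eq. (10) sketch. recommenders: {k: (s_ik, t_formed, T_ki)}.
    # Returns belief weights B_ijk normalised to sum to one.
    raw = {k: s_ik * freshness(t_now, t_f) * (1.0 - decay_ij) * T_ki
           for k, (s_ik, t_f, T_ki) in recommenders.items()}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()} if total > 0 else raw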

5.4. Penalty Process

The penalty process aims to adjust the trust value of a node based on its history of violations. The trust value for node k obtained by i at time t, denoted as $T_{ki}(t)$, is subject to a penalty factor that diminishes the trust based on the number of violations, $\lambda$, recorded for the node. This adjustment is represented by the following equation:
$T_{ki} = T_{ki}(t) \cdot \left(1 - \dfrac{1}{\exp(\lambda)}\right)$
In this equation, $\lambda$ represents the number of violations for which the node has been flagged during the recommendation evaluation process. The penalty factor, $1 - \frac{1}{\exp(\lambda)}$, decreases the trust value exponentially as the number of violations increases. This model ensures that each additional violation results in a progressively smaller decrement in the trust value, reflecting an exponential penalty mechanism, as shown in Figure 7.
Violations are determined based on two criteria: the count of appearances in the initial filter and the belief score of the nodes. Penalties are applied if a node is detected in the filtering process and has a belief score lower than the average belief score of all recommenders. This method ensures that nodes frequently flagged by the filter and with relatively low belief scores are appropriately penalized.
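A short sketch of Equation (11) illustrates this behaviour (our own code):

import math

def penalised_trust(T_ki, lam):
    # Eq. (11): trust after a penalty for lam recorded violations.
    return T_ki * (1.0 - 1.0 / math.exp(lam))

# The penalty factor rises towards 1, so each additional violation
# cuts less than the last (cf. Figure 7):
for lam in range(1, 5):
    print(lam, round(1.0 - 1.0 / math.exp(lam), 3))  # 0.632, 0.865, 0.95, 0.982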

6. Conceptual Analysis

In this section, we evaluate the performance of our proposed model using the recommendation-based trust model presented in [30] as a benchmark. For clarity, we have standardized the notation: trustor (i), trustee (j), and recommender (k), following the common naming in [27,28].

6.1. Property 1: Rejecting Recommendations from Malicious Recommenders and Accepting Recommendations from Honest Recommenders

We examine the capability of node i to reject recommendations from malicious nodes and accept those from honest nodes.
  • Claim 1: The proposed recommendation model can reject recommendations from malicious nodes.
  • Rationale: CFFTM employs several conditions to process collected recommendations. However, we demonstrate that node i might inadvertently accept recommendations from a malicious node k. In the CFFTM model, each recommendation is represented as $T_{kj}$, accompanied by two additional values, $Link_{kj}$ and $Com_{jk}$. Here, $Com_{jk}$ denotes the communication status between the recommender node and the trustee node, while $Link_{kj}$ signifies the link status when the trust score is obtained. These values are gathered by the recommender and included in the packet sent to the trustor, who uses them to assess the recommendation.
The CFFTM model begins by filtering unreliable recommendations from the original trust list, such that for each recommendation $T_{kj}$, unreliable values are identified as those where $|T_{kj} - T_{ij}| \geq \theta$. Subsequently, the model assesses the link quality using the $Link_{kj}$ data provided by the recommender about its interaction with trustee j. If the link state $Link_{kj}$ is below a certain threshold, the recommendation is deemed erroneous due to a link error. Otherwise, it is considered dishonest. A new list of recommendations is then generated, excluding both erroneous and dishonest recommendations.
CFFTM succeeds in rejecting dishonest recommendations as long as $T_{ij}$ is current and up to date. Although the values of $\theta$ and the link state threshold are not specified in [30], we argue that relying solely on the deviation test may still result in the acceptance of dishonest recommendations. This limitation might restrict the model's applicability to specific situations, depending on the density and environmental conditions of the underwater network. One example is when a significant amount of time has passed since the direct interaction between i and j. Due to the decay property of trust, the stored trust will also decrease. Malicious nodes, aware of this, can engage in bad-mouthing j, distributing lower scores, which are then accepted by CFFTM. Since $T_{ij}$ degrades over time due to the sliding time window with an exponential decay mechanism, the model may end up filtering all honest recommendations as dishonest if there is no interaction in the current period. For example, at a previous time t, node i estimates the trust score towards node j to be 0.9. Without interactions for subsequent periods, the trust score decays with a factor $\alpha = 0.8$ over a window size of 5, yielding the sequence: 0.9, 0.72, 0.576, 0.4608, 0.3686. A dishonest recommender k then propagates $T_{kj} = 0.2$ (bad-mouthing j, where the actual $T_{kj}$ should be >0.5). In this scenario, i will accept k's recommendation given that $|T_{kj} - T_{ij}| \leq \theta$, leading the model to accept the malicious recommendation.
To address this issue, the proposed model employs multiple metrics to collectively determine the weight of each recommendation rather than rejecting them outright. Dishonest recommendations are identified by obtaining lower weights, rendering them ineffective. This is achieved by utilizing the proposed initial filtering method and the belief function to enhance the weighting mechanisms affecting the evaluation of trust. Initially, we prioritize the prevailing opinion to filter out potential malicious outliers. However, due to the complexity and unpredictability of underwater communication, where an honest recommender can be treated as an outlier, we further scrutinize these outliers against individual opinions obtained by node i. While this approach alone may still allow some advanced collaborative attacks to succeed, the belief method acts as an additional layer of filtering, serving as a secondary protection against dishonest recommendations that bypass the proposed initial filtering method. This method adjusts the weight of recommendations by considering several factors that collectively work to reduce the influence of untrustworthy sources, outdated stored opinions, and nodes with little similarity to the trustor over time. The similarity factor ensures that recommendations from nodes with historically similar behaviour to the trustor are given more weight, thereby prioritizing reliable sources. If a malicious node has a history of interactions that diverge significantly from the trustor’s interactions, the similarity factor between them is low, and recommendations from that node will carry less weight. The freshness factor prioritizes recent interactions, diminishing the influence of outdated recommendations, which is critical in dynamic environments where trust levels fluctuate over time. If a malicious node provided a recommendation in the past but has not interacted recently, the freshness factor will lower the weight of that recommendation. Finally, the inherent trustworthiness of the recommender node at the current time, influenced by historical behaviour, ensures that consistently honest nodes are trusted more, while those with a history of dishonesty are penalized. If a node has repeatedly provided false recommendations, its current trustworthiness will be low, making its recommendations less influential.
  • Claim 2: The proposed recommendation model can accept recommendations from honest nodes.
  • Rationale: Both CFFTM and the proposed model succeed in accepting recommendations from honest recommenders if the nodes have a valid and current personal opinion on j. However, if $T_{ij}$ is outdated and new information becomes available, CFFTM might misclassify honest recommendations as dishonest. Suppose $T_{ij} = 0.4$ and all neighbours send recommendations for j in the range of 0.8 to 0.9. Since the deviation test is the primary filter, if $\theta \leq 0.3$, then all honest nodes will be deemed dishonest, and their recommendations will be rejected.
With the proposed filtering method, we first identify outliers among the recommendations and then apply the proposed deviation test. If honest recommendations constitute the majority (most recommendations fall between 0.8 and 0.9), they are accepted according to the majority rule. If dishonest recommendations dominate and an honest $T_{kj}$ is consequently flagged as an outlier, the subsequent deviation test ensures fair filtering, favouring personal experience while accounting for the decay in Equation (2); a sketch of this two-stage filter is given below. Moreover, with the belief function, the weighting is further scaled based on the behaviour of recommenders, as described in the previous claim.
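As a concrete illustration, the following sketch applies a MAD-based outlier test (MAD is listed in the abbreviations; the cut-off of 3 and the exact composition of the two stages here are our assumptions) followed by the deviation test against the trustor's own opinion:

```python
import statistics

def filter_recommendations(recs: dict, T_ij: float,
                           theta: float = 0.1, mad_cutoff: float = 3.0) -> dict:
    """Two-stage filter sketch: majority-based outlier detection via the
    median absolute deviation, then a deviation test against i's own
    opinion T_ij for any flagged outliers."""
    values = list(recs.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9

    kept = {}
    for k, t_kj in recs.items():
        if abs(t_kj - med) / mad <= mad_cutoff:   # agrees with the majority
            kept[k] = t_kj
        elif abs(t_kj - T_ij) <= theta:           # outlier, but matches i's view
            kept[k] = t_kj
    return kept

recs = {"a": 0.85, "b": 0.88, "c": 0.82, "d": 0.20}  # d bad-mouths j
print(filter_recommendations(recs, T_ij=0.8))        # d is filtered out
```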

6.2. Property 2: Punishment for Malicious Recommendations and Avoiding Punishment for Accurate Recommendations

We examine the capability of node i to correctly punish k when it engages in dishonest recommendation acts.
  • Claim 3: The proposed recommendation model correctly punishes dishonest recommenders and avoids unnecessary punishment for accurate recommendations.
  • Rationale: In the CFFTM model, $Link_{kj}$ is used to filter recommendations, identifying either dishonest recommenders or those struggling with communication. The model then identifies dishonest recommendations based on a threshold, such that nodes with good link quality that nevertheless provide recommendations deviating from personal trust are treated as malicious and punished accordingly. This approach can lead to inaccurate penalties, since the recommenders themselves provide $Link_{kj}$ and may forge these values to avoid punishment. We contend that this decision should rely solely on the information obtained by i about k. Therefore, the proposed model penalises a node based on a threshold on the number of violations of the initial filter, combined with evidence of lower belief compared to other recommenders (a sketch of this logic follows). This punishment is reflected in the trust score towards k, addressing the issue illustrated in Figure 2, where a dishonest recommender could continue being perceived as trustworthy despite compromising the accuracy of the trust model through the propagation of dishonest recommendations. Revisiting the earlier examples of discrepancies in recommendations, we observe that an honest recommender might be flagged as dishonest despite reporting an accurate, good $Link_{kj}$; this misclassification would result in an honest node being unfairly punished as a malicious recommender.
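A minimal sketch of this penalty logic is given below. The violation threshold of 2 matches the value used in the experiments of Section 7; the ledger structure and the penalty factor of 0.5 are illustrative assumptions:

```python
from collections import defaultdict

class RecommenderLedger:
    """Tracks, per recommender k, how often k failed the initial filter,
    using only evidence held locally by the trustor i."""

    def __init__(self, violation_threshold: int = 2, penalty: float = 0.5):
        self.violations = defaultdict(int)
        self.violation_threshold = violation_threshold
        self.penalty = penalty

    def record(self, k: str, flagged: bool) -> None:
        """Count a violation whenever k's recommendation was filtered out."""
        if flagged:
            self.violations[k] += 1

    def punished_trust(self, k: str, T_ik: float, low_belief: bool) -> float:
        """Scale i's trust toward k only on repeated local evidence plus
        a belief score lower than that of k's peers."""
        if self.violations[k] >= self.violation_threshold and low_belief:
            return T_ik * self.penalty
        return T_ik
```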

7. Experiments and Performance Evaluation

In this section, we outline the methodology and experimental setup employed to rigorously evaluate the performance of the proposed recommendation evaluation process within an underwater network environment. To construct a realistic simulation environment, we developed a trust-based model integrated with the AquaSim-NG network simulator [41], a discrete-event simulation tool built on NS-3 [42]. This approach was selected for its ability to accurately simulate the distinctive characteristics of underwater communication networks, utilizing the sound speed profile outlined in [43] and incorporating environmental variations based on data in [44]. Additionally, we employed the Ekman mobility model to replicate the dynamic behaviour of water currents. The network was configured with multiple randomly placed underwater nodes, adhering to the specifications detailed in [33]. Within this network, randomly selected nodes generated data packets following an on/off pattern, with data exchanged between nodes and collected by a sink node positioned at the sea surface. Sending separate packets for recommendations introduces communication overhead, which can affect overall network performance; this impact depends on several factors, including the frequency of trust updates, the application's latency requirements, and the underlying routing protocol. In these experiments, we therefore embedded the recommendation information within the packet headers, as sketched below.
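For illustration, a hypothetical header layout of this kind is sketched below; the field names and types are our assumptions rather than the implemented packet format:

```python
from dataclasses import dataclass

@dataclass
class DataPacketHeader:
    """Hypothetical header embedding a recommendation in a routine data
    packet, avoiding separate recommendation packets and their overhead
    on the acoustic channel."""
    src_id: int
    dst_id: int
    seq_no: int
    rec_target: int    # trustee j that the piggybacked recommendation concerns
    rec_score: float   # T_kj; would be quantised before transmission
    timestamp: float   # lets the receiver apply the freshness factor
```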
Within this trust-based model, we employed the MATMU model to compute the direct trust of each node [33]. The proposed recommendation evaluation techniques were applied to assess indirect trust based on 1-hop recommendations. We introduced varying proportions of malicious and dishonest recommendations in each simulation test. Based on our experiments, the thresholds for deviation and violation were set at 0.1 and 2, respectively, as these values proved effective in enhancing the model's accuracy.
The methodology is designed to comprehensively assess the effectiveness, attack resistance, and comparative performance of the proposed approach. Initially, we focused on evaluating the feasibility of the recommendation evaluation process in detecting misbehaving nodes, which was essential to establish the baseline performance of the model. Subsequently, we examined the attack resistance of the network by simulating scenarios both with and without the proposed recommendation filtering method; this analysis demonstrates the model's robustness in mitigating malicious behaviour. Finally, we conducted a comparative analysis against existing methods to evaluate the relative performance of the proposed approach, benchmarking our method against current models to provide a clear understanding of its advantages and limitations.
Details of the simulation parameters and specific configurations are presented in Table 1. The following subsections proceed to provide a thorough and justified evaluation of the proposed recommendation process, highlighting its strengths and areas for potential improvement.

7.1. Effectiveness Evaluation of the Proposed Model

We conducted 20 trials with different network topologies, each randomly distributing the underwater nodes, to evaluate the model's performance under diverse and randomized deployments. Consistent with similar studies such as [45], the results are averaged over multiple runs. In each trial, we examined the effectiveness of the proposed recommendation evaluation model, varying the percentage of malicious recommendations from 0% to 50%, with 0% indicating no attacks. We then examined the trust establishment process under both bad-mouthing and ballot-stuffing recommendation attacks, where a proportion of neighbouring nodes propagate dishonest recommendations. In the case of a ballot-stuffing attack, we introduced malicious nodes into the network that behaved badly: in these experiments, malicious nodes refused to cooperate after consuming their energy, mimicking selfish attacks, and dishonest recommendations artificially boosted the trustworthiness of these malicious nodes by inflating their trust scores. Within each test, the initial trust evaluation began at 40 s of the simulation and was updated periodically every 60 s [33]; the resulting trial schedule is sketched below.
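The evaluation schedule and trial grid described above amount to the following (the AquaSim-NG simulator calls themselves are omitted):

```python
# Trust evaluation instants within one 1800 s run: first at 40 s,
# then every 60 s thereafter (values from Table 1 and the text above).
T_START, T_UPDATE, T_END = 40.0, 60.0, 1800.0
eval_times = [T_START + n * T_UPDATE
              for n in range(int((T_END - T_START) // T_UPDATE) + 1)]

# 20 random topologies for each attack percentage from 0% to 50%.
settings = [(trial, pct) for pct in range(0, 60, 10) for trial in range(20)]

print(len(eval_times), len(settings))   # 30 evaluation instants, 120 runs
```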
We examine the impact of different attack percentages on the belief score ($B_{ij}^{k}$) obtained in the proposed model among benign and malicious nodes. The belief scores of the nodes were measured, and the results are presented in Figure 8 and Figure 9, which represent the frequency distribution of obtained belief scores for both benign and malicious nodes across all experiments under ballot stuffing and bad-mouthing, respectively.
Figure 8 shows how frequently each belief value appears among the nodes, effectively illustrating the impact of varying attack intensities on the belief assignments to benign and malicious nodes. At 0% attack, only benign nodes are present. The histogram for benign nodes (shown in blue) is relatively narrow, indicating a tight distribution around the mean belief. The belief score obtained here reflects the weight assigned by the model to each recommender among the received recommendations, with the variation reflecting the number of recommendations received by nodes and the associated weight assigned to them. This suggests that overall, benign nodes have consistent belief scores under normal conditions, with slight variations depending on the number of recommendations and the factors considered in the computation of the belief model, such as similarity, trustworthiness, and freshness.
As the attack percentage increases from 10% to 50%, the graph displays both benign (blue) and malicious (red) nodes. There is a noticeable shift in the belief assigned to malicious nodes compared to benign ones, with malicious nodes typically receiving lower belief values. This indicates that the proposed model is capturing their altered and potentially disruptive behaviour. The distinction between the blue and red distributions diminishes as the attack percentage rises, highlighting the increasing influence of malicious actions. In the worst-case scenario, with 50% of nodes being malicious, the proposed model reduces the belief scores for the malicious nodes but still faces difficulty in fully separating the distributions compared to less intensive attacks. This challenge arises because when half of the network consists of malicious nodes, their collective influence becomes significant enough to obscure the clear distinction between benign and malicious behaviour.
Similarly, Figure 9 illustrates the belief scores acquired during bad-mouthing attacks. The proposed model effectively recognises malicious nodes by lowering their belief scores. However, its performance declines in scenarios where half of the recommendations (50%) are malicious, though it still exhibits slight resistance favouring benign nodes. This behaviour is expected, as malicious recommendations attempt to mask their deceptive acts with seemingly normal behaviour, thereby making it more challenging for the model to accurately distinguish between malicious and benign nodes under such conditions.
We assess the indirect trust obtained towards targeted nodes by each node in the network following Equation (1). Figure 10 shows the resulting indirect trust across 20 distinct network topologies and varying attack scenarios. For each attack, the indirect trust values are collected and their distribution assessed through quartiles (Q1, median, and Q3), highlighting the central tendency and variability. The proposed recommendation evaluation shows fair weighting under both attack types: high trust scores for benign nodes under bad-mouthing attacks (Figure 10b) and low trust scores for malicious nodes being promoted under ballot-stuffing attacks (Figure 10a). It also shows resistance as the attack percentage increases. Across trials, we noticed some outliers, mainly due to changes in the environment and simulation settings.
Figure 11 shows indirect trust evaluations under dishonest recommendation attacks, determined across twenty trials. Within each graph, the line plot aggregates the average indirect trust scores from all trials, providing a compact view of how trust in nodes is affected over time during each specific attack. The shaded area encapsulates the spread of scores (from min to max) across all nodes for the different attack percentages. In Figure 11a, we assess the indirect trust score for malicious nodes that refuse to collaborate after consuming 10% of their energy. Initially, the indirect trust score is high, indicating appropriate behaviour from the targeted nodes. Upon misbehaviour, in the case of 0% dishonest recommendation attacks, we observe a general decline in the indirect trust score of these malicious nodes. When dishonest recommendations are introduced, the proposed evaluation model demonstrates accuracy by further reducing the indirect trust scores and maintaining stability over time, even as the attack persists. A similar observation can be made in Figure 11b, which shows the indirect trust towards benign nodes facing bad-mouthing attacks.

7.2. Attack Resistance

To evaluate the effectiveness of the proposed model, we examined three distinct variables, as shown in Figure 12. The first is the anticipated value, which signifies the recommended trust level of a node assuming no unreliable recommendations in the network; this ideal scenario assumes a high trust score (0.9) for normal nodes and a low trust score (0.25) for malicious nodes. The second is the recommended trust level computed using the defensive recommendation evaluation process proposed in this paper. The third is the average recommended trust level without any validation of the received recommendations.
As depicted in Figure 12b, during bad-mouthing attacks, the trustworthiness of normal nodes declines as the number of malicious nodes increases. Similarly, in ballot-stuffing attacks, shown in Figure 12a, the trust value of malicious nodes increases with their numbers, but the changes under our proposed method remain manageable. This is because, without a filtering mechanism, every recommender is assigned the same weight, leading evaluating nodes to accept all propagated recommendation values indiscriminately. Consequently, as the proportion of malicious nodes increases, the dissemination of false information in the recommendation sequence escalates, significantly impacting the final trust value. In contrast, the recommendation evaluation method discussed in this paper adjusts recommendation trust values based on various attributes after filtering the received recommendations.

7.3. Comparison with Similar Works

To assess the effectiveness of the proposed model against comparable models in the field, we selected CATM and CFFTM, both trust models with recommendation defences for underwater networks. To the best of our knowledge, these two models are the most recent and thus provide a valid basis for comparison. The results of this analysis are shown in Figure 13 and Figure 14, representing the evaluation under ballot-stuffing and bad-mouthing attacks, respectively. The testing metrics are defined as follows:
  • Accuracy rate: the number of malicious nodes detected as a percentage of the total number of malicious nodes.
  • False detection rate: the number of misidentified normal nodes as a percentage of the total number of normal nodes.
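Expressed as code, the two metrics are simple ratios over the detection outcome of a run; the counts in the example are hypothetical:

```python
def accuracy_rate(detected_malicious: int, total_malicious: int) -> float:
    """Malicious nodes correctly detected / total malicious nodes."""
    return detected_malicious / total_malicious

def false_detection_rate(misidentified_normal: int, total_normal: int) -> float:
    """Normal nodes wrongly flagged as malicious / total normal nodes."""
    return misidentified_normal / total_normal

# e.g., 9 of 10 malicious nodes caught, 2 of 90 normal nodes misflagged:
print(accuracy_rate(9, 10), false_detection_rate(2, 90))  # 0.9 0.0222...
```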
First, we evaluate the average indirect trust of nodes under dishonest recommendation attacks. We aim to examine the average recommendation trust under varying proportions of dishonest recommenders. All models exhibit some resistance to dishonest recommendations when the proportion of malicious recommendations is small. However, as the proportion of malicious nodes increases, the performance of each model fluctuates in terms of resistance to dishonest recommendations. Specifically, in Figure 13a, under ballot-stuffing attacks, CATM shows less resistance as the percentage of dishonest recommenders in the network increases. This is because CATM’s validation process focuses on the performance of recommenders without accounting for the deception of recommender nodes. Conversely, CFFTM demonstrates high resistance to the attack due to its filtering technique, as explained earlier. Our proposed model provides more accurate detection of attacks, even as the percentage of malicious nodes increases. Figure 14a illustrates the resulting average indirect trust under bad-mouthing attacks in the same setting, where the nodes under attack are promoted as malicious. The proposed model excels in detecting dishonest recommendations, resulting in consistently high trust towards the evaluated nodes. As expected, an increase in the number of malicious nodes challenges all models’ ability to detect dishonest recommendations, yet the proposed model remains within a manageable range.
We evaluate the accuracy and false detection of each model as a function of the varied proportions of malicious recommendations. The proposed model demonstrates the highest resilience, maintaining near-perfect accuracy against up to 30% attack intensity for both ballot-stuffing (Figure 13b) and bad-mouthing attacks (Figure 14b). Beyond this point, there is a gradual decline, but the model still outperforms others significantly. In contrast, the CFFTM model, while initially accurate, shows a significant drop in performance even at lower levels of attack intensity. This is mainly attributed to the high false positive rate, where benign nodes are incorrectly considered malicious, as shown in Figure 13c and Figure 14c.

8. Conclusions

This paper proposed a novel recommendation evaluation process to detect dishonest recommendations within the context of trust establishment among IoUT nodes. The proposed method involves two main stages for evaluating recommendations, aiming to identify the most deceptive, dishonest recommenders. This approach incorporates both filtering and weighting strategies to establish the belief in each recommender. Firstly, a detection filtering mechanism is introduced, which combines outlier detection to identify recommendations that diverge from the norm with deviation mechanisms that favour personal experience. Secondly, a belief function is developed to weight recommendations based on social validation factors such as freshness, similarity, trustworthiness, and the decay of trust. Through case studies and simulations, we demonstrated the effectiveness and robustness of the proposed approach compared to current state-of-the-art methods.
In future work, we aim to investigate the relationship between the number of recommendations and the sparsity of the network in the context of detecting dishonest recommendations. This includes studying how network density and connectivity influence the effectiveness of recommendation-based trust models, with the goal of optimizing detection mechanisms for various IoUT configurations. Additionally, we aim to explore more sophisticated attack behaviours beyond bad-mouthing and ballot stuffing. A key direction for future research within the constraints of IoUT will be to explore integrating the trust model with energy-efficient routing protocols, enabling a comprehensive evaluation of its effects on energy consumption and network performance.

Author Contributions

Conceptualisation, A.A., X.C. and S.F.; methodology, A.A., X.C. and S.F.; validation, A.A., X.C. and S.F.; formal analysis, A.A.; investigation, A.A.; data curation, A.A.; writing—original draft preparation, A.A.; writing—review and editing, A.A., X.C. and S.F.; visualization, A.A., X.C. and S.F.; supervision, X.C. and S.F.; project administration, A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data can be shared upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

i: Trustor node
j: Trustee node
k: Recommender node
$T_{ij}$: Trust score obtained by trustor i toward trustee j
$T_{ik}$: Trust score obtained by trustor i toward recommender k
$T_{kj}$: Trust score obtained by recommender k toward trustee j
IoUT: Internet of Underwater Things
MATMU: Mobility-Aware Trust Model for IoUT
MAD: Median Absolute Deviation
LOF: Local Outlier Factor
AquaSim-NG: Aqua-Sim Next Generation
NS-3: Network Simulator 3

References

  1. Tyagi, H.; Kumar, R.; Pandey, S.K. A detailed study on trust management techniques for security and privacy in IoT: Challenges, trends, and research directions. High-Confid. Comput. 2023, 3, 100127. [Google Scholar] [CrossRef]
  2. Abdul-Rahman, A.; Hailes, S. A distributed trust model. In Proceedings of the 1997 workshop on New Security Paradigms, Langdale, Cumbria, 23–26 September 1997; pp. 48–60. [Google Scholar]
  3. Yan, Z.; Zhang, P.; Vasilakos, A.V. A survey on trust management for Internet of Things. J. Netw. Comput. Appl. 2014, 42, 120–134. [Google Scholar] [CrossRef]
  4. Fang, W.; Zhang, W.; Chen, W.; Pan, T.; Ni, Y.; Yang, Y. Trust-based attack and defense in wireless sensor networks: A survey. Wirel. Commun. Mob. Comput. 2020, 2020, 2643546. [Google Scholar] [CrossRef]
  5. Mohsan, S.A.H.; Mazinani, A.; Othman, N.Q.H.; Amjad, H. Towards the internet of underwater things: A comprehensive survey. Earth Sci. Inform. 2022, 15, 735–764. [Google Scholar] [CrossRef]
  6. Yisa, A.G.; Dargahi, T.; Belguith, S.; Hammoudeh, M. Security challenges of internet of underwater things: A systematic literature review. Trans. Emerg. Telecommun. Technol. 2021, 32, e4203. [Google Scholar] [CrossRef]
  7. Almutairi, A.; He, Y.; Furnell, S. A Multi-Level Trust Framework for the Internet of Underwater Things. In Proceedings of the 2022 IEEE International Conference on Cyber Security and Resilience (CSR), Virtual, 27–29 July 2022; pp. 370–375. [Google Scholar]
  8. Domingo, M.C. An overview of the internet of underwater things. J. Netw. Comput. Appl. 2012, 35, 1879–1890. [Google Scholar] [CrossRef]
  9. General Dynamics. Bluefin-21 Unmanned Underwater Vehicle (UUV). General Dynamics. 2023. Available online: https://gdmissionsystems.com/products/underwater-vehicles/bluefin-21-autonomous-underwater-vehicle (accessed on 20 March 2024).
  10. Valeport. MIDAS WLR Water Level Recorder; Teledyne Valeport Ltd: Totnes, UK, 2023; Available online: https://www.valeport.co.uk/products/midas-wlr-water-level-recorder/ (accessed on 20 March 2023).
  11. Lurton, X. An Introduction to Underwater Acoustics: Principles and Applications; Springer: Berlin/Heidelberg, Germany, 2002; Volume 2. [Google Scholar]
  12. Alam, S.; Zardari, S.; Noor, S.; Ahmed, S.; Mouratidis, H. Trust management in social internet of things (SIoT): A survey. IEEE Access 2022, 10, 108924–108954. [Google Scholar] [CrossRef]
  13. Marche, C.; Nitti, M. Trust-related attacks and their detection: A trust management model for the social IoT. IEEE Trans. Netw. Serv. Manag. 2020, 18, 3297–3308. [Google Scholar] [CrossRef]
  14. Tan, S.; Li, X.; Dong, Q. Trust based routing mechanism for securing OSLR-based MANET. Ad Hoc Netw. 2015, 30, 84–98. [Google Scholar] [CrossRef]
  15. Khedim, F.; Labraoui, N.; Lehsaini, M. Dishonest recommendation attacks in wireless sensor networks: A survey. In Proceedings of the 2015 12th International Symposium on Programming and Systems (ISPS), Algiers, Algeria, 28–30 April 2015; pp. 1–10. [Google Scholar]
  16. Jiang, J.; Hua, S.; Han, G.; Li, A.; Lin, C. Controversy-adjudication-based trust management mechanism in the internet of underwater things. IEEE Internet Things J. 2022, 10, 2603–2614. [Google Scholar] [CrossRef]
  17. Iltaf, N.; Ghafoor, A.; Zia, U. A mechanism for detecting dishonest recommendation in indirect trust computation. EURASIP J. Wirel. Commun. Netw. 2013, 2013, 189. [Google Scholar] [CrossRef]
  18. Muller, T.; Liu, Y.; Zhang, J. The fallacy of endogenous discounting of trust recommendations. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, Istanbul, Turkey, 4–8 May 2015; pp. 563–572. [Google Scholar]
  19. Hua, S.; Jiang, J.; Han, G. A lightweight Trust Management mechanism based on Conflict Adjudication in Underwater Acoustic Sensor Networks. In Proceedings of the 2021 Computing, Communications and IoT Applications (ComComAp), Shenzhen, China, 26–28 November 2021; pp. 258–262. [Google Scholar]
  20. Michiardi, P.; Molva, R. Core: A collaborative reputation mechanism to enforce node cooperation in mobile ad hoc networks. In Proceedings of the Advanced Communications and Multimedia Security: IFIP TC6/TC11 Sixth Joint Working Conference on Communications and Multimedia Security, Portorož, Slovenia, 26–27 September 2002; Springer: Boston, MA, USA, 2002; pp. 107–121. [Google Scholar]
  21. Buchegger, S.; Le Boudec, J.Y. Performance analysis of the CONFIDANT protocol. In Proceedings of the 3rd ACM International Symposium on Mobile ad Hoc Networking & Computing, Lausanne, Switzerland, 9–11 June 2002; pp. 226–236. [Google Scholar]
  22. Srinivasan, A.; Teitelbaum, J.; Wu, J. DRBTS: Distributed reputation-based beacon trust system. In Proceedings of the 2006 2nd IEEE International Symposium on Dependable, Autonomic and Secure Computing, Indianapolis, IN, USA, 29 September–1 October 2006; pp. 277–283. [Google Scholar]
  23. Zouridaki, C.; Mark, B.L.; Hejmo, M.; Thomas, R.K. E-Hermes: A robust cooperative trust establishment scheme for mobile ad hoc networks. Ad Hoc Netw. 2009, 7, 1156–1168. [Google Scholar] [CrossRef]
  24. Liang, W.; Long, J.; Weng, T.H.; Chen, X.; Li, K.C.; Zomaya, A.Y. TBRS: A trust based recommendation scheme for vehicular CPS network. Future Gener. Comput. Syst. 2019, 92, 383–398. [Google Scholar] [CrossRef]
  25. Mahmood, A.; Sheng, Q.Z.; Zhang, W.E.; Wang, Y.; Sagar, S. Toward a distributed trust management system for misbehavior detection in the internet of vehicles. ACM Trans. Cyber-Phys. Syst. 2023, 7, 1–25. [Google Scholar] [CrossRef]
  26. Huang, X.; Yu, R.; Kang, J.; Zhang, Y. Distributed reputation management for secure and efficient vehicular edge computing and networks. IEEE Access 2017, 5, 25408–25420. [Google Scholar] [CrossRef]
  27. Shabut, A.M.; Dahal, K.P.; Bista, S.K.; Awan, I.U. Recommendation based trust model with an effective defence scheme for MANETs. IEEE Trans. Mob. Comput. 2014, 14, 2101–2115. [Google Scholar] [CrossRef]
  28. Adewuyi, A.A.; Cheng, H.; Shi, Q.; Cao, J.; MacDermott, Á.; Wang, X. CTRUST: A dynamic trust model for collaborative applications in the Internet of Things. IEEE Internet Things J. 2019, 6, 5432–5445. [Google Scholar] [CrossRef]
  29. Du, J.; Han, G.; Lin, C.; Martínez-García, M. LTrust: An adaptive trust model based on LSTM for underwater acoustic sensor networks. IEEE Trans. Wirel. Commun. 2022, 21, 7314–7328. [Google Scholar] [CrossRef]
  30. Zhang, M.; Feng, R.; Zhang, H.; Su, Y. A recommendation management defense mechanism based on trust model in underwater acoustic sensor networks. Future Gener. Comput. Syst. 2023, 145, 466–477. [Google Scholar] [CrossRef]
  31. Cho, J.H.; Swami, A.; Chen, R. A survey on trust management for mobile ad hoc networks. IEEE Commun. Surv. Tutorials 2010, 13, 562–583. [Google Scholar] [CrossRef]
  32. Mohsan, S.A.H.; Li, Y.; Sadiq, M.; Liang, J.; Khan, M.A. Recent advances, future trends, applications and challenges of internet of underwater things (iout): A comprehensive review. J. Mar. Sci. Eng. 2023, 11, 124. [Google Scholar] [CrossRef]
  33. Almutairi, A.; Carpent, X.; Furnell, S. Towards a Mobility-Aware Trust Model for the Internet of Underwater Things. In Proceedings of the IFIP International Conference on ICT Systems Security and Privacy Protection, Edinburgh, UK, 12–14 June 2024; Springer: Cham, Switzerland, 2024; pp. 1–15. [Google Scholar]
  34. Wang, H.; Bah, M.J.; Hammad, M. Progress in outlier detection techniques: A survey. IEEE Access 2019, 7, 107964–108000. [Google Scholar] [CrossRef]
  35. Domingues, R.; Filippone, M.; Michiardi, P.; Zouaoui, J. A comparative evaluation of outlier detection algorithms: Experiments and analyses. Pattern Recognit. 2018, 74, 406–421. [Google Scholar] [CrossRef]
  36. Jøsang, A.; Ismail, R. The Beta Reputation System. In Proceedings of the 15th Bled Electronic Commerce Conference, Bled, Slovenia, 17–19 June 2002; Volume 84. [Google Scholar]
  37. Jøsang, A. A logic for uncertain probabilities. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2001, 9, 279–311. [Google Scholar] [CrossRef]
  38. Li, R.; Li, J.; Liu, P.; Kato, J. A novel hybrid trust management framework for MANETs. In Proceedings of the 29th IEEE International Conference on Distributed Computing Systems Workshops, Montreal, QC, Canada, 22–26 June 2009; pp. 251–256. [Google Scholar]
  39. Zouridaki, C.; Mark, B.L.; Hejmo, M.; Thomas, R.K. A quantitative trust establishment framework for reliable data packet delivery in MANETs. In Proceedings of the 3rd ACM Workshop on Security of Ad Hoc and Sensor Networks, Alexandria, VA, USA, 7 November 2005; pp. 1–10. [Google Scholar]
  40. Shabut, A.M.; Kaiser, M.S.; Dahal, K.P.; Chen, W. A multidimensional trust evaluation model for MANETs. J. Netw. Comput. Appl. 2018, 123, 32–41. [Google Scholar] [CrossRef]
  41. Martin, R.; Rajasekaran, S.; Peng, Z. Aqua-Sim Next generation: An NS-3 based underwater sensor network simulator. In Proceedings of the 12th International Conference on Underwater Networks & Systems, Halifax, NS, Canada, 6–8 November 2017; pp. 1–8. [Google Scholar]
  42. Campanile, L.; Gribaudo, M.; Iacono, M.; Marulli, F.; Mastroianni, M. Computer network simulation with ns-3: A systematic literature review. Electronics 2020, 9, 272. [Google Scholar] [CrossRef]
  43. Morozs, N.; Gorma, W.; Henson, B.T.; Shen, L.; Mitchell, P.D.; Zakharov, Y.V. Channel modeling for underwater acoustic network simulation. IEEE Access 2020, 8, 136151–136175. [Google Scholar] [CrossRef]
  44. Boyer, T.P.; García, H.E.; Locarnini, R.A.; Zweng, M.M.; Mishonov, A.V.; Reagan, J.R.; Weathers, K.A.; Baranova, O.K.; Paver, C.R.; Seidov, D. World Ocean Atlas 2018, Volume 3: Dissolved Oxygen, Apparent Oxygen Utilization, and Oxygen Saturation; NOAA National Centers for Environmental Information: Silver Spring, MD, USA, 2018. Available online: https://repository.library.noaa.gov/view/noaa/49137 (accessed on 20 March 2024).
  45. Yang, Z.; Li, L.; Gu, F.; Ling, X.; Hajiee, M. TADR-EAODV: A trust-aware dynamic routing algorithm based on extended AODV protocol for secure communications in wireless sensor networks. Internet Things 2022, 20, 100627. [Google Scholar] [CrossRef]
Figure 1. IoUT network structure.
Figure 2. The average trust score for dishonest recommenders over time obtained by CATM.
Figure 3. Proposed evaluation model.
Figure 4. Existing trust models.
Figure 5. Confidence values for varying interactions.
Figure 6. Confidence values for varying interactions as a function of expectations.
Figure 7. Penalty factor of trust as a function of $\lambda$.
Figure 8. Evaluation of belief ($B_{ij}^{k}$) over varied proportions of the ballot-stuffing attack.
Figure 9. Evaluation of $B_{ij}^{k}$ over varied proportions of the bad-mouthing attack.
Figure 10. Evaluation of the obtained indirect trust over trials (for each percentage, there are 115 data points (mean values), resulting in a total of 20,550 individual indirect trust values per attack type).
Figure 11. Evaluation of indirect trust obtained over time (solid line = average, shaded area = range from min to max across all nodes).
Figure 12. Evaluation of average indirect trust values obtained for nodes under attack.
Figure 13. Analysis of effectiveness under ballot-stuffing attack.
Figure 14. Analysis of effectiveness under bad-mouthing attack.
Table 1. Simulation parameters.

Simulation Variables: Simulation Time = 1800 s; Surface Wind = 8.5–27.2 knots; Propagation Model = range-based.
Node Variables: Number of Nodes = 20; Initial Energy = 10,000–70,000 watt; Transmission Range = 100, 120 m.
Network Variables: Routing Protocol = Vector Forwarding; Data Rate = 1000 bps; Packet Size = 40 bytes; Carrier Frequency = 25 kHz.
Recommendation Variables: deviation threshold $d_h$ = 0.1; violation threshold $v_h$ = 2; Attack Percentage = 10–50%; Recommendation Request = every 60 s.