Article

DIFshilling: A Diffusion Model for Shilling Attacks

1 College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
2 Academy of Military Science, Beijing 100097, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(6), 3412; https://doi.org/10.3390/app15063412
Submission received: 22 February 2025 / Revised: 17 March 2025 / Accepted: 19 March 2025 / Published: 20 March 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Recommender systems (RSs) are widely used in various domains, such as e-commerce, social media, and online content platforms, to guide users’ decision-making by suggesting items that match their preferences and interests. However, these systems are highly vulnerable to shilling attacks, where malicious users create fake profiles to manipulate the recommendation results, thereby influencing users’ decisions. Such attacks can severely degrade the quality and reliability of recommendations, undermining the trust in RSs. A comprehensive understanding of shilling attacks is critical not only for improving the robustness of RSs but also for designing effective countermeasures to mitigate their impact. Existing shilling attack methods often face significant challenges in achieving both invisibility (i.e., making fake profiles indistinguishable from legitimate ones) and transferability (i.e., the ability to work across different RSs). Many current approaches either fail to capture the natural distribution of real user data or are highly tailored to specific RS algorithms, limiting their general applicability and effectiveness. To overcome these limitations, we propose DIFshilling, a novel diffusion-based model for shilling attacks. DIFshilling leverages forward noising and reverse denoising techniques to better model the distribution of real user data, allowing it to generate fake users that are statistically similar to legitimate users, thus enhancing the invisibility of the attack. Unlike traditional methods, DIFshilling is independent of the specific recommendation algorithm, making it highly transferable across various RSs. We evaluate DIFshilling through extensive experiments on seven different victim RS models, demonstrating its superior transferability. The experimental results show that DIFshilling not only achieves outstanding effectiveness in terms of attack success but also exhibits strong adversarial defense capabilities, effectively evading detection mechanisms. Specifically, in experiments conducted on the ML100K dataset with the DGCF victim model, DIFshilling was able to increase the frequency of the targeted item by a factor of 15 while maintaining the lowest detection precision and recall, illustrating its ability to remain undetected by common defense techniques. These results underscore the potential of DIFshilling as a powerful tool for both evaluating the vulnerabilities of RS and designing more robust countermeasures.

1. Introduction

Recommender systems play a crucial role in recommending popular and personalized items to users, which helps address the problem of information overload. These systems effectively cater to user preferences and provide tailored recommendations. For instance, YouTube, one of the world’s largest video platforms, leverages recommender systems [1] to offer users personalized video content. Recommender systems not only enhance user experiences but also increase merchants’ profits. Notably, Netflix, an American subscription video streaming service, has a recommender system that accounts for approximately 80% of total streaming hours, resulting in annual savings of over USD 1 billion [2]. Given the importance of RSs, attackers have a natural motivation to corrupt them in pursuit of profit [3].
Many efforts have been devoted to studying how to deceive RSs into promoting (or demoting) a targeted item’s ranking, such as unorganized malicious attacks (i.e., several attackers individually attack an RS without an organizer) [4] and Sybil attacks (i.e., illegally inferring a user’s preferences) [5]. Among these attacks, the shilling attack is a widely employed and persistent attack against recommender systems [6]. The shilling attack is also known as the data poisoning attack [7] or the profile injection attack [8]. It involves injecting unscrupulous “fake user profiles” into the ratings database with the aim of manipulating the system’s recommendations. For example, a random attack assigns random ratings to arbitrarily selected items in fake user profiles. Similarly, AUSH [9] utilizes specialized objective functions for its generator and discriminator to produce fake user profiles. Understanding these attacks is crucial for improving defenses and ensuring the trustworthiness of RSs. However, existing shilling attack models suffer from certain drawbacks.
Existing shilling attack methods have notable drawbacks that hinder their effectiveness. One major limitation is the lack of attack invisibility: many of these methods fail to consider personalization, generating fake user profiles that do not exhibit the behavioral characteristics of actual users in an RS. Consequently, these fake users are easily detected and filtered out before they can significantly impact the ranking of targeted items. Another drawback is limited transferability across targeted models, as some shilling attacks are designed for specific RSs. For instance, average attacks and bandwagon attacks are more effective against user-based KNN recommendation algorithms in collaborative filtering but less effective against item-based KNNs [10]. Thus, these tailored attack methods cannot achieve satisfactory results when the targeted RS is inaccessible or differs in its underlying algorithm.
To address the above challenges, we propose DIFshilling, a novel shilling attack method based on the diffusion model [11]. Originating in physics, the diffusion model has demonstrated great potential in image generation [12] when combined with neural networks. Leveraging the diffusion model’s ability to fit real data distributions accurately, DIFshilling generates fake user profiles that closely resemble real users. This approach enhances the invisibility of the shilling attack, making it challenging to identify and filter out fake user profiles. The training process of DIFshilling depends only on the real user dataset and is independent of any specific RS, which enhances the transferability of the shilling attack across many different victim RSs. To ensure that the generated fake user profiles retain the features of real users and to improve the stealthiness of the shilling attack, we incorporate several unique designs into DIFshilling. First, we filter out inactive real users and utilize the remaining active users’ rich characteristics as templates. Then, we gradually introduce Gaussian noise to the user–item interaction matrix at each step during the noising phase, resulting in a noised matrix. In the denoising stage, we employ a neural network model to progressively remove noise from the noised matrix to obtain fake user profiles. It is worth noting that, unlike in computer vision applications, preserving the personality of real users is essential in RSs. Therefore, we carefully control the noise schedule during the noising process and denoise the noised matrix obtained from it rather than starting from random Gaussian noise.
In summary, our contributions are as follows:
(1)
We propose DIFshilling, a novel shilling attack method for RSs, based on the diffusion model. DIFshilling leverages the power of diffusion processes to generate fake user profiles that closely resemble real user behavior, significantly enhancing the stealthiness and effectiveness of the attack.
(2)
DIFshilling addresses the issue of low invisibility in traditional shilling attacks. By accurately modeling the data distribution of genuine users, DIFshilling generates fake profiles that are more difficult to detect, making it harder for defensive systems to filter out malicious user profiles.
(3)
DIFshilling is designed to be independent of the targeted RS. This design improves the transferability of the attack, making it effective across a wide range of victim models without requiring prior knowledge of the targeted system’s algorithm.
(4)
Our extensive experiments on five datasets and seven victim RSs show that DIFshilling outperforms eight baselines, effectively promotes the ranking of targeted items, and exhibits resilience against defense mechanisms.
This paper is organized as follows: Section 2 reviews existing research on shilling attacks. Section 3 presents our proposed threat model, outlining the attacker’s goals, knowledge, and capabilities. Section 4 details the application of the diffusion model to develop our novel approach to shilling attacks. Section 5 presents comprehensive experimental results to validate the effectiveness of our approach. Section 6 discusses countermeasures against attacks such as DIFshilling, and Section 7 concludes the paper.

2. Related Work

Recommender systems (RSs) have become integral to various online platforms, providing personalized suggestions that enhance user experience and drive commercial success. However, their widespread adoption has also exposed them to significant vulnerabilities, particularly adversarial attacks [13] and shilling attacks [6]. While adversarial attacks are beyond the scope of this paper, we focus on shilling attacks, which involve injecting fake user profiles to manipulate recommendation outcomes. These attacks can degrade recommendation quality, undermine user trust, and cause substantial economic consequences [14]. Addressing these risks is crucial for ensuring the robustness and reliability of RSs. Shilling attacks can be broadly categorized into algorithm-agnostic and algorithm-specific approaches. Recently, generative adversarial networks (GANs) [15] have been increasingly utilized to enhance the effectiveness of shilling attacks, enabling the generation of more sophisticated user profiles that are harder to detect. We review all these categories in detail and show the baselines compared with our approach in the experiments in Table 1.

2.1. Algorithm-Agnostic Shilling Attacks

Algorithm-agnostic attacks do not rely on knowledge of an RS algorithm. These attacks exploit RS’ reliance on user behavior data by generating fake yet seemingly normal data to manipulate recommendation outcomes. As they are not tailored to specific algorithms, they exhibit strong adaptability and can threaten a wide range of RSs [22]. For instance, random attacks [16,17] assign ratings to items randomly based on a normal distribution with mean and variance parameters matching those of the entire system’s ratings. Average attacks [16] apply a similar approach but use parameters derived from the ratings of a sampled item set. Segment attacks [16] involve assigning maximal ratings to selected items and minimal ratings to others, targeting specific segments within the RS. Bandwagon attacks [16,18] exploit item popularity by assigning maximal ratings to popular items, while other ratings are assigned randomly, similar to the random attack. Because algorithm-agnostic shilling attacks are independent of any specific recommendation algorithm, they are highly versatile and pose a threat to various RSs. Given that attackers can employ different strategies, such as random attacks, average attacks, and bandwagon attacks, the detection of these attacks requires a comprehensive analysis of multiple behavioral characteristics and patterns [23].

2.2. Algorithm-Specific Shilling Attacks

Algorithm-specific attacks target particular recommendation algorithms, leveraging their internal mechanisms or data structures to achieve more effective and efficient manipulations. Compared to algorithm-agnostic methods, these attacks require a deeper understanding of the target recommendation system, making them more precise and potentially more destructive. These attacks have been developed for a range of systems, including graph-based [24], association rule-based [25], matrix factorization-based [26,27], and neighborhood-based [28] recommender systems. For example, O’Mahony et al. [18] studied the robustness of user-based collaborative filtering (CF) methods [29] by injecting fake users. Burke et al. [30] analyzed the impact of low-knowledge attack strategies on CF methods, aiming to promote or demote items. Seminario and Wilson [31] developed power user/item attack models that leveraged influential users/items to manipulate the RS. Fang et al. [24] explored shilling attacks in graph-based CF models, and Yang et al. [25] demonstrated the practical feasibility of attacking real-world RSs such as YouTube, Google Search, Amazon, and Yelp. Algorithm-specific attacks are highly targeted and can inflict significant damage on RSs. Since these attacks exploit the algorithm’s internal mechanisms, effective defense strategies must consider both the underlying data structures and computational frameworks. Certain attack methods, such as optimization-based approaches, generate adversarial users that are difficult to detect using traditional identification techniques [32].

2.3. GAN-Based Shilling Attacks

GAN-based shilling attacks exploit the powerful generative capabilities of GANs to create fake user profiles or ratings, manipulating RS outcomes. Through adversarial training between a generator and a discriminator, GANs can synthesize realistic fake user profiles, which are injected into the RS to execute the attack [33]. The generator’s objective is to produce fake data that are statistically similar to real user profiles or ratings, making it difficult for detection systems to identify them. Meanwhile, the discriminator is trained to distinguish between real and generated data. During training, the generator continuously improves its ability to generate indistinguishable data while the discriminator simultaneously enhances its detection accuracy. Recent studies have also integrated GANs for shilling attacks. Christakopoulou et al. [20] modeled shilling attacks as a general-sum game between the RS and fake user generators, utilizing DCGAN [34] to generate fake user profiles. AUSH [9] trained its generator and discriminator using a combination of reconstruction loss, shilling loss, and adversarial loss to consider user segments, attack cost, and detectability. Leg-UP [21] extended AUSH by applying more direct loss functions and leveraging a surrogate model, further enhancing attack transferability and invisibility. Defense mechanisms against GAN-based shilling attacks can be broadly categorized into detection-based defenses, robust learning algorithms, and adversarial training [23]. Detection-based defenses aim to identify and filter out malicious profiles using anomaly detection and behavioral analysis, though increasingly sophisticated attacks challenge their effectiveness. Robust learning algorithms enhance RS resilience by integrating noise-resistant and anomaly-aware techniques, mitigating the impact of adversarial inputs. Adversarial training further strengthens defenses by exposing models to attack-like patterns during training, improving their ability to recognize and counter manipulated data. A combination of these approaches enhances the overall robustness of RS against evolving adversarial threats.

3. Threat Model

Figure 1 illustrates the threat model of a shilling attack. In this attack, a malicious actor creates fake user profiles with fraudulent objectives and inserts them into the user–item interaction data of the targeted RS, referred to as the victim model. As a result, the RS generates recommendation lists based on these manipulated interactions using its specific recommendation algorithms or models. The generated lists are then returned to the legitimate users of the system. We present the threat model from three perspectives:
Attacker’s Goal: The objective of a shilling attack can be either untargeted or targeted. Untargeted attacks aim to degrade the overall usefulness of an RS by forcing it to make inaccurate recommendations. Targeted attacks, which are the focus of this study, seek to influence the ranking of a specific item, either by increasing or decreasing its likelihood of being recommended. This work focuses primarily on push attacks, where the attacker attempts to increase the probability of a targeted item being recommended. Nuke attacks, which are designed to reduce the likelihood of a targeted item being recommended, can be seen as the inverse of push attacks and could utilize similar techniques.
Attacker’s Knowledge: We assume that the attacker has partial knowledge of the dataset used to train the victim model. This assumption is plausible, as attackers can gather user feedback through various means, such as web scraping or collecting publicly visible interactions (e.g., likes and shares) from platforms. However, it is crucial to note that the attacker does not have direct access to the inner workings of the victim model, such as its parameters or architecture. In practice, the attacker cannot access the RS itself.
Attacker’s Capability: As depicted in Figure 1, the attacker injects fake user profiles into the training set of the victim model. The attack succeeds when the victim model incorporates these profiles into its training process. This indicates that the attack occurs during the training phase of an RS rather than during the testing phase. A test phase attack would involve compromising the accounts of real users and altering their preferences, which lies outside the scope of this work and pertains more to cybersecurity concerns.
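To make the attacker's capability concrete, the following minimal sketch (in Python with NumPy; the function name and the dense-matrix representation are our own assumptions, not an interface from this paper) shows how fake profiles would be appended to the victim model's training data before the model is retrained:

```python
import numpy as np

def inject_fake_profiles(train_matrix: np.ndarray, fake_profiles: np.ndarray) -> np.ndarray:
    """Append attacker-crafted rating rows to the victim RS's training matrix.

    train_matrix:  (n_real_users x n_items) explicit rating matrix used for training.
    fake_profiles: (n_fake_users x n_items) rating rows produced by the attack model.
    """
    assert train_matrix.shape[1] == fake_profiles.shape[1], "item dimensions must match"
    # The attack is a training-time (poisoning) attack: the victim RS is simply
    # retrained on the augmented matrix and never sees which rows are fake.
    return np.vstack([train_matrix, fake_profiles])
```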

4. The Design of DIFshilling

4.1. Overview of DIFshilling

DIFshilling is designed based on the diffusion model framework, leveraging the success of diffusion models in accurately fitting real data distributions. Figure 2 illustrates the framework of DIFshilling. Firstly, DIFshilling identifies and filters out less-active users who rarely rate items in the dataset. These users are not conducive to the generation of fake user profiles because they lack sufficient characteristics to represent the real data distribution. The remaining active users are then passed to the diffusion process. In the diffusion process, random Gaussian noise is incrementally added to the active user–item interaction matrix during the noising phase, and a neural network model is trained to predict the noise. Unlike in computer vision, it is necessary to safeguard the personalized characteristics of real users; therefore, the noise schedule is restricted within a specific range. During the denoising phase, DIFshilling utilizes the previously trained model to predict the noise and progressively removes it to obtain a fake rating matrix. Importantly, to protect real users' personalized features, we use the $x_T$ obtained during the noising phase as the initial point for the denoising stage rather than random Gaussian noise.
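As a minimal illustration of the filtering step described above (the threshold value and the function name are assumptions; the paper applies per-dataset activity thresholds listed in Table 2):

```python
import numpy as np

def filter_active_users(ratings: np.ndarray, min_interactions: int = 20) -> np.ndarray:
    """Keep only users with enough rated items to serve as templates for fake profiles.

    ratings: (n_users x n_items) explicit rating matrix, with 0 meaning "not rated".
    min_interactions: hypothetical activity threshold (set per dataset in practice).
    """
    interactions_per_user = np.count_nonzero(ratings, axis=1)
    active_mask = interactions_per_user >= min_interactions
    return ratings[active_mask]  # this matrix becomes x_0 for the noising process
```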

4.2. The Noising Process

The noising process corrupts the original data by gradually adding Gaussian noise. Following the filtering step, we obtain the rating matrix of active users and regard it as the initial point $x_0$. The noising process consists of a total of $T$ steps. Each step adds Gaussian noise to the data from the previous step as follows:
$$x_t = \sqrt{\alpha_t}\, x_{t-1} + \sqrt{1-\alpha_t}\, Z_t, \quad t \in \{1, \ldots, T\}, \tag{1}$$
where $Z_t \sim \mathcal{N}(0, I)$ represents standard Gaussian noise, while $\alpha_t = 1 - \beta_t$. The noise schedule $\beta_t \in (0, 1)$ is a hyperparameter controlling the amount of noise added at step $t$. Typically, as the step progresses, more noise is added; therefore, the noise schedule follows the rule $\beta_1 < \beta_2 < \cdots < \beta_T$. However, in order to preserve the personality of real users, we limit the noise schedule to a certain extent. Since each step $x_t$ is derived from the previous step $x_{t-1}$, the entire process can be viewed as a Markov chain. Therefore, $x_t$ at each step can be calculated directly from $x_0$ based on the properties of the Markov chain:
$$x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, Z_t, \quad \text{where } \bar{\alpha}_t = \prod_{s=0}^{t} \alpha_s. \tag{2}$$
Meanwhile, we can derive the intermediate and final distribution of the noising process as follows:
$$q(x_t \mid x_{t-1}) := \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t I\right), \tag{3}$$
$$q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}), \qquad q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t) I\right). \tag{4}$$
During the noising phase, the objective is to train a deep neural network to predict the real noise added to the data. As the neural network’s primary objective is to predict the noise at each step rather than restoring the original data, we utilize the mean square error (MSE) as the loss function for training the network. Thus, we define the loss function of this deep neural network as follows:
$$L_{\mathrm{loss}} = \left\| Z_t - \hat{Z}_t \right\|_2^2, \tag{5}$$
where $Z_t$ is the real noise added to the data, and $\hat{Z}_t$ is the noise predicted by the neural network. Compared with the loss functions of other GAN-based shilling attack methods [9,19,21], our loss function is simpler and converges faster. Additionally, employing a more straightforward loss function avoids the need to design complex loss functions for shilling purposes.
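A minimal PyTorch sketch of this training objective is given below. It samples a step $t$, noises $x_0$ in closed form as in Equation (2), and minimizes the MSE between the true and predicted noise as in Equation (5); the MLP architecture, its conditioning on $t$, and all names are our assumptions rather than the authors' exact network.

```python
import torch
import torch.nn as nn

class NoisePredictor(nn.Module):
    """Small MLP predicting the noise added at step t (architecture is an assumption)."""
    def __init__(self, n_items: int, hidden: int = 256, T: int = 10):
        super().__init__()
        self.t_embed = nn.Embedding(T + 1, hidden)       # step embedding for t = 0..T
        self.net = nn.Sequential(
            nn.Linear(n_items + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_items),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, self.t_embed(t)], dim=-1))

def noising_loss(model: NoisePredictor, x0: torch.Tensor, alpha_bar: torch.Tensor) -> torch.Tensor:
    """Sample a step t, noise x0 in closed form (Eq. 2), and return the MSE loss of Eq. (5)."""
    T = alpha_bar.shape[0] - 1                           # alpha_bar[0] = 1; alpha_bar[t] for t = 1..T
    t = torch.randint(1, T + 1, (x0.shape[0],), device=x0.device)
    z = torch.randn_like(x0)                             # real noise Z_t
    a_bar = alpha_bar[t].unsqueeze(-1)
    x_t = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * z
    z_hat = model(x_t, t)                                # predicted noise
    return ((z - z_hat) ** 2).mean()
```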

4.3. The Denoising Process

The noising process adds noise to the original data, while the denoising process aims to strip the noise out. Suppose we knew the actual distribution $q(\hat{x}_{t-1} \mid \hat{x}_t)$ for each step; then, starting from random noise $\hat{x}_T \sim \mathcal{N}(0, I)$ and gradually denoising, we could generate a sample with the same distribution as the original data. However, because the noise at each step cannot be predicted exactly, the denoising process carries some uncertainty, which is why the data generated by the diffusion model exhibit rich diversity while still matching the original distribution. Unlike in computer vision [35], if we start from random Gaussian noise, we will lose the personalized characteristics of real users. To prevent this, we set the noise $x_T$ obtained from the noising process as the starting point $\hat{x}_T$ of the reverse process.
We construct a neural network model to estimate the distribution $q(\hat{x}_{t-1} \mid \hat{x}_t)$. Since the data at the current step depend only on the data at the previous step, the reverse process can also be regarded as a Markov chain composed of a series of Gaussian distributions parameterized by the neural network. Thus, these distributions can be expressed as follows:
$$p_\theta(\hat{x}_{t-1} \mid \hat{x}_t) = \mathcal{N}\!\left(\hat{x}_{t-1};\ \mu_\theta(\hat{x}_t, t),\ \Sigma_\theta(\hat{x}_t, t)\right), \tag{6}$$
$$p_\theta(\hat{x}_{0:T}) = p(\hat{x}_T) \prod_{t=1}^{T} p_\theta(\hat{x}_{t-1} \mid \hat{x}_t), \tag{7}$$
where $\mu_\theta(\hat{x}_t, t)$ and $\Sigma_\theta(\hat{x}_t, t)$ are the mean and variance provided by the neural network, respectively.
Although the distribution $q(\hat{x}_{t-1} \mid \hat{x}_t)$ is not directly computable, the conditional posterior $q(\hat{x}_{t-1} \mid \hat{x}_t, x_0)$ is tractable as follows:
$$q(\hat{x}_{t-1} \mid \hat{x}_t, x_0) = \mathcal{N}\!\left(\hat{x}_{t-1};\ \tilde{\mu}(\hat{x}_t, x_0),\ \tilde{\sigma}_t I\right), \tag{8}$$
where $\tilde{\mu}(\hat{x}_t, x_0)$ is the estimated mean based on $\hat{x}_t$ and $x_0$. By Bayes' rule, $q(\hat{x}_{t-1} \mid \hat{x}_t, x_0)$ can be formulated as follows:
$$q(\hat{x}_{t-1} \mid \hat{x}_t, x_0) = \frac{q(\hat{x}_t \mid \hat{x}_{t-1}, x_0)\, q(\hat{x}_{t-1} \mid x_0)}{q(\hat{x}_t \mid x_0)}. \tag{9}$$
From the properties of the Markov chain, $q(\hat{x}_t \mid \hat{x}_{t-1}, x_0)$ can be formulated as follows:
$$q(\hat{x}_t \mid \hat{x}_{t-1}, x_0) = q(\hat{x}_t \mid \hat{x}_{t-1}) = \mathcal{N}\!\left(\hat{x}_t;\ \sqrt{1-\beta_t}\, \hat{x}_{t-1},\ \beta_t I\right). \tag{10}$$
Taking Equation (4) into account, we can obtain the following:
$$q(\hat{x}_{t-1} \mid x_0) = \mathcal{N}\!\left(\hat{x}_{t-1};\ \sqrt{\bar{\alpha}_{t-1}}\, x_0,\ (1-\bar{\alpha}_{t-1}) I\right), \tag{11}$$
$$q(\hat{x}_t \mid x_0) = \mathcal{N}\!\left(\hat{x}_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t) I\right). \tag{12}$$
Combining Equations (10)–(12) and the normal distribution density function, we can modify Equation (8) as follows:
$$q(\hat{x}_{t-1} \mid \hat{x}_t, x_0) \propto \exp\!\left( -\frac{1}{2} \left( \left( \frac{\alpha_t}{\beta_t} + \frac{1}{1-\bar{\alpha}_{t-1}} \right) \hat{x}_{t-1}^2 - \left( \frac{2\sqrt{\alpha_t}}{\beta_t}\, \hat{x}_t + \frac{2\sqrt{\bar{\alpha}_{t-1}}}{1-\bar{\alpha}_{t-1}}\, x_0 \right) \hat{x}_{t-1} + C(\hat{x}_t, x_0) \right) \right), \tag{13}$$
where $C(\hat{x}_t, x_0)$ is a term that does not depend on $\hat{x}_{t-1}$, so we can ignore it. According to the definition of the probability density function of the Gaussian distribution and Equation (13), we can derive the mean and variance of the posterior distribution $q(\hat{x}_{t-1} \mid \hat{x}_t, x_0)$ as follows:
$$\tilde{\mu}(\hat{x}_t, x_0) = \left( \frac{\sqrt{\alpha_t}}{\beta_t}\, \hat{x}_t + \frac{\sqrt{\bar{\alpha}_{t-1}}}{1-\bar{\alpha}_{t-1}}\, x_0 \right) \Big/ \left( \frac{\alpha_t}{\beta_t} + \frac{1}{1-\bar{\alpha}_{t-1}} \right) = \frac{1}{\sqrt{\alpha_t}} \left( \hat{x}_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\, \hat{Z}_t \right), \tag{14}$$
$$\tilde{\sigma}_t = 1 \Big/ \left( \frac{\alpha_t}{\beta_t} + \frac{1}{1-\bar{\alpha}_{t-1}} \right) = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t} \cdot \beta_t. \tag{15}$$
In Equation (15), the variance $\tilde{\sigma}_t$ is a fixed quantity determined by the parameters of the noising process. However, the mean $\tilde{\mu}(\hat{x}_t, x_0)$ depends on $\hat{x}_t$ and the noise $\hat{Z}_t$, which is unknown during the denoising process. Hence, we use the neural network trained in the noising phase to predict the noise $\hat{Z}_t$. After the noising and denoising processes, we obtain the fake user–item interaction matrix $\hat{x}_0$, which matches the real user distribution. However, to improve the ranking of the targeted item, we additionally set the rating of the targeted item to the maximum (i.e., in our experiments, we set the rating of targeted items to 5). This is also common practice in other shilling attack methods. We summarize the noising and denoising processes in Algorithm 1.
Algorithm 1 Noising and denoising process.
Input: active users' interactions $x_0$; noise steps $T$; noise schedule $\{\beta_1, \beta_2, \ldots, \beta_T\}$; randomly initialized neural network $\theta$
Output: fake users' interactions $\hat{x}_0$
1: $t \leftarrow 0$
2: $x_t \sim q(x_0)$
3: while $t < T$ do
4:   $t \leftarrow t + 1$
5:   $\epsilon \sim \mathcal{N}(0, I)$
6:   $x_t \leftarrow \sqrt{1-\beta_t}\, x_{t-1} + \sqrt{\beta_t}\, \epsilon$
7:   take a gradient descent step on $\nabla_\theta \left\| \epsilon - \epsilon_\theta\!\left( \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon,\ t \right) \right\|^2$
8: end while
9: End of noising process
10: Start of denoising process
11: $\hat{x}_T \leftarrow x_T$
12: $t \leftarrow T$
13: while $t > 0$ do
14:   $t \leftarrow t - 1$
15:   $\mu_{t-1} \leftarrow \frac{1}{\sqrt{\alpha_t}} \left( \hat{x}_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(\hat{x}_t, t) \right)$
16:   $\hat{x}_{t-1} \leftarrow \mu_{t-1} + \tilde{\sigma}_t \hat{Z}_t$
17: end while
18: return $\hat{x}_0$
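The reverse phase of Algorithm 1 can be sketched as follows, continuing the PyTorch example above (index conventions, tensor layouts, and the re-injection of sampled noise at intermediate steps are our assumptions). The starting point is the $x_T$ saved from the noising phase rather than random Gaussian noise, and the targeted item is afterwards forced to the maximum rating, as described in Section 4.3.

```python
import torch

@torch.no_grad()
def denoise(model, x_T, betas, alphas, alpha_bar, target_item, max_rating=5.0):
    """Reverse process of Algorithm 1: start from the noised matrix x_T and strip noise step by step.

    betas/alphas: tensors of length T (betas[t-1] is the schedule value for step t);
    alpha_bar:    tensor of length T + 1 with alpha_bar[0] = 1 (cumulative products).
    """
    x_t = x_T.clone()
    T = betas.shape[0]
    for t in range(T, 0, -1):
        t_idx = torch.full((x_t.shape[0],), t, dtype=torch.long, device=x_t.device)
        z_hat = model(x_t, t_idx)                        # predicted noise (Eq. 14)
        mean = (x_t - betas[t - 1] / torch.sqrt(1.0 - alpha_bar[t]) * z_hat) / torch.sqrt(alphas[t - 1])
        if t > 1:
            sigma = torch.sqrt((1.0 - alpha_bar[t - 1]) / (1.0 - alpha_bar[t]) * betas[t - 1])  # Eq. 15
            x_t = mean + sigma * torch.randn_like(x_t)
        else:
            x_t = mean
    fake_profiles = x_t
    fake_profiles[:, target_item] = max_rating           # push attack: force the top rating on the target
    return fake_profiles
```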

4.4. Complexity Analysis of DIFshilling

Evaluating the computational complexity of DIFshilling is essential for assessing its feasibility in the real world. Since diffusion-based models often introduce additional computational overhead, we analyze both the time and space complexity of the proposed approach, focusing on the noising and denoising processes.
Time Complexity: The noising process consists of $T$ steps, where each step adds Gaussian noise to the user–item interaction matrix $x_t$. The primary computational cost at each step arises from matrix operations, including element-wise multiplications and additions, which have a time complexity of $O(nm)$. Here, $n$ and $m$ denote the number of users and items, respectively. Consequently, the total complexity of the noising process is $O(Tnm)$. Similarly, the denoising process follows the same iterative structure with $T$ steps. At each step, a neural network predicts the noise term $\hat{Z}_t$, requiring a forward pass through a deep model. Assuming the neural network has $L$ layers, with each layer containing at most $h$ hidden units, the complexity of a single forward pass is $O(h^2)$ (treating the number of layers $L$ as a constant). Given that noise estimation occurs at each of the $T$ steps, the total complexity of the denoising process is $O(Th^2 + Tnm)$. Thus, the overall time complexity of DIFshilling is as follows:
$$O\!\left(T(nm + h^2)\right). \tag{16}$$
Space Complexity: The primary memory consumption in DIFshilling arises from storing the user–item interaction matrix and the neural network parameters. The matrix $x_t$ requires $O(nm)$ space, while the neural network parameters occupy $O(Lh^2)$ space, assuming fully connected layers. Additionally, intermediate variables for backpropagation introduce an extra storage overhead proportional to the batch size $B$, yielding a total space complexity of the following:
$$O\!\left(nm + Lh^2 + Bh^2\right). \tag{17}$$
Compared to traditional GAN-based shilling attacks, DIFshilling incorporates a diffusion-based mechanism that necessitates an iterative refinement process. While this increases computational cost due to multiple inference steps, it enhances attack stealthiness and flexibility. Optimizations such as reducing the number of noise steps T or employing model compression techniques can alleviate computational overhead while preserving attack efficacy.
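As a rough, back-of-the-envelope illustration of these bounds (the network width h and depth L are illustrative assumptions, not the paper's values; n and m correspond to ML100K's public statistics):

```python
# Rough operation-count estimate for ML100K-scale data (943 users, 1682 items).
# h and L are illustrative choices, not the values used in the paper.
n, m = 943, 1682        # users, items
T = 10                  # noise steps used in the experiments
h, L = 256, 2           # hypothetical hidden width and depth of the noise predictor

noising_ops = T * n * m                      # O(T * n * m) element-wise matrix updates
denoising_ops = T * (n * m + L * h ** 2)     # O(T * (n * m + h^2)) including MLP forward passes
print(f"noising:   ~{noising_ops:.2e} operations")
print(f"denoising: ~{denoising_ops:.2e} operations")
```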

5. Experiments

In this section, we present comprehensive experiments conducted on five datasets, seven victim RS models, and eight baseline methods to address the following research questions:
  • RQ1: Does DIFshilling outperform state-of-the-art shilling attack methods across various victim RS models?
  • RQ2: Is DIFshilling more challenging for detectors to identify compared to other shilling attack methods?
  • RQ3: How does the attack size influence the performance of DIFshilling and its detection by anti-shilling mechanisms?
  • RQ4: How do the forward noising and reverse denoising components contribute to DIFshilling’s effectiveness?

5.1. Experimental Setup

The benchmark datasets used in our experiments are as follows: ML100K (https://grouplens.org/datasets/movielens/100k/ (accessed on 21 February 2025)), FilmTrust (https://github.com/guoguibing/librec/tree/3.0.0/data/filmtrust (accessed on 21 February 2025)), Amazon Automotive, Amazon Fashion (https://nijianmo.github.io/amazon/index.html (accessed on 21 February 2025)) and Book-crossing (https://www.kaggle.com/datasets/somnambwl/bookcrossing-dataset) (accessed on 21 February 2025). To ensure a comprehensive evaluation, we selected datasets with diverse characteristics. While ML100K and FilmTrust are popular benchmarks, their small and dense nature makes them less representative of real-world recommender system scenarios. Therefore, we included Amazon Fashion and Book-crossing datasets, which are significantly larger and sparser, to better assess the practicality of DIFshilling. The dataset statistics are provided in Table 2. To filter out cold-start and inactive users, thresholds based on user activity were applied, as shown in Table 2. Each dataset was randomly split into training and testing sets in a 9:1 ratio. Targeted items for shilling attacks were randomly selected, with a default attack size of approximately 1% of each dataset’s real users.
The victim RS models evaluated in our experiments include BPR [36], DGCF [37], DMF [38], GCMC [39], NCL [40], NeuMF [41], and NGCF [42]. These models have been widely adopted in both research and engineering for their effectiveness in various recommendation tasks. A brief summary of their key features is provided below. BPR introduces a maximum posterior estimator derived from Bayesian analysis for personalized ranking optimization, utilizing stochastic gradient descent with bootstrap sampling. DGCF enhances collaborative filtering by disentangling user intents within user–item interactions, thereby refining intent-aware interaction graphs and representations to achieve superior performance. DMF integrates a neural network architecture with a user–item matrix composed of explicit ratings and non-preference implicit feedback to learn a unified low-dimensional space for user and item representations, utilizing a binary cross-entropy-based loss function for enhanced optimization. GCMC employs a graph auto-encoder framework with differentiable message passing on a bipartite user–item graph for matrix completion in recommender systems, effectively utilizing additional structured data such as social networks. NCL enhances graph collaborative filtering by explicitly incorporating potential neighbors into contrastive pairs through a novel structure-contrastive objective for structural neighbors and a prototype-contrastive objective for semantic neighbors, thereby addressing data sparsity and outperforming traditional methods that rely on random sampling. NeuMF introduces a neural network-based framework for collaborative filtering by replacing the traditional inner product in matrix factorization with a neural architecture that leverages a multi-layer perceptron to learn the user–item interaction function from implicit feedback data. NGCF incorporates user–item interactions through bipartite graph structures into the embedding process, effectively propagating embeddings to capture high-order connectivity and explicitly inject collaborative signals. We test all these different victim RS models to prove the transferability of DIFshilling.
The hyperparameters for these models, including hidden layers and learning rates, were configured according to the default settings provided in the RecBole framework [43]. RecBole is a comprehensive library for recommendation system research, supporting 91 algorithms across four categories: general, sequential, context-aware, and knowledge-based recommendations. It provides a unified and efficient platform for algorithm development and reproduction. For DIFshilling, the diffusion model parameters were set as follows: a learning rate of 0.001, 10 noise steps ($T$), and a noise schedule ($\beta$) ranging from $10^{-8}$ to $10^{-7}$. A multilayer perceptron (MLP) was employed to predict noise during the denoising process. The MLP was optimized using the Adam optimizer with a batch size of 512 to ensure stable training and convergence. To enhance invisibility, DIFshilling employs a gradual refinement strategy, where synthetic attack profiles undergo iterative denoising to align with real user behavior patterns.
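For reference, the reported DIFshilling hyperparameters can be collected into a small configuration sketch (the variable names and the linear form of the schedule are our assumptions; the paper specifies only the range of β):

```python
import torch

# Hyperparameters reported in Section 5.1 (names are ours; no released codebase is assumed).
cfg = {
    "learning_rate": 1e-3,
    "noise_steps_T": 10,
    "beta_start": 1e-8,
    "beta_end": 1e-7,
    "optimizer": "Adam",
    "batch_size": 512,
}

# A deliberately small (here, linear) noise schedule, so that real-user personalization survives noising.
betas = torch.linspace(cfg["beta_start"], cfg["beta_end"], cfg["noise_steps_T"])
alphas = 1.0 - betas
alpha_bar = torch.cat([torch.ones(1), torch.cumprod(alphas, dim=0)])  # alpha_bar[0] = 1
```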

5.2. Evaluation Metric

Evaluation metrics commonly used in RSs, such as hit ratio (HR) and normalized discounted cumulative gain (NDCG), primarily measure the recommendation performance of an RS. However, with slight modifications, these metrics can be adapted to quantify the effectiveness of shilling attacks. Let $U$ denote the set of users, and let $K$ represent the number of items in the recommendation list $RL$. Under these conditions, HR evaluates whether the targeted item $i_t$ appears in $RL$. For each user $u \in U$, the hit function is defined as follows:
$$\mathrm{hit}(u) = \begin{cases} 1, & i_t \in RL \\ 0, & i_t \notin RL \end{cases} \tag{18}$$
The overall $HR@K$ is then calculated as follows:
$$HR@K = \frac{1}{|U|} \sum_{u \in U} \mathrm{hit}(u). \tag{19}$$
NDCG can be adapted to account for the ranking of the targeted item. The modified formula is expressed as follows:
$$NDCG@K = \frac{1}{|U|} \sum_{u=1}^{|U|} \frac{1}{\log_2(p_u + 1)}, \tag{20}$$
where $p_u$ denotes the rank of the targeted item in $RL$ for the $u$-th user. The term $\log_2(p_u + 1)$ introduces logarithmic scaling based on the ranking position, assigning higher weights to items ranked closer to the top. If the targeted item is not present in $RL$, $p_u$ is considered infinite ($p_u = \infty$), resulting in an NDCG value of 0.
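These adapted metrics can be computed directly from the victim model's top-K lists; a minimal sketch (function and argument names are ours) is given below.

```python
import numpy as np

def hr_ndcg_at_k(rec_lists, target_item, k=10):
    """Attack-oriented HR@K and NDCG@K: how often, and how highly, the targeted item is recommended.

    rec_lists: list of per-user ranked item lists (the victim RS's top-K recommendations).
    """
    hits, ndcg = 0, 0.0
    for rec in rec_lists:
        topk = list(rec[:k])
        if target_item in topk:
            hits += 1
            p_u = topk.index(target_item) + 1        # 1-based rank of the targeted item
            ndcg += 1.0 / np.log2(p_u + 1)
    n_users = len(rec_lists)                         # users without the item contribute 0 (p_u = infinity)
    return hits / n_users, ndcg / n_users
```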

5.3. Baseline Methods

We compare the performance of DIFshilling with eight existing shilling attack methods. The attack methods we consider are as follows:
(1)
A random attack assigns a rating $r \sim \mathcal{N}(\mu, \sigma)$ to an item, where $\mu$ and $\sigma$ are the mean and variance of all ratings in the system, respectively.
(2)
An average attack assigns a rating $r \sim \mathcal{N}(\mu, \sigma)$ to an item, where $\mu$ and $\sigma$ correspond to the mean and variance of ratings from a sampled set of items within the system.
(3)
A segment attack assigns maximal ratings to selected items and minimal ratings to all others.
(4)
A bandwagon attack utilizes the most popular items as selected targets, assigning maximal ratings to them while assigning ratings to other items in the same manner as a random attack.
(5)
AIA [19] stands for adversarial injection attack, which builds a bilevel optimization framework to generate fake user profiles by maximizing the attack objective on the surrogate model.
(6)
DCGAN [20] is a GAN adopted in a recent shilling attack method, where the generator takes noise and outputs fake user profiles through convolutional units. We follow the default settings in [20].
(7)
AUSH [9] constructs reconstruction loss, shilling loss, and adversarial loss to train the generator and discriminator, respectively, considering users in the segment, attack cost, and detectability.
(8)
Leg-UP [21] extends AUSH for attack transferability and invisibility by applying more direct loss functions and leveraging the surrogate model.
In all methods, the highest rating is assigned to the targeted item. The effectiveness of attacks is evaluated on the test set using HR@K and NDCG@K at K = 10 . In the push attack scenario, higher HR@K and NDCG@K values indicate greater attack effectiveness.
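For concreteness, simplified versions of two heuristic baselines can be sketched as follows (filler counts, clipping bounds, and function names are our simplifying assumptions; the original attack definitions also specify selected-item sets and attack costs):

```python
import numpy as np

def random_attack_profile(n_items, target_item, mu, sigma, n_filler=50, max_rating=5.0):
    """Simplified random attack: filler items are rated ~ N(mu, sigma) of the whole system."""
    profile = np.zeros(n_items)
    candidates = [i for i in range(n_items) if i != target_item]
    fillers = np.random.choice(candidates, size=n_filler, replace=False)
    profile[fillers] = np.clip(np.random.normal(mu, sigma, size=n_filler), 1.0, max_rating)
    profile[target_item] = max_rating                # the targeted item always gets the highest rating
    return profile

def bandwagon_attack_profile(n_items, target_item, popular_items, mu, sigma,
                             n_filler=50, max_rating=5.0):
    """Simplified bandwagon attack: popular items rated maximally, fillers as in the random attack."""
    profile = random_attack_profile(n_items, target_item, mu, sigma, n_filler, max_rating)
    profile[list(popular_items)] = max_rating
    return profile
```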

5.4. Attack Performance (RQ1)

Figure 3 presents heatmaps showing the overall attack performance of various shilling methods across victim RS models and datasets. Concrete values of HR@10 and NDCG@10 are provided in Table 3 and Table 4 for the ML100K and Fashion datasets, respectively, while results for other datasets follow similar patterns. In Figure 3, each cell represents the HR@10 value for a specific attack method targeting a victim RS on a dataset, with lighter colors indicating higher HR@10 values. From the heatmaps, we observe that DIFshilling consistently achieves superior attack performance across all victim RS models and datasets. Examining the rows of the heatmap, it is evident that DIFshilling performs particularly well against DMF and GCMC on the three smaller datasets in the first row. Conversely, for the two larger datasets in the second row, DIFshilling notably increases the frequency of targeted items appearing in the recommendation lists for NeuMF and NGCF. The heatmaps also allow us to differentiate between heuristic-based and GAN-based shilling attack methods, with the dividing line being the row where DIFshilling is located. A comparative analysis reveals that GAN-based methods generally outperform heuristic-based methods against victim RS models. Furthermore, the color bands in the heatmaps highlight that shilling attacks are more effective on smaller and denser datasets, while their impact diminishes on larger and sparser datasets.
From Table 3 and Table 4, we note that shilling attack methods, including DIFshilling, have a more pronounced effect on smaller and denser datasets. On the ML100K dataset, DIFshilling consistently delivers the best attack performance on all victim RS models, as reflected by both HR@10 and NDCG@10. DIFshilling significantly increases the frequency of targeted items appearing in recommendation lists and improves their rankings. For example, when DGCF is the victim RS, DIFshilling increases the average appearance frequency of targeted items by a factor of 15 compared to other attack methods. Similarly, when GCMC is the victim RS, DIFshilling improves the rank of targeted items in the recommendation list by three positions compared to the best alternative shilling methods. Table 4 summarizes the experimental results on the larger and sparser Fashion dataset. DIFshilling maintains its superior performance on most victim RS models, outperforming other methods. However, it is worth noting that the impact of shilling attacks diminishes on large and sparse datasets. For instance, in some cases, such as when the victim RS is GCMC, and the attacker is AUSH, DIFshilling fails to improve the hit ratio of targeted items in recommendation lists.
These results underscore the effectiveness of DIFshilling, particularly in small and dense datasets, while also highlighting the challenges of executing effective shilling attacks in large and sparse environments. More importantly, DIFshilling consistently outperforms existing shilling attack methods across diverse recommender system models and datasets, demonstrating its strong transferability. Unlike heuristic-based and GAN-based attack methods, which exhibit varying levels of effectiveness depending on the dataset and victim model, DIFshilling maintains high attack performance across different RS models and dataset characteristics. This suggests that DIFshilling is not only adaptable to different recommendation environments but also robust in achieving consistent attack success, further validating its transferability across various settings.

5.5. Anti-Detection (RQ2)

To evaluate the quantitative invisibility of DIFshilling in realistic scenarios, we employ a state-of-the-art unsupervised detection technique [44] to identify fake user profiles generated by various shilling attack methods. Figure 4 presents the precision and recall values of the detector in identifying fake profiles, where lower precision and recall values indicate greater difficulty in detecting the attacks. As shown in Figure 4, DIFshilling consistently outperforms other attack methods in evading detection across most scenarios, exhibiting the lowest precision and recall values for fake profile identification on the ML100K and FilmTrust datasets. These results indicate that DIFshilling generates more stealthy fake users, which are significantly harder for detection mechanisms to distinguish. However, we observe that detection performance declines on sparser datasets, such as Automotive and Book-crossing, where precision and recall values approach zero for most attack methods, suggesting that dataset sparsity also plays a role in attack detectability.
To further investigate the qualitative invisibility of DIFshilling, we apply principal component analysis (PCA) to visualize the distributions of real and fake user profiles in the latent space. Since the visualization patterns are consistent across datasets, we present the results for the ML100K dataset in Figure 5. Analyzing Figure 5, we observe that the fake profiles generated by DIFshilling closely approximate the distribution of real users while maintaining a greater dispersion in the latent space. In contrast, other attack methods, particularly heuristic-based ones, tend to cluster tightly in localized areas, making them more susceptible to detection by defense mechanisms. The diffusion model used in DIFshilling effectively preserves the natural variability of real user behavior, enabling it to generate more realistic and indistinguishable fake profiles. This ability to blend into the distribution of real users further enhances DIFshilling’s invisibility, making it significantly harder to detect compared to traditional shilling attack methods.
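The PCA-based inspection behind Figure 5 is straightforward to reproduce; a minimal sketch with scikit-learn (plotting omitted; names are ours):

```python
import numpy as np
from sklearn.decomposition import PCA

def project_profiles(real_profiles: np.ndarray, fake_profiles: np.ndarray):
    """Project real and fake rating rows into 2-D for visual comparison of their distributions."""
    combined = np.vstack([real_profiles, fake_profiles])
    coords = PCA(n_components=2).fit_transform(combined)
    n_real = len(real_profiles)
    return coords[:n_real], coords[n_real:]          # 2-D coordinates for real and fake users
```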

5.6. Effects of Attack Size (RQ3)

As the number of fake users increases, the recommender system is generally expected to be more affected, but the attack also becomes more detectable. This section investigates the trade-off between attack effectiveness and the system’s ability to detect the attack. To facilitate experimentation across various datasets, we varied the attack size by inserting fake user profiles at 1%, 3%, 5%, and 10% of the original dataset size.
Figure 6a–c shows the attack performance of DIFshilling on each victim RS when different proportions of fake user profiles are inserted into the ML100K, FilmTrust, and Automotive datasets. As expected, the attack effect becomes more pronounced as the percentage of fake users increases. However, there are instances where this pattern does not hold. Notably, for the DMF model on the FilmTrust dataset, the attack performance peaks when fake user profiles account for 5% of the dataset. This anomalous result may be due to the noising and denoising process, which operates on the attack size scale of real users without finer granularity. The addition of Gaussian noise might introduce extraneous information, diminishing attack performance as the number of fake profiles grows. This effect is likely influenced by both the specific recommendation model and the algorithm used.
We further assess the performance of the detector in identifying inserted fake user profiles at various attack sizes. As shown in Figure 6d, the precision and recall of the detector increase with the percentage of inserted fake profiles. However, when the fake user profile insertion rate reaches approximately 5%, both precision and recall begin to decrease, reaching their lowest values. Additionally, we analyze the distribution of real and fake user profiles in the latent space for different attack sizes, as shown in Figure 7. The results of the FilmTrust dataset are presented in this figure, highlighting how the fake profiles’ distribution evolves as the attack size changes.

5.7. Ablation Study (RQ4)

To evaluate the necessity and effectiveness of each component in DIFshilling, we conduct an ablation study by isolating the forward noising and reverse denoising mechanisms. Specifically, we analyze the attack performance and adversarial robustness of two DIFshilling variants:
  • Forward Process Only: This variant applies the noise-adding step without incorporating any learning or reverse denoising, generating fake users solely through random perturbations.
  • Backward Process Only: This variant begins with predefined Gaussian noise and applies the reverse denoising process to recover fake user profiles, without explicitly injecting noise in the forward phase.
We evaluate these variants on two datasets (ML100K and FilmTrust) using seven victim RS models, assessing both attack performance and adversarial robustness. The results are presented in Table 5 and Figure 8. The results from Table 5 indicate that the forward process only variant leads to a more pronounced decline in attack performance compared to backward process only. For instance, in ML100K, SHR@10 drops from 5.4083 in DIFshilling to 1.3786 for BPR, with similarly low values observed across other models. This suggests that noise addition alone is insufficient to generate effective attack profiles. Conversely, backward process only demonstrates moderate attack effectiveness (e.g., SHR@10 reaches 3.4995 for BPR in ML100K), yet it remains significantly weaker than the full DIFshilling model. This finding suggests that initiating the attack from pure noise without a forward noising phase constrains the model’s ability to generate highly effective adversarial users. However, the trend is reversed when considering adversarial robustness. The forward process only variant, which introduces irregular random noise, generates attack profiles that are more difficult to detect. In contrast, the backward process only variant follows a structured denoising process, making the generated attack profiles more identifiable. Ultimately, the full DIFshilling model outperforms both ablation variants across all metrics, demonstrating that the combined use of forward noising and reverse denoising is essential for maximizing attack effectiveness while maintaining stealth against detection mechanisms.

6. Discussion

This paper introduces DIFshilling, a sophisticated attack strategy that leverages diffusion-based models to manipulate user–item interactions. To mitigate the risks posed by such attacks, we propose several countermeasures, focusing on the necessary conditions and procedural strategies required for an effective defense against shilling attacks.
To effectively counter DIFshilling attacks, recommender systems must meet several foundational conditions that enable robust defense. First, the system should employ robust user profiling, integrating diverse and detailed user features such as behavioral data, contextual interactions, and demographic information. This enables a more accurate distinction between legitimate users and synthetic profiles. Additionally, advanced detection mechanisms must be implemented to identify anomalies in user behavior, utilizing anomaly detection algorithms capable of recognizing inconsistencies in rating patterns and user preferences. Furthermore, adaptive learning models are essential for allowing systems to evolve in response to new types of shilling attacks. This requires continuously updating detection algorithms, particularly those leveraging adversarial learning techniques, to stay ahead of emerging attack strategies. By satisfying these conditions, recommender systems can provide effective protection against sophisticated shilling tactics such as DIFshilling.
Mitigating the risks associated with DIFshilling requires the adoption of several key procedural strategies. Anomaly detection plays a crucial role, employing techniques such as clustering, outlier detection, and behavior-based verification to identify user profiles that significantly deviate from established patterns. Another effective strategy is feature regularization, where user profile features are deliberately modified through masking or noise injection to prevent attackers from closely replicating real user distributions. This controlled randomness preserves personalization while reducing the effectiveness of attacks. A hybrid defense model, combining traditional machine learning with deep neural networks, further strengthens resilience by enabling the system to both detect and prevent shilling attacks through a multi-layered approach. Finally, temporal monitoring is necessary to track shifts in user preferences over time, as sudden or irregular changes may indicate an ongoing shilling attack. By implementing these strategies, recommender systems can not only detect but also proactively prevent the impact of DIFshilling, ensuring a more secure and reliable recommendation process.
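As one concrete example of the anomaly-detection strategy discussed above, an unsupervised outlier detector can be run over raw rating profiles. This is a generic sketch using scikit-learn's IsolationForest, not a defense evaluated in this paper, and the contamination rate is an assumption:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def flag_suspicious_profiles(rating_matrix: np.ndarray, contamination: float = 0.05) -> np.ndarray:
    """Flag rating rows that look like outliers; flagged users are candidates for further review."""
    detector = IsolationForest(contamination=contamination, random_state=0)
    labels = detector.fit_predict(rating_matrix)     # -1 = outlier, 1 = inlier
    return np.where(labels == -1)[0]                 # indices of suspicious user profiles
```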

7. Conclusions

In this paper, we present DIFshilling, a novel shilling attack model for recommender systems that leverages diffusion to enhance both attack effectiveness and evasion of detection. DIFshilling integrates key techniques, including an advanced filtering strategy to generate fake user profiles with rich features while preserving the personalized characteristics of real users. This is achieved by controlling the noise schedule and using the noising process as a foundation for denoising. Extensive experiments on five datasets, including two large and sparse ones, show that DIFshilling outperforms eight mainstream shilling attack methods, achieving state-of-the-art performance. Notably, DIFshilling excels not only in attack potency but also in its ability to evade detection mechanisms and remain inconspicuous in the latent space. The significance of this research extends beyond the development of a novel attack model. Understanding advanced shilling attacks like DIFshilling is crucial for identifying the vulnerabilities of recommender systems and guiding the design of more robust defense mechanisms. Future research could focus on developing adaptive noise schedules to improve generalizability and investigating real-time attack scenarios to further understand and mitigate the risks posed by advanced shilling attacks like DIFshilling. Our findings offer valuable theoretical insights and practical implications, providing a foundation for the development and defense of recommender systems in real-world applications.
Ethical Considerations: This study aims to enhance the understanding of shilling attacks in recommender systems to improve security and defense mechanisms. DIFshilling is presented as a research tool for evaluating vulnerabilities rather than promoting malicious exploitation. The experiments in this study are conducted on publicly available datasets, ensuring no violation of user privacy or ethical concerns. The findings emphasize the necessity of robust detection mechanisms and adversarial defenses to mitigate the risks posed by advanced shilling attacks. Furthermore, this work aligns with ethical research standards by providing insights that contribute to the development of more resilient recommender systems. Future research should explore defensive strategies that counteract the growing sophistication of adversarial attacks in this domain.

Author Contributions

Conceptualization, X.M.; Methodology, W.C.; Investigation, W.C.; Resources, B.L.; Writing—original draft, W.C.; Writing—review & editing, X.M.; Visualization, W.C.; Supervision, X.M. and B.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Covington, P.; Adams, J.; Sargin, E. Deep Neural Networks for YouTube Recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, Boston, MA, USA, 15–19 September 2016; Sen, S., Geyer, W., Freyne, J., Castells, P., Eds.; Association for Computing Machinery: New York, NY, USA, 2016; pp. 191–198. [Google Scholar] [CrossRef]
  2. Gomez-Uribe, C.A.; Hunt, N. The Netflix Recommender System: Algorithms, Business Value, and Innovation. ACM Trans. Manag. Inf. Syst. 2016, 6, 1–19. [Google Scholar] [CrossRef]
  3. Lam, S.K.; Riedl, J. Shilling recommender systems for fun and profit. In Proceedings of the 13th International Conference on World Wide Web, WWW 2004, New York, NY, USA, 17–20 May 2004; Association for Computing Machinery: New York, NY, USA, 2004; pp. 393–402. [Google Scholar] [CrossRef]
  4. Pang, M.; Gao, W.; Tao, M.; Zhou, Z.H. Unorganized Malicious Attacks Detection. In Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, Montréal, QC, Canada, 3–8 December 2018; pp. 6976–6985. [Google Scholar]
  5. Calandrino, J.A.; Kilzer, A.; Narayanan, A.; Felten, E.W.; Shmatikov, V. “You Might Also Like:” Privacy Risks of Collaborative Filtering. In Proceedings of the 32nd IEEE Symposium on Security and Privacy, SP 2011, Berkeley, CA, USA, 22–25 May 2011; pp. 231–246. [Google Scholar] [CrossRef]
  6. Gunes, I.; Kaleli, C.; Bilge, A.; Polat, H. Shilling attacks against recommender systems: A comprehensive survey. Artif. Intell. Rev. 2014, 42, 767–799. [Google Scholar] [CrossRef]
  7. Chen, H.; Li, J. Data Poisoning Attacks on Cross-domain Recommendation. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM 2019, Beijing, China, 3–7 November 2019; pp. 2177–2180. [Google Scholar] [CrossRef]
  8. Burke, R.D.; Mobasher, B.; Bhaumik, R.; Williams, C. Segment-Based Injection Attacks against Collaborative Filtering Recommender Systems. In Proceedings of the 5th IEEE International Conference on Data Mining (ICDM 2005), Houston, TX, USA, 27–30 November 2005; pp. 577–580. [Google Scholar] [CrossRef]
  9. Lin, C.; Chen, S.; Li, H.; Xiao, Y.; Li, L.; Yang, Q. Attacking Recommender Systems with Augmented User Profiles. In Proceedings of the CIKM ’20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, 19–23 October 2020; pp. 855–864. [Google Scholar] [CrossRef]
  10. Mobasher, B.; Burke, R.D.; Bhaumik, R.; Williams, C. Toward trustworthy recommender systems: An analysis of attack models and algorithm robustness. ACM Trans. Internet Techn. 2007, 7, 23. [Google Scholar] [CrossRef]
  11. Ho, J.; Jain, A.; Abbeel, P. Denoising Diffusion Probabilistic Models. In Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, Virtual, 6–12 December 2020; pp. 8840–8850. [Google Scholar]
  12. Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; Chen, M. Hierarchical Text-Conditional Image Generation with CLIP Latents. arXiv 2022, arXiv:2204.06125. [Google Scholar]
  13. Kwon, H. AudioGuard: Speech Recognition System Robust against Optimized Audio Adversarial Examples. Multim. Tools Appl. 2024, 83, 57943–57962. [Google Scholar] [CrossRef]
  14. Laskar, A.K.; Ahmed, J.; Sohail, S.S.; Nafis, A.; Haq, Z.A. Shilling Attacks on Recommender System: A Critical Analysis. In Proceedings of the 2023 10th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 15–17 March 2023; pp. 1617–1622. [Google Scholar]
  15. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680. [Google Scholar]
  16. Kapoor, S.; Kapoor, V.; Kumar, R. A review of attacks and its detection attributes on collaborative recommender systems. Int. J. Adv. Res. Comput. Sci. 2017, 8, 1188. [Google Scholar] [CrossRef]
  17. Mobasher, B.; Burke, R.D.; Bhaumik, R.; Sandvig, J.J. Attacks and Remedies in Collaborative Recommendation. IEEE Intell. Syst. 2007, 22, 56–63. [Google Scholar] [CrossRef]
  18. O’Mahony, M.P.; Hurley, N.J.; Silvestre, G.C.M. Recommender Systems: Attack Types and Strategies. In Proceedings of the Twentieth National Conference on Artificial Intelligence and the Seventeenth Innovative Applications of Artificial Intelligence Conference, Pittsburgh, PA, USA, 9–13 July 2005; pp. 334–339. [Google Scholar]
  19. Tang, J.; Wen, H.; Wang, K. Revisiting Adversarially Learned Injection Attacks Against Recommender Systems. In Proceedings of the RecSys 2020: Fourteenth ACM Conference on Recommender Systems, Virtual Event, 22–26 September 2020; pp. 318–327. [Google Scholar] [CrossRef]
  20. Christakopoulou, K.; Banerjee, A. Adversarial attacks on an oblivious recommender. In Proceedings of the 13th ACM Conference on Recommender Systems, RecSys 2019, Copenhagen, Denmark, 16–20 September 2019; pp. 322–330. [Google Scholar] [CrossRef]
  21. Lin, C.; Chen, S.; Zeng, M.; Zhang, S.; Gao, M.; Li, H. Shilling Black-Box Recommender Systems by Learning to Generate Fake User Profiles. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 1305–1319. [Google Scholar] [CrossRef]
  22. Liu, X.; Xiao, Y.; Jiao, X.; Zheng, W.; Ling, Z. A novel Kalman Filter based shilling attack detection algorithm. Math. Biosci. Eng. 2020, 17, 1558–1577. [Google Scholar] [CrossRef] [PubMed]
  23. Sundar, A.P.; Li, F.; Zou, X.; Gao, T.; Russomanno, E.D. Understanding Shilling Attacks and Their Detection Traits: A Comprehensive Survey. IEEE Access 2020, 8, 171703–171715. [Google Scholar] [CrossRef]
  24. Fang, M.; Yang, G.; Gong, N.Z.; Liu, J. Poisoning Attacks to Graph-Based Recommender Systems. In Proceedings of the 34th Annual Computer Security Applications Conference, ACSAC 2018, San Juan, PR, USA, 3–7 December 2018; pp. 381–392. [Google Scholar] [CrossRef]
  25. Yang, G.; Gong, N.Z.; Cai, Y. Fake Co-visitation Injection Attacks to Recommender Systems. In Proceedings of the 24th Annual Network and Distributed System Security Symposium, NDSS 2017, San Diego, CA, USA, 26 February–1 March 2017; pp. 1–12. [Google Scholar]
  26. Fang, M.; Gong, N.Z.; Liu, J. Influence Function based Data Poisoning Attacks to Top-N Recommender Systems. In Proceedings of the WWW ’20: The Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; pp. 3019–3025. [Google Scholar] [CrossRef]
  27. Li, B.; Wang, Y.; Singh, A.; Vorobeychik, Y. Data Poisoning Attacks on Factorization-Based Collaborative Filtering. In Proceedings of the Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, Barcelona, Spain, 5–10 December 2016; pp. 1885–1893. [Google Scholar]
  28. Chen, L.; Xu, Y.; Xie, F.; Huang, M.; Zheng, Z. Data poisoning attacks on neighborhood-based recommender systems. Trans. Emerg. Telecommun. Technol. 2021, 32, e3872. [Google Scholar] [CrossRef]
  29. Linden, G.; Smith, B.; York, J. Industry Report: Amazon.com Recommendations: Item-to-Item Collaborative Filtering. IEEE Distrib. Syst. Online 2003, 4, 76–80. [Google Scholar]
  30. Burke, R.; Mobasher, B.; Bhaumik, R. Limited knowledge shilling attacks in collaborative filtering systems. In Proceedings of the 3rd International Workshop on Intelligent Techniques for Web Personalization (ITWP 2005), 19th International Joint Conference on Artificial Intelligence (IJCAI 2005), Edinburgh, UK, 1 August 2005; pp. 17–24. [Google Scholar]
  31. Seminario, C.E.; Wilson, D.C. Attacking item-based recommender systems with power items. In Proceedings of the Eighth ACM Conference on Recommender Systems, RecSys ’14, Foster City, Silicon Valley, CA, USA, 6–10 October 2014; pp. 57–64. [Google Scholar] [CrossRef]
  32. Guo, S.; Bai, T.; Deng, W. Targeted Shilling Attacks on GNN-based Recommender Systems. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, CIKM 2023, Birmingham, UK, 21–25 October 2023; Frommholz, I., Hopfgartner, F., Lee, M., Oakes, M., Lalmas, M., Zhang, M., Santos, R.L.T., Eds.; ACM: New York, NY, USA, 2023; pp. 649–658. [Google Scholar] [CrossRef]
  33. Liu, S.; Yu, S.; Li, H.; Yang, Z.; Duan, M.; Liao, X. A novel shilling attack on black-box recommendation systems for multiple targets. Neural Comput. Appl. 2025, 37, 3399–3417. [Google Scholar] [CrossRef]
  34. Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. In Proceedings of the 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, 2–4 May 2016; pp. 1–10. [Google Scholar]
  35. Kawar, B.; Elad, M.; Ermon, S.; Song, J. Denoising Diffusion Restoration Models. In Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, 28 November–9 December 2022; pp. 1–12. [Google Scholar]
  36. Rendle, S.; Freudenthaler, C.; Gantner, Z.; Schmidt-Thieme, L. BPR: Bayesian Personalized Ranking from Implicit Feedback. In Proceedings of the UAI 2009, Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, Montreal, QC, Canada, 18–21 June 2009; pp. 452–461. [Google Scholar]
  37. Wang, X.; Jin, H.; Zhang, A.; He, X.; Xu, T.; Chua, T.S. Disentangled Graph Collaborative Filtering. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, 25–30 July 2020; pp. 1001–1010. [Google Scholar] [CrossRef]
  38. Xue, H.; Dai, X.; Zhang, J.; Huang, S.; Chen, J. Deep Matrix Factorization Models for Recommender Systems. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, 19–25 August 2017; pp. 3203–3209. [Google Scholar] [CrossRef]
  39. van den Berg, R.; Kipf, T.N.; Welling, M. Graph Convolutional Matrix Completion. arXiv 2017, arXiv:1706.02263. [Google Scholar]
  40. Lin, Z.; Tian, C.; Hou, Y.; Zhao, W.X. Improving Graph Collaborative Filtering with Neighborhood-enriched Contrastive Learning. In Proceedings of the WWW ’22: The ACM Web Conference 2022, Lyon, France, 25–29 April 2022; pp. 2320–2329. [Google Scholar] [CrossRef]
  41. He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; Chua, T.S. Neural Collaborative Filtering. In Proceedings of the 26th International Conference on World Wide Web, WWW 2017, Perth, Australia, 3–7 April 2017; pp. 173–182. [Google Scholar] [CrossRef]
  42. Wang, X.; He, X.; Wang, M.; Feng, F.; Chua, T.S. Neural Graph Collaborative Filtering. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019, Paris, France, 21–25 July 2019; pp. 165–174. [Google Scholar] [CrossRef]
  43. Zhao, W.X.; Hou, Y.; Pan, X.; Yang, C.; Zhang, Z.; Lin, Z.; Zhang, J.; Bian, S.; Tang, J.; Sun, W.; et al. RecBole 2.0: Towards a More Up-to-Date Recommendation Library. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Atlanta, GA, USA, 17–21 October 2022; pp. 4722–4726. [Google Scholar] [CrossRef]
  44. Zhang, Y.; Tan, Y.; Zhang, M.; Liu, Y.; Chua, T.S.; Ma, S. Catch the Black Sheep: Unified Framework for Shilling Attack Detection Based on Fraudulent Action Propagation. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, 25–31 July 2015; pp. 2408–2414. [Google Scholar]
Figure 1. The threat model of a shilling attack.
Figure 2. Overview of DIFshilling.
Figure 3. Heatmaps showing attack results. The lighter a cell is, the larger the corresponding HR@10.
Figure 4. The precision and recall of detectors in identifying fake user profiles generated by different attack methods. The circle, pentagram, and square represent the ML100K, FilmTrust, and Automotive datasets, respectively.
Figure 5. Real and fake user profiles of ML100K in the latent space.
Figure 6. Attack performance and detection results under different percentages of fake users in the ML100K, FilmTrust, and Automotive datasets. (a) Attack performance in ML100K. (b) Attack performance in FilmTrust. (c) Attack performance in Automotive. (d) Precision and recall of detecting fake users in ML100K, FilmTrust, and Automotive.
Figure 7. Different percentages of fake user injection in the FilmTrust latent space.
Figure 8. The precision and recall of detectors in identifying fake user profiles generated by DIFshilling and its two variants.
Table 1. The benchmark shilling attack methods used in the experiments.
Method Name | Description | Type | References
Random Attack | Chooses items at random, applying ratings from a normal distribution with parameters matching the overall ratings' mean and variance. | Algorithm-agnostic | [16,17]
Average Attack | Uses a normal distribution for ratings with parameters based on the sampled items' ratings. | Algorithm-agnostic | [16]
Segment Attack | Assigns maximal ratings to selected items and minimal ratings to others, targeting specific system segments. | Algorithm-agnostic | [16]
Bandwagon Attack | Exploits item popularity by assigning maximal ratings to popular items and random ratings to others. | Algorithm-agnostic | [16,18]
AIA | Generates profiles maximizing an attack objective using a bilevel optimization framework under a black-box scenario. | Algorithm-specific | [19]
DCGAN | Treats shilling attacks as a repeated general-sum game, using DCGAN for fake profile generation. | GAN-based | [20]
AUSH | Trains models considering shilling, reconstruction, and adversarial losses to balance attack impact and detectability. | GAN-based | [9]
Leg-UP | Improves AUSH by focusing on attack transferability and invisibility, leveraging direct loss functions. | GAN-based | [9,21]
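As an illustration of how the algorithm-agnostic baselines above operate, the following minimal sketch generates fake profiles in the Random, Average, and Bandwagon styles. It is only an example: the 1-5 rating scale, the number of popular items, and the function name baseline_profiles are assumptions made for this sketch, not the implementation used in the experiments.

```python
import numpy as np

def baseline_profiles(ratings, target_item, n_fake, n_filler, kind="random", seed=0):
    """Fake profiles for the Random / Average / Bandwagon baselines.

    ratings: (n_users, n_items) array, 0 = unrated, 1-5 scale assumed.
    Returns an (n_fake, n_items) array; every profile pushes the target item."""
    rng = np.random.default_rng(seed)
    n_items = ratings.shape[1]
    observed = ratings[ratings > 0]                       # all observed ratings
    rated_counts = (ratings > 0).sum(axis=0)
    item_means = ratings.sum(axis=0) / np.maximum(rated_counts, 1)
    popular = np.argsort(-rated_counts)[:10]              # 10 most-rated items
    candidates = np.setdiff1d(np.arange(n_items), np.append(popular, target_item))

    profiles = np.zeros((n_fake, n_items))
    for f in range(n_fake):
        fillers = rng.choice(candidates, size=n_filler, replace=False)
        if kind == "random":        # global mean/std of all observed ratings
            vals = rng.normal(observed.mean(), observed.std(), n_filler)
        elif kind == "average":     # per-item mean of each sampled filler item
            vals = rng.normal(item_means[fillers], 1.0)
        elif kind == "bandwagon":   # popular items maxed out, fillers random
            profiles[f, popular] = 5.0
            vals = rng.normal(observed.mean(), observed.std(), n_filler)
        else:
            raise ValueError(f"unknown attack type: {kind}")
        profiles[f, fillers] = np.clip(np.round(vals), 1, 5)
        profiles[f, target_item] = 5.0                     # push attack on the target
    return profiles

# Toy usage: a sparse 500 x 1000 rating matrix with roughly 5% observed ratings.
rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(500, 1000)) * (rng.random((500, 1000)) < 0.05)
fake = baseline_profiles(R, target_item=50, n_fake=5, n_filler=36, kind="bandwagon")
```

Each sketched profile rates the target item maximally and hides that rating among filler items whose values mimic either the global rating distribution, per-item averages, or popular items, which is what makes these baselines cheap but comparatively easy to detect.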
Table 2. Statistics, thresholds, and attack sizes of the datasets.
Dataset | Users | Items | Ratings | Sparsity | Threshold | Attack Size
ML100K | 943 | 1682 | 100,000 | 93.70% | 36 | 10
FilmTrust | 780 | 721 | 28,799 | 94.88% | 15 | 8
Automotive | 2928 | 1835 | 20,473 | 99.62% | 12 | 30
Fashion | 749,232 | 186,189 | 883,636 | 99.99% | 4 | 7500
Book-crossing | 278,858 | 271,379 | 1,149,780 | 99.99% | 10 | 2800
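Sparsity in Table 2 is consistent with the usual definition, one minus the ratio of observed ratings to the number of user-item pairs, and the listed attack sizes amount to roughly 1% of each dataset's users. The short sketch below (values copied from the ML100K, FilmTrust, and Automotive rows) reproduces the reported percentages.

```python
# Sparsity = 1 - ratings / (users * items); attack size compared with the user count.
datasets = {
    "ML100K":     (943, 1682, 100_000, 10),
    "FilmTrust":  (780, 721, 28_799, 8),
    "Automotive": (2928, 1835, 20_473, 30),
}
for name, (users, items, ratings, attack_size) in datasets.items():
    sparsity = 1 - ratings / (users * items)
    print(f"{name}: sparsity = {sparsity:.2%}, attack size = {attack_size} "
          f"({attack_size / users:.2%} of users)")
# Prints 93.70%, 94.88%, and 99.62%, matching the sparsity column above.
```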
Table 3. Attack performance in ML100K (%). The best results are marked in bold.
Model | Average | Random | Segment | Bandwagon | AIA | DCGAN | AUSH | Leg-UP | DIFshilling
HR@10 (%)
BPR | 0.8484 | 0.4242 | 0.8484 | 0.9544 | 2.333 | 2.1209 | 1.6967 | 1.2725 | 5.4083
DGCF | 0.4242 | 0.106 | 0.2121 | 0.106 | 0.106 | 0.106 | 0.5302 | 0.4242 | 15.6946
DMF | 0.8484 | 0.8484 | 1.4846 | 1.9088 | 2.9692 | 1.2725 | 3.9236 | 3.8176 | 13.2556
GCMC | 0.9544 | 0.7423 | 1.0604 | 1.0604 | 1.6967 | 1.6967 | 2.5451 | 3.0753 | 9.1198
NCL | 0.4242 | 0.3181 | 0.7423 | 0.3181 | 1.4846 | 1.1665 | 1.5907 | 1.1665 | 3.3934
NeuMF | 0.3181 | 0.3181 | 3.0753 | 0.3181 | 0.7423 | 1.0604 | 1.9088 | 1.3786 | 4.0297
NGCF | 1.0604 | 1.8028 | 4.3478 | 6.5748 | 7.211 | 4.3478 | 5.4083 | 5.1962 | 11.877
NDCG@10 (%)
BPR | 0.1417 | 0.1417 | 0.3538 | 0.322 | 0.2359 | 0.224 | 0.6959 | 0.5166 | 1.1624
DGCF | 0.1409 | 0.1409 | 0.1409 | 0.2354 | 0.8863 | 0.0307 | 0.307 | 0.2068 | 1.0218
DMF | 0.9732 | 0.9732 | 0.9732 | 0.7756 | 0.0948 | 0.066 | 0.0849 | 0.1064 | 1.6479
GCMC | 0.0457 | 0.0457 | 0.0457 | 0.096 | 0.1064 | 0.0319 | 0.3219 | 0.1824 | 3.439
NCL | 0.1051 | 0.1436 | 0.1436 | 0.1079 | 0.2864 | 0.4729 | 0.5961 | 0.5816 | 0.8004
NeuMF | 0.1141 | 0.1141 | 0.1141 | 0.1083 | 0.2814 | 0.4835 | 0.8029 | 0.5628 | 1.7754
NGCF | 0.699 | 0.7435 | 1.6512 | 0.712 | 2.7664 | 1.9693 | 3.0507 | 3.4434 | 5.2711
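The HR@10 and NDCG@10 values reported in the attack-performance tables follow their standard definitions for a single pushed target item: HR@10 is the share of genuine users whose top-10 recommendation list contains the target, and NDCG@10 additionally rewards higher ranks (the ideal DCG for one relevant item is 1). The sketch below shows this computation under the assumption that each user's ranked top-10 list is available; the function name and data layout are illustrative only.

```python
import math

def target_hr_ndcg_at_k(topk_lists, target_item, k=10):
    """HR@k and NDCG@k for a single pushed target item.

    topk_lists: {user_id: [item ids ranked best-first]} for genuine users.
    Returns percentages, matching the (%) convention used in the tables."""
    hits, ndcg = 0.0, 0.0
    for items in topk_lists.values():
        ranked = items[:k]
        if target_item in ranked:
            rank = ranked.index(target_item) + 1   # 1-based rank of the target
            hits += 1
            ndcg += 1.0 / math.log2(rank + 1)      # IDCG = 1 for one relevant item
    n = len(topk_lists)
    return 100 * hits / n, 100 * ndcg / n

# Example: 3 users, the target item 42 appears in two of the top-10 lists.
lists = {0: [7, 42, 3], 1: [42], 2: [5, 6]}
print(target_hr_ndcg_at_k(lists, target_item=42))  # roughly (66.67, 54.36)
```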
Table 4. Attack performance in Fashion (%). The best results are marked in bold.
Models | Average | Random | Segment | Bandwagon | AIA | DCGAN | AUSH | Leg-UP | DIFshilling
HR@10 (%)
BPR | 0.0844 | 0.0562 | 0.0844 | 0.0281 | 0.225 | 0.0844 | 0.0562 | 0.0844 | 0.5624
DGCF | 0.0562 | 0.0000 | 0.0281 | 0.0281 | 0.0562 | 0.0281 | 0.0281 | 0.0562 | 0.225
DMF | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0844
GCMC | 0.0000 | 0.0000 | 0.0281 | 0.0281 | 0.0000 | 0.0309 | 0.0227 | 0.0281 | 0.0281
NCL | 0.0281 | 0.0000 | 0.0281 | 0.0281 | 0.0000 | 0.0281 | 0.0281 | 0.0281 | 0.1687
NeuMF | 0.0281 | 0.0281 | 0.0281 | 0.0562 | 0.0000 | 0.0000 | 0.0844 | 0.0000 | 0.4218
NGCF | 0.0562 | 0.0844 | 0.0844 | 0.1125 | 0.1125 | 0.0562 | 0.3375 | 0.0000 | 0.3937
NDCG@10 (%)
BPR | 0.0271 | 0.0271 | 0.0831 | 0.1101 | 0.1083 | 0.0302 | 0.037 | 0.033 | 0.123
DGCF | 0.0173 | 0.0000 | 0.01 | 0.0121 | 0.0189 | 0.0089 | 0.0085 | 0.0067 | 0.0906
DMF | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0254
GCMC | 0.0000 | 0.0000 | 0.0081 | 0.0094 | 0.0000 | 0.0423 | 0.8305 | 0.7462 | 0.0085
NCL | 0.0085 | 0.0000 | 0.0344 | 0.0302 | 0.0000 | 0.5386 | 0.0109 | 0.0216 | 0.177
NeuMF | 0.0177 | 0.0177 | 0.0345 | 0.0387 | 0.0000 | 0.2106 | 0.0294 | 0.0000 | 0.2433
NGCF | 0.1052 | 0.118 | 0.1847 | 0.1536 | 0.0432 | 0.0573 | 0.1503 | 0.0000 | 0.2225
Table 5. The attack performance of DIFshilling and its two variants on ML100K and FilmTrust.
HR@10 (%) | BPR | DGCF | DMF | GCMC | NCL | NeuMF | NGCF
ML100K, Ours | 5.4083 | 15.6946 | 13.2556 | 9.1198 | 3.3934 | 4.0297 | 11.8770
ML100K, Forward | 1.3786 | 0.6362 | 0.1060 | 0.6363 | 0.3181 | 1.6967 | 2.8632
ML100K, Backward | 3.4995 | 0.9544 | 2.5451 | 1.3786 | 1.0604 | 1.9088 | 2.9692
FilmTrust, Ours | 31.7872 | 31.2415 | 33.6971 | 60.0273 | 30.8322 | 30.9686 | 39.8363
FilmTrust, Forward | 0.4093 | 1.3643 | 26.4666 | 0.1364 | 0.1364 | 0.1364 | 0.1364
FilmTrust, Backward | 2.7285 | 1.7735 | 29.0587 | 12.0055 | 0.2729 | 0.1364 | 0.2729
