Article

Robust Support Vector Data Description with Truncated Loss Function for Outliers Depression

Department of Automatic Control, Northwestern Polytechnical University, Xi’an 710072, China
*
Author to whom correspondence should be addressed.
Entropy 2024, 26(8), 628; https://doi.org/10.3390/e26080628
Submission received: 11 June 2024 / Revised: 19 July 2024 / Accepted: 23 July 2024 / Published: 25 July 2024
(This article belongs to the Special Issue Applications of Information Theory to Machine Learning)

Abstract

Support vector data description (SVDD) is widely regarded as an effective technique for addressing anomaly detection problems. However, its performance can significantly deteriorate when the training data are affected by outliers or mislabeled observations. This study introduces a universal truncated loss function framework into the SVDD model to enhance its robustness and employs the fast alternating direction method of multipliers (ADMM) algorithm to solve various truncated loss functions. Moreover, the convergence of the fast ADMM algorithm is analyzed theoretically. Within this framework, we developed the truncated generalized ramp, truncated binary cross entropy, and truncated linear exponential loss functions for SVDD. We conducted extensive experiments on synthetic and real-world datasets to validate the effectiveness of these three SVDD models in handling data with different noise levels, demonstrating their superior robustness and generalization capabilities compared to other SVDD models.

1. Introduction

Anomaly detection refers to the identification of data points in a dataset that deviate from normal behavior. These deviations are known as anomalies or outliers in various application domains. Anomaly detection is extensively used in real-world settings, including credit card fraud detection, insurance fraud detection, cybersecurity intrusion detection, error detection in security systems, and military activity monitoring [1,2]. However, the acquisition of anomaly data in practical applications, such as medical diagnostics, machine malfunction detection, and circuit quality inspection, is expensive [3,4]. Consequently, there is significant interest in one-class classification (OCC) problems, where training samples only include normal data (also known as target data) or normal data with a small number of anomalies (also referred to as non-target data) [5,6,7]. In this context, it is important to define the following terms:
  • Normal data: Normal data refers to data points that conform to the characteristics and behavior patterns of the majority of data points within a dataset. They represent the normal operating state of a system or process.
  • Anomalies: Anomalies are data points that significantly deviate from normal patterns and usually reflect actual problems or critical events in the system.
  • Outliers: Outliers are data points that are significantly different from other data points in the dataset, which may be due to natural fluctuations, special circumstances, or noise.
  • Noise: Noise refers to irregular, random errors or fluctuations, usually caused by measurement errors or data entry mistakes, and does not reflect the actual state of the system.
Support vector data description (SVDD) is a method extensively used for one-class classifications (OCCs) [8]. The core idea of SVDD is to construct a hyper-sphere of minimal volume that encompasses all (or most) of the target class samples. Data points inside the hyper-sphere are considered normal, while those outside are considered anomalies. SVDD can be easily integrated with popular kernel methods or some deep neural network models [7], making it highly scalable and flexible. Due to these attractive features, SVDD has garnered significant attention and has been extensively developed. SVDD is regarded as an effective and excellent technique for anomaly detection problems; however, it remains sensitive to outliers and noise present in training datasets. In real-world scenarios, various issues, such as instrument failure, formatting errors, and unrepresentative sampling, result in datasets with anomalies, which degrade the performance of the SVDD [9,10].
The existing methods for mitigating the impact of noise are typically categorized as follows:
In order to reduce the impact of outliers on the OCC method, researchers have attempted to remove outliers through data preprocessing methods. Stanley Fong used methods such as cluster analysis to remove anomalies from the training set to achieve a robust classifier [11]. Breunig et al. tried to assign an outlier score to each sample in the dataset by estimating its local density, known as LOF [12]. Zheng et al. used LOF to filter raw samples and remove outliers [13]. Khan et al. [14] and Andreou and Karathanassi [15] calculated the interquartile range (IQR) of training samples, which provides a method for indicating a boundary beyond which samples are marked as outliers and removed. Clustering (k-means, DBSCAN) or LOF methods can significantly reduce noise points in data preprocessing, thereby improving the quality and effectiveness of subsequent model training. For datasets with obvious noise and significant distribution characteristics, preprocessing methods can be very effective in enhancing model performance. However, preprocessing methods have several issues: they require additional computational resources and time, especially with large datasets, potentially making the preprocessing step time-consuming. Moreover, there is a risk of overfitting and inadvertently deleting useful normal data points, impacting the model’s ability to accurately detect anomalies. Additionally, these preprocessing methods are sensitive to parameter settings, necessitating careful selection to achieve satisfactory results.
Zhao et al. proposed the dynamic radius SVDD method that accounts for hyper-sphere radius information and the existing data distribution [16,17]. This approach achieves a more flexible decision boundary by assigning different radii to different samples. Moreover, the framework for the dynamic radius SVDD method is based on the traditional SVDD framework. However, if the traditional SVDD has not undergone adequate training, the good performance of these dynamic approaches will be difficult to guarantee.
Density-weighted SVDD, position-weighted SVDD, Stahel–Donoho outlier-weighted SVDD, global plus local joint-weighted SVDD, and confidence-weighted SVDD are examples of weighted methods [18,19,20,21,22,23,24]. These methods assign smaller weights to sparse data, which are commonly outliers, thus excluding them from the sphere. These methods balance the target class data and outliers in the training phase, thus enhancing the classification performance, especially when the data are contaminated by outliers. However, when the number of outliers in the dataset increases and they form sparse clusters, the number of outliers might surpass that of normal samples. In such cases, weighted methods assign higher weights to the outliers and lower weights to the normal samples, leading to decreased algorithm performance.
The convexity of the hinge loss function used in the SVDD algorithm makes it sensitive to outliers. To address this issue, Xing et al. proposed a robust least squares one-class support vector machine (OCSVM) that employs a bounded, non-convex entropic loss function instead of the unbounded convex quadratic loss function used in the traditional least squares OCSVM [25]. The bounded, non-convex ramp loss function has also been used to improve robustness over the traditional OCSVM [26]; Tian et al. introduced the ramp loss into the traditional OCSVM to create the Ramp-OCSVM model [27]. Xing et al. further enhanced the robustness of the OCSVM by introducing a re-scaled hinge loss function [28]. Additionally, Zhong et al. proposed a new robust SVDD method, called pinball loss SVDD [29], to perform OCC tasks when the data are contaminated by outliers; the pinball loss function ensures minimal dispersion around the center of the sphere, thus creating a tighter decision boundary. Recently, Zheng introduced a mixed exponential loss function into the design of the SVDD model, enhancing its robustness and simplifying its implementation [30].
Extensive research has shown that, as a result of unbounded convex loss functions being sensitive to anomalies, loss functions with boundedness or bounded influence functions are more robust to the influence of outliers. To address this issue, researchers introduced an upper limit to unbounded loss functions, effectively preventing them from increasing beyond a certain point. The truncated loss function thus makes the SVDD model more robust. The advantages of truncated loss functions include:
  • Robustness to noise: Truncated loss functions can enhance the model’s robustness and stability by limiting the impact of outliers without removing data points.
  • Reduction of error propagation: In anomaly detection tasks, outliers may significantly contribute to the loss function, leading to error propagation and model instability. Truncated loss functions can effectively reduce error propagation caused by outliers, thereby improving overall model performance.
  • Generalization ability: Using truncated loss functions can prevent the model from overfitting to outliers, enhancing the model’s generalization ability. Truncated loss functions are well-suited for various types of datasets and noise conditions, particularly when noise is not obvious or easily detectable.
However, robust SVDD algorithms still face considerable challenges in the research. They are designed to address specific types of losses and lack an appropriate framework for constructing robust loss functions. Thus, researchers are required to learn how to use different algorithms and modify loss functions before use. Since truncated loss functions are often non-differentiable, methods such as the difference of convex algorithm (DCA) [31] and concave–convex procedures (CCCPs) [32,33] are commonly employed to provide solutions. For some truncated loss functions, the DCA cannot ensure straightforward decompositions or the direct use of comprehensive convex toolboxes, potentially increasing development and maintenance costs [34]. At present, no unified framework exists in the literature for the design of robust loss functions or a unified optimization algorithm. Therefore, even though this is challenging, providing a new bounded strategy for the SVDD model is crucial, with the potential for developing more efficient and universally applicable solutions.
In response to the several issues previously outlined, this study proposes a universal framework for the truncated loss functions of the SVDD model. To address and solve the non-differentiable, non-convex optimization problem introduced by the truncated loss function, we employ the fast ADMM algorithm. Our contributions to this field of study are as follows:
  • We define a universal truncated loss framework that smoothly and adaptively binds loss functions, while preserving their symmetry and sparsity.
  • To solve different truncated loss functions, we propose the use of a unified proximal operator algorithm.
  • We introduce a fast ADMM algorithm to handle any truncated loss function within a unified scheme.
  • We implement the proposed robust SVDD model for various datasets with different noise intensities. The experimental results for real datasets show that the proposed model exhibits superior resistance to outliers and noise compared to more traditional methods.
The remainder of this paper is organized as follows:
Section 2: We review related support vector data description (SVDD) models, providing a foundational understanding of the existing methodologies and their limitations.
Section 3: We propose a general framework for truncated loss functions. Within this framework, we examine the representative loss functions’ proximal operators and present a universal algorithm for solving these proximal operators.
Section 4: This section introduces the SVDD model that utilizes the truncated loss function, detailing its structure and theoretical framework.
Section 5: A new algorithm for solving the SVDD model with truncated loss functions is presented. This section also includes an analysis of the algorithm’s convergence properties, ensuring that the method is both robust and reliable.
Section 6: Numerical experiments and parameter analysis are conducted to validate the effectiveness of the proposed model. This section provides empirical evidence of the model’s performance across various datasets and noise scenarios.
Section 7: The conclusion summarizes the findings and contributions of the study, and discusses potential future research directions.

2. Related Works

SVDD has been widely applied in anomaly detection, and numerous learning algorithms based on SVDD have been proposed. In this section, we provide a brief overview of these algorithms.

2.1. SVDD

The goal of SVDD is to discover a hyper-sphere that encompasses target samples, while excluding non-target samples located outside it [8]. The objective function of SVDD is represented by the following equation:
$$\min_{R,\mu,\xi_i}\; R^2 + C\sum_i \xi_i \quad \text{s.t.}\quad \|\varphi(x_i)-\mu\|^2 \le R^2 + \xi_i,\ \ \xi_i \ge 0,\ \forall i$$
where $\mu$ and $R$ represent the center and radius of the hyper-sphere, respectively, $C$ is a regularization parameter, and $\xi_i$ is a slack variable. By using Lagrange multipliers and incorporating the constraints into the objective function, the dual problem of Equation (1) can be expressed as:
$$\max_{\alpha}\; \sum_i \alpha_i (x_i \cdot x_i) - \sum_{i,j}\alpha_i\alpha_j (x_i \cdot x_j) \quad \text{s.t.}\quad \sum_i \alpha_i = 1,\ \ 0\le \alpha_i \le C,\ \forall i$$
where α i is the Lagrange multiplier, the optimization problem in Equation (2) is a standard quadratic programming problem, and α i can be obtained using quadratic programming algorithms. μ and R can be calculated using the following equation:
$$\mu = \sum_{i=1}^{n} \alpha_i \varphi(x_i)$$
$$R^2 = (x_s \cdot x_s) - 2\sum_i \alpha_i (x_s \cdot x_i) + \sum_{i,j}\alpha_i\alpha_j (x_i \cdot x_j)$$
where x s represents support vectors. The decision function for the test sample z is presented as follows:
$$f(z) = \operatorname{sign}\left(R^2 - \|z-\mu\|^2\right) = \operatorname{sign}\Big\{R^2 - (z \cdot z) + 2\sum_i \alpha_i (z \cdot x_i) - \sum_{i,j}\alpha_i\alpha_j (x_i \cdot x_j)\Big\}$$
If $f(z) \ge 0$, then sample $z$ belongs to the target class; otherwise, $z$ is considered a non-target class sample.
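For readers who wish to experiment with the baseline model, the following is a minimal sketch of solving the SVDD dual in Equation (2) with a Gaussian kernel and evaluating the decision function in Equation (5). The use of scipy's SLSQP solver, the tolerances, and all function names are illustrative choices rather than the implementation used in this paper.

import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(X, Z, sigma=1.0):
    # K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def svdd_dual(X, C=1.0, sigma=1.0):
    # Equation (2): maximize sum_i a_i K_ii - a^T K a, s.t. sum_i a_i = 1, 0 <= a_i <= C
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    obj = lambda a: -(a @ np.diag(K)) + a @ K @ a      # negate to minimize
    cons = ({'type': 'eq', 'fun': lambda a: a.sum() - 1.0},)
    res = minimize(obj, np.full(n, 1.0 / n), method='SLSQP',
                   bounds=[(0.0, C)] * n, constraints=cons)
    return res.x, K

def decision_function(Z, X, a, K, C, sigma=1.0):
    # R^2 from a boundary support vector with 0 < a_s < C (Equation (4))
    s = int(np.where((a > 1e-6) & (a < C - 1e-6))[0][0])
    aKa = a @ K @ a
    R2 = K[s, s] - 2.0 * a @ K[s] + aKa
    Kz = gaussian_kernel(Z, X, sigma)
    # Equation (5); K(z, z) = 1 for the Gaussian kernel
    return R2 - (1.0 - 2.0 * Kz @ a + aKa)

For moderately sized training sets a general-purpose solver such as SLSQP is sufficient as a sanity check; dedicated quadratic programming solvers are preferable for larger problems.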

2.2. Robust SVDD Variants

Due to its sensitivity to outliers, the classification performance of SVDD significantly deteriorates when the data are contaminated. Thus, to enhance the robustness of SVDD, various improved SVDD methods have been proposed over the past decades.

2.2.1. Weighted SVDDs

One common approach is the use of weighted SVDD methods, where different slack variables are assigned different weights [18,19,20,21,22,23,24]. Although the specific methods for weight distribution vary, they can generally be represented in a unified form:
$$\min_{R,\mu,\xi_i}\; R^2 + C\sum_i w_i\xi_i \quad \text{s.t.}\quad \|\varphi(x_i)-\mu\|^2 \le R^2 + \xi_i,\ \ \xi_i \ge 0,\ \forall i$$
where { w i } i = 1 n represents pre-calculated weights. The density weighting method permits these weights to be calculated as follows [20]:
$$w_i = 1 - \frac{d(x_i, x_i^k)}{\max\{d(x_1,x_1^k),\ldots,d(x_n,x_n^k)\}}$$
where $x_i^k$ denotes the k-th nearest neighbor of $x_i$, and $d(x_i,x_i^k)$ denotes the Euclidean distance between $x_i$ and $x_i^k$. Since outliers are typically located in relatively low-density areas, their distances to their neighboring samples are greater than those of normal samples, which results in smaller weights. The R-SVDD algorithm constructs weights by assigning a local density to each data point based on a truncated distance [19].
$$\rho_i = \sum_j \exp\left(-\left(\frac{d_{ij}}{d_c}\right)^2\right)$$
where d i j denotes the distance between x i and x j , and d c is the truncation distance. The calculation of the weight function is as follows:
$$w(x_i) = \frac{\rho(x_i)}{\max\{\rho(x_1),\ldots,\rho(x_n)\}}$$
Other weight calculation methods can be found in [21,22,23,24]. The dual problem of Equation (6) is represented by the following equation:
$$\max_{\alpha}\; \sum_i \alpha_i (x_i \cdot x_i) - \sum_{i,j}\alpha_i\alpha_j (x_i \cdot x_j) \quad \text{s.t.}\quad \sum_i \alpha_i = 1,\ \ 0\le \alpha_i \le w_i C,\ \forall i$$
From this equation, it can be observed that each Lagrange multiplier has an upper bound $\alpha_i \le w_i C$.
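The following is a minimal sketch of the two weighting schemes in Equations (7)–(9); the helper names and the use of scipy's cdist are my own choices, not the implementations of the cited works.

import numpy as np
from scipy.spatial.distance import cdist

def knn_density_weights(X, k=5):
    # w_i = 1 - d(x_i, x_i^k) / max_j d(x_j, x_j^k), Equation (7)
    D = cdist(X, X)
    d_k = np.sort(D, axis=1)[:, k]     # column 0 is the self-distance, so column k is the k-th neighbor
    return 1.0 - d_k / d_k.max()

def truncated_distance_weights(X, d_c=1.0):
    # rho_i = sum_j exp(-(d_ij / d_c)^2), Equation (8); w_i = rho_i / max_j rho_j, Equation (9)
    D = cdist(X, X)
    rho = np.exp(-(D / d_c) ** 2).sum(axis=1)
    return rho / rho.max()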

2.2.2. Pinball Loss SVDD

The pinball loss SVDD (Pin-SVDD) modifies the hinge loss SVDD by replacing its loss function with the pinball loss function to create an optimized problem formulation [29]. This modification enables a more robust handling of outliers by adjusting the sensitivity toward deviations, depending on their direction relative to the decision boundary.
$$\min_{R,\mu,\xi_i}\; R^2 + C\sum_i\xi_i \quad \text{s.t.}\quad \|\varphi(x_i)-\mu\|^2 \le (1-\tau)R^2+\xi_i,\ \ \|\varphi(x_i)-\mu\|^2 \ge -\frac{1}{\tau}\xi_i,\ \forall i$$
where $0 < \tau \le 1$ is a constant. With the use of the Lagrange multipliers method, the dual optimization problem in Equation (11) can be expressed as follows:
$$\max_{\alpha}\; \sum_i \alpha_i\left(g_1 (x_i\cdot x_i) + g_2\sum_{j=1}^{n} (x_i\cdot x_j)\right) + h\sum_{i,j}\alpha_i\alpha_j (x_i\cdot x_j) \quad \text{s.t.}\quad \sum_i\alpha_i = \frac{1}{1-\tau},\ \ 0\le\alpha_i\le C,\ \forall i$$
where $h = \frac{(1-\tau)^2}{\tau nC+1}$, $g_1 = 1-\tau$, and $g_2 = \frac{2(1-\tau)\tau C}{\tau nC+1}$. Zhong demonstrated that the pinball loss, by minimizing the scatter around the center of the sphere, enhances the robustness of the developed model.

2.2.3. SVDD with Mixed Exponential Loss Function

Zheng used a mixed exponential loss function to design a classification model, and the optimization problem of the method is presented by the following equation [30]:
$$\min_{R,\mu,\xi_i}\; R^2 + C\sum_i \rho(\xi_i) \quad \text{s.t.}\quad \|\varphi(x_i)-\mu\|^2 \le R^2 + \xi_i,\ \ \xi_i \ge 0,\ \forall i$$
where ρ ( ξ i ) is a mixed exponential function of ξ i , expressed as follows:
$$\rho(\xi_i) = \lambda\exp(\tau_1\xi_i) + (1-\lambda)\exp(\tau_2\xi_i)$$
where $\tau_1 > 0$ and $\tau_2 > 0$ are two scale parameters, and $0 \le \lambda \le 1$ is a mixture parameter used to balance the contributions of the two exponential functions. The mixed exponential loss function highlights the importance of samples involved in the target class while reducing the influence of samples that are outliers. This approach significantly enhances the robustness of the SVDD model. This loss function achieves more accurate and stable anomaly detection results in various settings by preserving the integrity of the target class and diminishing the effect of potential anomalies.

3. Truncated Loss Function

SVDD models with unbounded loss functions can achieve satisfactory results when addressing scenarios lacking noise. However, the continual growth of these loss functions results in the collapse of the model when it is subjected to noise. Therefore, truncating the SVDD model’s loss function makes it more robust. The general definition of a truncated loss function is as follows:
$$L(u,\delta) = \phi(u) - \left(\phi(u)-\delta\right)_+$$
where $\delta > 0$ is a constant, and $\phi(u)$ is an unbounded loss function satisfying $\phi(u) = 0$ for $u \le 0$. Since $\phi(u)$ is an abstract function, this general form of the truncated loss function covers several loss functions. The three specific truncated loss functions created in our study are presented as follows:
  • Truncated generalized ramp loss function: $L_{\phi_R}(u,\delta) = \min\{\delta, \max(0, \phi_R(u))\}$, where $\phi_R(u) = u/v$, with $u, v > 0$.
  • Truncated binary cross entropy loss function: $L_{\phi_f}(u,\delta) = \min\{\delta, \max(0, \phi_f(u))\}$, where $\phi_f(u) = \log(1 + u/\theta)$, with $u, \theta > 0$.
  • Truncated linear exponential loss function: $L_{\phi_L}(u,\delta) = \min\{\delta, \max(0, \phi_L(u))\}$, where $\phi_L(u) = \exp(ayu) - ayu - 1$, with $u, a > 0$ and $y = \pm 1$.
Assuming the truncation point is located at $u = \delta$, the mathematical properties of the three truncated loss functions presented above can be summarized as follows:
  • For samples with $u \le 0$, the loss value is 0; for samples with $u > \delta$, the loss value is $\delta$. Thus, the general truncated loss function exhibits sparsity and robustness to outliers.
  • $L_{\phi_R}(u,\delta)$ and $L_{\phi_f}(u,\delta)$ are truncated concave loss functions, which are non-differentiable at $u = 0$ and $u = \delta$. $L_{\phi_L}(u,\delta)$ is a truncated convex loss function, which is non-differentiable at $u = \delta$ and differentiable at $u = 0$.
  • $L_{\phi_R}(u,\delta)$ and $L_{\phi_f}(u,\delta)$ admit explicit expressions for their proximal operators, while $L_{\phi_L}(u,\delta)$ does not.
In the next section, we provide explicit expressions for the proximal operators of L ϕ R ( u , δ ) and L ϕ f ( u , δ ) .
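Before turning to the proximal operators, the following minimal sketch expresses the three truncated losses as plain NumPy functions, using the framework form $L(u,\delta) = \min\{\delta, \max(0, \phi(u))\}$; the vectorized form and the default parameter values are illustrative.

import numpy as np

def trunc_generalized_ramp(u, delta, v=1.0):
    # phi_R(u) = u / v for u > 0, truncated at delta
    return np.minimum(delta, np.maximum(0.0, u / v))

def trunc_binary_cross_entropy(u, delta, theta=1.0):
    # phi_f(u) = log(1 + u / theta) for u > 0, truncated at delta
    return np.minimum(delta, np.maximum(0.0, np.log1p(np.maximum(u, 0.0) / theta)))

def trunc_linear_exponential(u, delta, a=1.0, y=1):
    # phi_L(u) = exp(a y u) - a y u - 1 for u > 0, truncated at delta
    phi = np.exp(a * y * u) - a * y * u - 1.0
    return np.minimum(delta, np.where(u > 0, phi, 0.0))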

3.1. Proximal Operators of Truncated Loss Functions

Definition 1 
(Proximal Operator [35]). Assume $f:\mathbb{R}\to\mathbb{R}\cup\{+\infty\}$ is a proper lower-semi-continuous loss function. The proximal operator of $f(u)$ at $x$ is defined as follows:
$$\operatorname{prox}_{\lambda f}(x) = \arg\min_u \left\{ f(u) + \frac{1}{2\lambda}\|u-x\|^2 \right\}$$
when f ( u ) is a convex loss function, it presents a single-value proximal operator; when f ( u ) is a non-convex loss function, it exhibits a multi-value proximal operator.
Lemma 1 
([36]). When $f(u) = \log(1 + u/\theta)$, let $v(x) = \arg\min_u \left\{\log(1 + u/\theta) + \frac{1}{2\lambda}(u-x)^2\right\}$, $x\in\mathbb{R}$. The expressions for the proximal operators are as follows:
$$v(x) = \begin{cases} \bar v(x) & (x-\theta)^2 - 4(\lambda - x\theta) \ge 0 \ \text{and}\ u > 0 \\ 0 & \text{otherwise} \end{cases}$$
$$\bar v(x) = \arg\min_{u\in\left\{0,\ \left[\frac{(x-\theta)+\sqrt{(x-\theta)^2-4(\lambda-x\theta)}}{2}\right]_+,\ \left[\frac{(x-\theta)-\sqrt{(x-\theta)^2-4(\lambda-x\theta)}}{2}\right]_+\right\}} \left\{\log\left(1+\frac{u}{\theta}\right)+\frac{1}{2\lambda}(u-x)^2\right\}.$$
Lemma 2. 
The explicit expression of the L ϕ R ( u , δ ) proximal operator is as follows:
1. 
When $0 < \lambda < 2\delta v^2$, the explicit expression of the $L_{\phi_R}(u,\delta)$ proximal operator is presented as follows:
$$\operatorname{prox}_{\lambda L_R}(x) = \begin{cases} x & x > \delta v + \frac{\lambda}{2v} \\ \{x,\ x-\frac{\lambda}{v}\} & x = \delta v + \frac{\lambda}{2v} \\ x - \frac{\lambda}{v} & \frac{\lambda}{v} < x < \delta v + \frac{\lambda}{2v} \\ 0 & 0 \le x \le \frac{\lambda}{v} \\ x & x < 0 \end{cases}$$
2. 
When $\lambda \ge 2\delta v^2$, the explicit expression of the $L_{\phi_R}(u,\delta)$ proximal operator is as follows:
$$\operatorname{prox}_{\lambda L_R}(x) = \begin{cases} x & x > \sqrt{2\lambda\delta} \\ \{x,\ 0\} & x = \sqrt{2\lambda\delta} \\ 0 & 0 \le x < \sqrt{2\lambda\delta} \\ x & x < 0 \end{cases}$$
Proof of Lemma 2. 
Equation (16) shows that $\operatorname{prox}_{\lambda L_{\phi_R}}$ is a local minimizer of the following piecewise function:
$$\varphi(u) = \begin{cases} \varphi_1(u) = \delta + \frac{1}{2\lambda}(u-x)^2 & u > \delta v \\ \varphi_2(u) = \delta + \frac{1}{2\lambda}(\delta v - x)^2 & u = \delta v \\ \varphi_3(u) = \frac{u}{v} + \frac{1}{2\lambda}(u-x)^2 & 0 < u < \delta v \\ \varphi_4(u) = \frac{x^2}{2\lambda} & u = 0 \\ \varphi_5(u) = \frac{1}{2\lambda}(u-x)^2 & u < 0 \end{cases}$$
The minima of the piecewise functions $\varphi_1(u)$, $\varphi_2(u)$, $\varphi_3(u)$, $\varphi_4(u)$, and $\varphi_5(u)$ are located at $u_1^* = x$, $u_2^* = \delta v$, $u_3^* = x - \frac{\lambda}{v}$, $u_4^* = 0$, and $u_5^* = x$, with minimum values of $\varphi_1(u_1^*) = \delta$, $\varphi_2(u_2^*) = \delta + \frac{1}{2\lambda}(\delta v - x)^2$, $\varphi_3(u_3^*) = \frac{x}{v} - \frac{\lambda}{2v^2}$, $\varphi_4(u_4^*) = \frac{x^2}{2\lambda}$, and $\varphi_5(u_5^*) = 0$, respectively.
Since $0 < u_3^* = x - \frac{\lambda}{v} < \delta v$, it follows that $\frac{\lambda}{v} < x < \delta v + \frac{\lambda}{v}$. If $\varphi_1(u_1^*) < \varphi_3(u_3^*)$, then $x > \delta v + \frac{\lambda}{2v}$; hence, when $\delta v + \frac{\lambda}{2v} \ge \frac{\lambda}{v}$ and $x$ lies in the interval $(\frac{\lambda}{v},\ \delta v + \frac{\lambda}{v})$, the case $\varphi_1(u_1^*) < \varphi_3(u_3^*)$ can occur.
When $0 < \lambda < 2\delta v^2$, the following conclusions can be reached by comparing the values of $\varphi_1(u_1^*)$, $\varphi_2(u_2^*)$, $\varphi_3(u_3^*)$, $\varphi_4(u_4^*)$, and $\varphi_5(u_5^*)$.
(1.1)
When $x > \delta v + \frac{\lambda}{2v}$, we have $\min\{\varphi_2(u_2^*), \varphi_3(u_3^*), \varphi_4(u_4^*)\} > \varphi_1(u_1^*)$, which means $u^* = u_1^* = x$.
(1.2)
When $x = \delta v + \frac{\lambda}{2v}$, we have $\min\{\varphi_2(u_2^*), \varphi_4(u_4^*)\} > \varphi_1(u_1^*) = \varphi_3(u_3^*)$, which means $u^* = u_1^* = x$ or $u^* = u_3^* = x - \frac{\lambda}{v}$.
(1.3)
When $\frac{\lambda}{v} < x < \delta v + \frac{\lambda}{2v}$, we have $\min\{\varphi_1(u_1^*), \varphi_2(u_2^*), \varphi_4(u_4^*)\} > \varphi_3(u_3^*)$, which means $u^* = u_3^* = x - \frac{\lambda}{v}$.
(1.4)
When $0 \le x \le \frac{\lambda}{v}$, we have $\min\{\varphi_1(u_1^*), \varphi_2(u_2^*), \varphi_3(u_3^*)\} > \varphi_4(u_4^*)$, which means $u^* = u_4^* = 0$.
(1.5)
When $x < 0$, we have $\min\{\varphi_2(u_2^*), \varphi_4(u_4^*)\} > \varphi_5(u_5^*)$, which means $u^* = u_5^* = x$.
According to (1.1)–(1.5), Equation (18) can be derived.
When $\lambda \ge 2\delta v^2$, the following conclusions can be reached by comparing the values of $\varphi_1(u_1^*)$, $\varphi_2(u_2^*)$, $\varphi_3(u_3^*)$, $\varphi_4(u_4^*)$, and $\varphi_5(u_5^*)$.
(2.1)
When $x > \sqrt{2\lambda\delta}$, we have $\min\{\varphi_2(u_2^*), \varphi_3(u_3^*), \varphi_4(u_4^*)\} > \varphi_1(u_1^*)$, which means $u^* = u_1^* = x$.
(2.2)
When $x = \sqrt{2\lambda\delta}$, we have $\min\{\varphi_2(u_2^*), \varphi_3(u_3^*)\} > \varphi_1(u_1^*) = \varphi_4(u_4^*)$, which means either $u^* = u_1^* = x$ or $u^* = u_4^* = 0$.
(2.3)
When $0 \le x < \sqrt{2\lambda\delta}$, we have $\min\{\varphi_1(u_1^*), \varphi_2(u_2^*), \varphi_3(u_3^*)\} > \varphi_4(u_4^*)$, which means $u^* = u_4^* = 0$.
(2.4)
When $x < 0$, we have $\min\{\varphi_1(u_1^*), \varphi_2(u_2^*), \varphi_3(u_3^*), \varphi_4(u_4^*)\} > \varphi_5(u_5^*)$, which means $u^* = u_5^* = x$.
According to (2.1)–(2.4), Equation (19) can be derived. □
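As a quick illustration of Lemma 2, the following sketch evaluates the closed-form proximal operator of the truncated generalized ramp loss at a scalar point; at the tie points the operator is set-valued, and this sketch simply returns one element of the set.

import numpy as np

def prox_trunc_ramp(x, lam, delta, v):
    # closed-form prox of the truncated generalized ramp loss, Equations (18)-(19)
    if x < 0:
        return x
    if lam < 2.0 * delta * v ** 2:
        t = delta * v + lam / (2.0 * v)
        if x > t:                      # loss already saturated at delta
            return x
        if x > lam / v:                # interior linear segment: shift by lam / v
            return x - lam / v
        return 0.0                     # small x: the minimizer is u = 0
    else:
        t = np.sqrt(2.0 * lam * delta)
        return x if x > t else 0.0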
Lemma 3. 
The expression representing the proximal operator of the truncation function is as follows:
$$\operatorname{prox}_{\lambda L}(x) = \begin{cases} x & x > \lambda \ \text{and}\ \varphi_1(u_1^*) < \varphi_3(u_3^*) \ \text{and}\ \varphi_1(u_1^*) < \varphi_4(u_4^*) \\ \{x,\ v(x)\} & x = \lambda \ \text{and}\ \varphi_1(u_1^*) = \varphi_3(u_3^*) < \varphi_4(u_4^*) \\ \{x,\ 0\} & x = \lambda \ \text{and}\ \varphi_1(u_1^*) = \varphi_4(u_4^*) < \varphi_3(u_3^*) \\ v(x) & \omega_{\text{down}} < x < \omega_{\text{up}} \ \text{and}\ \varphi_1(u_1^*) > \varphi_3(u_3^*) \ \text{and}\ \varphi_3(u_3^*) < \varphi_4(u_4^*) \\ 0 & x \ge 0 \ \text{and}\ \varphi_3(u_3^*) > \varphi_4(u_4^*) \ \text{and}\ \varphi_1(u_1^*) > \varphi_4(u_4^*) \\ x & x < 0 \end{cases}$$
where u 1 * , u 2 * , u 3 * , and u 4 * represent the minimizers of the piecewise function, and φ 1 ( u 1 * ) , φ 2 ( u 2 * ) , φ 3 ( u 3 * ) , and φ 4 ( u 4 * ) , represent the minimal values of the piecewise function.
Proof of Lemma 3. 
It can be deduced from Equations (15) and (16) that p r o x λ L represents the local minimum of the following piecewise function:
$$\varphi(u) = \begin{cases} \varphi_1(u) = \delta + \frac{1}{2\lambda}(u-x)^2 & u > \lambda \\ \varphi_2(u) = \delta + \frac{1}{2\lambda}(\lambda - x)^2 & u = \lambda \\ \varphi_3(u) = \phi(u) + \frac{1}{2\lambda}(u-x)^2 & 0 < u < \lambda \\ \varphi_4(u) = \frac{x^2}{2\lambda} & u = 0 \\ \varphi_5(u) = \frac{1}{2\lambda}(u-x)^2 & u < 0 \end{cases}$$
Let $v(x) = \arg\min_u\left\{\phi(u) + \frac{1}{2\lambda}(u-x)^2\right\}$; the minimizers of the piecewise functions are $u_1^* = x$, $u_2^* = \lambda$, $u_3^* = v(x)$, $u_4^* = 0$, and $u_5^* = x$, and their minimal values are $\varphi_1(u_1^*) = \delta$, $\varphi_2(u_2^*) = \delta + \frac{1}{2\lambda}(\lambda - x)^2$, $\varphi_3(u_3^*) = \varphi_3(v(x))$, $\varphi_4(u_4^*) = \frac{x^2}{2\lambda}$, and $\varphi_5(u_5^*) = 0$, respectively. From $0 < u_3^* = v(x) < \lambda$, it follows that $\omega_{\text{down}} < x < \omega_{\text{up}}$.
When $x \ge 0$, the candidate minimal values of the piecewise function are $\varphi_1(u_1^*)$, $\varphi_2(u_2^*)$, $\varphi_3(u_3^*)$, and $\varphi_4(u_4^*)$; when $x < 0$, the candidate minimal values are $\varphi_2(u_2^*)$, $\varphi_4(u_4^*)$, and $\varphi_5(u_5^*)$. Thus, we observe that $\varphi_2(u_2^*) \ge \varphi_1(u_1^*)$. If $\varphi_1(u_1^*) \le \varphi_3(u_3^*)$, then $\varphi_2(u_2^*) \ge \varphi_1(u_1^*)$ and $\varphi_3(u_3^*) \ge \varphi_1(u_1^*)$; similarly, if $\varphi_1(u_1^*) \le \varphi_4(u_4^*)$, then $\varphi_2(u_2^*) \ge \varphi_1(u_1^*)$ and $\varphi_4(u_4^*) \ge \varphi_1(u_1^*)$. We can determine the following conclusions by comparing the values of $\varphi_1(u_1^*)$, $\varphi_2(u_2^*)$, $\varphi_3(u_3^*)$, $\varphi_4(u_4^*)$, and $\varphi_5(u_5^*)$:
(1.1)
When the conditions of x > λ , φ 1 ( u 1 * ) < φ 3 ( u 3 * ) , and φ 1 ( u 1 * ) < φ 4 ( u 4 * ) are met, and it follows that u * = u 1 * = x ;
(1.2)
When the condition of x = λ is met, if φ 1 ( u 1 * ) = φ 3 ( u 3 * ) < φ 4 ( u 4 * ) is true, then u * = u 1 * = x or u * = u 3 * = v ( x ) can be derived;
(1.3)
When the condition of x = λ is met, if φ 1 ( u 1 * ) = φ 4 ( u 4 * ) < φ 3 ( u 3 * ) is true, then u * = u 1 * = x or u * = u 4 * = 0 can be derived;
(1.4)
When the conditions of ω d o w n < x < ω u p , φ 1 ( u 1 * ) > φ 3 ( u 3 * ) , and φ 3 ( u 3 * ) < φ 4 ( u 4 * ) are met, it follows that u * = u 3 * = v ( x ) ;
(1.5)
When the conditions of x 0 , φ 1 ( u 1 * ) > φ 4 ( u 4 * ) , and φ 4 ( u 4 * ) < φ 3 ( u 3 * ) are met, it follows that u * = u 4 * = 0 ;
(1.6)
When the condition of x < 0 is met, it follows that u * = u 5 * = x .
Thus, it is possible to successfully derive Equation (20). □

3.2. The Use of the Proximal Operator Algorithm to Solve Truncated Loss Functions

When $\phi(u)$ in the truncated loss function is a monotonic, non-piecewise function and $v(x) = \arg\min_u\left\{\phi(u) + \frac{1}{2\lambda}(u-x)^2\right\}$ can be expressed explicitly, the proximal operator of the truncated loss function can be calculated using Formula (20). In practical applications, however, an explicit expression for $v(x)$ is sometimes unavailable; for example, $\phi_L(u) = \exp(ayu) - ayu - 1$ does not admit one. The calculation of the proximal operator in such scenarios is discussed below.
For $\arg\min_u\left\{\phi(u) + \frac{1}{2\lambda}(u-x)^2\right\}$, if the objective is smooth and twice differentiable, the problem is a smooth unconstrained optimization problem. Newton's method is used to solve for the minimum of $\varphi(u)$ in unconstrained optimization problems due to its high convergence rate. The gradient and Hessian of problem (16) can be expressed as follows:
$$g = \frac{\partial\phi(u)}{\partial u} + \frac{1}{\lambda}(u - x)$$
$$H = \frac{\partial^2\phi(u)}{\partial u^2} + \frac{1}{\lambda}I$$
The minimizer u 3 * and the minimal value φ 3 ( u 3 * ) can be obtained with Newton’s method for x .
If an explicit expression for v ( x ) cannot be achieved, the calculation of the proximal operator follows the same process as Lemma 3. Once the minimizers u 3 * and φ 3 ( u 3 * ) are obtained, the proximal operator can be calculated. When the conditions of 0 < u 3 * < λ , φ 1 ( u 1 * ) > φ 3 ( u 3 * ) , and φ 3 ( u 3 * ) < φ 4 ( u 4 * ) are met, we can derive u * = u 3 * . Therefore, Formula (20) can be modified to express the following:
$$\operatorname{prox}_{\lambda L}(x) = \begin{cases} x & x > \lambda \ \text{and}\ \varphi_1(u_1^*) < \varphi_3(u_3^*) \ \text{and}\ \varphi_1(u_1^*) < \varphi_4(u_4^*) \\ \{x,\ u_3^*\} & x = \lambda \ \text{and}\ \varphi_1(u_1^*) = \varphi_3(u_3^*) < \varphi_4(u_4^*) \\ \{x,\ 0\} & x = \lambda \ \text{and}\ \varphi_1(u_1^*) = \varphi_4(u_4^*) < \varphi_3(u_3^*) \\ u_3^* & 0 < u_3^* < \lambda \ \text{and}\ \varphi_1(u_1^*) > \varphi_3(u_3^*) \ \text{and}\ \varphi_3(u_3^*) < \varphi_4(u_4^*) \\ 0 & x \ge 0 \ \text{and}\ \varphi_3(u_3^*) > \varphi_4(u_4^*) \ \text{and}\ \varphi_1(u_1^*) > \varphi_4(u_4^*) \\ x & x < 0 \end{cases}$$
Based on the analysis presented above, the procedure for solving the proximal operator of the truncated loss function is summarized in Algorithm 1:
Algorithm 1: Algorithm for solving the proximal operator of the truncated loss function
Input: $x$, $\delta$, $\lambda$, $\phi(u)$
Output: proximal operator $u^*$
1: Choose an initial point $u^{(0)} = x$.
2: While $t \le MAX$ do
3:  Calculate $g^{(t)}$ according to Formula (21).
4:  If $\|g^{(t)}\| < eps$, then stop the loop; the approximate solution $u_3^* = u^{(t)}$ is obtained.
5:  Calculate $H^{(t)}$ according to Formula (22).
6:  Calculate the Newton direction $\bar d$ from the Newton equation $H^{(t)} d = -g^{(t)}$.
7:  Determine the step size $\theta$ using the backtracking Armijo method, and update $u^{(t+1)} = u^{(t)} + \theta \bar d$.
8:  Set $t = t + 1$.
9: End
10: Calculate $\varphi(u_3^*)$ from $u_3^*$.
11: Calculate the proximal operator $u^*$ according to Formula (23).
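The following is a minimal sketch of Algorithm 1 for the truncated linear exponential loss, whose inner problem has no closed form. The Armijo constant, the maximum iteration count, and the way the candidate minimizers of Formula (23) are compared are illustrative choices.

import numpy as np

def prox_trunc_linex(x, lam, delta, a=1.0, y=1, max_iter=50, eps=1e-8):
    # phi_L(u) = exp(a y u) - a y u - 1 and its first two derivatives
    phi   = lambda u: np.exp(a * y * u) - a * y * u - 1.0
    dphi  = lambda u: a * y * (np.exp(a * y * u) - 1.0)
    d2phi = lambda u: (a * y) ** 2 * np.exp(a * y * u)
    obj   = lambda u: phi(u) + (u - x) ** 2 / (2.0 * lam)

    # Steps 1-9 of Algorithm 1: Newton iterations with backtracking Armijo search
    u = x
    for _ in range(max_iter):
        g = dphi(u) + (u - x) / lam            # gradient, Formula (21)
        if abs(g) < eps:
            break
        H = d2phi(u) + 1.0 / lam               # Hessian, Formula (22)
        d = -g / H                             # Newton direction
        theta = 1.0
        while obj(u + theta * d) > obj(u) + 1e-4 * theta * g * d:
            theta *= 0.5                       # backtracking Armijo rule
        u = u + theta * d
    u3 = u                                     # interior candidate minimizer

    # Steps 10-11: compare the candidate minimizers (saturated branch, u3, and 0)
    if x < 0:
        return x
    candidates = {x: delta, u3: obj(u3), 0.0: x ** 2 / (2.0 * lam)}
    return min(candidates, key=candidates.get)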

4. Robust SVDD Model

The SVDD problem in Formula (1) can be rewritten as follows:
$$\min_{R,\mu}\; R^2 + C\sum_i [u_i]_+,\qquad u_i = \|\varphi(x_i)-\mu\|^2 - R^2$$
where [ u ] + = max ( 0 , u ) represents the hinge loss function. Since the hinge loss function is sensitive to outliers, it can be replaced with the truncated loss function from Formula (15). Thus, the objective function for obtaining the robust SVDD model is as follows:
$$\min_{R,\mu}\; R^2 + C\sum_i \left(\phi(u_i) - (\phi(u_i)-\delta)_+\right),\qquad u_i = \|\varphi(x_i)-\mu\|^2 - R^2$$
As the truncated loss function is non-differentiable, solving the objective function of the robust SVDD model is a non-convex optimization problem, and it cannot be solved using standard SVDD model methods.
Theorem 1 
(Nonparametric Representation Theorem [37]). Suppose we are given a non-empty set $\chi$; a positive definite real-valued kernel $K:\chi\times\chi\to\mathbb{R}$; a training sample $(x_i, y_i)\ (i\in\mathbb{N}_m)\in\chi\times\mathbb{R}$; a strictly monotonically increasing real-valued function $g$ on $[0,+\infty)$; an arbitrary cost function $c:(\chi\times\mathbb{R}^2)^m\to\mathbb{R}\cup\{\infty\}$; and a class of functions:
$$F = \left\{ f\in\mathbb{R}^\chi \ \middle|\ f(\cdot) = \sum_{i=1}^{m}\beta_i K(z_i,\cdot),\ \beta_i\in\mathbb{R},\ z_i\in\chi,\ \|f\| < \infty \right\}$$
Here, $\|\cdot\|$ denotes the norm in the RKHS $H_K$ associated with $K(\cdot,\cdot)$, i.e., for any $z_i\in\chi$,
$$\Big\|\sum_i \beta_i K(z_i,\cdot)\Big\|^2 = \sum_i\sum_j \beta_i\beta_j K(z_i,z_j)$$
Then, any $f\in F$ minimizing the regularized risk functional
$$c\big((x_1, y_1, f(x_1)),\ldots,(x_m, y_m, f(x_m))\big) + g(\|f\|)$$
admits a representation of the form $f(\cdot) = \sum_{i=1}^{m} a_i y_i K(x_i,\cdot)$, where $a_i\in\mathbb{R}\ (i\in\mathbb{N}_m)$ are the coefficients of $f$ in the RKHS $H_K$.
According to the nonparametric representation theorem, there exists a set of coefficients $a$ such that the center $\mu = \sum_{i=1}^{n} a_i\varphi(x_i)$ is the optimal solution of problem (25). Therefore, Formula (25) can be transformed into the following:
$$\min_{R, a}\; R^2 + C\sum_i L(u_i),\qquad u_i = K(x_i,x_i) - 2\sum_{j=1}^{n} a_j K(x_i,x_j) + \sum_{j=1}^{n}\sum_{k=1}^{n} a_j a_k K(x_j,x_k) - R^2$$
Formula (28) represents the single-class SVDD model. To handle data that include negative samples, these samples must be integrated into the SVDD model; the center then becomes $\mu = \sum_{i=1}^{n} a_i y_i \varphi(x_i)$, and the objective function of the robust SVDD model is as follows:
$$\min_{R, a}\; R^2 + \sum_i C_{y_i} L(u_i),\qquad u_i = y_i K(x_i,x_i) - 2y_i\sum_{j=1}^{n} a_j y_j K(x_i,x_j) + y_i\sum_{j=1}^{n}\sum_{k=1}^{n} a_j y_j a_k y_k K(x_j,x_k) - y_i R^2$$
When $a_y = a\,.\!*\,y$, it follows that $\mu = \sum_{i=1}^{n} a_{y_i}\varphi(x_i)$. Problem (29) can be rewritten in the following matrix form:
$$\min_{R, u}\; R^2 + C L_\phi(u),\qquad u = K_a - 2K_b a_y + (a_y^T K a_y)D - R^2 D,\quad R^2 > 0$$
where $K_a = \operatorname{diag}(K)\,.\!*\,y$, $K_b = [y_1 K(1,\cdot);\ y_2 K(2,\cdot);\ \ldots;\ y_n K(n,\cdot)]$, $D = [y_1, y_2, \ldots, y_n]^T$, and $C = [C_{y_1}, C_{y_2}, \ldots, C_{y_n}]$. This study addresses the SVDD model for data that include negative samples, and the Lagrangian function for problem (30) is as follows:
$$L(R^2, a_y, u, \eta) = R^2 + C L_\phi(u) + \left\langle \eta,\; K_a - 2K_b a_y + (a_y^T K a_y)D - R^2 D - u \right\rangle$$
The KKT conditions for problem (30) are provided below:
$$\begin{cases} K_a - 2K_b a_y^* + (a_y^{*T} K a_y^*)D - R^{2*} D - u^* = 0 \\ 0 = 1 - \eta^{*T} D \\ 0 = \left(2D a_y^{*T} K - 2K_b^T\right)^T \eta^* \\ 0 \in C\,\partial L_\phi(u^*) - \eta^* \end{cases}$$
where ( R 2 * , a y * , u * ) represents any KKT point.
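For concreteness, the following sketch builds the matrix quantities that appear in Formula (30) from a kernel matrix K and labels y in {+1, −1}, together with the constraint map g(a_y); the per-class regularization values C_pos and C_neg are hypothetical parameters introduced only for illustration.

import numpy as np

def build_matrices(K, y, C_pos=1.0, C_neg=1.0):
    # K: n x n kernel matrix, y: labels in {+1, -1}
    K_a = np.diag(K) * y                     # K_a = diag(K) .* y
    K_b = K * y[:, None]                     # row i is y_i * K(i, :)
    D   = y.astype(float)                    # D = [y_1, ..., y_n]^T
    C   = np.where(y > 0, C_pos, C_neg)      # per-sample regularization C_{y_i}
    return K_a, K_b, D, C

def g_of_ay(a_y, K_a, K_b, D, K):
    # g(a_y) = K_a - 2 K_b a_y + (a_y^T K a_y) D, the constraint map in (30)
    return K_a - 2.0 * K_b @ a_y + (a_y @ K @ a_y) * D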
A generalized non-smooth optimization problem can be represented by the following formula [38]:
$$\min_{s, q}\; f(s) + c\, l(q) \qquad \text{s.t.}\quad q + g(s) = 0$$
where $f(\cdot)$ and $g(\cdot)$ are continuously differentiable functions on $\mathbb{R}^n$, and $l(q)$ is a non-smooth function on $\mathbb{R}^n$. Problem (30) is a special case of this generalized non-smooth optimization problem, with $s = [R^2, a_y]$, $q = u$, $f(s) = s_1$, and $g(s) = -K_a + 2K_b s_2 - (s_2^T K s_2)D + s_1 D$; the nonlinear equality constraint represents the constraints of the optimization model. In this study, the fast ADMM algorithm was employed to solve problem (33), and the algorithm is introduced in the next section.

5. Fast ADMM Algorithm

The previous section presented the optimization mathematical model of the robust SVDD model. Since the optimization mathematical model includes nonlinear equality constraints, the fast ADMM method was used to solve Formula (33). The augmented Lagrangian function of the robust SVDD model is as follows:
$$L_\beta(R^2, a_y, u, \eta) = R^2 + C L_\phi(u) + \left\langle \eta,\; g(a_y) - R^2 D - u \right\rangle + \frac{\beta}{2}\left\| g(a_y) - R^2 D - u \right\|^2$$
where $\eta\in\mathbb{R}^m$ is the vector of Lagrange multipliers, $\beta > 0$ is the penalty factor, and $g(a_y) = K_a - 2K_b a_y + (a_y^T K a_y)D$.

5.1. Fast ADMM Algorithm Framework

Based on the augmented Lagrangian function previously presented, we obtained the following framework for the fast ADMM algorithm:
$$u^{k+1} = \arg\min_{u\in\mathbb{R}^m} L_\beta(R^{2,k}, a_y^{k}, u, \eta^{k})$$
$$a_y^{k+1} = \arg\min_{a_y\in\mathbb{R}^m} L_\beta(R^{2,k}, a_y, u^{k+1}, \eta^{k}) + \Delta_{\varphi_1}(a_y, a_y^{k})$$
$$R^{2,k+1} = \arg\min_{R^2} L_\beta(R^2, a_y^{k+1}, u^{k+1}, \eta^{k}) + \Delta_{\varphi_2}(R^2, R^{2,k})$$
$$\eta^{k+1} = \eta^{k} + \beta\left(g(a_y^{k+1}) - R^{2,k+1}D - u^{k+1}\right)$$
1. 
Computing  u k + 1
The solution of u k + 1 in Formula (35) is equivalent to the following problem:
$$u^{k+1} = \arg\min_{u\in\mathbb{R}^m} C L_\phi(u) + \frac{\beta}{2}\left\| \left(g(a_y^{k}) - R^{2,k}D + \frac{\eta^{k}}{\beta}\right) - u \right\|^2 = \arg\min_{u\in\mathbb{R}^m} C L_\phi(u) + \frac{\beta}{2}\left\| z^{k} - u \right\|^2 = \operatorname{prox}_{\frac{C}{\beta}L_\phi}(z^{k})$$
where $z^{k} = g(a_y^{k}) - R^{2,k}D + \frac{\eta^{k}}{\beta}$. Therefore, the calculation of $u^{k+1}$ can be transformed into the computation of the proximal operator, which can be determined using Formula (20) or Algorithm 1.
2. 
Computing a y k + 1
When solving for variable a y k + 1 , if a closed-form solution for this subproblem is not obtained, an optimization algorithm must be used for the iterative solution, which results in a slow computational speed. Thus, to solve this problem in a more efficient manner, we used a linearization technique.
Given a convex differentiable function $\varphi$, the Bregman distance between $x$ and $y$, for $x, y\in\operatorname{dom}(\varphi)$, is defined as follows:
$$\Delta_\varphi(x, y) = \varphi(x) - \varphi(y) - \left\langle \nabla\varphi(y),\; x - y \right\rangle$$
When $\varphi(x) = \frac{\mu}{2}\|x\|^2 - \frac{\beta}{2}\left\|g(x) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta}\right\|^2$, the Bregman distance $\Delta_\varphi(x, y)$ is as follows:
$$\Delta_\varphi(x, y) = \frac{\mu}{2}\|x - y\|^2 - \frac{\beta}{2}\left\|g(x) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta}\right\|^2 + \frac{\beta}{2}\left\|g(y) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta}\right\|^2 + \beta\left\langle \nabla g(y)\left(g(y) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta}\right),\; x - y \right\rangle$$
Formula (36) is equivalent to the following:
$$a_y^{k+1} = \arg\min_{a_y\in\mathbb{R}^m} \frac{\mu}{2}\|a_y - a_y^{k}\|^2 + \frac{\beta}{2}\left\|g(a_y^{k}) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta}\right\|^2 + \beta\left\langle \nabla g(a_y^{k})\left(g(a_y^{k}) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta}\right),\; a_y - a_y^{k} \right\rangle$$
With the use of Formula (42), we obtain
$$a_y^{k+1} = a_y^{k} - \frac{\beta\,\nabla g(a_y^{k})}{\mu}\left(g(a_y^{k}) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta}\right)$$
where $\mu \ge \beta\|\nabla g(a_y^{k})\|^2$.
3. 
Computing  R 2 k + 1
If $\varphi(x) = \frac{1}{2}\|x\|^2$, based on Formula (40), we obtain
$$\Delta_{\varphi_2}(R^2, R^{2,k}) = \frac{1}{2}\left\| R^2 - R^{2,k} \right\|^2$$
Formula (37) is equivalent to
$$R^{2,k+1} = \arg\min_{R^2}\; R^2 + \frac{\beta}{2}\left\|g(a_y^{k+1}) - R^2 D - u^{k+1} + \frac{\eta^{k}}{\beta}\right\|^2 + \frac{1}{2}\left\|R^2 - R^{2,k}\right\|^2$$
Formula (45) represents a convex quadratic programming problem. By solving the following Equation (46), we also solve Formula (45):
$$0 = 1 - \beta D^T\left(g(a_y^{k+1}) - R^2 D - u^{k+1} + \frac{\eta^{k}}{\beta}\right) + R^2 - R^{2,k}$$
The value of R 2 k + 1 is directly updated using the following formula:
$$R^{2,k+1} = \frac{\beta D^T\left(g(a_y^{k+1}) - u^{k+1} + \frac{\eta^{k}}{\beta}\right) - 1 + R^{2,k}}{1 + \beta D^T D}$$
4. 
Computing  η k + 1
η k + 1 can be calculated using Formula (38). When η k + 1 = 0 , the Lagrange multiplier is removed.
Based on the above analysis, the framework of our method can be summarized in Algorithm 2.
Algorithm 2: Fast ADMM Algorithm
Input: $K_a$, $K_b$, $D$
Output: $(R^{2,k}, a_y^{k})$
1: Take an initial point $(R^{2,0}, a_y^{0}, u^{0}, \eta^{0})$, and set $\beta$ and $k_{MAX}$.
2: While the stop condition is not satisfied and $k \le k_{MAX}$, perform the following:
3:  Calculate $a_y^{k}$ according to Formula (43).
4:  Calculate $R^{2,k}$ according to Formula (47).
5:  If the proximal operator of $\phi(u)$ has an explicit expression, use Formula (20) to calculate $u^{k}$.
6:  If the proximal operator of $\phi(u)$ does not have an explicit expression, use Algorithm 1 to calculate $u^{k}$.
7:  Calculate $\eta^{k}$ according to Formula (38).
8:  Update $k = k + 1$.
9: End
10: Output the final solution $(R^{2,k}, a_y^{k})$.
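The following is a minimal, self-contained sketch of the fast ADMM iteration for the truncated generalized ramp loss, following the update order of Formulas (35)–(38). The penalty β, the choice μ = β‖∇g‖²_F (a conservative bound satisfying μ ≥ β‖∇g(a_y^k)‖²), the scalar regularization constant C, the initialization, and the fixed iteration count in place of the stopping test of Section 5.3 are all illustrative choices.

import numpy as np

def prox_trunc_ramp_vec(x, lam, delta, v):
    # elementwise proximal operator of the truncated generalized ramp loss (Lemma 2)
    out = np.where(x < 0, x, 0.0)
    if lam < 2.0 * delta * v ** 2:
        t = delta * v + lam / (2.0 * v)
        out = np.where((x > lam / v) & (x <= t), x - lam / v, out)
        out = np.where(x > t, x, out)
    else:
        out = np.where(x > np.sqrt(2.0 * lam * delta), x, out)
    return out

def fast_admm_trunc_svdd(K, y, C=1.0, delta=1.0, v=1.0, beta=1.0, n_iter=200):
    n = K.shape[0]
    K_a = np.diag(K) * y
    K_b = K * y[:, None]
    D = y.astype(float)
    g  = lambda a: K_a - 2.0 * K_b @ a + (a @ K @ a) * D       # constraint map of (30)
    dg = lambda a: 2.0 * np.outer(D, K @ a) - 2.0 * K_b.T      # gradient of g

    a_y = y / n
    R2, u, eta = 1.0, np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        # u-update, Formula (39): proximal step on z^k with parameter C / beta
        z = g(a_y) - R2 * D + eta / beta
        u = prox_trunc_ramp_vec(z, C / beta, delta, v)
        # a_y-update, Formula (43): linearized gradient step
        grad = dg(a_y)
        mu = beta * np.linalg.norm(grad) ** 2 + 1e-12
        a_y = a_y - (beta / mu) * grad @ (g(a_y) - R2 * D - u + eta / beta)
        # R^2-update, Formula (47): closed-form solution
        R2 = (beta * D @ (g(a_y) - u + eta / beta) - 1.0 + R2) / (1.0 + beta * D @ D)
        # multiplier update, Formula (38)
        eta = eta + beta * (g(a_y) - R2 * D - u)
    return R2, a_y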

5.2. Global Convergence Analysis of the Fast ADMM Algorithm

In this section, we provide a convergence analysis of the fast ADMM algorithm. Specifically, the convergence of the algorithm is discussed using Lemma 7, and Lemma 7 is proven. According to Formulas (35)–(38), it can be observed that the optimality conditions for each update iteration of the fast ADMM algorithm can be written as follows:
$$\begin{cases} 0 \in C\,\partial L_\phi(u^{k+1}) - \eta^{k} - \beta\left(g(a_y^{k}) - R^{2,k}D - u^{k+1}\right) \\ 0 = \mu\left(a_y^{k+1} - a_y^{k}\right) + \beta\,\nabla g(a_y^{k})\left(g(a_y^{k}) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta}\right) \\ 0 = 1 - D^T\eta^{k} - \beta D^T\left(g(a_y^{k+1}) - R^{2,k+1}D - u^{k+1}\right) + R^{2,k+1} - R^{2,k} \\ \eta^{k+1} = \eta^{k} + \beta\left(g(a_y^{k+1}) - R^{2,k+1}D - u^{k+1}\right) \end{cases}$$
where $\nabla g(a_y) = 2D a_y^T K - 2K_b^T$.
Lemma 4. 
Assume $w^{k} = (R^{2,k}, a_y^{k}, u^{k}, \eta^{k})$ is a sequence generated using the fast ADMM algorithm; then, for any $k \ge 1$, we obtain the following:
$$\frac{1}{\beta}\left\|\eta^{k+1} - \eta^{k}\right\|^2 \le \frac{1}{\lambda_{\min}(DD^T)\beta}\left\|R^{2,k+1} - R^{2,k}\right\|^2 + \frac{1}{\lambda_{\min}(DD^T)\beta}\left\|R^{2,k} - R^{2,k-1}\right\|^2$$
where $\lambda_{\min}(DD^T)$ represents the strictly positive minimum eigenvalue of $DD^T$.
Proof of Lemma 4. 
According to the optimality conditions of the iteration, the following equation holds:
$$D^T\eta^{k+1} = 1 + R^{2,k+1} - R^{2,k}$$
According to the Cauchy–Schwarz inequality, it follows that
$$\lambda_{\min}(DD^T)\left\|\eta^{k+1} - \eta^{k}\right\|^2 \le \left\|D^T\eta^{k+1} - D^T\eta^{k}\right\|^2 = \left\|\left(R^{2,k+1} - R^{2,k}\right) - \left(R^{2,k} - R^{2,k-1}\right)\right\|^2 \le \left\|R^{2,k+1} - R^{2,k}\right\|^2 + \left\|R^{2,k} - R^{2,k-1}\right\|^2$$
Thus,
$$\frac{1}{\beta}\left\|\eta^{k+1} - \eta^{k}\right\|^2 \le \frac{1}{\lambda_{\min}(DD^T)\beta}\left\|R^{2,k+1} - R^{2,k}\right\|^2 + \frac{1}{\lambda_{\min}(DD^T)\beta}\left\|R^{2,k} - R^{2,k-1}\right\|^2. \;\square$$
Lemma 5. 
Assume $w^{k} = (R^{2,k}, a_y^{k}, u^{k}, \eta^{k})$ is a sequence generated using the fast ADMM algorithm; then, for any $k \ge 1$, the following is true:
$$L_\beta(R^{2,k+1}, a_y^{k+1}, u^{k+1}, \eta^{k}) - L_\beta(R^{2,k}, a_y^{k+1}, u^{k+1}, \eta^{k}) \le -\frac{\beta}{2}\left\|R^{2,k+1}D - R^{2,k}D\right\|^2 - \frac{1}{2}\left\|R^{2,k+1} - R^{2,k}\right\|^2 - \frac{1}{2}\left\|R^{2,k} - R^{2,k-1}\right\|^2$$
Proof of Lemma 5. 
From the definition of the augmented Lagrangian function presented in Formula (34), it can be argued that
$$L_\beta(R^{2,k+1}, a_y^{k+1}, u^{k+1}, \eta^{k}) - L_\beta(R^{2,k}, a_y^{k+1}, u^{k+1}, \eta^{k}) = R^{2,k+1} - R^{2,k} + \left\langle \eta^{k},\; (R^{2,k} - R^{2,k+1})D \right\rangle + \frac{\beta}{2}\left\|g(a_y^{k+1}) - R^{2,k+1}D - u^{k+1}\right\|^2 - \frac{\beta}{2}\left\|g(a_y^{k+1}) - R^{2,k}D - u^{k+1}\right\|^2 + \frac{1}{2}\left\|R^{2,k+1} - R^{2,k}\right\|^2 - \frac{1}{2}\left\|R^{2,k} - R^{2,k-1}\right\|^2$$
Formula (48) shows the following:
$$g(a_y^{k+1}) - R^{2,k}D - u^{k+1} = \frac{1}{\beta}\left(\eta^{k+1} - \eta^{k}\right) + \left(R^{2,k+1}D - R^{2,k}D\right)$$
Therefore,
$$\frac{\beta}{2}\left\|g(a_y^{k+1}) - R^{2,k}D - u^{k+1}\right\|^2 = \frac{\beta}{2}\left\|\frac{1}{\beta}\left(\eta^{k+1} - \eta^{k}\right) + \left(R^{2,k+1}D - R^{2,k}D\right)\right\|^2 = \frac{1}{2\beta}\left\|\eta^{k+1} - \eta^{k}\right\|^2 + \frac{\beta}{2}\left\|R^{2,k+1}D - R^{2,k}D\right\|^2 + \left\langle \eta^{k+1} - \eta^{k},\; R^{2,k+1}D - R^{2,k}D \right\rangle$$
By substituting Formula (53) into (51), the following is achieved:
$$\begin{aligned} & L_\beta(R^{2,k+1}, a_y^{k+1}, u^{k+1}, \eta^{k}) - L_\beta(R^{2,k}, a_y^{k+1}, u^{k+1}, \eta^{k}) \\ &= R^{2,k+1} - R^{2,k} + \left\langle \eta^{k},\; (R^{2,k} - R^{2,k+1})D \right\rangle - \frac{\beta}{2}\left\|R^{2,k+1}D - R^{2,k}D\right\|^2 - \left\langle \eta^{k+1} - \eta^{k},\; R^{2,k+1}D - R^{2,k}D \right\rangle + \frac{1}{2}\left\|R^{2,k+1} - R^{2,k}\right\|^2 - \frac{1}{2}\left\|R^{2,k} - R^{2,k-1}\right\|^2 \\ &= -\frac{\beta}{2}\left\|R^{2,k+1}D - R^{2,k}D\right\|^2 + \left\langle 1 - D^T\eta^{k+1},\; R^{2,k+1} - R^{2,k} \right\rangle + \frac{1}{2}\left\|R^{2,k+1} - R^{2,k}\right\|^2 - \frac{1}{2}\left\|R^{2,k} - R^{2,k-1}\right\|^2 \\ &= -\frac{\beta}{2}\left\|R^{2,k+1}D - R^{2,k}D\right\|^2 - \frac{1}{2}\left\|R^{2,k+1} - R^{2,k}\right\|^2 - \frac{1}{2}\left\|R^{2,k} - R^{2,k-1}\right\|^2 \end{aligned}$$
Thus, the proof is completed. □
Lemma 6. 
Assume $w^{k} = (R^{2,k}, a_y^{k}, u^{k}, \eta^{k})$ is a sequence generated using the fast ADMM algorithm; then, for $k \ge 1$, we obtain the following:
$$L_\beta(R^{2,k}, a_y^{k+1}, u^{k+1}, \eta^{k}) - L_\beta(R^{2,k}, a_y^{k}, u^{k+1}, \eta^{k}) \le \frac{\beta\|\nabla g(a_y^{k})\|^2}{2}\left\|a_y^{k} - a_y^{k-1}\right\|^2 - \frac{\mu}{2}\left\|a_y^{k+1} - a_y^{k}\right\|^2 - \frac{\mu}{2}\left\|a_y^{k} - a_y^{k-1}\right\|^2$$
Proof of Lemma 6. 
The definition of the augmented Lagrangian function in Formula (34) shows the following:
$$\begin{aligned} & L_\beta(R^{2,k}, a_y^{k+1}, u^{k+1}, \eta^{k}) - L_\beta(R^{2,k}, a_y^{k}, u^{k+1}, \eta^{k}) \\ &= \frac{\beta}{2}\left\|g(a_y^{k}) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta}\right\|^2 - \frac{\mu}{2}\left\|a_y^{k+1} - a_y^{k}\right\|^2 - \frac{\mu}{2}\left\|a_y^{k} - a_y^{k-1}\right\|^2 - \frac{\beta}{2}\left\|g(a_y^{k-1}) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta}\right\|^2 - \beta\left\langle \nabla g(a_y^{k-1})\left(g(a_y^{k-1}) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta}\right),\; a_y^{k} - a_y^{k-1} \right\rangle \\ &= \frac{\beta}{2}\left\|g(a_y^{k}) - g(a_y^{k-1})\right\|^2 - \frac{\mu}{2}\left\|a_y^{k+1} - a_y^{k}\right\|^2 - \frac{\mu}{2}\left\|a_y^{k} - a_y^{k-1}\right\|^2 + \beta\left\langle g(a_y^{k-1}) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta},\; g(a_y^{k}) - g(a_y^{k-1}) \right\rangle - \beta\left\langle \nabla g(a_y^{k-1})\left(g(a_y^{k-1}) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta}\right),\; a_y^{k} - a_y^{k-1} \right\rangle \end{aligned}$$
According to the first-order condition for convex functions, and since g ( a y ) is a convex function, we obtain the following:
$$g(a_y^{k+1}) - g(a_y^{k}) \le \left\langle \nabla g(a_y^{k+1}),\; a_y^{k+1} - a_y^{k} \right\rangle$$
The substitution of Formula (56) yields the following:
$$\begin{aligned} & L_\beta(R^{2,k}, a_y^{k+1}, u^{k+1}, \eta^{k}) - L_\beta(R^{2,k}, a_y^{k}, u^{k+1}, \eta^{k}) \\ &= \frac{\beta}{2}\left\|g(a_y^{k}) - g(a_y^{k-1})\right\|^2 - \frac{\mu}{2}\left\|a_y^{k+1} - a_y^{k}\right\|^2 - \frac{\mu}{2}\left\|a_y^{k} - a_y^{k-1}\right\|^2 + \beta\left\langle g(a_y^{k-1}) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta},\; g(a_y^{k}) - g(a_y^{k-1}) \right\rangle - \beta\left\langle \nabla g(a_y^{k-1})\left(g(a_y^{k-1}) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta}\right),\; a_y^{k} - a_y^{k-1} \right\rangle \\ &\le \frac{\beta}{2}\left\|g(a_y^{k}) - g(a_y^{k-1})\right\|^2 - \frac{\mu}{2}\left\|a_y^{k+1} - a_y^{k}\right\|^2 - \frac{\mu}{2}\left\|a_y^{k} - a_y^{k-1}\right\|^2 \\ &\le \frac{\beta\|\nabla g(a_y^{k})\|^2}{2}\left\|a_y^{k} - a_y^{k-1}\right\|^2 - \frac{\mu}{2}\left\|a_y^{k+1} - a_y^{k}\right\|^2 - \frac{\mu}{2}\left\|a_y^{k} - a_y^{k-1}\right\|^2 \end{aligned}$$
Thus, the proof is complete. □
Lemma 7. 
Assume $w^{k} = (R^{2,k}, a_y^{k}, u^{k}, \eta^{k})$ is a sequence generated using the fast ADMM algorithm; if $\mu \ge \beta\|\nabla g(a_y^{k})\|^2$ and $\frac{1}{\lambda_{\min}(DD^T)\beta} \le \frac{1}{2}$, then for any $k \ge 1$, we obtain the following:
$$L_\beta(R^{2,k+1}, a_y^{k+1}, u^{k+1}, \eta^{k+1}) \le L_\beta(R^{2,k}, a_y^{k}, u^{k}, \eta^{k}) - \frac{\mu}{2}\left\|a_y^{k+1} - a_y^{k}\right\|^2 - \frac{\beta}{2}\left\|R^{2,k+1}D - R^{2,k}D\right\|^2$$
Proof of Lemma 7. 
The definition of the augmented Lagrangian function in Formula (34) shows the following:
$$L_\beta(R^{2,k+1}, a_y^{k+1}, u^{k+1}, \eta^{k+1}) = L_\beta(R^{2,k+1}, a_y^{k+1}, u^{k+1}, \eta^{k}) + \left\langle \eta^{k+1} - \eta^{k},\; g(a_y^{k+1}) - R^{2,k+1}D - u^{k+1} \right\rangle = L_\beta(R^{2,k+1}, a_y^{k+1}, u^{k+1}, \eta^{k}) + \frac{1}{\beta}\left\|\eta^{k+1} - \eta^{k}\right\|^2$$
Since $u^{k+1}$ is a global minimizer of $L_\beta(R^{2,k}, a_y^{k}, u, \eta^{k})$ with respect to the variable $u$, it follows that
$$L_\beta(R^{2,k}, a_y^{k}, u^{k+1}, \eta^{k}) \le L_\beta(R^{2,k}, a_y^{k}, u^{k}, \eta^{k})$$
The combination of Lemmas 5 and 6 produces the following:
$$\begin{aligned} L_\beta(R^{2,k+1}, a_y^{k+1}, u^{k+1}, \eta^{k+1}) &= L_\beta(R^{2,k+1}, a_y^{k+1}, u^{k+1}, \eta^{k}) + \frac{1}{\beta}\left\|\eta^{k+1} - \eta^{k}\right\|^2 \\ &\le L_\beta(R^{2,k}, a_y^{k+1}, u^{k+1}, \eta^{k}) - \frac{\beta}{2}\left\|R^{2,k+1}D - R^{2,k}D\right\|^2 - \frac{1}{2}\left\|R^{2,k+1} - R^{2,k}\right\|^2 - \frac{1}{2}\left\|R^{2,k} - R^{2,k-1}\right\|^2 + \frac{1}{\beta}\left\|\eta^{k+1} - \eta^{k}\right\|^2 \\ &\le L_\beta(R^{2,k}, a_y^{k}, u^{k}, \eta^{k}) - \frac{\beta}{2}\left\|R^{2,k+1}D - R^{2,k}D\right\|^2 - \frac{1}{2}\left\|R^{2,k+1} - R^{2,k}\right\|^2 - \frac{1}{2}\left\|R^{2,k} - R^{2,k-1}\right\|^2 + \frac{1}{\beta}\left\|\eta^{k+1} - \eta^{k}\right\|^2 + \frac{\beta\|\nabla g(a_y^{k})\|^2}{2}\left\|a_y^{k} - a_y^{k-1}\right\|^2 - \frac{\mu}{2}\left\|a_y^{k+1} - a_y^{k}\right\|^2 - \frac{\mu}{2}\left\|a_y^{k} - a_y^{k-1}\right\|^2 \end{aligned}$$
When $\mu \ge \beta\|\nabla g(a_y^{k})\|^2$ and $\frac{1}{\lambda_{\min}(DD^T)\beta} \le \frac{1}{2}$, it follows that
$$L_\beta(R^{2,k+1}, a_y^{k+1}, u^{k+1}, \eta^{k+1}) \le L_\beta(R^{2,k}, a_y^{k}, u^{k}, \eta^{k}) - \frac{\mu}{2}\left\|a_y^{k+1} - a_y^{k}\right\|^2 - \frac{\beta}{2}\left\|R^{2,k+1}D - R^{2,k}D\right\|^2$$
When the conditions $\mu \ge \beta\|\nabla g(a_y^{k})\|^2$ and $\frac{1}{\lambda_{\min}(DD^T)\beta} \le \frac{1}{2}$ are met, Lemma 7 shows that $L_\beta(R^2, a_y, u, \eta)$ decreases monotonically, and hence the fast ADMM algorithm converges. □

5.3. Fast ADMM Algorithm Termination Conditions

Based on Formula (35), it can be observed that the u k + 1 that minimizes L β ( R 2 k , a y k , u , η k ) can be derived as follows:
$$0 \in C\,\partial L_\phi(u^{k+1}) - \beta\left(g(a_y^{k}) - R^{2,k}D - u^{k+1} + \frac{\eta^{k}}{\beta}\right) = C\,\partial L_\phi(u^{k+1}) - \eta^{k+1} - \beta\left(g(a_y^{k+1}) - g(a_y^{k})\right) - \beta\left(R^{2,k+1}D - R^{2,k}D\right)$$
which implies
$$\beta\left(g(a_y^{k+1}) - g(a_y^{k})\right) + \beta\left(R^{2,k+1}D - R^{2,k}D\right) \in C\,\partial L_\phi(u^{k+1}) - \eta^{k+1}$$
Thus, we obtain the following equation:
$$S_1^{k+1} = \beta\left(g(a_y^{k+1}) - g(a_y^{k})\right) + \beta\left(R^{2,k+1}D - R^{2,k}D\right)$$
Equation (60) represents the residual of the dual feasibility condition; hence, $S_1^{k+1}$ is the dual residual at iteration $k+1$.
$$r_1^{k+1} = K_a - 2K_b a_y^{k+1} + \left(a_y^{(k+1)T} K a_y^{k+1}\right)D - R^{2,k+1}D - u^{k+1}$$
$$r_2^{k+1} = 1 - \eta^{(k+1)T} D$$
$$r_3^{k+1} = \left(2D a_y^{(k+1)T} K - 2K_b^T\right)^T \eta^{k+1}$$
Equations (61)–(63) represent the primal residuals at iteration $k+1$. The KKT conditions in Equation (32) consist of four components, corresponding to the primal and dual residuals. Both types of residuals gradually converge to zero as the fast ADMM algorithm proceeds.
Both the primal and dual residuals must be sufficiently small, meeting the conditions presented below, for the termination of the use of the fast ADMM algorithm:
$$\max\left\{ \frac{\|r_1^{k}\|}{\max\left\{\left\|2K_b a_y + (a_y^T K a_y)D - R^2 D\right\|,\ \|u\|,\ \|K_a\|\right\}},\; \frac{\|S_1^{k}\|}{\|\eta^{k}\|},\; \frac{\|r_2^{k}\|}{\max\left\{1,\ \|\eta^{kT}D\|\right\}},\; \frac{\|r_3^{k}\|}{\max\left\{\left\|2D a_y^{(k+1)T}K - 2K_b^T\right\|,\ 2\|\eta^{k+1}\|\right\}} \right\} \le \varepsilon_{tol}$$
where $\varepsilon_{tol} > 0$. Typically, $\varepsilon_{tol}$ is selected such that $10^{-4} \le \varepsilon_{tol} \le 10^{-3}$; in this study, $\varepsilon_{tol} = 10^{-3}$. This ensures that the algorithm terminates when the solution's accuracy is within an acceptable range.
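The following sketch is one possible reading of the stopping test; the normalizations are simplified relative to Formula (64), the function and variable names are my own, and only the default ε_tol follows the text.

import numpy as np

def stop_condition(K, K_a, K_b, D, a_y, a_y_prev, R2, R2_prev, u, eta, beta, eps_tol=1e-3):
    g  = lambda a: K_a - 2.0 * K_b @ a + (a @ K @ a) * D
    dg = lambda a: 2.0 * np.outer(D, K @ a) - 2.0 * K_b.T
    r1 = g(a_y) - R2 * D - u                                        # primal residual (61)
    r2 = 1.0 - eta @ D                                              # primal residual (62)
    r3 = dg(a_y).T @ eta                                            # primal residual (63)
    S1 = beta * (g(a_y) - g(a_y_prev)) + beta * (R2 - R2_prev) * D  # dual residual (60)
    tiny = 1e-12
    rel = max(
        np.linalg.norm(r1) / max(np.linalg.norm(2.0 * K_b @ a_y + (a_y @ K @ a_y) * D - R2 * D),
                                 np.linalg.norm(u), np.linalg.norm(K_a), tiny),
        np.linalg.norm(S1) / max(np.linalg.norm(eta), tiny),
        abs(r2) / max(1.0, abs(eta @ D)),
        np.linalg.norm(r3) / max(np.linalg.norm(dg(a_y)), tiny))
    return rel <= eps_tol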

6. Experiment

In this section, we describe extensive experiments conducted on various datasets to validate the effectiveness and robustness of the three truncated loss function SVDD algorithms proposed in this study, which are presented below. The experimental datasets consisted of synthetic and several UCI datasets. We also compared hinge loss SVDD [8], DW-SVDD [20], R-SVDD [19], and GL-SVDD [16] algorithms to validate their performance.
  • ϕ R -SVDD: SVDD algorithm with a truncated generalized ramp loss function;
  • ϕ f -SVDD: SVDD algorithm with a truncated binary cross entropy loss function;
  • ϕ L -SVDD: SVDD algorithm with a truncated linear exponential loss function.

6.1. Experimental Setup

This subsection introduces the evaluation metrics, kernels, and parameter settings used in the experiments.

6.1.1. Evaluation Metrics

We used G-mean and F 1 -score as evaluation metrics to assess the performance of the different methods described in this study [30,39,40,41]. They are based on TP, FN, FP, and TN, where TP denotes the number of target data predicted as target data, FN denotes the number of target data predicted as outlier data, FP denotes the number of outlier data predicted as target data, and TN denotes the number of outlier data predicted as outlier data.
$$G\text{-}mean = \sqrt{Recall \times Specificity}$$
$$F_1\text{-}score = \frac{2 \times Precision \times Recall}{Precision + Recall}$$
where $Recall = TP/(TP+FN)$, $Specificity = TN/(TN+FP)$, and $Precision = TP/(TP+FP)$. It can be seen from (65) and (66) that the G-mean balances recall and specificity, while the $F_1$-score balances precision and recall. Hence, both metrics comprehensively measure the performance of the proposed method.
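A minimal sketch of the two metrics computed from predicted and true labels in {+1, −1}; the function name is my own, and no guard against empty classes is included.

import numpy as np

def gmean_f1(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == -1))
    fp = np.sum((y_true == -1) & (y_pred == 1))
    tn = np.sum((y_true == -1) & (y_pred == -1))
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    g_mean = np.sqrt(recall * specificity)          # Equation (65)
    f1 = 2.0 * precision * recall / (precision + recall)   # Equation (66)
    return g_mean, f1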

6.1.2. Kernels

The Gaussian kernel function provides strong mapping ability and is the most commonly used kernel for handling nonlinear data classification problems; therefore, we used it in all of our experiments. The definition of the Gaussian kernel function is as follows:
$$K(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right)$$

6.1.3. Parameter Configuration

To ensure a fair comparison, the parameters for each method were selected using the grid search method. For all methods, the regularization parameter $c$ was selected from $\{0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 1, 2, 5, 10\}$. Arin Chaudhuri [42] demonstrated that there exists a $\sigma$ in the interval $[\min\{\|x_i - x_j\|^2, i\ne j\}/3,\ \max\{\|x_i - x_j\|^2, i\ne j\}/3]$ that maximizes the SVDD optimization objective function. Therefore, the range for the kernel parameter $\sigma$ was set to $[\min\{\|x_i - x_j\|^2, i\ne j\}/3,\ \max\{\|x_i - x_j\|^2, i\ne j\}/3]$.
The DW-SVDD and GL-SVDD algorithms locate the k-nearest neighbors in the feature space, with the parameter $k$ selected from $\{5, 10, 15, 20\}$. All of the truncated loss function SVDD algorithms select the parameter $\lambda$ from $\{0.1, 0.2, 0.5, 1\}$, $\phi_R$-SVDD selects the parameter $v$ from $\{0.1, 0.3, 0.5, 1, 5\}$, and $\phi_f$-SVDD selects the parameter $\theta$ from $\{0.1, 0.3, 0.5, 1, 5\}$. The parameter $a$ for $\phi_L$-SVDD is selected from $\{1, 5, 10\}$.
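A minimal sketch of the kernel-width range and the parameter grid described above; the number of σ candidates is an illustrative choice.

import numpy as np
from scipy.spatial.distance import pdist

def parameter_grid(X, n_sigma=10):
    d2 = pdist(X, 'sqeuclidean')                     # squared pairwise distances, i != j
    sigma_range = np.linspace(d2.min() / 3.0, d2.max() / 3.0, n_sigma)
    c_range = [0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 1, 2, 5, 10]
    lambda_range = [0.1, 0.2, 0.5, 1]
    return [(c, s, lam) for c in c_range for s in sigma_range for lam in lambda_range]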

6.2. Synthetic Datasets with Noise

In this study, we constructed two synthetic datasets: circular and banana-shaped, to test the robustness of the three proposed truncated loss function support vector data description (SVDD) algorithms. To verify the robustness of the algorithms in handling different types of noise, we introduced two types of noise:
  • Neighboring noise: Noise points are randomly distributed near the normal samples but do not completely overlap with the normal samples, forming a more discrete distribution characteristic. This noise simulates the common boundary ambiguity in practical applications.
  • Regional noise: Noise points are randomly distributed within a specified area, forming sparse clusters. This noise simulates possible local anomalies in practical applications.
The description of the synthetic dataset is given below:
  • Circular Dataset
    • Normal samples: This consists of 300 two-dimensional sample points distributed within a concentric circle, forming a normal distribution pattern.
    • Noise samples: This consists of 40 noise points, divided into two types: 20 noise points randomly distributed near the normal sample points, showing a discrete distribution characteristic, and another 20 noise points randomly distributed within a specified area, forming sparse clusters.
    • Illustration: Figure 1a shows the circular dataset, where blue points represent normal samples, and red points represent noise.
  • Banana-shaped Dataset
    • Normal samples: This consists of 300 two-dimensional sample points distributed along curved lines, resembling a banana shape.
    • Noise samples: This consists of 40 noise points, divided into two types: 10 noise points randomly distributed near the banana-shaped normal sample points, and another 30 noise points randomly distributed within a specified area, forming sparse clusters.
    • Illustration: Figure 1b shows the banana-shaped dataset, where blue points represent normal samples, and red points represent noise samples.
By choosing these noise distributions and quantities, we aimed to simulate different real-world noise scenarios and test the robustness of the proposed algorithms under varying noise conditions.
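The following sketch generates a circular dataset with the two noise types described above; the radii, noise scales, and region location are illustrative values rather than the exact settings used to produce Figure 1.

import numpy as np

def make_circular(n_normal=300, n_near=20, n_region=20, seed=0):
    rng = np.random.default_rng(seed)
    # normal samples: points scattered around a circle of radius 1
    angles = rng.uniform(0, 2 * np.pi, n_normal)
    radii = 1.0 + 0.1 * rng.standard_normal(n_normal)
    normal = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
    # neighboring noise: scattered near randomly chosen normal samples
    idx = rng.choice(n_normal, n_near, replace=False)
    near = normal[idx] + 0.4 * rng.standard_normal((n_near, 2))
    # regional noise: a sparse cluster inside a small square region
    region = rng.uniform(1.5, 2.0, size=(n_region, 2))
    X = np.vstack([normal, near, region])
    y = np.r_[np.ones(n_normal), -np.ones(n_near + n_region)]
    return X, y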
For circular outliers, the classification results of SVDD, DW-SVDD, DBSCAN, ϕ R -SVDD, ϕ f -SVDD, and ϕ L -SVDD algorithms for this dataset are presented in Figure 2. We can observe that the SVDD algorithm accurately identifies seven noise points as outliers, while misclassifying thirty-three noise points as normal values. Therefore, the performance of SVDD is severely affected by noise. Unlike SVDD, DW-SVDD accurately identified 22 noise points as outliers, while misclassifying 18 clustered noise points as normal values. DW-SVDD is based on a weighted idea where outliers are commonly sparse data; hence, a smaller weight is designated to these sparse data, which are then excluded from the sphere. The weighted method struggles to exclude clustered noise since 20 clustered noise points are present in the circular outliers. DBSCAN accurately identified the 20 sparsely distributed noise points but incorrectly classified the 20 clustered noise points as normal values. Therefore, DBSCAN cannot accurately identify clustered noise. The ϕ R -SVDD, ϕ f -SVDD, and ϕ L -SVDD algorithms successfully identified all of the noise points, and their classification boundaries encompassed all of the target samples.
The classification results of the SVDD, DW-SVDD, DBSCAN, ϕ R -SVDD, ϕ f -SVDD, and ϕ L -SVDD algorithms for the banana-shaped dataset are presented in Figure 3. It can be observed that SVDD only identifies two noise points, while misclassifying the remaining noise points as normal values. DW-SVDD accurately differentiated nine noise points distributed around the banana data, while misclassifying the remaining noise points as normal values. ϕ R -SVDD accurately identified 35 noise points as outliers, while misclassifying the remaining noise points as normal values, and erroneously classifying 16 target values as outliers. ϕ f -SVDD accurately identified 36 noise points as outliers, while misclassifying the remaining noise points as normal values, and erroneously classifying 14 target values as outliers. ϕ L -SVDD accurately identified 38 noise points as outliers, while misclassifying the remaining noise points as normal values, and erroneously classifying 13 target values as outliers.
From the experiments on the two synthetic datasets, it can be seen that the DBSCAN algorithm is very effective in handling obvious noise, but it cannot accurately identify noise that is less obvious or close to the normal data. The DW-SVDD and DBSCAN methods cannot accurately identify sparse clustered noise, whereas ϕ_R-SVDD, ϕ_f-SVDD, and ϕ_L-SVDD can effectively identify clustered noise in the dataset. Figure 2 and Figure 3 show that the classification boundaries of ϕ_R-SVDD, ϕ_f-SVDD, and ϕ_L-SVDD are tighter and smoother than those of the SVDD and DW-SVDD methods. Therefore, the three truncated loss function SVDD algorithms proposed in this study are more robust to noise on both synthetic datasets.

6.3. UCI Datasets with the Presence of Noise

To further validate the effectiveness of the proposed method, we used eight standard datasets obtained from the UCI machine learning repository to conduct our experiments [43]. For each standard dataset, one class of samples was used as normal data, and the remaining classes were used as outlier data. Moreover, to eliminate the impact of data scale, each feature of every dataset was normalized to [ 0 , 1 ] . Grid parameter optimization was used to determine the optimal parameters in all of the experiments, and the average value of ten repetitions of each experiment was the final result. For each dataset, parts of the target and non-target data were randomly selected for the training set, and the remaining target and non-target data were selected for the test set.

6.3.1. Training Dataset without Non-Target Data

For each dataset, 70% of the normal data were randomly selected for the training set, and noise-contaminated data were added to the training set. Noise data were created by changing the labels of non-target data from −1 to 1, with added noise data proportions of 10%, 20%, 30%, 40%, and 50% non-target data, and the test set consisted of the remaining target and non-target data.
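The following sketch is one possible implementation of this training-set construction, where a fraction r of non-target samples (relative to the training size) is relabelled as targets; the splitting details are my reading of the protocol, not the paper's exact code.

import numpy as np

def make_noisy_train(X, y, r=0.1, seed=0):
    rng = np.random.default_rng(seed)
    tgt, out = np.where(y == 1)[0], np.where(y == -1)[0]
    tgt = rng.permutation(tgt)
    n_train = int(0.7 * len(tgt))
    train_idx = tgt[:n_train]
    # add r * n_train non-target samples with their labels flipped to +1
    n_noise = int(r * n_train)
    noise_idx = rng.choice(out, size=min(n_noise, len(out)), replace=False)
    X_train = np.vstack([X[train_idx], X[noise_idx]])
    y_train = np.ones(len(train_idx) + len(noise_idx))      # all labelled as targets
    test_idx = np.setdiff1d(np.arange(len(y)), np.r_[train_idx, noise_idx])
    return X_train, y_train, X[test_idx], y[test_idx]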
Table 1 presents the G-mean averages for the different methods across 10 trials, with the best results shown in bold. Across the 40 experimental datasets, ϕ_R-SVDD and ϕ_f-SVDD achieved higher G-means than the benchmark methods on 35 datasets, while ϕ_L-SVDD achieved higher G-means on 34 datasets. As the proportion of noise-contaminated data increases, the classification accuracy of all models generally declines; however, ϕ_R-SVDD, ϕ_f-SVDD, and ϕ_L-SVDD show less of a decline in accuracy than the other algorithms. Taking the Iris dataset as an example, when the proportion of noise-contaminated data r increases from 0.1 to 0.5, the G-means of ϕ_R-SVDD, ϕ_f-SVDD, and ϕ_L-SVDD decline by 13.02%, 13.02%, and 11.5%, while those of DW-SVDD, R-SVDD, and GL-SVDD decline by 19.55%, 16.6%, and 17.7%, respectively.
Table 2 shows the F1-score averages of the different methods over 10 trials. On the 40 experimental datasets, ϕR-SVDD and ϕf-SVDD achieved higher F1-scores than the compared methods on 32 datasets, and ϕL-SVDD on 31 datasets. In most test datasets, ϕR-SVDD, ϕf-SVDD, and ϕL-SVDD achieve higher classification accuracy than the existing noise-resistant SVDD models (DW-SVDD, R-SVDD, and GL-SVDD), indicating that the truncated loss function makes SVDD less sensitive to noise.
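For reference, the two reported metrics can be computed from the confusion matrix as in the small helper below, assuming the usual definitions: G-mean as the geometric mean of sensitivity and specificity, and F1 as the harmonic mean of precision and recall, with targets as the positive class.

```python
# Sketch: G-mean and F1-score from confusion-matrix counts (labels in {+1, -1}).
import numpy as np

def g_mean_and_f1(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == -1) & (y_pred == -1))
    fp = np.sum((y_true == -1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == -1))

    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # true-positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0   # true-negative rate
    precision = tp / (tp + fp) if tp + fp else 0.0

    g_mean = np.sqrt(sensitivity * specificity)
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return g_mean, f1

# Toy check: 8 of 10 targets accepted, 9 of 10 outliers rejected.
y_true = [1] * 10 + [-1] * 10
y_pred = [1] * 8 + [-1] * 2 + [-1] * 9 + [1] * 1
print(g_mean_and_f1(y_true, y_pred))   # approx (0.849, 0.842)
```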

6.3.2. Training Datasets with Non-Target Data

For each dataset, the training set was formed by randomly selecting 70% of the target data and 20% of the non-target data; noise data were then added at proportions of 10%, 20%, 30%, 40%, and 50% non-target data, and the test set comprised the remaining target and non-target data.
Table 3 presents the G-mean averages of the different methods over 10 trials. On the 40 experimental datasets, ϕR-SVDD and ϕf-SVDD achieved higher G-means than the benchmark methods on 35 datasets, and ϕL-SVDD on 33 datasets. As the amount of noise-contaminated data increases, ϕR-SVDD, ϕf-SVDD, and ϕL-SVDD exhibit a smaller decline in accuracy than the other algorithms; on datasets such as Blood, Iris, and Haberman, their G-means show almost no decline.
Table 4 shows the F1-score averages of the different methods over 10 trials. On the 40 experimental datasets, ϕR-SVDD achieved higher F1-scores than the compared methods on 35 datasets, ϕf-SVDD on 32 datasets, and ϕL-SVDD on 31 datasets. Tables 1–4 show that ϕR-SVDD, ϕf-SVDD, and ϕL-SVDD outperform the other four methods, demonstrating superior noise resistance due to the use of the truncated loss function.

7. Conclusions

This study aimed to enhance the robustness and effectiveness of the SVDD algorithm. We proposed a general truncated loss function framework for SVDD, which uses bounded loss functions to mitigate the impact of outliers. Because the truncated loss function is non-differentiable, we employed the fast ADMM algorithm, which handles different truncated loss functions within a unified framework, to solve the resulting SVDD models. Within this framework, the truncated generalized ramp, truncated binary cross entropy, and truncated linear exponential loss functions for SVDD were constructed, and extensive experiments show that these three SVDD models are more robust than other SVDD models in most cases. However, the method still has the following shortcomings. First, introducing the truncated loss function increases the complexity of model training: some truncated loss functions do not admit explicit expressions for their proximal operators, which incurs additional computational overhead and may limit the application of the method to large-scale datasets. To overcome this limitation, future work could use a distributed computing framework to accelerate the training process of the ADMM algorithm. Second, the truncated loss function SVDD introduces new free parameters, which lengthens grid search parameter selection; when the data scale is large, the computation time of a grid search may become unacceptable. To address this drawback, algorithms such as Bayesian optimization could be used to find the optimal parameters, further improving model performance and optimization efficiency. Finally, for extremely noisy data, the truncated loss function may not completely eliminate the impact of noise. In this case, the method can be combined with clustering algorithms such as DBSCAN: the data are first preprocessed with DBSCAN to remove noise, and the proposed method is then used to detect anomalies.
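A minimal sketch of the two-stage idea mentioned above is given below: DBSCAN first discards obvious low-density noise, and a one-class model is then trained on the cleaned data. OneClassSVM stands in for the proposed truncated-loss SVDD, and the eps, min_samples, nu, and gamma values are illustrative assumptions.

```python
# Sketch: DBSCAN preprocessing to remove noise, followed by a one-class detector.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_clean = rng.normal(size=(300, 2))
X_noise = rng.uniform(-6, 6, size=(30, 2))
X_train = np.vstack([X_clean, X_noise])

# Stage 1: drop the points DBSCAN marks as noise (label -1).
labels = DBSCAN(eps=0.6, min_samples=5).fit_predict(X_train)
X_filtered = X_train[labels != -1]
print("DBSCAN removed", int(np.sum(labels == -1)), "of", len(X_train), "points")

# Stage 2: fit the one-class detector on the filtered data.
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5).fit(X_filtered)
scores = detector.decision_function(X_train)   # larger values = more normal
print("fraction flagged as anomalous:", float(np.mean(scores < 0)))
```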

Author Contributions

H.C.: conceptualization, methodology, software, and writing—original draft preparation. Y.L.: conceptualization, resources, writing—review and editing, supervision, and funding acquisition. J.S.: conceptualization, resources, writing—review and editing, and funding acquisition. W.Z.: conceptualization, methodology, formal analysis, and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 62373301 and 62173277; the Natural Science Foundation of Shaanxi Province, grant number 2023-JC-YB-526; the Aeronautical Science Foundation of China, grant number 20220058053002; and the Shaanxi Province Key Laboratory of Flight Control and Simulation Technology.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

This work is supported by the Shaanxi Province Key Laboratory of Flight Control and Simulation Technology.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chandola, V.; Banerjee, A.; Kumar, V. Anomaly detection: A survey. ACM Comput. Surv. 2009, 41, 15. [Google Scholar] [CrossRef]
  2. Pimentel, M.A.; Clifton, D.A.; Clifton, L.; Tarassenko, L. A review of novelty detection. Signal Process. 2014, 99, 215–249. [Google Scholar] [CrossRef]
  3. Lei, Y.; Yang, B.; Jiang, X.; Jia, F.; Li, N.; Nandi, A.K. Applications of machine learning to machine fault diagnosis: A review and roadmap. Mech. Syst. Signal Process. 2020, 138, 106587. [Google Scholar] [CrossRef]
  4. Hasani, R.; Wang, G.; Grosu, R. A machine learning suite for machine components’ health-monitoring. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 9472–9477. [Google Scholar]
  5. Khan, S.S.; Madden, M.G. One-class classification: Taxonomy of study and review of techniques. Knowl. Eng. Rev. 2014, 29, 345–374. [Google Scholar] [CrossRef]
  6. Khan, S.S.; Madden, M.G. A survey of recent trends in one class classification. In Proceedings of the 20th Annual Irish Conference on Artificial Intelligence and Cognitive Science, Dublin, Ireland, 19–21 August 2009; pp. 188–197. [Google Scholar] [CrossRef]
  7. Alam, S.; Sonbhadra, S.K.; Agarwal, S.; Nagabhushan, P. One-class support vector classifiers: A survey. Knowl.-Based Syst. 2020, 196, 105754. [Google Scholar] [CrossRef]
  8. Tax, D.M.J.; Duin, R.P.W. Support vector data description. Mach. Learn. 2004, 54, 45–66. [Google Scholar] [CrossRef]
  9. Zheng, S. A fast iterative algorithm for support vector data description. Int. J. Mach. Learn. Cybern. 2019, 10, 1173–1187. [Google Scholar] [CrossRef]
  10. Turkoz, M.; Kim, S.; Son, Y.; Jeong, M.K.; Elsayed, E.A. Generalized support vector data description for anomaly detection. Pattern Recognit. 2020, 100, 107119. [Google Scholar] [CrossRef]
  11. Fong, S.; Narasimhan, S. An Unsupervised Bayesian OC-SVM Approach for Early Degradation Detection, Thresholding, and Fault Prediction in Machinery Monitoring. IEEE Trans. Instrum. Meas. 2022, 71, 3500811. [Google Scholar] [CrossRef]
  12. Breunig, M.M.; Kriegel, H.P.; Ng, R.T.; Sander, J. Lof: Identifying density-based local outliers. In Proceedings of the ACM SIGMOD International Conference on Management of Data, Dallas, TX, USA, 15–18 May 2000; pp. 93–104. [Google Scholar]
  13. Zheng, L.; Hu, W.; Min, Y. Raw Wind Data Preprocessing: A Data-Mining Approach. IEEE Trans. Sustain. Energy 2015, 6, 11–19. [Google Scholar] [CrossRef]
  14. Khan, S.S.; Karg, M.E.; Kulic, D.; Hoey, J. X-factor HMMs for detecting falls in the absence of fall-specific training data. In Proceedings of the Ambient Assisted Living and Daily Activities: 6th International Work-Conference, IWAAL 2014, Belfast, UK, 2–5 December 2014; pp. 1–9. [Google Scholar]
  15. Andreou, C.; Karathanassi, V. Estimation of the Number of Endmembers Using Robust Outlier Detection Method. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 247–256. [Google Scholar] [CrossRef]
  16. Lu, J.; Gao, Y.; Zhang, L. A novel dynamic radius support vector data description based fault diagnosis method for proton exchange membrane fuel cell systems. Int. J. Hydrogen Energy 2022, 47, 35825–35837. [Google Scholar] [CrossRef]
  17. Zhao, Y.-P.; Xie, Y.-L.; Ye, Z.-F. A new dynamic radius SVDD for fault detection of aircraft engine. Eng. Appl. Artif. Intell. 2021, 100, 104177. [Google Scholar] [CrossRef]
  18. Zhu, F.; Yang, J.; Gao, C.; Xu, S.; Ye, N.; Yin, T. A weighted one-class support vector machine. Neurocomputing 2016, 189, 1–10. [Google Scholar] [CrossRef]
  19. Chen, G.; Zhang, X.; Wang, Z.J. Robust support vector data description for outlier detection with noise or uncertain data. Knowl.-Based Syst. 2015, 90, 129–137. [Google Scholar] [CrossRef]
  20. Cha, M.; Kim, J.S.; Baek, J.G. Density weighted support vector data description. Expert Syst. Appl. 2014, 41, 3343–3350. [Google Scholar] [CrossRef]
  21. Sadeghi, R.; Hamidzadeh, J. Automatic support vector data description. Soft Comput. 2018, 22, 147–158. [Google Scholar] [CrossRef]
  22. Hu, W.; Hu, T.; Wei, Y.; Lou, J.; Wang, S. Global Plus Local Jointly Regularized Support Vector Data Description for Novelty Detection. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 6602–6614. [Google Scholar] [CrossRef] [PubMed]
  23. Zhao, Y.-P.; Huang, G.; Hu, Q.-K.; Li, B. An improved weighted one class support vector machine for turboshaft engine fault detection. Eng. Appl. Artif. Intell. 2020, 94, 103796. [Google Scholar] [CrossRef]
  24. Wang, K.; Lan, H. Robust support vector data description for novelty detection with contaminated data. Eng. Appl. Artif. Intell. 2020, 91, 103554. [Google Scholar] [CrossRef]
  25. Xing, H.-J.; Li, L.-F. Robust least squares one-class support vector machine. Pattern Recognit. Lett. 2020, 138, 571–578. [Google Scholar] [CrossRef]
  26. Xiao, Y.; Wang, H.; Xu, W. Ramp Loss based robust one-class SVM. Pattern Recognit. Lett. 2017, 85, 15–20. [Google Scholar] [CrossRef]
  27. Tian, Y.; Mirzabagheri, M.; Bamakan, S.M.H. Ramp loss one-class support vector machine; A robust and effective approach to anomaly detection problems. Neurocomputing 2018, 310, 223–235. [Google Scholar] [CrossRef]
  28. Xing, H.; Ji, M. Robust one-class support vector machine with rescaled hinge loss function. Pattern Recognit. 2018, 84, 152–164. [Google Scholar] [CrossRef]
  29. Zhong, G.; Xiao, Y.; Liu, B.; Zhao, L.; Kong, X. Pinball loss support vector data description for outlier detection. Appl. Intell. 2022, 52, 16940–16961. [Google Scholar] [CrossRef]
  30. Zheng, Y.; Wang, S.; Chen, B. Robust one-class classification with support vector data description and mixed exponential loss function. Eng. Appl. Artif. Intell. 2023, 122, 106153. [Google Scholar] [CrossRef]
  31. Le Thi, H.A.; Pham Dinh, T. DC programming and DCA: Thirty years of developments. Math. Program. 2018, 169, 5–68. [Google Scholar] [CrossRef]
  32. Liu, J.; Pang, J.S. Risk-based robust statistical learning by stochastic difference-of convex value-function optimization. Oper. Res. 2023, 71, 397–414. [Google Scholar] [CrossRef]
  33. Yuille, A.L.; Rangarajan, A. The concave-convex procedure. Neural Comput. 2003, 15, 915–936. [Google Scholar] [CrossRef]
  34. Tao, Q.; Wu, G.; Chu, D. Improving sparsity and scalability in regularized nonconvex truncated-loss learning problems. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 2782–2793. [Google Scholar] [CrossRef]
  35. Wang, H.J.; Shao, Y.H.; Xiu, N.H. Proximal operator and optimality conditions for ramp loss SVM. Optim. Lett. 2022, 16, 999–1014. [Google Scholar] [CrossRef]
  36. Gong, P.; Zhang, C.; Lu, Z. A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16–21 June 2013; pp. 696–704. [Google Scholar]
  37. Scholkopf, B.; Herbrich, R.; Smola, A.J. A generalized representer theorem. In Proceedings of the 14th Annual Conference on Computational Learning Theory, Amsterdam, The Netherlands, 16–19 July 2001; pp. 416–426. [Google Scholar] [CrossRef]
  38. Guan, L.; Qiao, L.; Li, D.; Sun, T.; Ge, K.; Lu, X. An efficient ADMM-based algorithm to nonconvex penalized support vector machines. In Proceedings of the IEEE International Conference on Data Mining Workshops (ICDMW), Singapore, 17–20 November 2018; pp. 1209–1216. [Google Scholar]
  39. Wu, M.; Ye, J. A small sphere and large margin approach for novelty detection using training data with outliers. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 2088–2092. [Google Scholar] [CrossRef] [PubMed]
  40. Xing, H.; Liu, Y.; He, Z. Robust sparse coding for one-class classification based on correntropy and logarithmic penalty function. Pattern Recognit. 2021, 111, 107685. [Google Scholar] [CrossRef]
  41. Zheng, Y.; Wang, S.; Chen, B. Multikernel correntropy based robust least squares one-class support vector machine. Neurocomputing 2023, 545, 126324. [Google Scholar] [CrossRef]
  42. Chaudhuri, A.; Sadek, C.; Kakde, D.; Wang, H.; Hu, W.; Jiang, H.; Kong, S.; Liao, Y.; Peredriy, S. The trace kernel bandwidth criterion for support vector data description. Pattern Recognit. 2021, 111, 107662. [Google Scholar] [CrossRef]
  43. Dua, D.; Graff, C. UCI Machine Learning Repository. 2008. University of California, Irvine, School of Information and Computer Sciences. Available online: https://archive.ics.uci.edu/ml (accessed on 20 July 2024).
Figure 1. Two synthetic datasets, where blue samples are normal and the samples marked with a red cross represent noise. (a) Circular dataset. (b) Banana-shaped dataset.
Figure 2. Classification results of six methods for the circular dataset: (a) SVDD, (b) DW-SVDD, (c) DBSCAN, (d) ϕR-SVDD, (e) ϕf-SVDD, and (f) ϕL-SVDD.
Figure 3. Classification results of six methods for the banana-shaped dataset: (a) SVDD, (b) DW-SVDD, (c) DBSCAN, (d) ϕR-SVDD, (e) ϕf-SVDD, and (f) ϕL-SVDD.
Table 1. G-mean values (%) of different methods on UCI datasets with noise (not containing non-target data).

Dataset | r | SVDD | DW-SVDD | R-SVDD | GL-SVDD | ϕR-SVDD | ϕf-SVDD | ϕL-SVDD
Blood | 10% | 49.95 | 49.28 | 51.25 | 49.19 | 52.79 | 52.79 | 54.82
Blood | 20% | 46.87 | 46.43 | 48.65 | 46.84 | 50.21 | 50.21 | 51.50
Blood | 30% | 42.34 | 41.87 | 42.34 | 42.34 | 45.96 | 45.96 | 52.41
Blood | 40% | 41.39 | 41.52 | 41.39 | 41.58 | 45.06 | 45.06 | 51.50
Blood | 50% | 40.53 | 40.56 | 40.69 | 40.90 | 42.60 | 42.60 | 50.72
Balance scale | 10% | 73.70 | 73.80 | 73.91 | 73.64 | 74.38 | 74.38 | 74.32
Balance scale | 20% | 68.58 | 68.67 | 68.60 | 68.60 | 66.62 | 66.62 | 67.19
Balance scale | 30% | 62.66 | 64.26 | 62.66 | 62.71 | 62.70 | 62.70 | 63.75
Balance scale | 40% | 57.38 | 58.44 | 57.38 | 57.44 | 58.60 | 58.60 | 64.32
Balance scale | 50% | 53.47 | 54.72 | 53.43 | 53.47 | 56.51 | 56.51 | 61.13
Ecoli | 10% | 77.50 | 77.48 | 80.63 | 76.28 | 86.08 | 86.08 | 81.95
Ecoli | 20% | 70.85 | 67.02 | 70.26 | 70.85 | 73.08 | 73.08 | 73.53
Ecoli | 30% | 67.00 | 62.77 | 67.22 | 67.00 | 70.61 | 70.61 | 70.29
Ecoli | 40% | 64.75 | 60.37 | 64.75 | 64.75 | 67.57 | 67.57 | 68.33
Ecoli | 50% | 59.94 | 55.18 | 59.87 | 59.94 | 64.26 | 64.26 | 64.42
Haberman | 10% | 54.37 | 54.57 | 53.36 | 55.49 | 54.06 | 54.06 | 52.09
Haberman | 20% | 54.85 | 55.01 | 54.87 | 54.93 | 55.34 | 55.34 | 54.30
Haberman | 30% | 53.38 | 53.58 | 54.86 | 53.54 | 57.13 | 57.13 | 56.02
Haberman | 40% | 52.54 | 52.31 | 52.73 | 52.45 | 53.89 | 53.89 | 53.93
Haberman | 50% | 50.12 | 49.74 | 49.95 | 50.09 | 51.92 | 51.92 | 52.03
Iris | 10% | 76.80 | 79.50 | 76.92 | 78.18 | 80.74 | 80.74 | 79.52
Iris | 20% | 69.29 | 68.94 | 69.39 | 69.52 | 71.13 | 71.13 | 70.97
Iris | 30% | 65.77 | 65.52 | 65.67 | 65.85 | 70.46 | 70.46 | 69.46
Iris | 40% | 61.67 | 61.69 | 61.41 | 61.96 | 65.46 | 65.46 | 65.84
Iris | 50% | 60.21 | 59.95 | 60.32 | 60.48 | 67.72 | 67.72 | 68.02
Wine | 10% | 79.76 | 79.76 | 79.02 | 79.76 | 80.18 | 80.18 | 80.74
Wine | 20% | 71.66 | 73.59 | 73.02 | 72.32 | 75.03 | 75.03 | 76.23
Wine | 30% | 68.73 | 70.11 | 71.49 | 69.04 | 73.12 | 73.12 | 72.44
Wine | 40% | 64.30 | 64.25 | 64.21 | 64.01 | 68.35 | 68.35 | 68.96
Wine | 50% | 64.03 | 63.93 | 64.06 | 63.09 | 68.00 | 68.00 | 68.59
Ionosphere | 10% | 84.22 | 84.43 | 85.08 | 84.22 | 86.89 | 86.89 | 86.81
Ionosphere | 20% | 82.98 | 83.30 | 82.89 | 83.22 | 84.12 | 84.12 | 83.38
Ionosphere | 30% | 81.94 | 82.20 | 81.35 | 81.89 | 82.42 | 82.42 | 82.18
Ionosphere | 40% | 83.55 | 83.37 | 80.58 | 83.33 | 83.14 | 83.14 | 82.86
Ionosphere | 50% | 82.28 | 82.44 | 77.87 | 82.28 | 81.96 | 81.96 | 82.06
Sonar | 10% | 56.46 | 56.89 | 56.16 | 57.22 | 58.62 | 58.62 | 59.58
Sonar | 20% | 53.46 | 51.48 | 53.46 | 53.46 | 53.50 | 53.50 | 53.73
Sonar | 30% | 49.15 | 47.17 | 49.15 | 49.15 | 55.29 | 55.29 | 55.28
Sonar | 40% | 52.48 | 50.04 | 52.48 | 52.48 | 57.57 | 57.57 | 57.68
Sonar | 50% | 45.22 | 43.31 | 45.22 | 45.22 | 50.10 | 50.10 | 50.38
Table 2. F1-score values (%) of different methods on UCI datasets with noise (not containing non-target data).

Dataset | r | SVDD | DW-SVDD | R-SVDD | GL-SVDD | ϕR-SVDD | ϕf-SVDD | ϕL-SVDD
Blood | 10% | 72.17 | 73.13 | 73.18 | 73.21 | 73.05 | 73.05 | 72.93
Blood | 20% | 68.98 | 68.09 | 69.12 | 68.85 | 68.72 | 68.72 | 68.19
Blood | 30% | 65.93 | 66.54 | 67.33 | 65.93 | 68.42 | 68.42 | 68.44
Blood | 40% | 64.42 | 64.95 | 64.26 | 64.28 | 66.25 | 66.25 | 65.95
Blood | 50% | 63.38 | 62.37 | 63.27 | 62.64 | 65.25 | 65.25 | 64.06
Balance scale | 10% | 58.14 | 58.28 | 58.13 | 58.07 | 56.67 | 56.48 | 57.01
Balance scale | 20% | 50.57 | 51.47 | 50.19 | 51.49 | 50.74 | 50.74 | 51.14
Balance scale | 30% | 48.61 | 48.92 | 48.63 | 48.65 | 49.75 | 49.75 | 49.58
Balance scale | 40% | 48.18 | 48.82 | 48.18 | 48.21 | 53.02 | 53.02 | 53.65
Balance scale | 50% | 49.07 | 49.21 | 49.16 | 49.21 | 49.81 | 49.81 | 53.48
Ecoli | 10% | 60.55 | 60.48 | 60.51 | 55.27 | 72.52 | 63.25 | 64.12
Ecoli | 20% | 47.43 | 50.81 | 46.49 | 50.81 | 54.71 | 54.60 | 56.57
Ecoli | 30% | 49.30 | 47.04 | 43.51 | 49.30 | 53.46 | 53.26 | 53.27
Ecoli | 40% | 50.95 | 49.43 | 46.08 | 50.95 | 53.02 | 53.02 | 53.65
Ecoli | 50% | 50.49 | 50.58 | 49.20 | 50.58 | 51.93 | 51.93 | 52.11
Haberman | 10% | 57.98 | 58.28 | 56.01 | 62.42 | 57.63 | 58.15 | 57.79
Haberman | 20% | 60.97 | 60.17 | 59.67 | 60.29 | 60.65 | 60.27 | 59.76
Haberman | 30% | 62.18 | 61.95 | 62.36 | 62.23 | 64.10 | 63.72 | 64.22
Haberman | 40% | 63.87 | 64.41 | 62.04 | 64.02 | 64.62 | 64.45 | 63.87
Haberman | 50% | 56.66 | 58.02 | 57.51 | 58.25 | 60.52 | 60.61 | 59.11
Iris | 10% | 66.07 | 57.62 | 68.75 | 56.52 | 60.02 | 60.41 | 62.75
Iris | 20% | 51.17 | 51.71 | 53.99 | 51.79 | 55.66 | 55.66 | 55.79
Iris | 30% | 41.10 | 41.02 | 42.15 | 41.17 | 49.74 | 49.74 | 51.22
Iris | 40% | 40.04 | 40.13 | 41.97 | 40.41 | 44.94 | 44.94 | 45.79
Iris | 50% | 43.39 | 43.70 | 44.28 | 43.99 | 49.97 | 49.97 | 50.34
Wine | 10% | 60.87 | 63.87 | 62.77 | 53.66 | 66.26 | 66.76 | 67.28
Wine | 20% | 44.97 | 43.48 | 41.97 | 43.94 | 55.32 | 50.34 | 51.50
Wine | 30% | 42.46 | 43.91 | 43.33 | 42.81 | 48.67 | 47.52 | 48.27
Wine | 40% | 42.06 | 42.45 | 42.56 | 41.94 | 45.60 | 45.29 | 49.26
Wine | 50% | 46.64 | 46.61 | 46.24 | 45.98 | 48.96 | 48.94 | 49.59
Ionosphere | 10% | 80.87 | 81.09 | 81.29 | 80.87 | 83.08 | 82.89 | 83.08
Ionosphere | 20% | 80.40 | 80.03 | 80.74 | 80.31 | 81.00 | 80.36 | 80.28
Ionosphere | 30% | 79.46 | 79.76 | 79.98 | 79.40 | 81.21 | 80.60 | 80.52
Ionosphere | 40% | 81.52 | 81.93 | 81.26 | 81.88 | 82.00 | 82.12 | 82.02
Ionosphere | 50% | 81.85 | 81.67 | 81.48 | 81.67 | 82.08 | 81.99 | 81.97
Sonar | 10% | 55.23 | 54.30 | 55.20 | 54.60 | 54.17 | 54.17 | 55.14
Sonar | 20% | 51.43 | 52.25 | 52.25 | 52.35 | 53.03 | 52.73 | 52.13
Sonar | 30% | 48.73 | 47.78 | 48.73 | 48.73 | 51.52 | 51.52 | 51.47
Sonar | 40% | 51.16 | 50.34 | 51.16 | 51.16 | 52.29 | 52.29 | 52.30
Sonar | 50% | 48.26 | 49.08 | 49.08 | 49.08 | 51.78 | 51.78 | 51.67
Table 3. G-mean values (%) of different methods on UCI datasets with noise (containing non-target data).

Dataset | r | SVDD | DW-SVDD | R-SVDD | GL-SVDD | ϕR-SVDD | ϕf-SVDD | ϕL-SVDD
Blood | 10% | 53.18 | 51.66 | 52.05 | 53.66 | 52.29 | 52.29 | 54.45
Blood | 20% | 50.82 | 50.91 | 51.65 | 51.57 | 52.02 | 52.02 | 53.39
Blood | 30% | 49.72 | 49.19 | 51.13 | 48.78 | 53.43 | 53.43 | 52.13
Blood | 40% | 51.00 | 49.23 | 50.16 | 48.43 | 51.21 | 51.21 | 53.87
Blood | 50% | 48.04 | 48.72 | 47.89 | 46.88 | 52.87 | 52.87 | 52.42
Balance scale | 10% | 78.86 | 80.91 | 79.70 | 80.24 | 81.25 | 81.25 | 77.64
Balance scale | 20% | 73.53 | 75.25 | 74.18 | 73.41 | 76.07 | 76.07 | 73.65
Balance scale | 30% | 65.78 | 67.38 | 66.20 | 65.51 | 68.09 | 68.09 | 66.48
Balance scale | 40% | 60.74 | 62.30 | 62.10 | 60.56 | 67.11 | 67.11 | 65.78
Balance scale | 50% | 56.79 | 61.48 | 58.46 | 60.08 | 68.24 | 68.24 | 64.86
Ecoli | 10% | 82.10 | 82.36 | 83.95 | 83.08 | 85.34 | 85.34 | 82.10
Ecoli | 20% | 77.46 | 77.48 | 80.02 | 77.08 | 82.53 | 82.53 | 77.82
Ecoli | 30% | 69.55 | 67.74 | 68.52 | 70.88 | 71.96 | 71.96 | 71.32
Ecoli | 40% | 74.09 | 70.07 | 71.19 | 72.08 | 74.45 | 74.45 | 74.35
Ecoli | 50% | 66.93 | 63.38 | 64.87 | 63.58 | 68.18 | 68.18 | 67.36
Haberman | 10% | 55.59 | 55.41 | 54.81 | 55.51 | 56.72 | 56.72 | 57.09
Haberman | 20% | 58.66 | 58.88 | 54.12 | 58.80 | 61.99 | 61.99 | 59.75
Haberman | 30% | 55.13 | 55.22 | 54.20 | 55.47 | 56.14 | 56.14 | 54.65
Haberman | 40% | 51.83 | 52.00 | 57.63 | 50.38 | 56.01 | 56.01 | 54.81
Haberman | 50% | 55.64 | 55.70 | 55.26 | 55.52 | 58.27 | 58.27 | 57.60
Iris | 10% | 80.70 | 81.45 | 81.22 | 80.38 | 79.13 | 79.13 | 80.18
Iris | 20% | 79.01 | 78.31 | 78.75 | 77.94 | 77.37 | 77.37 | 78.63
Iris | 30% | 66.77 | 64.48 | 63.58 | 66.63 | 71.12 | 71.12 | 70.78
Iris | 40% | 67.08 | 60.61 | 63.16 | 64.29 | 70.91 | 70.91 | 70.47
Iris | 50% | 62.19 | 53.74 | 61.09 | 59.39 | 67.59 | 67.59 | 67.63
Wine | 10% | 81.54 | 84.11 | 83.77 | 81.54 | 84.35 | 84.35 | 82.45
Wine | 20% | 79.12 | 81.44 | 80.24 | 79.12 | 82.87 | 82.87 | 80.80
Wine | 30% | 71.03 | 69.15 | 70.02 | 71.03 | 75.48 | 75.48 | 71.95
Wine | 40% | 69.28 | 67.97 | 67.75 | 69.28 | 72.29 | 72.29 | 70.47
Wine | 50% | 64.77 | 66.80 | 66.46 | 64.77 | 69.04 | 69.04 | 66.50
Ionosphere | 10% | 86.45 | 86.79 | 88.66 | 87.26 | 89.28 | 89.28 | 89.02
Ionosphere | 20% | 84.84 | 86.01 | 86.38 | 86.38 | 88.49 | 88.49 | 88.28
Ionosphere | 30% | 84.25 | 85.00 | 85.09 | 85.18 | 87.93 | 87.93 | 87.87
Ionosphere | 40% | 82.58 | 83.13 | 83.37 | 83.37 | 86.60 | 86.60 | 86.20
Ionosphere | 50% | 77.26 | 78.56 | 78.54 | 78.62 | 84.29 | 84.29 | 83.95
Sonar | 10% | 57.24 | 58.34 | 58.97 | 58.97 | 58.66 | 58.66 | 58.70
Sonar | 20% | 59.23 | 59.25 | 60.35 | 60.35 | 60.21 | 60.21 | 59.79
Sonar | 30% | 54.51 | 54.76 | 56.03 | 56.03 | 56.50 | 56.50 | 56.33
Sonar | 40% | 55.86 | 55.86 | 58.93 | 58.93 | 58.56 | 58.56 | 60.09
Sonar | 50% | 52.21 | 53.14 | 53.92 | 53.92 | 55.45 | 55.45 | 54.71
Table 4. F1-score values (%) of different methods on UCI datasets with noise (containing non-target data).

Dataset | r | SVDD | DW-SVDD | R-SVDD | GL-SVDD | ϕR-SVDD | ϕf-SVDD | ϕL-SVDD
Blood | 10% | 76.30 | 76.05 | 76.38 | 77.20 | 77.14 | 75.81 | 69.71
Blood | 20% | 65.90 | 65.46 | 66.12 | 66.22 | 73.71 | 70.79 | 65.62
Blood | 30% | 65.56 | 67.43 | 65.40 | 69.15 | 69.32 | 66.81 | 68.44
Blood | 40% | 63.94 | 65.87 | 60.48 | 65.82 | 70.09 | 66.07 | 66.46
Blood | 50% | 65.98 | 65.10 | 63.16 | 68.44 | 80.57 | 68.88 | 69.03
Balance scale | 10% | 65.77 | 68.66 | 66.89 | 67.52 | 69.18 | 66.75 | 66.74
Balance scale | 20% | 63.48 | 62.23 | 62.34 | 61.64 | 64.09 | 61.90 | 63.58
Balance scale | 30% | 58.62 | 58.54 | 56.99 | 61.23 | 61.62 | 61.62 | 59.06
Balance scale | 40% | 64.74 | 63.67 | 63.67 | 63.16 | 66.57 | 66.14 | 66.05
Balance scale | 50% | 68.25 | 68.78 | 66.92 | 68.02 | 71.23 | 71.23 | 69.84
Ecoli | 10% | 67.58 | 68.46 | 68.29 | 68.51 | 76.37 | 68.84 | 68.58
Ecoli | 20% | 62.76 | 63.36 | 63.03 | 62.17 | 70.01 | 69.01 | 68.16
Ecoli | 30% | 58.62 | 58.54 | 56.99 | 60.25 | 61.39 | 61.75 | 61.11
Ecoli | 40% | 65.01 | 65.49 | 63.81 | 66.17 | 67.89 | 67.86 | 67.67
Ecoli | 50% | 65.11 | 65.83 | 65.92 | 66.02 | 66.86 | 66.17 | 67.92
Haberman | 10% | 61.22 | 61.07 | 59.24 | 61.12 | 58.04 | 60.43 | 58.08
Haberman | 20% | 66.36 | 66.28 | 58.27 | 66.19 | 63.82 | 63.82 | 59.85
Haberman | 30% | 61.84 | 59.14 | 59.53 | 61.53 | 64.97 | 65.48 | 64.48
Haberman | 40% | 57.60 | 58.29 | 57.46 | 57.45 | 63.08 | 64.17 | 62.35
Haberman | 50% | 62.90 | 62.45 | 55.28 | 63.08 | 66.34 | 70.63 | 63.96
Iris | 10% | 61.66 | 62.84 | 63.26 | 61.13 | 61.91 | 63.78 | 60.98
Iris | 20% | 55.14 | 55.39 | 56.23 | 53.85 | 58.24 | 58.24 | 59.73
Iris | 30% | 52.68 | 50.74 | 53.21 | 52.68 | 58.91 | 53.91 | 53.75
Iris | 40% | 54.74 | 51.56 | 53.88 | 55.98 | 57.64 | 57.64 | 57.42
Iris | 50% | 54.19 | 57.43 | 58.42 | 58.47 | 60.52 | 58.79 | 58.64
Wine | 10% | 65.03 | 70.62 | 70.33 | 65.03 | 70.90 | 68.82 | 67.48
Wine | 20% | 64.02 | 59.12 | 64.69 | 59.12 | 62.37 | 59.29 | 59.35
Wine | 30% | 52.68 | 50.74 | 52.21 | 52.68 | 58.91 | 53.91 | 53.75
Wine | 40% | 55.56 | 53.86 | 53.82 | 55.56 | 58.78 | 56.90 | 56.17
Wine | 50% | 58.67 | 57.47 | 57.24 | 57.47 | 60.52 | 59.79 | 58.94
Ionosphere | 10% | 85.21 | 85.55 | 87.22 | 86.02 | 87.95 | 87.95 | 87.64
Ionosphere | 20% | 85.36 | 84.20 | 85.40 | 85.58 | 87.48 | 87.48 | 87.22
Ionosphere | 30% | 86.43 | 86.80 | 86.79 | 86.91 | 88.45 | 88.45 | 88.38
Ionosphere | 40% | 87.39 | 87.46 | 87.19 | 87.58 | 88.95 | 88.80 | 88.67
Ionosphere | 50% | 87.94 | 87.63 | 87.90 | 87.93 | 89.24 | 89.07 | 88.92
Sonar | 10% | 58.54 | 58.75 | 58.71 | 58.71 | 59.05 | 58.88 | 59.12
Sonar | 20% | 57.46 | 58.21 | 58.05 | 58.05 | 58.74 | 58.74 | 58.18
Sonar | 30% | 55.12 | 52.25 | 52.66 | 52.66 | 60.26 | 59.52 | 56.25
Sonar | 40% | 62.15 | 58.30 | 60.23 | 60.23 | 63.52 | 63.52 | 62.38
Sonar | 50% | 61.35 | 64.91 | 60.10 | 60.10 | 62.80 | 62.80 | 61.94