Article

Error-Robust Distributed Denial of Service Attack Detection Based on an Average Common Feature Extraction Technique

by João Paulo Abreu Maranhão 1,*, João Paulo Carvalho Lustosa da Costa 1,2, Edison Pignaton de Freitas 3, Elnaz Javidi 4 and Rafael Timóteo de Sousa Júnior 1

1 Department of Electrical Engineering, University of Brasília, Brasília 70910-900, Brazil
2 Department 2-Campus Lippstadt, Hamm-Lippstadt University of Applied Sciences, 59063 Hamm, Germany
3 Informatics Institute, Federal University of Rio Grande do Sul, Porto Alegre 91509-900, Brazil
4 Department of Mechanical Engineering, University of Brasília, Brasília 70910-900, Brazil
* Author to whom correspondence should be addressed.
Sensors 2020, 20(20), 5845; https://doi.org/10.3390/s20205845
Submission received: 27 July 2020 / Revised: 10 September 2020 / Accepted: 18 September 2020 / Published: 16 October 2020
(This article belongs to the Special Issue Smart Cities of the Future: A Cyber Physical System Perspective)

Abstract:

In recent years, advanced threats against Cyber–Physical Systems (CPSs), such as Distributed Denial of Service (DDoS) attacks, have been increasing. Furthermore, traditional machine learning-based intrusion detection systems (IDSs) often fail to detect such attacks efficiently when corrupted datasets are used for IDS training. To face these challenges, this paper proposes a novel error-robust multidimensional technique for DDoS attack detection. By applying the well-known Higher Order Singular Value Decomposition (HOSVD), the average value of the common features among instances is first filtered out from the dataset. Next, the filtered data are forwarded to machine learning classification algorithms in which traffic information is classified as legitimate or as a DDoS attack. In terms of results, the proposed scheme outperforms traditional low-rank approximation techniques, presenting an accuracy of 98.94%, detection rate of 97.70% and false alarm rate of 4.35% for a dataset corruption level of 30% with a random forest algorithm applied for classification. In addition, under error-free conditions, the proposed approach outperforms other related works, showing accuracy, detection rate and false alarm rate of 99.87%, 99.86% and 0.16%, respectively, for the gradient boosting classifier.

1. Introduction

Cyber–Physical Systems (CPSs) consist of a set of networked components including sensors, control processing units and communication devices applied to the monitoring and management of physical infrastructures [1]. CPSs are typically used for safety-critical applications, such as in avionics, instrumentation, defense systems and critical infrastructure control, for instance, electric power, water resources and communications systems [2]. Consequently, potential cyber and physical attacks can lead to information leakage, extensive economic damage and critical infrastructure destruction [3].
A CPS architecture is typically composed of five layers, namely, the physical, sensor/actuator, network, control, and information layers. The physical layer consists of the physical objects or processes monitored by the CPS. The sensor/actuator layer is composed of sensors, which measure data obtained from the physical layer, and of actuators, which execute specific actions under the control of the layers above. For example, in air traffic control, sensors receive measurement data collected from a sensor array-based localization system, whereas actuators are used to neutralize unmanned aerial vehicles detected within the controlled airspace [4]. The network layer is responsible for networking sensors and actuators, as well as for connecting the sensor/actuator and control layers through communication devices and protocols. Furthermore, the control layer, through intelligent electronic devices, programmable logic controllers and remote terminal units, is responsible for the locally distributed control action level. This layer forwards the measurement data to human operators in the information layer, who monitor the system and take actions whenever required [1].
In this sense, it is crucial to develop highly reliable intrusion detection systems for CPSs such that safety-critical applications can be controlled and protected in an efficient way. Currently, intrusion detection schemes are highly sophisticated, involving advanced signal processing techniques [5], as well as machine learning (ML)-based solutions [6]. The scope of this paper is the security of the CPS against Distributed Denial of Service (DDoS) attacks, which are one of the major security threats in existence today. DDoS attacks are launched by thousands of compromised machines, called “zombies”, which together establish a “zombie” network. Such zombies perform massive attacks against a victim, depleting its bandwidth and network resources. Common DDoS detection models include the traffic entropy model and history-based Internet Protocol (IP) filtering. However, with the development of cloud computing, Internet of Things (IoT) and artificial intelligence techniques, such traditional network intrusion detection solutions cannot face modern DDoS attack strategies, which are harder to detect and prevent [7].
In order to obtain higher performance, ML-based intrusion detection systems (IDSs) must be trained with massive amounts of data. Large datasets usually have an inherent multidimensional structure, which can be better exploited by applying tensor signal processing techniques. However, a potential drawback is the presence of errors in such large datasets. These errors can stem from uncalibrated measurements made during dataset creation [8] or from false data injection performed by attackers on publicly available datasets [9], leading to data corruption. Such corruption can degrade the performance of the ML classifier and, consequently, reduce the reliability of the DDoS attack detection model.
To face the above-mentioned issues, we propose an error-robust tensor-based technique for DDoS attack detection. First, we filter out, from the dataset, the average value of the common features among instances such that the machine learning classification algorithms can benefit from the more discriminative individual information at each instance during the training phase. In this paper, decision tree (DT), random forest (RF) and gradient boosting (GB) classifiers are applied for performance evaluation, whereas the CICDDoS2019 and CICIDS2017 datasets are considered in numerical simulations. According to the results in Section 6, the proposed scheme outperforms the well-known Higher-Order Orthogonal Iteration (HOOI) and Higher-Order Singular Value Decomposition (HOSVD) techniques in terms of accuracy, detection rate, false alarm rate, area under the precision–recall curve and Matthews correlation coefficient.
The main research contributions of this paper are summarized as follows:
  • The proposal of a novel technique in which the average value of the common features among instances is filtered out from the dataset by applying the HOSVD low-rank approximation scheme, improving the performance of the intrusion detection system.
  • The comparison with different state-of-the-art low-rank approximation techniques in order to show the higher performance and error-robustness of the proposed approach.
The remainder of this paper is organized as follows. Section 2 presents the related works. Section 3 introduces the data model. In Section 4, the theoretical background is introduced. Section 5 shows the proposed tensor-based scheme for DDoS attack detection in CPSs. In Section 6, simulation results are presented and discussed. Section 7 draws the conclusions.

2. Related Works

In this section, the related works are presented and discussed. Since the proposed scheme is based on multidimensional signal processing techniques applied to DDoS attack detection, we discuss papers related to multilinear algebra and to distributed denial of service detection systems. In [10,11], the authors presented multidimensional solutions for image classification. However, whereas the former proposed common and individual feature extraction techniques based on LL1 decomposition, the latter applied the HOSVD algorithm for classifying corrupted images. In addition, Lathauwer et al. [12] proposed the classical HOOI low-rank approximation technique, widely applied for tensor denoising. In [5], the authors proposed a signal processing-based approach in which model order selection and eigen similarity analysis are applied for detecting and identifying the time instants and ports exploited by attackers. Finally, specifically regarding DDoS attack detection, three studies can be cited. Hosseini and Azizi [13] proposed a hybrid framework based on a data stream approach for DDoS attack detection where the computational load is divided between the client and proxy sides. Next, Lima Filho et al. [14] proposed a random forest-based DDoS detection system in which several volumetric attacks, such as Transmission Control Protocol (TCP) flood, User Datagram Protocol (UDP) flood, and Hyper Text Transfer Protocol (HTTP) flood, are identified early. Finally, Wang et al. [6] proposed a method for detecting DDoS attacks in which the optimal features are obtained by combining feature selection and a multilayer perceptron (MLP) classification algorithm. Further, when considerable detection errors are dynamically perceived, a feedback mechanism reconstructs the IDS.
In Table 1, we summarize the general aspects of the above mentioned related works, highlighting their aims, proposed solutions, pros and cons.

3. Data Model

This section presents the data model adopted in this paper and is divided into two subsections. First, Section 3.1 shows the mathematical notation used throughout this paper. Next, a brief description of the data modeling is presented in Section 3.2.

3.1. Mathematical Notation

In this subsection, we present the mathematical notation used throughout this paper. Italic letters $(a, b, c, A, B, C)$ represent scalars, lowercase bold letters $(\mathbf{a}, \mathbf{b}, \mathbf{c})$ represent column vectors and uppercase bold letters $(\mathbf{A}, \mathbf{B}, \mathbf{C})$ represent matrices. Higher-order tensors are denoted by uppercase bold calligraphic letters $(\mathcal{A}, \mathcal{B}, \mathcal{C})$. The concatenation of the tensors $\mathcal{A}$ and $\mathcal{B}$ along the $r$-th dimension is defined as $[\mathcal{A} \,|\, \mathcal{B}]_r$. Transposition and Hermitian transposition of a matrix are represented by the superscripts $\{\cdot\}^T$ and $\{\cdot\}^H$, respectively. The operator $\mathrm{diag}(\cdot)$ transforms its argument vector into the main diagonal of a diagonal matrix. The Hadamard (element-wise) product is represented by the operator $\odot$.
Furthermore, the $r$-th mode unfolding of the tensor $\mathcal{X}$ is denoted as $[\mathcal{X}]_{(r)}$, which is obtained by varying the $r$-th index along the rows and stacking all other indices along the columns. Additionally, $\mathcal{Y} = \mathcal{X} \times_r \mathbf{B}$ denotes the $r$-mode product between the tensor $\mathcal{X}$ and the matrix $\mathbf{B}$. In matricized form, this product can be expressed as $[\mathcal{Y}]_{(r)} = \mathbf{B} [\mathcal{X}]_{(r)}$.
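To make the unfolding and the matricized $r$-mode product concrete, the following minimal numpy sketch checks the identity $[\mathcal{Y}]_{(r)} = \mathbf{B}[\mathcal{X}]_{(r)}$ on a toy tensor; the helper name and the toy sizes are illustrative only, and numpy counts modes from 0 whereas the text counts them from 1.

```python
import numpy as np

def unfold(T, mode):
    # mode-r unfolding: the r-th index varies along the rows,
    # all remaining indices are stacked along the columns
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# toy check of the matricized identity [X x_r B]_(r) = B [X]_(r), here along the second mode
X = np.random.randn(4, 5, 6)
B = np.random.randn(3, 5)
Y = np.moveaxis((B @ unfold(X, 1)).reshape(3, 4, 6), 0, 1)   # the mode product X x_2 B
assert Y.shape == (4, 3, 6)
assert np.allclose(unfold(Y, 1), B @ unfold(X, 1))
```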

3.2. Data Modeling

In this paper, the dataset matrix $\mathbf{X} \in \mathbb{R}^{M \times N}$ is modeled in the following fashion:
$$\mathbf{X} = \mathbf{X}_0 + \mathbf{N} \qquad (1)$$
where $\mathbf{X}_0 \in \mathbb{R}^{M \times N}$ is the error-free dataset matrix, $\mathbf{N} \in \mathbb{R}^{M \times N}$ is the error matrix, $M$ is the number of instances and $N$ is the number of features. The matrix $\mathbf{N}$ represents generalized perturbations added to $\mathbf{X}_0$, for instance, false data injection attacks, which are commonly used to fool machine learning classifiers. The $m$-th instance and the $n$-th feature are given by $\mathbf{X}_{m,:}$ for $m = 1, \ldots, M$ and $\mathbf{X}_{:,n}$ for $n = 1, \ldots, N$, respectively. The class label vector is denoted by $\mathbf{y} = [y_1, \ldots, y_M]^T \in \mathbb{R}^M$, where $y_m$ indicates whether the $m$-th instance $\mathbf{X}_{m,:}$ corresponds to legitimate traffic or to a DDoS attack.
Furthermore, we can rewrite the dataset matrix $\mathbf{X}$ in (1) in tensor form. Initially, each instance $\mathbf{X}_{m,:} \in \mathbb{R}^N$ for $m = 1, \ldots, M$ is reshaped as a tensor with dimensions $N_1 \times \cdots \times N_R$, such that $N = \prod_{r=1}^{R} N_r$. Then, the $M$ tensors are stacked along the $(R+1)$-th dimension, generating the dataset $\mathcal{X} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M}$ denoted as:
$$\mathcal{X} = \mathcal{X}_0 + \mathcal{N} \qquad (2)$$
where $\mathcal{X}_0 \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M}$ is the error-free dataset tensor and $\mathcal{N} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M}$ is the error tensor. The $r$-th mode unfolding matrix of $\mathcal{X}$ is given by $[\mathcal{X}]_{(r)} \in \mathbb{R}^{N_r \times \prod_{j \neq r} N_j M}$. Note that the dataset matrix $\mathbf{X} \in \mathbb{R}^{M \times N}$ in (1) corresponds to the $(R+1)$-th unfolding matrix $[\mathcal{X}]_{(R+1)} \in \mathbb{R}^{M \times \prod_{r=1}^{R} N_r}$.
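As a concrete illustration of this folding, the numpy sketch below reshapes a dataset matrix into a third-order tensor with the sizes later used in the simulations ($N_1 = N_2 = 8$, $R = 2$); the data are random placeholders and the variable names are illustrative.

```python
import numpy as np

M, N1, N2 = 40_000, 8, 8                  # N = N1 * N2 = 64 features, as in Section 6
X_mat = np.random.rand(M, N1 * N2)        # placeholder for the pre-processed dataset matrix

# fold each instance into an N1 x N2 slice and stack the M slices along the (R+1)-th mode
X_tens = X_mat.reshape(M, N1, N2).transpose(1, 2, 0)         # shape (N1, N2, M)

# the (R+1)-th mode unfolding recovers the original dataset matrix
assert np.allclose(X_tens.transpose(2, 0, 1).reshape(M, -1), X_mat)
```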

4. Theoretical Background

This section presents the theoretical background and is divided into two subsections. First, Section 4.1 introduces the taxonomy of DDoS attacks. Next, Section 4.2 details the DDoS attack datasets adopted in this paper.

4.1. Taxonomy of DDoS Attacks

Distributed Denial of Service attacks are one of the most important security threats nowadays. In a DDoS attack, a large volume of traffic is sent through the network, exhausting the network resources, as well as the overall bandwidth and individual node resources [15]. Consequently, the victim is forced to slow down, crash or shut down due to multiple connection requests during a period of time [16].
Since networks and servers have become more robust in identifying network-layer DDoS attacks, hackers have responded by moving up the OSI stack to higher layers [17]. For instance, several DDoS attacks exploit vulnerabilities present in the application layer, reproducing the behavior of legitimate customers and, consequently, going undetected by most conventional IDSs [18]. In this sense, several studies in the literature currently classify DDoS attacks broadly into three types: application-layer attacks, resource exhaustion attacks, and volumetric attacks [19], which are described as follows:
  • Application-Layer Attack: In this type of attack, vulnerabilities present in the application are exploited by an attacker, making it inaccessible to legitimate users [19]. Instead of depleting the network bandwidth, application-layer DDoS attacks exhaust server resources, such as CPU, database, socket connections or memory. In addition, such attacks present some subtleties that make them harder to detect and mitigate: they are performed through legitimate HTTP packets, with a low traffic volume, and closely resemble flash crowds [17]. HTTP and Domain Name System (DNS)-based DDoS attacks are examples of application-layer attacks.
  • Resource Exhaustion Attack: In this category, hardware resources of servers, such as memory, CPU, and storage, are depleted. Consequently, they become unavailable for legitimate accesses. Resource exhaustion attacks are also known as protocol-based attacks, since vulnerabilities in protocols are exploited. For example, in an SYN flood attack, a hacker exploits the TCP three-way handshake process. After receiving a high volume of SYN packets, the targeted server responds with SYN/ACK packets and leaves open ports to receive the final ACK packets, which never arrive. This process continues until all ports of the server are unavailable.
  • Volumetric Attack: In this type of attack, the bandwidth of the target system is exhausted by a massive amount of traffic. Since such attacks are launched by using amplification and reflection techniques, they are considered as the simplest DDoS attacks to be employed [18]. UDP flood and Internet Control Message Protocol (ICMP) flood can be cited as volumetric attacks.

4.2. CICDDoS2019 and CICIDS2017 Datasets

In this paper, we consider two datasets provided by the Canadian Institute for Cybersecurity (CIC) for network intrusion detection models, namely, CICDDoS2019 [20] and CICIDS2017 [21]. CICDDoS2019 is a novel benchmark dataset composed of several network traffic features, with millions of labeled legitimate and DDoS attack instances [22]. The dataset was generated on two distinct days. On 12 January 2019, the training set was captured, containing 12 different types of DDoS attacks, namely, DNS, WebDDoS, LDAP, MSSQL, NetBIOS, NTP, SNMP, SSDP, UDP, SYN, TFTP and UDP-Lag based attacks. Then, on 11 March 2019, the testing set was generated, with seven DDoS attack types, including LDAP, MSSQL, NetBIOS, UDP, SYN and UDP-Lag based attacks, plus Port Scan. All DDoS attacks were separated into different PCAP files according to their types.
Similarly to CICDDoS2019, CICIDS2017 is a completely labeled dataset that contains legitimate traffic and the most up-to-date common network attacks. The dataset was generated over five days, from Monday, 3 July 2017, to Friday, 7 July 2017, and is publicly available as PCAP and CSV files. On Monday, only legitimate traffic was captured, whereas different types of network attacks were captured on the following days. The malicious activities include common updated attacks, for example, DDoS, Denial of Service (DoS), Brute Force, Cross-Site Scripting (XSS), SQL Injection, Infiltration, Port Scan and Botnet [23]. In particular, DDoS attacks were generated on 7 July 2017. Since we focus on DDoS attack detection, only the legitimate and DDoS attack instances present in the traces of 3 July 2017 and 7 July 2017, respectively, are used in this research.
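A minimal pandas sketch of how such a subset can be assembled from the public CSV traces is given below; the file names, the label column and the label values are assumptions based on the CIC releases and may differ between dataset versions.

```python
import pandas as pd

# Hypothetical file names modelled on the public CICIDS2017 CSV release.
benign = pd.read_csv("Monday-WorkingHours.pcap_ISCX.csv")
ddos = pd.read_csv("Friday-WorkingHours-Afternoon-DDos.pcap_ISCX.csv")

# Column headers in the CIC CSVs sometimes carry leading spaces, hence the strip().
for df in (benign, ddos):
    df.columns = df.columns.str.strip()

# Keep only legitimate traffic from the Monday trace and DDoS traffic from the Friday trace.
benign = benign[benign["Label"] == "BENIGN"]
ddos = ddos[ddos["Label"] == "DDoS"]

data = pd.concat([benign, ddos], ignore_index=True)
```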

5. Proposed Average Common Feature Extraction Technique for DDoS Attack Detection in Cyber–Physical Systems

This section presents the proposed average common feature extraction scheme for DDoS attack detection in CPSs. First, we introduce the concept of common and individual features of a given dataset. This concept is well known in image classification problems, in which data share some common variables while simultaneously exhibiting their own features [24]. Let us assume a tensor $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times S}$ composed of the frontal slices $\mathbf{Y}_{:,:,s}$ for $s = 1, \ldots, S$. Each frontal slice $\mathbf{Y}_{:,:,s}$ is equivalent to a combination of the three base colors, namely, green, red and blue, represented by the matrices $\mathbf{B}_G \in \mathbb{R}^{I_1 \times I_2}$, $\mathbf{B}_R \in \mathbb{R}^{I_1 \times I_2}$ and $\mathbf{B}_B \in \mathbb{R}^{I_1 \times I_2}$. Usually, the base colors are obtained through tensor decompositions, such as the LL1 decomposition with non-negativity constraint, such that $\mathcal{Y} = (\mathbf{B}_G \times_3 \mathbf{c}_G) + (\mathbf{B}_R \times_3 \mathbf{c}_R) + (\mathbf{B}_B \times_3 \mathbf{c}_B)$, where $\mathbf{c}_G \in \mathbb{R}^S$, $\mathbf{c}_R \in \mathbb{R}^S$ and $\mathbf{c}_B \in \mathbb{R}^S$ contain the intensity values of the green, red and blue colors, respectively [10]. Note that $\mathcal{Y}$ has rank three, which corresponds to the number of base colors. Alternatively, the base colors can be stacked along the third dimension, generating $\mathcal{B} \in \mathbb{R}^{I_1 \times I_2 \times 3}$, whereas the vectors $\mathbf{c}_G$, $\mathbf{c}_R$ and $\mathbf{c}_B$ can be grouped into the matrix $\mathbf{C} \in \mathbb{R}^{S \times 3}$. The tensor $\mathcal{B}$, known as the common feature tensor, can also be represented as $\tilde{\mathcal{Y}}$, as a reference to the original dataset $\mathcal{Y}$.
After extracting the common features, only the more discriminative individual information of each instance is used during the training phase, which improves the performance of the machine learning classifier [10]. Due to its considerable results in image classification, the concept of common and individual feature extraction thus shows outstanding potential for detecting network intrusions when large datasets are used for ML classifier training. Hence, a similar procedure is adopted in this paper, such that the average value of the common features among dataset instances is filtered out from the data; as a consequence, the ML classifier benefits from the resulting filtered dataset. To improve readability, the mathematical symbols used throughout this section are summarized in Table 2.
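To make the base-color construction above concrete, the sketch below builds a toy tensor $\mathcal{Y}$ from three common base-color matrices and per-slice intensity vectors, i.e., the forward model behind the LL1-type representation; recovering $\mathcal{B}$ and $\mathbf{C}$ from $\mathcal{Y}$ (the decomposition itself) is not shown, and all sizes are arbitrary.

```python
import numpy as np

I1, I2, S = 32, 32, 10
B = np.random.rand(I1, I2, 3)        # common feature tensor: green, red and blue base images
C = np.random.rand(S, 3)             # per-slice color intensities (c_G, c_R, c_B)

# Y[:, :, s] = c_G[s] * B_G + c_R[s] * B_R + c_B[s] * B_B
Y = np.einsum('ijk,sk->ijs', B, C)   # rank-3 structure along the color mode
```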
Before applying the feature extraction technique on the dataset tensor, three steps are necessary, namely, dataset splitting, dataset pre-processing and multilinear rank estimation, which are described as follows.
  • Dataset Splitting: First, the DDoS attack dataset $\mathcal{X} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M}$ is split into the training and testing tensors $\mathcal{X}_{\mathrm{tr}} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M_{\mathrm{tr}}}$ and $\mathcal{X}_{\mathrm{te}} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M_{\mathrm{te}}}$, where $M_{\mathrm{tr}}$ and $M_{\mathrm{te}}$ are the numbers of training and testing instances, respectively, with $M = M_{\mathrm{tr}} + M_{\mathrm{te}}$.
  • Dataset Pre-Processing: The training and testing datasets, $\mathcal{X}_{\mathrm{tr}}$ and $\mathcal{X}_{\mathrm{te}}$, are submitted to a pre-processing step, which includes data cleansing, feature scaling and label encoding (see the sketch after this list). Initially, rows containing missing values (NaN) or infinity values (Inf) are removed from the dataset. Next, all features are normalized to the range $[0, 1]$, so that features with a higher order of magnitude do not dominate those with smaller values. Then, since we are dealing with binary classification, legitimate and DDoS attack instances are labeled as 0 and 1, respectively.
  • Multilinear Rank Estimation: Finally, we estimate the multilinear ranks $(d_1^{\mathrm{tr}}, \ldots, d_{R+1}^{\mathrm{tr}})$ and $(d_1^{\mathrm{te}}, \ldots, d_{R+1}^{\mathrm{te}})$ corresponding to the tensors $\mathcal{X}_{\mathrm{tr}}$ and $\mathcal{X}_{\mathrm{te}}$, respectively. The parameters $d_r^{\mathrm{tr}}$ and $d_r^{\mathrm{te}}$ for $r = 1, \ldots, R+1$ are estimated using multidimensional model order selection (MOS) schemes, such as the R-D Minimum Description Length [25].
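A minimal sketch of the pre-processing step is shown below; it follows the cleansing, scaling and encoding described above, but the column and label names are placeholders, and in practice the scaler should be fit on the training split only and then applied to the testing split.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def preprocess(df: pd.DataFrame, label_col="Label", benign_label="BENIGN"):
    """Data cleansing, [0, 1] min-max feature scaling and binary label encoding."""
    # data cleansing: drop rows containing NaN or +/- infinity values
    df = df.replace([np.inf, -np.inf], np.nan).dropna()
    # label encoding: legitimate traffic -> 0, DDoS attack -> 1
    y = (df[label_col] != benign_label).astype(int).to_numpy()
    # feature scaling to [0, 1], so that large-magnitude features do not dominate
    X = df.drop(columns=[label_col]).select_dtypes(include=[np.number])
    X = MinMaxScaler().fit_transform(X)
    return X, y
```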
After the above-mentioned steps, $\mathcal{X}_{\mathrm{tr}}$ is forwarded to the proposed average common feature extraction technique for DDoS attack detection, such that the training phase is initialized. Next, when the training process is finished, $\mathcal{X}_{\mathrm{te}}$ is sent to the trained IDS for classification. For simplicity, from this point on, $\mathcal{X} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M}$ may refer to either the training or the testing dataset tensor. The steps of the proposed scheme, shown in Figure 1, are discussed as follows.
  • Step 1: Computing the HOSVD of $\mathcal{X}$.
    In Step 1 of Figure 1, we compute the Higher-Order Singular Value Decomposition (HOSVD) of the dataset tensor $\mathcal{X} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M}$. Here, we intend to obtain the core tensor $\mathcal{G} \in \mathbb{R}^{d_1 \times \cdots \times d_{R+1}}$, as well as the first $R$ factor matrices $\mathbf{A}_r \in \mathbb{R}^{N_r \times d_r}$ for $r = 1, \ldots, R$, where $(d_1, \ldots, d_{R+1})$ is the multilinear rank of $\mathcal{X}$. Such tensors are used in Step 2 to compute the common feature tensor $\tilde{\mathcal{X}} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times d_{R+1}}$.
    The HOSVD of $\mathcal{X}$ is given by:
    $$\mathcal{X} = \mathcal{G} \times_1 \mathbf{A}_1 \cdots \times_R \mathbf{A}_R \times_{R+1} \mathbf{A}_{R+1} \qquad (3)$$
    Usually, the number of common features among the dataset instances is obtained empirically. However, considerable performance is achieved by considering $d_{R+1}$ as an estimate of the number of common features, as shown in the simulations of Section 6. We refer to [25] for the estimation of the number of common features.
  • Step 2: Computing the common feature tensor, $\tilde{\mathcal{X}}$.
    In Step 2 of Figure 1, we compute $\tilde{\mathcal{X}} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times d_{R+1}}$, which contains the common features among the dataset instances $\mathcal{X}_{:,\ldots,:,m} \in \mathbb{R}^{N_1 \times \cdots \times N_R}$ for $m = 1, \ldots, M$. The tensor $\tilde{\mathcal{X}}$ is defined as the $r$-mode product between the core tensor $\mathcal{G}$ and the first $R$ factor matrices [26]:
    $$\tilde{\mathcal{X}} = \mathcal{G} \times_1 \mathbf{A}_1 \cdots \times_R \mathbf{A}_R \qquad (4)$$
  • Step 3: Computing the average common feature tensor, $\bar{\mathcal{X}}$.
    Next, in Step 3 of Figure 1, we compute $\bar{\mathcal{X}} \in \mathbb{R}^{N_1 \times \cdots \times N_R}$, which corresponds to $\tilde{\mathcal{X}}$ averaged along the $(R+1)$-th dimension, i.e.,
    $$\bar{\mathcal{X}} = \frac{1}{d_{R+1}} \sum_{d=1}^{d_{R+1}} \tilde{\mathcal{X}}_{:,\ldots,:,d} \qquad (5)$$
  • Step 4: Obtaining the $(R+1)$-th mode unfolding matrix, $[\mathcal{X}]_{(R+1)}$.
    Following, in Step 4 of Figure 1, we obtain the $(R+1)$-th mode unfolding matrix of $\mathcal{X}$, given by $[\mathcal{X}]_{(R+1)} \in \mathbb{R}^{M \times N}$. In general, the $r$-th unfolding matrix $[\mathcal{X}]_{(r)}$ is obtained by mapping each element $(x_1, \ldots, x_{R+1})$ of $\mathcal{X}$ to the element $(x_r, j)$ of $[\mathcal{X}]_{(r)}$ as follows:
    $$j = 1 + \sum_{\substack{k=1 \\ k \neq r}}^{R+1} (x_k - 1) J_k, \quad \text{with} \quad J_k = \prod_{\substack{m=1 \\ m \neq r}}^{k-1} N_m \qquad (6)$$
    Such a matrix is used in Step 5 to compute the weights applied to $\bar{\mathcal{X}}$ for dataset filtering.
  • Step 5: Computing the weight tensor, $\mathcal{C}$.
    In Step 5 of Figure 1, we compute the weight tensor $\mathcal{C} \in \mathbb{R}^{N_1 \times \cdots \times N_R}$, which is used for dataset filtering in Step 7. First, the covariance matrix $\mathbf{R}_{\mathbf{xx}} \in \mathbb{R}^{N \times N}$ of the $(R+1)$-th mode unfolding matrix $[\mathcal{X}]_{(R+1)} \in \mathbb{R}^{M \times N}$, as well as its eigenvalue decomposition, are obtained as follows:
    $$\mathbf{R}_{\mathbf{xx}} = \frac{1}{M} [\mathcal{X}]_{(R+1)}^{H} [\mathcal{X}]_{(R+1)} \qquad (7)$$
    $$\mathbf{R}_{\mathbf{xx}} = \mathbf{E} \boldsymbol{\Lambda} \mathbf{E}^{H} \qquad (8)$$
    where $\mathbf{E} \in \mathbb{R}^{N \times N}$ is the eigenvector matrix of $\mathbf{R}_{\mathbf{xx}}$ and $\boldsymbol{\Lambda} \in \mathbb{R}^{N \times N}$ contains the eigenvalues $\lambda_1, \ldots, \lambda_N$ of $\mathbf{R}_{\mathbf{xx}}$ on its diagonal. The eigenvalues are sorted in descending order, so that $\lambda_1$ is the largest one.
    Before subtracting the average common features from $\mathcal{X}$, each element of $\bar{\mathcal{X}}$ has to be multiplied by a positive number smaller than one. This can be done by computing the Hadamard product between $\bar{\mathcal{X}}$ and a weight tensor $\mathcal{C} \in \mathbb{R}^{N_1 \times \cdots \times N_R}$. The tensor $\mathcal{C}$ can be obtained empirically or by some adaptive technique that minimizes the errors between the expected and predicted classifications during the training phase of an ML classifier. In this paper, we adopt the following empirical approximation: all elements of $\mathcal{C}$ are set equal to the average eigenvalue $\bar{\lambda}$ of $\mathbf{R}_{\mathbf{xx}}$, i.e.,
    $$\bar{\lambda} = \frac{1}{N} \sum_{n=1}^{N} \lambda_n \qquad (9)$$
    where $\lambda_n$ for $n = 1, \ldots, N$ are the eigenvalues of $\mathbf{R}_{\mathbf{xx}}$.
  • Step 6: Obtaining the concatenated tensors, $\mathcal{C}_{\mathrm{C}}$ and $\bar{\mathcal{X}}_{\mathrm{C}}$.
    In Step 6 of Figure 1, $M$ copies of $\mathcal{C}$ are concatenated along the $(R+1)$-th dimension, generating the tensor $\mathcal{C}_{\mathrm{C}} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M}$. The same procedure is adopted for $\bar{\mathcal{X}}$ in order to obtain $\bar{\mathcal{X}}_{\mathrm{C}} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M}$. Both computations can be expressed as:
    $$\mathcal{C}_{\mathrm{C}} = [\mathcal{C} \,|\, \cdots \,|\, \mathcal{C}]_{R+1} \qquad (10)$$
    $$\bar{\mathcal{X}}_{\mathrm{C}} = [\bar{\mathcal{X}} \,|\, \cdots \,|\, \bar{\mathcal{X}}]_{R+1} \qquad (11)$$
    By doing this, we can directly compute the Hadamard product between $\mathcal{C}_{\mathrm{C}}$ and $\bar{\mathcal{X}}_{\mathrm{C}}$ in Step 7 and then subtract the result from $\mathcal{X}$ in Step 8.
  • Step 7: Applying the weights $\mathcal{C}_{\mathrm{C}}$ to the tensor $\bar{\mathcal{X}}_{\mathrm{C}}$.
    Next, in Step 7 of Figure 1, we compute the Hadamard product between $\mathcal{C}_{\mathrm{C}}$ and $\bar{\mathcal{X}}_{\mathrm{C}}$, such that the weights computed in Step 5 are applied to each element of the average common feature tensor, i.e.,
    $$\mathcal{W} = \mathcal{C}_{\mathrm{C}} \odot \bar{\mathcal{X}}_{\mathrm{C}} \qquad (12)$$
  • Step 8: Computing the filtered dataset tensor, $\mathcal{X}^{[f]}$.
    Then, in Step 8 of Figure 1, the filtered dataset tensor $\mathcal{X}^{[f]} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M}$ is computed as follows:
    $$\mathcal{X}^{[f]} = \mathcal{X} - \mathcal{W} \qquad (13)$$
  • Step 9: Obtaining the $(R+1)$-th mode unfolding matrix, $[\mathcal{X}]_{(R+1)}^{[f]}$.
    Finally, in Step 9 of Figure 1, we obtain the $(R+1)$-th mode unfolding matrix of $\mathcal{X}^{[f]}$, given by $[\mathcal{X}]_{(R+1)}^{[f]} \in \mathbb{R}^{M \times N}$. Similarly to Equation (6), each element $(x_r^{[f]}, j)$ of the $r$-th unfolding matrix $[\mathcal{X}]_{(r)}^{[f]}$ is computed as follows:
    $$j = 1 + \sum_{\substack{k=1 \\ k \neq r}}^{R+1} (x_k^{[f]} - 1) J_k, \quad \text{with} \quad J_k = \prod_{\substack{m=1 \\ m \neq r}}^{k-1} N_m \qquad (14)$$
    This matrix is forwarded to the ML classification algorithm, where the predicted class label vector $\hat{\mathbf{y}} \in \mathbb{R}^M$ is computed. Since decision tree, random forest and gradient boosting algorithms present considerable results in network intrusion detection problems, they are adopted in this paper for classifying the network traffic data [14]; a minimal sketch of this classification stage is given right after this list.
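The scikit-learn sketch below illustrates this classification stage; the variable names are placeholders for the unfolded filtered matrices of Step 9 and the encoded labels, and the classifier hyperparameters are library defaults, whereas in the paper they are tuned on a validation set.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholders for the unfolded filtered training/testing matrices produced by Step 9
# and the encoded class labels obtained during pre-processing.
X_tr_filt, y_tr = np.random.rand(1000, 64), np.random.randint(0, 2, 1000)
X_te_filt = np.random.rand(200, 64)

classifiers = {
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(n_estimators=100),
    "GB": GradientBoostingClassifier(),
}

predictions = {}
for name, clf in classifiers.items():
    clf.fit(X_tr_filt, y_tr)                    # training on the filtered dataset matrix
    predictions[name] = clf.predict(X_te_filt)  # predicted class label vector
```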
The proposed average common feature extraction technique for DDoS attack detection in CPSs is summarized in Algorithm 1.
Algorithm 1: Proposed average common feature extraction technique for DDoS attack detection.
Input:
- Dataset tensor $\mathcal{X} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M}$
- Multilinear rank $(d_1, \ldots, d_{R+1})$
Output:
- Filtered dataset matrix $[\mathcal{X}]_{(R+1)}^{[f]} \in \mathbb{R}^{M \times N}$
Algorithm Steps:
1 Compute the HOSVD of $\mathcal{X} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M}$, with multilinear rank $(d_1, \ldots, d_{R+1})$, as in (3)
2 Compute the common feature tensor $\tilde{\mathcal{X}} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times d_{R+1}}$ as in (4)
3 Compute the average common feature tensor $\bar{\mathcal{X}} \in \mathbb{R}^{N_1 \times \cdots \times N_R}$ as in (5)
4 Convert $\mathcal{X}$ into the $(R+1)$-th mode unfolding matrix $[\mathcal{X}]_{(R+1)} \in \mathbb{R}^{M \times N}$ as in (6)
5 Obtain the weight tensor $\mathcal{C} \in \mathbb{R}^{N_1 \times \cdots \times N_R}$, whose elements are computed as in (7) to (9)
6 Obtain the concatenated tensors $\mathcal{C}_{\mathrm{C}} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M}$ and $\bar{\mathcal{X}}_{\mathrm{C}} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M}$ as in (10) and (11)
7 Compute the Hadamard product between $\mathcal{C}_{\mathrm{C}}$ and $\bar{\mathcal{X}}_{\mathrm{C}}$ as in (12)
8 Compute the filtered dataset tensor $\mathcal{X}^{[f]} \in \mathbb{R}^{N_1 \times \cdots \times N_R \times M}$ as in (13)
9 Convert $\mathcal{X}^{[f]}$ into the $(R+1)$-th mode unfolding matrix $[\mathcal{X}]_{(R+1)}^{[f]} \in \mathbb{R}^{M \times N}$ as in (14)
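For illustration, the following self-contained numpy sketch implements Algorithm 1 for the three-dimensional case used in the simulations ($R + 1 = 3$). It is a simplified reading of the steps above rather than the authors' original implementation: the helper functions, the toy data and the fixed multilinear rank are assumptions, whereas in the paper the rank is estimated with the R-D MDL scheme [25].

```python
import numpy as np

def unfold(T, mode):
    """Mode-r unfolding: the r-th index varies along the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M_, mode):
    """r-mode product via the matricized identity [Y]_(r) = M [T]_(r)."""
    rest = [s for i, s in enumerate(T.shape) if i != mode]
    return np.moveaxis((M_ @ unfold(T, mode)).reshape([M_.shape[0]] + rest), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: A_r holds the d_r leading left singular vectors of [T]_(r)."""
    factors = [np.linalg.svd(unfold(T, mode), full_matrices=False)[0][:, :d]
               for mode, d in enumerate(ranks)]
    core = T
    for mode, A in enumerate(factors):
        core = mode_dot(core, A.T, mode)
    return core, factors

def average_common_feature_filter(X, ranks):
    """Sketch of Algorithm 1 for a 3-D dataset tensor X of shape (N1, N2, M)."""
    # Steps 1-2: HOSVD and common feature tensor (the instance-mode factor is discarded)
    G, (A1, A2, _) = hosvd(X, ranks)
    X_tilde = mode_dot(mode_dot(G, A1, 0), A2, 1)     # shape (N1, N2, d3)
    # Step 3: average common feature tensor
    X_bar = X_tilde.mean(axis=-1)                     # shape (N1, N2)
    # Steps 4-5: covariance of the 3rd-mode unfolding and its average eigenvalue
    X_unf = unfold(X, 2)                              # shape (M, N1*N2)
    Rxx = (X_unf.T @ X_unf) / X.shape[-1]
    lam_bar = np.linalg.eigvalsh(Rxx).mean()
    # Steps 6-8: weighting and filtering; broadcasting stands in for the M concatenated copies
    X_filt = X - lam_bar * X_bar[:, :, None]
    # Step 9: unfolded filtered dataset matrix, forwarded to the ML classifier
    return unfold(X_filt, 2)

# toy usage with placeholder data and an assumed multilinear rank
X_toy = np.random.rand(8, 8, 1000)
filtered = average_common_feature_filter(X_toy, ranks=(4, 4, 5))
```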

6. Simulation Results

This section presents the simulation results and is divided into four subsections. Section 6.1 and Section 6.2 introduce and discuss the results obtained from numerical simulations, respectively. Next, the comparison between the proposed technique and related works is shown in Section 6.3. Finally, Section 6.4 presents the computational complexity of the compared schemes.

6.1. Results

In this paper, we adopt Accuracy, Detection Rate, False Alarm Rate, Area Under the Precision–Recall Curve and Matthews Correlation Coefficient as performance evaluation metrics. Furthermore, the Relative Loss of Accuracy is adopted as the error-robustness evaluation metric. Such metrics are based on the numbers of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN). TP and TN represent the correctly predicted values, whereas FP and FN correspond to the misclassified events. These metrics are defined as follows (a short snippet computing them is given after the list):
  • Accuracy (Acc): the ratio between the correctly predicted instances and the total number of instances,
    $$\mathrm{Acc} = \frac{TP + TN}{TP + TN + FP + FN}$$
  • Detection Rate (DR): the ratio between the correctly predicted positive instances and the total number of actual positive instances,
    $$\mathrm{DR} = \frac{TP}{TP + FN}$$
  • False Alarm Rate (FAR): the ratio between the number of negative instances wrongly classified as positive and the total number of actual negative instances,
    $$\mathrm{FAR} = \frac{FP}{TN + FP}$$
  • Area Under the Precision–Recall Curve (AUPRC): reflects the trade-off between precision and recall. Precision is the ability of a classifier not to label as positive a sample that is negative, defined as $\mathrm{Prec} = TP/(TP + FP)$. On the other hand, recall corresponds to the ability of a classifier to find all positive samples, given by $\mathrm{Rec} = TP/(TP + FN)$. The AUPRC corresponds to the area under the curve obtained by plotting precision and recall on the y and x axes, respectively, for different probability thresholds. By applying the trapezoidal rule, the AUPRC can be defined as:
    $$\mathrm{AUPRC} = \frac{1}{2} \sum_{k=2}^{K} (\mathrm{Prec}_k + \mathrm{Prec}_{k-1}) \cdot (\mathrm{Rec}_k - \mathrm{Rec}_{k-1})$$
    where $\mathrm{Prec}_k$ and $\mathrm{Rec}_k$ are the precision and recall values for the $k$-th threshold, and $K$ is the total number of probability thresholds.
  • Matthews Correlation Coefficient (MCC): measures the quality of binary classifications. It ranges from $-1$ to $+1$, with higher values representing better performance. The MCC is defined as:
    $$\mathrm{MCC} = \frac{(TP \cdot TN) - (FP \cdot FN)}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}$$
  • Relative Loss of Accuracy (RLA): measures the percentage variation of the accuracy of the classifiers at the error level $EL\%$, $\mathrm{Acc}_{EL\%}$, with respect to the original case with no additional error, $\mathrm{Acc}_{0\%}$,
    $$\mathrm{RLA} = \frac{\mathrm{Acc}_{0\%} - \mathrm{Acc}_{EL\%}}{\mathrm{Acc}_{0\%}}$$
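The snippet below computes these metrics with numpy and scikit-learn from the true labels, the predicted labels and the attack scores (e.g., `predict_proba(X)[:, 1]`); the AUPRC follows the trapezoidal rule described above, and the function names are illustrative.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, matthews_corrcoef, precision_recall_curve

def detection_metrics(y_true, y_pred, y_score):
    """Acc, DR, FAR, AUPRC and MCC from true labels, predicted labels and attack scores."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    acc = (tp + tn) / (tp + tn + fp + fn)          # accuracy
    dr = tp / (tp + fn)                            # detection rate (recall)
    far = fp / (tn + fp)                           # false alarm rate
    prec, rec, _ = precision_recall_curve(y_true, y_score)
    order = np.argsort(rec)                        # sort points by increasing recall
    prec, rec = prec[order], rec[order]
    auprc = 0.5 * np.sum((prec[1:] + prec[:-1]) * (rec[1:] - rec[:-1]))  # trapezoidal rule
    mcc = matthews_corrcoef(y_true, y_pred)
    return acc, dr, far, auprc, mcc

def relative_loss_of_accuracy(acc_clean, acc_noisy):
    """RLA at a given error level with respect to the error-free accuracy."""
    return (acc_clean - acc_noisy) / acc_clean
```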
All experiments were executed on a desktop computer with processor Intel Core i7-2600 3.40 GHz and 16 GB of RAM. Data pre-processing and machine learning classifier algorithms were implemented in the Python library Scikit-Learn, whereas Python libraries Tensorly [27] and HOTTBOX [26] were used to implement tensor computations. Furthermore, the proposed approach is validated by considering subsets of the CICDDoS2019 and CICIDS2017 datasets, described in Section 4.2. A total of M = 40,000 instances were extracted from each dataset, of which 20 % correspond to DDoS attacks, as detailed in Table 3.
CICDDoS2019 is a novel dataset that contains an extensive variety of DDoS attacks and fills the gaps of the current datasets [28]. In this sense, it is used for performance evaluation throughout this subsection. The proposed scheme is compared with state-of-the-art low-rank approximation techniques, namely, the Higher-Order Orthogonal Iteration (HOOI) [12] and Higher-Order Singular Value Decomposition (HOSVD) [11]. Here, we intend to assess the performance of the proposed approach in the presence of corrupted datasets, as well as its error-robustness.
The dataset is folded as a three-dimensional tensor with size $N_1 \times N_2 \times M$, i.e., $R + 1 = 3$. For simplicity, we set $N_1 = N_2 = 8$, such that the number of features is $N = N_1 \cdot N_2 = 64$. The dataset is split into training, validation and testing sets with proportion 60:20:20. The validation set is used for hyperparameter tuning, whereas the testing set is used only once, for performance evaluation. In addition, we also evaluate the proposed technique for different training dataset sizes.
In accordance with the literature on corrupted datasets [8], we adopt the following error generation process: for each feature $\mathbf{X}_{:,n}$, $n = 1, \ldots, N$, $EL\%$ of the instances are corrupted with Gaussian noise with zero mean and standard deviation $(\max(\mathbf{X}_{:,n}) - \min(\mathbf{X}_{:,n}))/5$. We simulate a total of 100 different experiments, using decision tree (DT), random forest (RF) and gradient boosting (GB) as machine learning classifiers. The R-D MDL scheme [25] is applied to estimate the multilinear rank of the training and testing datasets.
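A sketch of this error generation process is given below, assuming the dataset is handled in its matrix form with one column per feature; the function name and the seed handling are illustrative.

```python
import numpy as np

def corrupt_features(X, error_level, seed=0):
    """Corrupt EL% of the instances of every feature with zero-mean Gaussian noise
    whose standard deviation is (max - min)/5 of that feature."""
    rng = np.random.default_rng(seed)
    X = X.copy()
    M, N = X.shape
    n_corrupt = int(round(error_level * M))
    for n in range(N):
        std = (X[:, n].max() - X[:, n].min()) / 5.0
        rows = rng.choice(M, size=n_corrupt, replace=False)
        X[rows, n] += rng.normal(0.0, std, size=n_corrupt)
    return X

# e.g., the highest corruption level evaluated in Table 4:
# X_tr_noisy = corrupt_features(X_tr, error_level=0.30)
```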
Table 4 shows the accuracy, detection rate, false alarm rate, area under the precision–recall curve and Matthews correlation coefficient as a function of the error level (EL), which ranges from 10% to 30%. For each error level and ML classifier, the best metric values are highlighted in bold. From the results shown in Table 4, it is clear that the proposed scheme outperforms its competitor methods over the entire EL range. In addition, even under high error level conditions, e.g., EL = 30%, the proposed technique presents outstanding results, with Acc = 98.94%, DR = 97.70%, FAR = 4.35%, AUPRC = 0.9937 and MCC = 0.9663 when the random forest algorithm is applied for classification. Furthermore, we observe that the AUPRC is higher than 0.98 over the entire EL range when using the RF and GB classifiers, which reflects a considerable trade-off between the true positive rate and the positive predictive value. Therefore, from Table 4, we note that the proposed technique presents considerable performance across the whole error level range.
Next, the proposed approach is compared with the HOOI and HOSVD schemes when the training size proportion (TSP) ranges from 20% to 70% of all available instances, with the error level fixed at 20%. Table 5 shows the Acc, DR, FAR, AUPRC and MCC for different values of TSP. For each training size proportion and ML classifier, we highlight in bold the best metric values obtained. It can be observed that the proposed scheme delivers significantly better results than its competitor methods over the entire TSP range, showing outstanding metric values. Note that, even with small training datasets, e.g., TSP = 20%, the proposed approach presents Acc, DR, FAR, AUPRC and MCC equal to 99.18%, 98.85%, 1.71%, 0.9976 and 0.9746, respectively, when RF is applied for classification. Therefore, our proposed approach shows considerable performance even when trained with little data.
Finally, the error-robustness evaluation results of the proposed scheme, as well as of the HOOI and HOSVD approaches, are presented. The same simulation parameters adopted in the experiments of Table 4 are considered. Figure 2 illustrates the relative loss of accuracy as a function of the error level for each compared technique and different ML classifiers. As expected, all techniques present an improved performance for lower error levels, at which the datasets are less corrupted. Furthermore, note that the proposed approach shows outstanding metric values over the entire EL range. As shown in Figure 2, the RLA is approximately zero when the error level is 10% and is lower than 12% for EL = 30%, regardless of the classifier. In this sense, the proposed approach shows considerable error-robustness when compared to the HOSVD and HOOI low-rank approximation techniques.

6.2. Discussion

In this paper, we compare the proposed technique with architectures in which the HOSVD and HOOI schemes are previously applied to the dataset tensor X R N 1 × × N R × M for denoising. The HOSVD is a generalization of the matrix Singular Value Decomposition to higher-order tensors and is widely applied for noise reduction. In this case, an ( R + 1 ) -th dimensional tensor X is decomposed into a core tensor and R + 1 factor matrices truncated to the signal subspace, which is determined by the multilinear rank ( d 1 , , d R + 1 ) . On the other hand, HOOI is a low-rank approximation method in which more accurate truncated singular matrices and core tensor are computed through higher order orthogonal iterations.
Decision tree, random forest and gradient boosting are adopted as ML classifiers. Despite their low computational cost and ease of understanding and interpretation, decision trees present high variance, i.e., completely different trees can be generated from tiny changes in the training dataset. When trained with corrupted data or small datasets, DTs can lead to overfitting. This can be seen in Table 4 and Table 5, in which DTs are outperformed by both GB and RF, especially for high error levels and small TSP. For instance, in Table 4, for EL = 25%, the proposed technique presents MCC values for the DT, GB and RF classifiers equal to 62.68%, 97.68% and 97.81%, respectively. As expected, all compared techniques deliver better performance when RF and GB are used for classification, since both algorithms reduce the variance existing in DTs and prevent overfitting. Random forest and gradient boosting both combine multiple DTs, but with different tree-building processes: while the former builds each tree independently and combines the results at the end, the latter builds one tree at a time, combining results during the process.
Furthermore, from Table 4, it can be observed that, for all compared schemes, RF outperforms GB over almost the entire EL range. Since gradient boosting combines results along the process, it is more sensitive to data corruption, resulting in overfitting. For example, for EL = 30%, the values of DR for random forest and gradient boosting when considering our proposed approach are, respectively, 97.70% and 96.02%. This effect is more evident for HOSVD, which presents detection rates of 92.67% and 85.27% for RF and GB, respectively. In addition, from the results shown in Table 4, note that the proposed scheme outperforms both the HOSVD and HOOI techniques over the entire EL range. Such results confirm that ML classifiers benefit from the more discriminative individual information resulting from the average common feature extraction technique applied to the training dataset. Note that HOSVD is also outperformed by HOOI for high error levels, confirming that the latter scheme leads to better results due to the more accurate core tensor and singular matrices generated through alternating least squares decomposition methods. In short, the lower the error level, the better the performance of the compared techniques, since the machine learning classifiers deal with less corrupted data and, consequently, deliver more reliable and accurate results.
Additionally, from Table 5, we observe that, in general, the compared techniques perform better as the training size proportion increases. Small training datasets can lead to a lack of representative instances and, consequently, to overfitting. In this case, the ML algorithm is excessively adjusted to the training data, performing poorly when predicting new instances. As mentioned above, this effect is more evident in decision trees, which are more prone to overfitting. For instance, when considering the smallest training dataset size, i.e., TSP = 20%, the AUPRC values for the proposed scheme, HOSVD and HOOI when DT is applied for classification are, respectively, 0.7804, 0.7437 and 0.6099. On the other hand, the proposed approach is very robust against small training dataset sizes when gradient boosting and random forest are used for classification. For example, still considering the worst case of TSP = 20%, the AUPRC for the proposed scheme when GB and RF are applied is, respectively, 0.9953 and 0.9976. However, both HOSVD and HOOI present a performance reduction in this case, showing AUPRC of 0.8168 and 0.9336 for gradient boosting, and 0.9845 and 0.9869 for random forest, respectively.
Finally, the error-robustness of all compared approaches is assessed in Figure 2, which illustrates the relative loss of accuracy. From Figure 2c, it can be seen that all schemes are more robust against errors with the random forest classifier than with the DT and GB algorithms. For instance, considering the worst case of EL = 30%, the proposed technique presents an RLA of 0.98% when RF is applied for classification. On the other hand, for GB and DT, our approach shows a relative loss of accuracy of 2.29% and 12.52%, respectively. Therefore, once again, we observe that random forest outperforms the DT and GB algorithms in DDoS attack detection.

6.3. Performance Comparison with Related Works

This subsection presents the performance comparison between the proposed scheme and related works under error-free conditions. Furthermore, since CICIDS2017 has been extensively applied for IDS validation in the literature, we also include the performance evaluation on this dataset. Consequently, the comparison with related studies is enriched due to the higher number of competing schemes.
Since the related papers assume error-free datasets, the proposed approach is considered with an error level of 0% for comparison. Table 6 shows the adopted dataset, the ML classification algorithm and the values of accuracy, detection rate and false alarm rate obtained by the proposed approach and by the related papers. The metrics represented as "Not Available" (N/A) were not reported in the corresponding paper. Furthermore, since CICDDoS2019 is a newly released dataset, to the best of our knowledge only Elsayed et al. [28] have applied it for performance evaluation. The authors proposed a deep learning-based intrusion detection system in which a recurrent neural network is combined with an autoencoder. Note that, considering the CICDDoS2019 dataset, the proposed technique outperforms the competing scheme when the GB and RF algorithms are applied for classification. Our approach presents Acc = 99.87% and DR = 99.86% for gradient boosting, whereas an accuracy and detection rate of 99.55% and 98.96% were obtained when using the random forest classifier.
On the other hand, as mentioned above, CICIDS2017 has been applied by several authors for IDS performance evaluation, as can be seen in Table 6. Although it is not the best IDS among the compared ones, the proposed scheme still presents considerable performance, with Acc = 99.95%, DR = 99.95% and FAR = 0.05% for the gradient boosting algorithm, outperforming almost all competitor schemes. It is worth mentioning the performance shown by LUCID, proposed by Doriguzzi-Corin et al. [29], with Acc, DR and FAR of 99.67%, 99.94% and 0.59%, respectively. The authors presented a practical, lightweight CNN-based DDoS detection architecture with low processing overhead and attack detection time. In addition, the 1D-CNN-LSTM model proposed by Roopak et al. [30] showed a considerable detection rate of 99.10%. Note that both papers propose deep learning-based schemes, which usually deliver better performance than traditional machine learning-based solutions, such as the DT, RF and GB algorithms.

6.4. Computational Complexity

This section discusses the computational complexity of the proposed approach, described in Algorithm 1. For simplicity, the complexity is analyzed for a three-dimensional dataset tensor $\mathcal{X} \in \mathbb{R}^{N_1 \times N_2 \times M}$. We only consider the most costly calculations, represented by Steps 1 to 3 of Algorithm 1, as a function of the most important variables, namely, $N_1$, $N_2$, $M$ and $(d_1, d_2, d_3)$. Consequently, the computational cost related to the folding and unfolding of matrices and tensors, performed in Steps 4 and 9 of Algorithm 1, is not considered, since these operations only change the data representation. Similarly, the time complexity of Steps 5–8 of Algorithm 1 is not analyzed, since only low-cost computations are performed in these steps.
Step 1 of Algorithm 1 corresponds to the HOSVD of the dataset tensor $\mathcal{X}$ and presents computational complexity given by [33]:
$$\mathcal{O}[\mathrm{HOSVD}] = \mathcal{O}\!\left[ \sum_{j=1}^{3} N_j \prod_{k=1}^{3} N_k + \sum_{j=1}^{3} \left( \prod_{k=1}^{j} d_k \right) \left( \prod_{k=j}^{3} N_k \right) \right]$$
where, for simplicity of notation, $N_3$ corresponds to the number of dataset instances $M$.
Next, in Steps 2 and 3 of Algorithm 1, we compute the common feature tensor as well as its average along the third dimension. These steps require two tensor-times-matrix products plus the average calculation, presenting complexity given by:
$$\mathcal{O}[\mathrm{CF}] = \mathcal{O}[N_1^2 N_2 d_3] + \mathcal{O}[N_1 N_2^2 d_3] + \mathcal{O}[N_1 N_2 d_3]$$
Finally, the overall computational complexity of Algorithm 1 corresponds to the sum of the above-mentioned complexities,
$$\mathcal{O}[\mathrm{Final}] = \mathcal{O}[\mathrm{HOSVD}] + \mathcal{O}[\mathrm{CF}]$$
In Table 7, we summarize the computational complexities of the proposed approach as well as of the HOOI and HOSVD techniques. For HOOI, $I$ corresponds to the number of iterations and $d = \max(d_1, d_2, d_3)$. Note that the proposed scheme comes with an increase in computational complexity, which reflects the trade-off between more accurate DDoS attack detection and time cost.
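As a rough worked example of these expressions, the snippet below evaluates the operation counts for the simulation sizes of Section 6.1 ($N_1 = N_2 = 8$ and $M_{\mathrm{tr}} = 24{,}000$ training instances, i.e., 60% of the 40,000 samples) and a hypothetical multilinear rank; the rank values are assumptions, since in the paper they are estimated with R-D MDL.

```python
import math

N = [8, 8, 24_000]   # N1, N2 and the number of training instances (60% of 40,000)
d = [4, 4, 5]        # hypothetical multilinear rank, assumed for illustration only

svd_term = sum(N[j] * math.prod(N) for j in range(3))
core_term = sum(math.prod(d[:j + 1]) * math.prod(N[j:]) for j in range(3))
cf_term = N[0] ** 2 * N[1] * d[2] + N[0] * N[1] ** 2 * d[2] + N[0] * N[1] * d[2]

print(f"O[HOSVD] ~ {svd_term + core_term:.2e} operations")
print(f"O[CF]    ~ {cf_term:.2e} operations")
print(f"O[Final] ~ {svd_term + core_term + cf_term:.2e} operations")
```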

7. Conclusions

In this paper, we propose a novel average common feature extraction technique applied to DDoS attack detection. Initially, the proposed scheme filters out, from the dataset, the average value of the common features among instances by applying the classic Higher-Order Singular Value Decomposition. Then, the filtered dataset is sent to machine learning algorithms, where the data are classified as benign traffic or DDoS attack.
Extensive numerical simulations are performed on the CICDDoS2019 and CICIDS2017 benchmark datasets, with decision tree, random forest and gradient boosting used as ML classifiers. Further, accuracy, detection rate, false alarm rate, area under the precision–recall curve, Matthews correlation coefficient and relative loss of accuracy are adopted as evaluation metrics. According to the obtained results, the proposed scheme outperforms the traditional HOSVD and HOOI techniques, presenting higher error-robustness. For instance, considering a dataset corruption level of 30%, the proposed scheme shows values of Acc, DR, FAR, AUPRC and MCC of 98.94%, 97.70%, 4.35%, 0.9937 and 0.9663, respectively, when the random forest algorithm is used for classification. Under the same conditions, the traditional HOOI technique shows Acc, DR, FAR, AUPRC and MCC equal to 97.65%, 94.19%, 11.52%, 0.9906 and 0.9250, respectively. In addition, we observe that our proposed scheme presents high robustness against small training datasets, showing only a slight loss of performance across the whole evaluated TSP range. For example, when the training dataset size is only 20% of all available samples, the proposed approach shows Acc, DR, FAR, AUPRC and MCC equal to 99.18%, 98.85%, 1.71%, 0.9976 and 0.9746, respectively, for the random forest classifier. On the other hand, considering the same TSP, the well-known HOSVD scheme presents values of Acc, DR, FAR, AUPRC and MCC of 97.55%, 95.20%, 8.71%, 0.9845 and 0.9227, respectively. However, an important drawback of our proposed scheme is its higher computational complexity, which reflects the trade-off between more accurate DDoS attack detection and time cost.
Another considerable finding corresponds to the performance of the evaluated ML classification algorithms for DDoS attack detection. According to simulations, decision trees are more prone to overfitting when data are highly corrupted or small datasets are used for training. For example, for a data corruption level of 25 % , the proposed technique presents a detection rate of 80.23 % when DTs are used for classification, whereas 98.57 % and 98.66 % are obtained with GB and RF, respectively. Similarly, for a training dataset size proportion of 30 % , our approach obtained accuracies of 98.55 % and 98.95 % with GB and RF algorithms, while Acc = 85.95 % when decision tree is applied. Additionally, it is observed that the random forest classifier presents higher error-robustness when compared to gradient boosting. For instance, considering a data corruption level of 20 % , our proposed scheme shows a relative loss of accuracy of 0.61 % when RF is applied for classification, whereas 2.09 % is obtained for GB. Therefore, it is shown that gradient boosting is more sensitive to data corruption when compared to random forest, since the former scheme builds one tree at a time and combines results along the process, whereas the latter builds each tree independently, combining results at the end.
In the future, we intend to apply the proposed technique by using alternative machine learning algorithms, especially deep learning-based approaches, such as convolutional neural networks. Furthermore, we shall verify the performance of the proposed scheme for online DDoS attack detection.

Author Contributions

Conceptualization, J.P.A.M., J.P.C.L.d.C. and R.T.d.S.J.; methodology, J.P.C.L.d.C. and E.P.d.F.; software, J.P.A.M.; validation, E.J. and E.P.d.F.; formal analysis, J.P.A.M.; investigation, J.P.A.M. and J.P.C.L.d.C.; resources, R.T.d.S.J.; writing—original draft preparation, J.P.A.M.; writing—review and editing, J.P.C.L.d.C. and E.P.d.F.; supervision, J.P.C.L.d.C.; project administration, J.P.C.L.d.C.; funding acquisition, R.T.d.S.J. All authors have read and agreed to the published version of the manuscript.

Funding

Publication fees were honored by the University of Brasilia Post-Graduate Program on Electrical Engineering (PPGEE/UnB), with resources from CAPES-Brazilian Higher Education Personnel Improvement Coordination (PROAP). This work was supported in part by CNPq-Brazilian National Research Council (Grants 303343/2017-6 PQ-2, 312180/2019-5 PQ-2, BRICS2017-591 LargEWiN, and 465741/2014-2 INCT on Cybersecurity), in part by CAPES- Brazilian Higher Education Personnel Improvement Coordination (Grants PROAP PPGEE/UnB, 23038.007604/2014-69 FORTE, and 88887.144009/2017-00 PROBRAL), in part by FAP-DF-Brazilian Federal District Research Support Foundation (Grant 0193.001366/2016 UIoT, and Grant 0193.001365/2016 SSDDC), in part by the Brazilian Ministry of the Economy (Grant 005/2016 DIPLA, and Grant 083/2016 ENAP), in part by the Institutional Security Office of the Presidency of Brazil (Grant ABIN 002/2017), in part by the Administrative Council for Economic Defense (Grant CADE 08700.000047/2019-14), and in part by the General Attorney of the Union (Grant AGU 697.935/2019).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Acc: Accuracy
AUPRC: Area Under the Precision–Recall Curve
CIC: Canadian Institute for Cybersecurity
CNN: Convolutional Neural Network
CPS: Cyber-Physical System
CPU: Central Processing Unit
DDoS: Distributed Denial of Service
DoS: Denial of Service
DNS: Domain Name System
DR: Detection Rate
DT: Decision Tree
FAR: False Alarm Rate
GB: Gradient Boosting
HOOI: Higher Order Orthogonal Iteration
HOSVD: Higher Order Singular Value Decomposition
HTTP: HyperText Transfer Protocol
ICMP: Internet Control Message Protocol
IDS: Intrusion Detection System
IoT: Internet of Things
IP: Internet Protocol
LDAP: Lightweight Directory Access Protocol
LSTM: Long Short Term Memory
MCC: Matthews Correlation Coefficient
MDL: Minimum Description Length
ML: Machine Learning
MOS: Model Order Selection
MSSQL: Microsoft Structured Query Language
NaN: Not a Number
NetBIOS: Network Basic Input/Output System
NTP: Network Time Protocol
N/A: Not Available
OSI: Open System Interconnection
R-D MDL: R-Dimensional Minimum Description Length
RF: Random Forest
RLA: Relative Loss of Accuracy
SNMP: Simple Network Management Protocol
SQL Injection: Structured Query Language Injection
SSDP: Simple Service Discovery Protocol
TCP: Transmission Control Protocol
TFTP: Trivial File Transfer Protocol
TSP: Training Size Proportion
UDP: User Datagram Protocol
XSS: Cross-Site Scripting

References

  1. Han, S.; Xie, M.; Chen, H.; Ling, Y. Intrusion detection in Cyber-Physical Systems: Techniques and challenges. IEEE Syst. J. 2014, 8, 1052–1062.
  2. Lee, E.A. CPS Foundations. In Proceedings of the 47th Design Automation Conference, Anaheim, CA, USA, 13–18 June 2010; pp. 737–742.
  3. Sadreazami, H.; Mohammadi, A.; Asif, A.; Plataniotis, K.N. Distributed-graph-based statistical approach for intrusion detection in Cyber-Physical Systems. IEEE Trans. Signal Inf. Process. Netw. 2018, 4, 137–147.
  4. Wang, H.; Zhao, H.; Zhang, J.; Ma, D.; Li, J.; Wei, J. Survey on Unmanned Aerial Vehicle networks: A Cyber Physical System perspective. IEEE Commun. Surv. Tutor. 2020, 22, 1027–1070.
  5. Vieira, T.P.B.; Tenório, D.F.; da Costa, J.P.C.L.; de Freitas, E.P.; Del Galdo, G.; de Sousa, R.T., Jr. Model order selection and eigen similarity based framework for detection and identification of network attacks. J. Netw. Comput. Appl. 2017, 90, 26–41.
  6. Wang, M.; Lu, Y.; Qin, J. A dynamic MLP-based DDoS attack detection method using feature selection and feedback. Comput. Secur. 2020, 88, 101645.
  7. Jiang, J.; Yu, Q.; Yu, M.; Li, G.; Chen, J.; Liu, K.; Liu, C.; Huang, W. ALDD: A hybrid traffic-user behavior detection method for application layer DDoS. In Proceedings of the 2018 17th IEEE International Conference on Trust, Security and Privacy in Computing and Communications/12th IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE), New York, NY, USA, 1–3 August 2018; pp. 1565–1569.
  8. Saez, J.A.; Galar, M.; Luengo, J.; Herrera, F. Tackling the problem of classification with noisy data using Multiple Classifier Systems: Analysis of the performance and robustness. Inf. Sci. 2013, 247, 1–20.
  9. Li, F.; Tang, Y. False Data Injection Attack for Cyber-Physical Systems With Resource Constraint. IEEE Trans. Cybern. 2020, 50, 729–738.
  10. Kisil, I.; Calvi, G.G.; Mandic, D.P. Tensor valued common and individual feature extraction: Multi-dimensional perspective. arXiv 2017, arXiv:1711.00487.
  11. Rajwade, A.; Rangarajan, A.; Banerjee, A. Image denoising using the Higher Order Singular Value Decomposition. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 849–862.
  12. Lathauwer, L.D.; Moor, B.D.; Vandewalle, J. On the best rank-1 and rank-(R1,R2,…,RN) approximation of higher-order tensors. SIAM J. Matrix Anal. Appl. 2000, 21, 1324–1342.
  13. Hosseini, S.; Azizi, M. The hybrid technique for DDoS detection with supervised learning algorithms. Comput. Netw. 2019, 158, 35–45.
  14. Lima Filho, F.S.; Silveira, F.A.F.; Brito, A.M., Jr.; Vargas-Solar, G.; Silveira, L.F. Smart Detection: An online approach for DoS/DDoS attack detection using machine learning. Secur. Commun. Netw. 2019, 2019, 1574749.
  15. Amouri, A.; Alaparthy, V.T.; Morgera, S.D. A machine learning based intrusion detection system for mobile Internet of Things. Sensors 2020, 20, 461.
  16. Galeano-Brajones, J.; Carmona-Murillo, J.; Valenzuela-Valdés, J.F.; Luna-Valero, F. Detection and mitigation of DoS and DDoS attacks in IoT-based stateful SDN: An experimental approach. Sensors 2020, 20, 816.
  17. Praseed, A.; Thilagam, P.S. DDoS attacks at the application layer: Challenges and research perspectives for safeguarding web applications. IEEE Commun. Surv. Tutor. 2019, 21, 661–685.
  18. Vishwakarma, R.; Jain, A.K. A survey of DDoS attacking techniques and defence mechanisms in the IoT network. Telecommun. Syst. 2020, 73, 3–25.
  19. Dantas Silva, F.S.; Silva, E.; Neto, E.P.; Lemos, M.; Neto, A.J.V.; Esposito, F. A taxonomy of DDoS attack mitigation approaches featured by SDN technologies in IoT scenarios. Sensors 2020, 20, 3078.
  20. Canadian Institute for Cybersecurity. DDoS Evaluation Dataset (CICDDoS2019). 2019. Available online: https://www.unb.ca/cic/datasets/ddos-2019.html (accessed on 10 June 2020).
  21. Canadian Institute for Cybersecurity. Intrusion Detection Evaluation Dataset (CICIDS2017). 2017. Available online: https://www.unb.ca/cic/datasets/ids-2017.html (accessed on 10 June 2020).
  22. Sharafaldin, I.; Lashkari, A.H.; Hakak, S.; Ghorbani, A.A. Developing realistic Distributed Denial of Service (DDoS) attack dataset and taxonomy. In Proceedings of the 2019 International Carnahan Conference on Security Technology (ICCST), Chennai, India, 1–3 October 2019; pp. 1–8.
  23. Sharafaldin, I.; Lashkari, A.H.; Ghorbani, A.A. Toward generating a new intrusion detection dataset and intrusion traffic characterization. In Proceedings of the 4th ICISSP, Madeira, Portugal, 22–24 January 2018; pp. 108–116.
  24. Zhou, G.; Cichocki, A.; Zhang, Y.; Mandic, D.P. Group component analysis for multiblock data: Common and individual feature extraction. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 2426–2439.
  25. Da Costa, J.P.C.L.; Roemer, F.; Haardt, M.; de Sousa, R.T., Jr. Multi-dimensional model order selection. EURASIP J. Adv. Signal Process. 2011, 2011, 1–13.
  26. Kisil, I.; Calvi, G.; Cichocki, A.; Mandic, D.P. Common and individual feature extraction using tensor decompositions: A remedy for the curse of dimensionality? In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 6299–6303.
  27. Kossaifi, J.; Panagakis, Y.; Anandkumar, A.; Pantic, M. TensorLy: Tensor learning in Python. arXiv 2016, arXiv:1610.09555.
  28. Elsayed, M.S.; Le-Khac, N.A.; Dev, S.; Jurcut, A.D. DDoSNet: A deep-learning model for detecting network attacks. arXiv 2020, arXiv:2006.13981.
  29. Doriguzzi-Corin, R.; Millar, S.; Scott-Hayward, S.; Martínez-del-Rincón, J.; Siracusa, D. LUCID: A practical, lightweight deep learning solution for DDoS attack detection. IEEE Trans. Netw. Serv. Manag. 2020, 17, 876–889.
  30. Roopak, M.; Yun Tian, G.; Chambers, J. Deep learning models for cyber security in IoT networks. In Proceedings of the 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 7–9 January 2019; pp. 452–457.
  31. Lopez, A.D.; Alma, D.; Mohan, A.P.; Nair, S. Network traffic behavioral analytics for detection of DDoS attacks. SMU Data Sci. Rev. 2019, 2, 1–24. Available online: https://scholar.smu.edu/datasciencereview/vol2/iss1/14 (accessed on 12 May 2020).
  32. Aamir, M.; Zaidi, S.M.A. Clustering based semi-supervised machine learning for DDoS attack classification. J. King Saud Univ. Comput. Inf. Sci. 2019.
  33. Minster, R.; Saibaba, A.K.; Kilmer, M.E. Randomized algorithms for low-rank tensor decompositions in the Tucker format. arXiv 2019, arXiv:1905.07311.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 1. Proposed average common feature extraction technique for DDoS attack detection in Cyber–Physical Systems (CPSs). For simplicity, we depict the filtering process of a three-dimensional dataset tensor $\mathcal{X} \in \mathbb{R}^{N_1 \times N_2 \times M}$.
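For readers who wish to experiment with the filtering step depicted in Figure 1, the following minimal Python sketch illustrates one plausible reading of the pipeline: a truncated HOSVD of a toy three-dimensional tensor, reconstruction of the low multilinear-rank "common feature" component, averaging over the instance mode, and subtraction of that average from every instance. The toy dimensions, the multilinear rank (4, 4, 10) and the helper names unfold/mode_dot/hosvd are illustrative assumptions, not the authors' reference implementation; the exact weighting and rank-selection rules are the ones defined in the paper's methodology, and TensorLy [27] could equally be used in place of the NumPy helpers.

```python
import numpy as np

def unfold(T, mode):
    """Mode-r unfolding: bring `mode` to the front and flatten the remaining modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, U, mode):
    """Mode-r product of tensor T with matrix U, where U.shape[1] == T.shape[mode]."""
    return np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: factors are the leading left singular vectors of each unfolding."""
    factors = [np.linalg.svd(unfold(T, r), full_matrices=False)[0][:, :d]
               for r, d in enumerate(ranks)]
    core = T
    for r, U in enumerate(factors):
        core = mode_dot(core, U.T, r)   # core = T x_1 U1^T x_2 U2^T x_3 U3^T
    return core, factors

# Toy dataset tensor X in R^{N1 x N2 x M} (synthetic values, stand-in for the real data).
rng = np.random.default_rng(0)
N1, N2, M = 8, 8, 100
X = rng.normal(size=(N1, N2, M))

# 1) Low multilinear-rank approximation taken as the "common feature" component.
core, (U1, U2, U3) = hosvd(X, ranks=(4, 4, 10))
X_common = mode_dot(mode_dot(mode_dot(core, U1, 0), U2, 1), U3, 2)

# 2) Average common feature over the M instances; 3) filter it out of the dataset.
X_avg = X_common.mean(axis=2, keepdims=True)
X_filtered = X - X_avg              # forwarded to the DT/GB/RF classifiers
print(X_filtered.shape)             # (8, 8, 100)
```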
Figure 2. Plots of relative loss of accuracy, as a function of the error level, for the following machine learning (ML) classifiers: (a) decision tree; (b) gradient boosting; (c) random forest.
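For reference, the quantity plotted in Figure 2 follows the convention used in the noise-robustness literature (cf. [8]): the relative loss of accuracy at an error level of x% compares the accuracy obtained on corrupted data with the error-free accuracy. The expression below is the standard definition; the paper's own normalization is the one given in its methodology section.

```latex
% Relative loss of accuracy at error level x%; Acc_{0%} is the error-free accuracy.
\[
  \mathrm{RLA}_{x\%} \;=\; \frac{\mathrm{Acc}_{0\%} - \mathrm{Acc}_{x\%}}{\mathrm{Acc}_{0\%}}
\]
```

As a purely numerical illustration, a classifier whose accuracy drops from 0.99 on clean data to 0.95 on corrupted data has a relative loss of accuracy of (0.99 − 0.95)/0.99 ≈ 0.040.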
Table 1. Related works.

Works related to multilinear algebra

| Paper | Aim | Proposed Solution | Pros | Cons |
|---|---|---|---|---|
| Kisil et al. [10] | Image classification | Common and individual feature extraction technique based on LL1 tensor decomposition | Flexible; not restricted to images of the same dimensions; tensor-based solution | High computational complexity; corrupted datasets are not considered |
| Rajwade et al. [11] | Image denoising and classification | Patch-based ML technique for image denoising by applying HOSVD | Outstanding performance on grayscale and color images; tensor-based solution | Limited denoising performance |
| Lathauwer et al. [12] | Estimation of the best rank-(R1, …, RN) approximation of tensors | HOOI low-rank approximation algorithm | Outperforms HOSVD in the estimation of singular matrices and core tensor; tensor-based solution | High computational complexity |

Works related to DDoS attack detection

| Paper | Aim | Proposed Solution | Pros | Cons |
|---|---|---|---|---|
| Vieira et al. [5] | Detection and identification of network attacks, including DDoS | Framework for detecting and identifying network attacks using model order selection, eigenvalues and similarity analysis | Outstanding accuracy for timely detection and identification of TCP and UDP ports under attack | Corrupted datasets are not considered; not based on ML techniques |
| Hosseini and Azizi [13] | DDoS attack detection | Hybrid framework based on a data stream approach for detecting DDoS attacks | Computational process divided between client and proxy; early attack detection | Corrupted datasets are not considered |
| Lima Filho et al. [14] | DDoS attack detection | RF-based DDoS detection system for early identification of TCP flood, UDP flood and HTTP flood | Early identification of volumetric attacks; packet inspection is not required | Corrupted datasets are not considered |
| Wang et al. [6] | DDoS attack detection | Feature selection combined with MLP; feedback mechanism to reconstruct the IDS according to detection errors | Feedback mechanism perceives errors based on recent detection results | Global optimal features are not necessarily found; corrupted datasets are not considered |
Table 2. Mathematical symbols used throughout this paper.

| Symbol | Definition | Symbol | Definition |
|---|---|---|---|
| $\mathbf{X}$ | Dataset matrix | $\mathcal{G}$ | Core tensor |
| $\mathbf{X}_0$ | Error-free dataset matrix | $\tilde{\mathcal{X}}$ | Common feature tensor |
| $\mathbf{N}$ | Error matrix | $\bar{\mathcal{X}}$ | Average common feature tensor |
| $\mathbf{X}_{m,:}$ | m-th dataset instance | $\mathcal{C}$ | Weight tensor |
| $\mathbf{X}_{:,n}$ | n-th dataset feature | $\mathcal{X}^{[f]}$ | Filtered dataset tensor |
| $\mathbf{A}_r$ | r-th factor matrix | $[\mathcal{X}]_{(r)}$ | r-th mode unfolding matrix of $\mathcal{X}$ |
| $\mathbf{R}_{\mathbf{xx}}$ | Covariance matrix | $y_m$ | Class label of $\mathbf{X}_{m,:}$ |
| $\mathbf{E}$ | Eigenvector matrix | $M$ | Number of instances |
| $\mathbf{\Lambda}$ | Eigenvalue matrix | $M_{\mathrm{tr}}$ | Number of training instances |
| $\mathbf{y}$ | Class label vector | $M_{\mathrm{te}}$ | Number of testing instances |
| $\hat{\mathbf{y}}$ | Predicted class label vector | $N$ | Number of features |
| $\mathcal{X}$ | Dataset tensor | $N_r$ | Number of features along the r-th dimension |
| $\mathcal{X}_{\mathrm{tr}}$ | Training dataset tensor | $R+1$ | Order of $\mathcal{X}$ |
| $\mathcal{X}_{\mathrm{te}}$ | Testing dataset tensor | $(d_1^{\mathrm{tr}}, \ldots, d_{R+1}^{\mathrm{tr}})$ | Multilinear rank of $\mathcal{X}_{\mathrm{tr}}$ |
| $\mathcal{X}_0$ | Error-free dataset tensor | $(d_1^{\mathrm{te}}, \ldots, d_{R+1}^{\mathrm{te}})$ | Multilinear rank of $\mathcal{X}_{\mathrm{te}}$ |
| $\mathcal{N}$ | Error tensor | $(\lambda_1, \ldots, \lambda_N)$ | Eigenvalues of $\mathbf{R}_{\mathbf{xx}}$ |
Table 3. DDoS attack types and the corresponding number of instances for each dataset.

| Dataset | Traffic File | Traffic Type | Total |
|---|---|---|---|
| CICDDoS2019 | 12 January 2019 | Legitimate | 32,000 |
| | | DNS-based DDoS | 800 |
| | | LDAP-based DDoS | 800 |
| | | MSSQL-based DDoS | 800 |
| | | NetBIOS-based DDoS | 800 |
| | | NTP-based DDoS | 800 |
| | | SNMP-based DDoS | 800 |
| | | SSDP-based DDoS | 800 |
| | | UDP flood | 800 |
| | | TCP SYN flood | 800 |
| | | TFTP-based DDoS | 800 |
| CICIDS2017 | 3 July 2017 | Legitimate | 32,000 |
| | 7 July 2017 | DDoS LOIC | 8000 |
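For reproducibility, the per-class subsampling summarized in Table 3 could be assembled along the lines of the hedged sketch below. The CSV file names, the "Label" column and the "BENIGN" class value are assumptions about the public CICDDoS2019 release, not specifications taken from this paper; only the sampling sizes (800 instances per attack type and 32,000 legitimate instances) come from Table 3.

```python
import pandas as pd

PER_ATTACK, LEGIT_TOTAL, SEED = 800, 32000, 0
# Hypothetical file list -- adjust to the actual CICDDoS2019 CSVs on disk.
files = ["DrDoS_DNS.csv", "DrDoS_LDAP.csv", "DrDoS_MSSQL.csv"]

frames = [pd.read_csv(f) for f in files]
for df in frames:
    df.columns = [c.strip() for c in df.columns]   # tolerate padded column headers
data = pd.concat(frames, ignore_index=True)

benign = data[data["Label"] == "BENIGN"]
attacks = data[data["Label"] != "BENIGN"]

# Keep up to 32,000 legitimate flows plus up to 800 flows of each DDoS type.
subsample = pd.concat(
    [benign.sample(n=min(LEGIT_TOTAL, len(benign)), random_state=SEED)]
    + [g.sample(n=min(PER_ATTACK, len(g)), random_state=SEED)
       for _, g in attacks.groupby("Label")],
    ignore_index=True,
)
print(subsample["Label"].value_counts())
```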
Table 4. Performance evaluation for different error levels (EL). Acc = accuracy, FAR = false alarm rate, MCC = Matthews correlation coefficient, AUPRC = area under the precision–recall curve, DR = detection rate. For each metric, values are listed for the decision tree (DT), gradient boosting (GB) and random forest (RF) classifiers.

| EL | Model | Acc (DT/GB/RF) | FAR (DT/GB/RF) | MCC (DT/GB/RF) | AUPRC (DT/GB/RF) | DR (DT/GB/RF) |
|---|---|---|---|---|---|---|
| 10% | Proposed | 0.9492/0.9958/0.9959 | 0.1308/0.0172/0.0147 | 0.8407/0.9866/0.9871 | 0.8855/0.9983/0.9975 | 0.9188/0.9909/0.9919 |
| 10% | HOSVD [11] | 0.8605/0.9659/0.9839 | 0.1701/0.0949/0.0766 | 0.6311/0.8922/0.9485 | 0.7405/0.9484/0.9908 | 0.8490/0.9429/0.9611 |
| 10% | HOOI [12] | 0.9343/0.9707/0.9843 | 0.1313/0.0587/0.0722 | 0.7996/0.9098/0.9499 | 0.8542/0.9395/0.9946 | 0.9097/0.9596/0.9630 |
| 15% | Proposed | 0.9121/0.9892/0.9857 | 0.2201/0.0294/0.0681 | 0.7272/0.9666/0.9545 | 0.8076/0.9944/0.9974 | 0.8585/0.9822/0.9654 |
| 15% | HOSVD [11] | 0.8501/0.9608/0.9808 | 0.2916/0.0808/0.0925 | 0.5606/0.8820/0.9386 | 0.6855/0.9368/0.9902 | 0.7966/0.9451/0.9531 |
| 15% | HOOI [12] | 0.8666/0.9501/0.9768 | 0.2324/0.1464/0.1049 | 0.6195/0.8405/0.9257 | 0.7279/0.8491/0.9837 | 0.8331/0.9137/0.9460 |
| 20% | Proposed | 0.9039/0.9844/0.9929 | 0.2398/0.0695/0.0284 | 0.6986/0.9502/0.9774 | 0.7829/0.9937/0.9966 | 0.8496/0.9640/0.9849 |
| 20% | HOSVD [11] | 0.8023/0.9517/0.9708 | 0.5040/0.2231/0.1348 | 0.3932/0.8427/0.9063 | 0.5699/0.9665/0.9795 | 0.6867/0.8858/0.9310 |
| 20% | HOOI [12] | 0.6543/0.9538/0.9582 | 0.4227/0.1544/0.0930 | 0.2264/0.8521/0.8700 | 0.5019/0.9176/0.9617 | 0.6252/0.9130/0.9389 |
| 25% | Proposed | 0.8719/0.9927/0.9931 | 0.3125/0.0258/0.0240 | 0.6268/0.9768/0.9781 | 0.7413/0.9954/0.9966 | 0.8023/0.9857/0.9866 |
| 25% | HOSVD [11] | 0.6882/0.8981/0.9711 | 0.6180/0.1365/0.0942 | 0.1280/0.7245/0.9083 | 0.3906/0.7081/0.9722 | 0.5726/0.8850/0.9465 |
| 25% | HOOI [12] | 0.8023/0.8889/0.9816 | 0.4281/0.3198/0.0857 | 0.4198/0.6585/0.9412 | 0.5884/0.7804/0.9877 | 0.7154/0.8102/0.9562 |
| 30% | Proposed | 0.8532/0.9759/0.9894 | 0.1782/0.0655/0.0435 | 0.6179/0.9266/0.9663 | 0.7335/0.9801/0.9937 | 0.8414/0.9602/0.9770 |
| 30% | HOSVD [11] | 0.7328/0.9238/0.9701 | 0.5221/0.2647/0.1449 | 0.2554/0.7496/0.9042 | 0.4796/0.8675/0.9878 | 0.6366/0.8527/0.9267 |
| 30% | HOOI [12] | 0.7932/0.9717/0.9765 | 0.6998/0.0868/0.1152 | 0.2765/0.9102/0.9250 | 0.4818/0.9287/0.9906 | 0.6072/0.9496/0.9419 |
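The metrics reported in Tables 4–6 can be reproduced from any binary classifier's predictions. The sketch below, which uses synthetic data and a random forest purely as a stand-in for the paper's experimental pipeline (the dataset, split and hyperparameters are illustrative assumptions), shows one way to compute them with scikit-learn.

```python
# Illustrative sketch (synthetic data; not the paper's experimental pipeline):
# computes Acc, FAR, MCC, AUPRC and DR for a binary legitimate/DDoS task.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, average_precision_score,
                             confusion_matrix, matthews_corrcoef)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)          # stand-in for the filtered dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
y_hat = clf.predict(X_te)
scores = clf.predict_proba(X_te)[:, 1]

tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
acc   = accuracy_score(y_te, y_hat)
far   = fp / (fp + tn)                  # false alarm rate
dr    = tp / (tp + fn)                  # detection rate (recall on the attack class)
mcc   = matthews_corrcoef(y_te, y_hat)
auprc = average_precision_score(y_te, scores)
print(f"Acc={acc:.4f} FAR={far:.4f} MCC={mcc:.4f} AUPRC={auprc:.4f} DR={dr:.4f}")
```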
Table 5. Performance evaluation for different training size proportions (TSP), for an error level of 20%. For each metric, values are listed for the decision tree (DT), gradient boosting (GB) and random forest (RF) classifiers.

| TSP | Model | Acc (DT/GB/RF) | FAR (DT/GB/RF) | MCC (DT/GB/RF) | AUPRC (DT/GB/RF) | DR (DT/GB/RF) |
|---|---|---|---|---|---|---|
| 20% | Proposed | 0.8267/0.9890/0.9918 | 0.1938/0.0457/0.0171 | 0.5935/0.9654/0.9746 | 0.7804/0.9953/0.9976 | 0.8190/0.9760/0.9885 |
| 20% | HOSVD [11] | 0.8833/0.8868/0.9755 | 0.4785/0.2786/0.0871 | 0.6108/0.6514/0.9227 | 0.7437/0.8168/0.9845 | 0.7478/0.8249/0.9520 |
| 20% | HOOI [12] | 0.7805/0.9360/0.9740 | 0.3457/0.1151/0.0422 | 0.4296/0.8115/0.9207 | 0.6099/0.9336/0.9869 | 0.7332/0.9168/0.9679 |
| 30% | Proposed | 0.8595/0.9855/0.9895 | 0.0660/0.0660/0.0454 | 0.6336/0.8405/0.9671 | 0.7817/0.9915/0.9948 | 0.8399/0.9662/0.9764 |
| 30% | HOSVD [11] | 0.7831/0.9565/0.9703 | 0.2618/0.0856/0.1141 | 0.4664/0.8709/0.9057 | 0.6382/0.8804/0.9705 | 0.7663/0.9408/0.9387 |
| 30% | HOOI [12] | 0.7596/0.9287/0.9558 | 0.4072/0.2352/0.2182 | 0.3550/0.7704/0.8592 | 0.5534/0.9072/0.9841 | 0.6971/0.8673/0.8906 |
| 40% | Proposed | 0.8252/0.9847/0.9845 | 0.1977/0.0542/0.0494 | 0.5860/0.9518/0.9513 | 0.7743/0.9882/0.9922 | 0.8166/0.9701/0.9718 |
| 40% | HOSVD [11] | 0.7588/0.9170/0.9735 | 0.4338/0.0728/0.1194 | 0.3411/0.7769/0.9154 | 0.5416/0.9022/0.9801 | 0.6864/0.9209/0.9385 |
| 40% | HOOI [12] | 0.5839/0.8913/0.9610 | 0.6249/0.2576/0.1760 | 0.6635/0.6638/0.8747 | 0.3484/0.8263/0.9575 | 0.5054/0.8353/0.9095 |
| 50% | Proposed | 0.9026/0.9814/0.9903 | 0.2605/0.0784/0.0396 | 0.6963/0.9418/0.9696 | 0.8087/0.9663/0.9972 | 0.8417/0.9591/0.9791 |
| 50% | HOSVD [11] | 0.7211/0.9471/0.9834 | 0.4713/0.1930/0.0750 | 0.2729/0.8325/0.9479 | 0.5047/0.9435/0.9936 | 0.6493/0.8948/0.9615 |
| 50% | HOOI [12] | 0.8395/0.9319/0.9698 | 0.3490/0.0844/0.0839 | 0.5238/0.8121/0.9060 | 0.6614/0.7812/0.9758 | 0.7691/0.9258/0.9498 |
| 60% | Proposed | 0.9062/0.9815/0.9943 | 0.2103/0.0439/0.0209 | 0.7180/0.9420/0.9820 | 0.8523/0.9833/0.9972 | 0.8515/0.9587/0.9886 |
| 60% | HOSVD [11] | 0.8134/0.9623/0.9623 | 0.2432/0.0673/0.0543 | 0.5353/0.8898/0.8882 | 0.6820/0.9330/0.9781 | 0.8035/0.9571/0.9561 |
| 60% | HOOI [12] | 0.8199/0.9251/0.9558 | 0.3815/0.2714/0.2154 | 0.4862/0.7651/0.8598 | 0.6417/0.8730/0.9838 | 0.7446/0.8516/0.8918 |
| 70% | Proposed | 0.9065/0.9826/0.9937 | 0.2817/0.0427/0.0287 | 0.6928/0.9458/0.9800 | 0.8323/0.9658/0.9978 | 0.9277/0.9731/0.9853 |
| 70% | HOSVD [11] | 0.7527/0.8902/0.9578 | 0.1265/0.0490/0.0781 | 0.4885/0.7391/0.8708 | 0.6699/0.8573/0.9688 | 0.7982/0.9131/0.9443 |
| 70% | HOOI [12] | 0.7727/0.9455/0.9599 | 0.2222/0.1584/0.0594 | 0.4668/0.8259/0.8791 | 0.6421/0.9407/0.9736 | 0.7746/0.9064/0.9526 |
Table 6. Comparison between the proposed technique and related papers.

| Dataset | Paper | ML Algorithm | Acc | DR | FAR |
|---|---|---|---|---|---|
| CICDDoS2019 | Proposed scheme | DT | 0.9754 | 0.9509 | 0.0895 |
| CICDDoS2019 | Proposed scheme | GB | 0.9987 | 0.9986 | 0.0016 |
| CICDDoS2019 | Proposed scheme | RF | 0.9955 | 0.9896 | 0.0201 |
| CICDDoS2019 | Elsayed et al. [28] | RNN+AutoEncoder | 0.9900 | 0.9900 | N/A |
| CICIDS2017 | Proposed scheme | DT | 0.9994 | 0.9993 | 0.0007 |
| CICIDS2017 | Proposed scheme | GB | 0.9995 | 0.9995 | 0.0005 |
| CICIDS2017 | Proposed scheme | RF | 0.9996 | 0.9989 | 0.0022 |
| CICIDS2017 | Lopez et al. [31] | RF | 0.9900 | N/A | N/A |
| CICIDS2017 | Doriguzzi-Corin et al. [29] | LUCID | 0.9967 | 0.9994 | 0.0059 |
| CICIDS2017 | Lima Filho et al. [14] | RF | N/A | 0.8000 | 0.0020 |
| CICIDS2017 | Aamir and Ali Zaidi [32] | RF | 0.9666 | N/A | N/A |
| CICIDS2017 | Roopak et al. [30] | MLP | 0.8634 | 0.8625 | N/A |
| CICIDS2017 | Roopak et al. [30] | 1D-CNN | 0.9514 | 0.9017 | N/A |
| CICIDS2017 | Roopak et al. [30] | LSTM | 0.9624 | 0.8989 | N/A |
| CICIDS2017 | Roopak et al. [30] | 1D-CNN+LSTM | 0.9716 | 0.9910 | N/A |
Table 7. Time complexity of the proposed approach and of the HOSVD and HOOI low-rank approximation techniques.

| Algorithm | Time Complexity |
|---|---|
| Proposed technique | $\mathcal{O}\!\left[\prod_{j=1}^{3} N_j \sum_{k=1}^{3} N_k + \sum_{j=1}^{3}\left(\prod_{k=1}^{j} d_k\right)\left(\prod_{k=j}^{3} N_k\right)\right] + \mathcal{O}\!\left[N_1^2 N_2 d_3\right] + \mathcal{O}\!\left[N_1 N_2^2 d_3\right] + \mathcal{O}\!\left[N_1 N_2 d_3\right]$ |
| HOSVD [11] | $\mathcal{O}\!\left[\prod_{j=1}^{3} N_j \sum_{k=1}^{3} N_k + \sum_{j=1}^{3}\left(\prod_{k=1}^{j} d_k\right)\left(\prod_{k=j}^{3} N_k\right)\right]$ |
| HOOI [12] | $\mathcal{O}\!\left[M^3 d I\right] + \mathcal{O}\!\left[M^2 d^2 I\right] + \mathcal{O}\!\left[M^3 d\right] + \mathcal{O}\!\left[M d^3\right]$ |
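To make these expressions concrete, consider a purely illustrative substitution into the Table 7 terms; the dimensions below are toy values chosen for readability, not the experimental ones.

```latex
% Toy substitution, for illustration only (N_3 plays the role of the instance mode M).
\[
  N_1 = N_2 = 10, \quad N_3 = 1000, \quad d_1 = d_2 = 5, \quad d_3 = 50
\]
\[
  \prod_{j=1}^{3} N_j \sum_{k=1}^{3} N_k
  = (10)(10)(1000)\,(10 + 10 + 1000)
  = 1.02 \times 10^{8},
\]
\[
  N_1^2 N_2 d_3 = (10)^2(10)(50) = 5 \times 10^{4},
  \qquad
  N_1 N_2 d_3 = (10)(10)(50) = 5 \times 10^{3}.
\]
```

For these toy dimensions, the extra filtering terms of the proposed technique are several orders of magnitude smaller than the HOSVD term shared by both approaches.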
