Article

TPSQLi: Test Prioritization for SQL Injection Vulnerability Detection in Web Applications

by Guan-Yan Yang 1, Farn Wang 1,*, You-Zong Gu 1,2, Ya-Wen Teng 1, Kuo-Hui Yeh 3,4,*, Ping-Hsueh Ho 1 and Wei-Ling Wen 1

1 Department of Electrical Engineering, National Taiwan University, Taipei City 106319, Taiwan
2 CyberLink Corporation, New Taipei City 231023, Taiwan
3 Institute of Artificial Intelligence Innovation, National Yang Ming Chiao Tung University, Hsinchu City 300093, Taiwan
4 Department of Information Management, National Dong Hwa University, Hualien 974301, Taiwan
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(18), 8365; https://doi.org/10.3390/app14188365
Submission received: 1 August 2024 / Revised: 7 September 2024 / Accepted: 14 September 2024 / Published: 17 September 2024
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract: The rapid proliferation of network applications has led to a significant increase in network attacks. According to the OWASP Top 10 Projects report released in 2021, injection attacks rank among the top three vulnerabilities in software projects. This growing threat landscape has increased the complexity and workload of software testing, necessitating advanced tools to support agile development cycles. This paper introduces a novel test prioritization method for SQL injection vulnerabilities to enhance testing efficiency. By leveraging previous test outcomes, our method adjusts defense strength vectors for subsequent tests, optimizing the testing workflow and tailoring defense mechanisms to specific software needs. This approach aims to improve the effectiveness and efficiency of vulnerability detection and mitigation through a flexible framework that incorporates dynamic adjustments and considers the temporal aspects of vulnerability exposure.

1. Introduction

This section provides an overview of the paper’s background, motivation, contributions, and organization.

1.1. Background

Over the past two decades, the integration of network technology into daily life has significantly increased, marking a notable trend [1]. With the growing dependence on the web, web applications have become indispensable for modern data management and processing. This evolution allows developers to efficiently design, maintain, and secure applications by modifying existing code rather than building entire programs from scratch. This streamlined approach not only enhances the protection of valuable business assets, but also transforms the Internet into a vast repository of information. Despite these advancements, web applications remain susceptible to numerous security threats, including Structured Query Language (SQL) injection, cross-site scripting (XSS), and cross-site request forgery (CSRF) [2,3].
Exploiting these vulnerabilities enables attackers to compromise user information and damage organizational reputations by breaching security policies. The primary objective of cybersecurity is to protect networks and resources from unauthorized access, ensuring the principles of integrity, availability, and confidentiality are upheld.
Recent reports, such as the OWASP Top 10 project published in 2021 [4], indicate that injection attacks rank among the top three common vulnerabilities in software projects. SQL injection attacks, in particular, exploit inadequate input validation and poor website management, allowing attackers to inject malicious SQL statements into queries generated by web applications. This can lead to unauthorized access to backend databases and exposure to sensitive information, such as usernames, passwords, email addresses, phone numbers, and credit card details. Additionally, attackers can alter database schemas and content, exacerbating the potential damage.
Various types of SQL injection attacks have been identified [5], and numerous methods for detecting and preventing them have been proposed. However, existing solutions often have limitations and may not fully address the evolving nature of these attacks. As new attack vectors emerge, it is crucial to identify and understand current technologies to develop more effective countermeasures.

1.2. Motivation

Software testing is a crucial and costly industry activity, often accounting for over 50% of the total software development cost [6]. To address this challenge in testing SQL injection for agile software development, we propose a software testing tool for automatic testing, activated upon the tester’s submission of the software version and relevant information. This tool customizes defense functions tailored to the specific software under test, enhancing the testing process’s efficiency. Following the completion of testing, the tool adjusts the defense strength vector to optimize subsequent tests.
Upon receiving the version information, the tool first retests the cases that failed in the previous round, a design choice that reflects common tester practice. Prioritizing previously failed test cases allows testers to obtain real-time information about iterative versions promptly. After testing, adjustments to the test process are made to facilitate future tests, thus improving overall testing efficiency.
To demonstrate the impact of test prioritization on time and cost, we conducted a pre-test measuring how long different prioritization orders take to identify the first SQL injection vulnerability. For instance, in target1 (powerhr), the time difference between the TEUQSB and BUSQET prioritization orders was up to 64 s, as shown in Figure 1. Similar differences were observed for target2 (dcstcs) and target3 (healthy), with time differences exceeding 15 s, as illustrated in Figure 1. This substantial variance underscores the importance of optimizing test prioritization strategies. Each letter in these orders (e.g., BUSQET) represents a different type of SQL injection, detailed in Table 1, with further explanations provided in Section 2.3.
Additionally, we evaluated the relevant information available in some open-source tools. We found that in ZAP [7], this problem was listed in the TODO section, clearly indicating in the ZAP documentation that this area requires expansion, as shown in Figure 2. Conversely, SQLMAP [8] uses a fixed BEUSTQ order for testing, indicating that SQLMAP does not significantly incorporate test prioritization, as shown in Figure 3. This evidence highlights the need for tools to adopt more dynamic and optimized test prioritization strategies to enhance testing efficiency and effectiveness.

1.3. Contribution

In this subsection, we outline the critical contributions of our work:
  • We proposed a new algorithm to prioritize SQL Injection vulnerability test cases. This algorithm is part of a comprehensive framework (TPSQLi) that includes the design of weight functions, dynamic adjustments, and evaluation methods to ensure efficient and effective prioritization.
  • We enhanced the testing process in SQLMAP by prioritizing test cases that failed in previous rounds, thereby improving the efficiency and effectiveness of the testing cycle.
  • Our framework (TPSQLi) is designed to adapt to immediate feedback and evolving security threats, ensuring continuous adjustments to testing priorities. This adaptability significantly enhances the efficiency of security testing, particularly for regression testing, ensuring that testing remains relevant and effective in addressing the ever-changing landscape of web application security.
  • Our framework (TPSQLi) performs better than ART4SQLi [9], the current state-of-the-art (SOTA) test prioritization framework, by designing more efficient and adaptable prioritization mechanisms.

1.4. Organization

The rest of this paper is organized as follows: Section 2 introduces the preliminary knowledge essential for our study, providing the foundational background. Section 3 offers a comprehensive review of related research, establishing the context and significance of our research. Section 4 presents the TPSQLi penetration testing framework and the test prioritization algorithm unit, detailing the conceptual design and implementation processes with insights into practical applications. Section 5 discusses the evaluation results, presenting a comparative analysis with the state-of-the-art ART4SQLi [9] across 10 test cases. Finally, Section 6 concludes the paper by summarizing the main findings and their implications.

2. Preliminaries

This section will introduce the fundamental concepts of software testing, test prioritization, and SQL injection, which are essential to understanding our proposed methodology.

2.1. Software Testing

Software testing encompasses any activity aimed at evaluating an attribute or capability of a program or system to determine whether it meets its requirements. Despite its critical role in ensuring software quality and its widespread use by programmers and testers, software testing remains largely an art due to the limited understanding of fundamental software principles. The inherent complexity of software makes comprehensive testing of even moderately complex programs infeasible. Testing extends beyond debugging; it serves various purposes, including quality assurance, verification and validation, and reliability assessment. Additionally, testing can function as a generic metric. Correctness testing and reliability testing represent two primary focus areas within software testing. Ultimately, software testing involves a trade-off between budget, time, and quality, necessitating carefully balancing these factors to achieve optimal results [10,11,12].

2.2. Test Prioritization

In order to reduce software testing costs, testers can prioritize test cases to ensure that the most critical ones are executed early in the testing process. This prioritization becomes particularly important during the maintenance and evolution phases of the software development life cycle, the stages where the software is updated and changed. In these phases, regression testing is essential to confirm that recent changes have not adversely affected previous software versions and that the new version remains backward compatible. Testing typically accounts for nearly half the total software development cost, making it a time-intensive and costly endeavor [10].
As sources [13,14,15] indicated, the regression testing process can be optimized through three primary methods: Test Case Selection, Test Suite Minimization, and Test Case Prioritization.
  • Test Case Selection: This method selects test cases based on specific criteria, focusing only on the updated areas of the software.
  • Test Suite Minimization: This approach reduces the overall test suite by eliminating redundant or obsolete test cases, optimizing testing resources, and minimizing the testing effort required.
  • Test Case Prioritization: This technique involves ordering test cases based on specific attributes, ensuring that higher-priority test cases, which are those that cover critical functionalities or have a high likelihood of failure, are executed first [12,14,16,17].
Unlike Test Case Selection and Test Suite Minimization, which alter the original test suite by removing cases, Test Case Prioritization only reorders the test cases without eliminating any. This distinction is crucial because some test cases, though unnecessary for specific releases, may be valuable in future versions. As a result, prioritizing test cases instead of permanently removing them is often a safer strategy, providing a sense of security in the testing process. Therefore, Test Case Prioritization is considered a secure, reliable, and cost-effective approach to regression testing [18].

2.3. SQL Injection

SQL Injection is a prevalent network attack method [4,19]. It involves embedding malicious instructions within input strings to manipulate SQL syntax logic, aiming to unlawfully disrupt and infiltrate database servers. Such attacks can lead to the theft of confidential user information, account details, and passwords. We collected common SQL injection attack types from [19,20,21,22,23,24], detailed below:
  • Boolean-based Blind SQL Injection: This method is one of the most common and dangerous types of SQL injection. It injects conditional SQL statements that force the SQL command to always evaluate as true, such as (‘1’ = ‘1’). This technique is primarily used to bypass authentication processes, allowing unauthorized access to the database.
  • Union-based SQL Injection: In a union query attack, the attacker inserts an additional statement into the original SQL string. This injection is achieved by appending a UNION query string or a similar statement to a web form input. The result is that the database returns a dataset that combines the original query and the injected query results, thereby exposing additional data.
  • Time-based Blind SQL Injection: This technique sends SQL queries that force the database to pause for a specified duration before responding. An attacker can infer information about the database by observing these delays, exploiting timing delays to gather data without direct feedback.
  • Error-based SQL Injection: This attack depends on the error messages generated by the database server. These messages often reveal information about the database’s structure, which can be used to craft further attacks.
  • Stack-based SQL Injection: This involves injecting multiple SQL statements in a single query, separated by a delimiter such as a semicolon. The database executes these statements sequentially, allowing attackers to perform several actions with one injection.
  • Inline Queries SQL Injection: Inline queries involve embedding malicious sub-queries within the main SQL query. These sub-queries can manipulate the database operation, often leading to unauthorized data access or manipulation.
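To make the Boolean-based case concrete, the short Python sketch below is ours and is for illustration only; the table, columns, and values are hypothetical. It shows how such a payload subverts a query built by string concatenation, and how parameter binding neutralizes it.

# Hypothetical example: a Boolean-based payload turns a name lookup into a tautology
# when the query is built by string concatenation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"   # classic Boolean-based injection payload

# Vulnerable: the payload closes the string literal and appends an always-true condition.
unsafe = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(unsafe)                            # ... WHERE name = '' OR '1'='1'
print(conn.execute(unsafe).fetchall())   # returns every row despite the bogus name

# Safer: parameter binding treats the payload as plain data, so nothing matches.
print(conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall())   # []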

3. Related Work

SQL injection (SQLi) remains a significant threat to the security of web applications, prompting the development of various detection and testing methodologies [21]. In 2019, Zhang et al. proposed ART4SQLi, an adaptive random testing method based on SQLMAP that prioritizes test cases to efficiently identify SQLi vulnerabilities, reducing the number of attempts needed by over 26% [9]. Unlike static order test prioritization, which calculates the distance between payloads, our tool employs a dynamic adjustment mechanism, enhancing detection efficiency.
Furthermore, most existing research does not address test prioritization for SQL injection. Al Wahaibi et al. introduced SQIRL in 2023, which utilizes deep reinforcement learning with grey-box feedback to intelligently fuzz input fields, generating diverse payloads, discovering more vulnerabilities with fewer attempts, and achieving zero false positives [22]. However, this method does not include test prioritization. Similarly, Erdődi et al. (2021) simulated SQLi attacks using Q-learning agents within a reinforcement learning framework, modeling SQLi as a security capture-the-flag challenge, enabling agents to learn generalizable attack strategies [25]. In the same year, Kasim developed an ensemble classification-based method that detects and classifies SQLi attacks as simple, unified, or lateral, utilizing features from the OWASP dataset to achieve high detection and classification accuracy [26]. Additionally, in 2023, Ravindran et al. created a Chrome extension to detect and prevent SQLi and XSS attacks by analyzing incoming data for suspicious patterns, thus enhancing web application security [2]. In 2024, Arasteh et al. presented a machine learning-based detection method using binary Gray-Wolf optimization for feature selection, which enhances detection efficiency by focusing on the most compelling features, achieving high accuracy and precision [23].
In test prioritization, Chen et al. investigated various techniques to optimize regression testing time in 2018. Based on test distribution analysis, their predictive test prioritization (PTP) method accurately predicts the optimal prioritization technique, significantly improving fault detection and reducing testing costs [27]. Haghighatkhah et al. studied the combination of diversity-based and history-based test prioritization (DBTP and HBTP) in continuous integration environments, finding that leveraging previous failure knowledge (HBTP) is highly effective. At the same time, DBTP is beneficial during early stages or when combined with HBTP [28]. Alptekin et al. introduced a method to prioritize security test executions based on web page similarities, hypothesizing that similar pages have similar vulnerabilities. This approach achieved high accuracy in predicting vulnerabilities, speeding up vulnerability assessments, and improving testing efficiency [29]. In 2023, Medeiros et al. proposed a clustering-based approach to categorize and prioritize code units based on security trustworthiness models. This method helps developers improve code security early in development by identifying code units prone to vulnerabilities, reducing potential vulnerabilities and associated costs [30].
These studies collectively advance the fields of SQL injection detection and test prioritization, providing robust methodologies to enhance web application security and optimize testing processes. However, to our knowledge, only ART4SQLi uses test prioritization to boost SQL injection testing.

4. Methodology and Implementation

The purpose of this section is to outline the methodology and steps undertaken in our research on test prioritization for SQL injection vulnerabilities. The first subsection briefly introduces our penetration testing process and the associated framework. The subsequent subsections detail the test prioritization algorithm. Finally, we introduce the profiling defense-update function and elucidate its role in enhancing security measures.

4.1. TPSQLi Framework of Penetration Testing

Before initiating the detection module, various submodules targeting specific vulnerabilities are loaded. Figure 4 shows the workflow of our model.
In this penetration framework, the initial step involves the user entering the URL to be tested. Subsequently, a web crawler is employed to identify additional test targets, with the crawling depth determined by user-defined settings. The next phase involves launching the SQL Injection Attack Detection Model, which encompasses four main components: the Parameter Testing Panel, Test Prioritization Panel, Exploit Panel, and Report Panel. In the Parameter Testing Panel, parameters can be transmitted using the GET or POST method. For GET requests, parameters are extracted directly from the web pages; for POST requests, the HTML code is analyzed to locate “form” tags. The subsequent Test Prioritization Panel calculates an appropriate testing sequence for the Exploit Panel, utilizing a test prioritization algorithm detailed in the following subsections. Once the parameters are extracted, they are tested in the order determined by the previous panel. The test results are then fed into the test prioritization algorithm unit to refine future testing sequences. Finally, the Report Panel displays the results of the detection process, provides relevant information, and offers recommended solutions.

4.2. Parameter Testing Panel

The Parameter Testing Panel involves extracting parameters for analysis using GET and POST, as referenced in [31]. The GET method extracts parameters following the “?” in the URL, while the POST method identifies parameters within form input tags for testing. This differentiation allows for a comprehensive analysis of all possible entry points for SQL injection vulnerabilities.
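As a rough illustration of this step (not the authors' implementation), the following standard-library Python sketch extracts GET parameters from the query string after the “?” and collects the name attributes of input fields inside form tags; the example URL and form are made up.

# Minimal sketch of GET/POST parameter extraction.
from urllib.parse import urlparse, parse_qs
from html.parser import HTMLParser

def extract_get_params(url):
    # GET: parameters follow the "?" in the URL
    return parse_qs(urlparse(url).query)

class FormInputCollector(HTMLParser):
    # POST: collect the name attributes of <input> fields found inside <form> tags
    def __init__(self):
        super().__init__()
        self.inside_form = False
        self.fields = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.inside_form = True
        elif tag == "input" and self.inside_form:
            self.fields += [value for name, value in attrs if name == "name"]

    def handle_endtag(self, tag):
        if tag == "form":
            self.inside_form = False

print(extract_get_params("http://example.com/item.php?id=1&cat=books"))   # {'id': ['1'], 'cat': ['books']}
collector = FormInputCollector()
collector.feed("<form method='post'><input name='username'><input name='password'></form>")
print(collector.fields)   # ['username', 'password']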

4.3. Test Prioritization Panel

Upon acquiring the test parameters, the Test Prioritization Panel employs a prioritization algorithm to determine the sequence of tests. The unit of the Test Prioritization Algorithm is shown in Figure 5. The calculated prioritization dictates the order in which tests are conducted. During the injection phase, the system dynamically adjusts the test prioritization using the profiling defense-update function. Post-injection, the feedback stored in the feedback.json file is used to update the weight scores, ensuring the algorithm adapts to the testing environment. This iterative process refines the prioritization for subsequent tests, continuously optimizing the testing strategy.
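The paper does not specify the schema of feedback.json; a hypothetical layout consistent with the per-technique quantities tracked by the algorithm in Section 4.4 (exploit time, success flag, and score) might look as follows, with all field names and values illustrative only.

# Hypothetical feedback.json written after one test round (field names are ours, not the tool's).
import json

feedback = {
    "target": "http://example.com/item.php?id=1",
    "techniques": {
        "boolean": {"exploit_time": 8.23, "is_exploit": True, "score": 5.46},
        "time": {"exploit_time": 154.42, "is_exploit": True, "score": 0.29},
        "union": {"exploit_time": 0.0, "is_exploit": False, "score": 0.0},
    },
}
with open("feedback.json", "w") as fh:
    json.dump(feedback, fh, indent=2)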

4.4. Test Prioritization Algorithm

This subsection presents the test prioritization algorithm for SQL injection vulnerability detection in web applications.

4.4.1. Pseudocode

Algorithm 1 outlines the test prioritization algorithm for SQL injection vulnerability detection in web applications. The algorithm takes as input an n-dimensional Strength–Weakness (SW) vector, where n is the number of techniques, and a set of payloads for each technique.
The process of our algorithm is detailed below:
  • Initialization (Set Initial Variables):
    • Strength–Weakness Vector (‘SW_Vector’): Initialize a fundamental score for each technique using a function discussed in the subsequent subsection. This vector quantifies each SQL injection technique’s relative strengths or weaknesses based on historical data or predefined metrics.
    • Exploit Time (‘exploit_time’): Initialize this variable to record the fastest exploit time for each technique. For instance, if technique A exploits a vulnerability in 2 s while technique B takes 5 s, this difference would influence the prioritization of these techniques.
    • Is Exploit (‘is_exploit’): Initialize this boolean variable to record the success status of each technique. If a technique fails, it will be marked as False, affecting its priority in future tests.
  • Execution Loop:
    • Payload Selection: For each payload that failed in the previous round (higher risk payloads), determine whether it can be successfully exploited. If not, reduce its risk and select the payload with the highest fundamental score for testing.
    • Execution: Start the execution loop by recording the start time to establish a baseline duration. Execute the selected payload, performing the specific task assigned by the system. After the payload execution, record the end time to calculate the total execution time, which is critical for performance analysis and optimization.
    • Outcome Evaluation: Determine whether the exploit was successful. There are two possible outcomes:
      • Failure Case:
        • If ‘is_exploit’ for the technique is False, add the execution time and subtract one point from the fundamental score.
        • If ‘is_exploit’ is True, only subtract one point from the fundamental score without adding the execution time.
      • Success Case:
        • Set the payload’s risk to high.
        • If ‘is_exploit’ for the technique is False, add the execution time and set ‘is_exploit’ to True.
  • Iteration and Update:
    • Iterate: Continue testing the next payload until all payloads have been executed.
    • Update: After executing all payloads, update the SW-vector based on the recorded exploit time and the ‘is_exploit’ status.
Our proposed algorithm ensures that the payloads with higher risks are prioritized and the execution times are minimized by continuously updating and adapting the strength–weakness vector based on the performance of each technique. For an example of the algorithm in action, please refer to Section 4.4.4.
Algorithm 1. Test Prioritization Algorithm
Input:
   Strength–Weakness Vector SW_vector[0, …, n − 1],
   Payloads payloads = {0 : […], 1 : […], …, n − 1 : […]}
Output:
   Updated Strength–Weakness Vector SW_vector[0, …, n − 1]
1.Initialize Basic Scores BS[0, …, n − 1] based on SW_vector
2.Initialize exploit_time[0, …, n − 1] to zero
3.Initialize is_exploit[0, …, n − 1] to False
4.while there exists an unused payload with risk level 3 do
5.  Select payload p with the maximum BS[t] where risk(p) = 3 and t ∈ [0, …, n − 1]
6.  if p fails to exploit then
7.    Set risk(p) ← 1
8.   end if
9.end while
10.while there are unused payloads remaining do
11.  Select payload p with the maximum BS[t] where t ∈ [0, …, n − 1]
12.  Record current time as start_time
13.  Execute payload p
14.  Record current time as end_time
15.  if p fails to exploit then
16.    if is_exploit[t] = False then
17.      exploit_time[t] ← exploit_time[t] + (end_time − start_time)
18.    end if
19.    BS[t] ← BS[t] − 1
20.  else
21.    Set risk(p) ← 3
22.    if is_exploit[t] = False then
23.      exploit_time[t] ← exploit_time[t] + (end_time − start_time)
24.      Set is_exploit[t] ← True
25.    end if
26.  end if
27.end while
28.Update SW_vector based on exploit_time[0, …, n − 1] and is_exploit[0, …, n − 1]
Return Strength–Weakness Vector SW_vector[0, …, n − 1]
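For readers who prefer an executable form, the following Python sketch is our reading of the main scoring loop of Algorithm 1 (steps 10–28); the initial re-check of previously failed payloads is simplified, payloads are plain dictionaries, and run_payload is a hypothetical callback that returns True when a payload exposes a vulnerability. It is illustrative only, not the reference implementation.

# Illustrative reading of Algorithm 1.
import time

def run_round(sw_vector, payloads, run_payload):
    n = len(sw_vector)
    bs = list(sw_vector)          # basic scores initialized from the SW vector
    exploit_time = [0.0] * n      # time accumulated until the first success, per technique
    is_exploit = [False] * n      # whether technique t has already exposed a vulnerability
    pending = list(payloads)      # each payload: {"technique": t, "risk": level}

    # Re-check payloads that failed in the previous round (risk 3); demote them if they still fail.
    for p in (q for q in pending if q["risk"] == 3):
        if not run_payload(p):
            p["risk"] = 1

    # Main loop: always run the payload whose technique currently has the highest score.
    while pending:
        p = max(pending, key=lambda q: bs[q["technique"]])
        pending.remove(p)
        t = p["technique"]
        start = time.perf_counter()
        success = run_payload(p)
        elapsed = time.perf_counter() - start
        if not success:                      # failure case
            if not is_exploit[t]:
                exploit_time[t] += elapsed   # the technique is still searching, so the time counts
            bs[t] -= 1                       # penalize the technique either way
        else:                                # success case
            p["risk"] = 3                    # keep this payload high-priority for the next round
            if not is_exploit[t]:
                exploit_time[t] += elapsed
                is_exploit[t] = True

    # The SW vector for the next round is recomputed from exploit_time and is_exploit,
    # e.g., with the weight formulas of Section 4.4.2.
    return exploit_time, is_exploit, bs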

4.4.2. Mathematical Formulation of Test Prioritization Algorithm

In this subsection, we introduce the algorithm for test prioritization of various SQL injection vulnerability detection techniques. We define two sets: one representing all current detection techniques and the other representing the set of test targets. As technology evolves, new techniques may emerge. We assume there are currently n detection techniques and m test targets for this discussion.
For each test target, we conduct n attacks and record the duration of these attacks, resulting in n × m different results, which are then used to calculate the weight scores. We establish three rules to construct this test prioritization model:
  • Rule 1: If a technique fails to exploit a vulnerability successfully, it receives zero points in the weight score calculation.
  • Rule 2: The total score for the same test target is n, with faster exploits earning higher scores.
  • Rule 3: The score calculation considers the difference in exploit time using the reciprocal ratio of these times.
Moreover, we use the following mathematical formulas to express the model:
  • Techniques: n.
  • Successful exploits: a.
  • Failed exploits: n − a.
  • Success exploit times: S_1, S_2, …, S_a.
  • Fail exploit times: F_1, F_2, …, F_{n−a}.
  • Weights (W_{F_i}, W_{S_i}, and W_i are all set to zero at the start of the process):

W_{F_i} = 0,  for i = 1, 2, …, n − a    (1)

W_{S_i} = (1/S_i) · n · 1/(∑_{k=1}^{a} 1/S_k),  for i = 1, 2, …, a    (2)

W_i = max(W_{S_i}, W_{F_i}),  for i = 1, 2, …, a    (3)
For a deeper understanding of our algorithm, please refer to Section 4.4.4 for a simple example.

4.4.3. Profiling Defense-Update Function

In addition to adjusting the order of the basic technique through the previously mentioned algorithm, we have incorporated an update mechanism within the dynamic segment. This mechanism aims to facilitate the dynamic adjustment of test prioritization. Specifically, after each technique is employed with an injected payload, if no vulnerabilities are detected, the priority of this technique will be progressively reduced in subsequent tests. Concurrently, we also prioritize the implementation of high-weighted techniques.
The precise methodology involves leveraging the fundamental score calculated by the preceding algorithm. When a test is conducted, and no weaknesses are identified, the technique score is decremented. This decrement continues until the score falls below that of other techniques or until the technique is deemed ineffective. This adaptive scoring and prioritization process ensures that the focus gradually shifts towards techniques more likely to uncover vulnerabilities, thereby optimizing the efficiency and effectiveness of the testing procedure.

4.4.4. Simple Example for Test Prioritization Algorithm

To better understand our algorithm, consider a simplified scenario involving three SQL injection techniques: Technique A1, Technique A2, and Technique A3. If, in the first round, Technique A1 has an exploit time of 2 s, Technique A2 fails, and Technique A3 has an exploit time of 4 s, the weight calculations would be as follows:
  • Technique A1: W_i = 2
  • Technique A2: W_i = 0
  • Technique A3: W_i = 1
Given these weights, Technique A1 has the highest priority. Therefore, our method will adjust the profiling defense-update function to prioritize testing Technique A1 first, followed by Technique A3, and finally Technique A2.
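A few lines of Python suffice to reproduce these numbers from Equations (1)–(3); the helper below is ours, written only to check this example (n = 3, successful exploit times of 2 s and 4 s, one failure).

# Weights per Equations (1)-(3): failed techniques score 0, successful techniques share a
# total of n points in proportion to the reciprocals of their exploit times.
def success_weights(success_times, n):
    inv_sum = sum(1.0 / s for s in success_times)
    return [(1.0 / s) * n / inv_sum for s in success_times]

print(success_weights([2, 4], n=3))   # [2.0, 1.0] -> A1 = 2, A3 = 1, A2 (failed) = 0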

4.5. Exploit Panel

As shown in Figure 6, the Exploit Panel initiates by calculating a fundamental score using a strength–weakness vector. Two variables are initialized: ‘exploit_time’ to record the fastest exploit time for each technique and ‘is_exploit’ to track the success of each technique. The panel first re-tests previously failed payloads, adjusting their risk levels dynamically based on success or failure. Successful exploits prompt the setting of the payload’s risk to high and update the technique’s status.
For failed payloads, the system subtracts points from their fundamental score and re-tests the next payload. If a payload is successfully exploited, its execution time is recorded, and the strength–weakness vector is updated accordingly. This process continues until all payloads are tested, ensuring comprehensive coverage of potential vulnerabilities.

4.6. Report Panel

Upon completing all tests, the system generates a comprehensive report, concluding the penetration testing process. The described implementation ensures a systematic and thorough approach to detecting SQL injection vulnerabilities in web applications.

5. Evaluation

The experimental setup consisted of an ASUS P2520LA machine (manufactured in China) running Windows 10, equipped with 12 GB of RAM and an Intel(R) Core(TM) i5-5200U CPU operating at 2.2 GHz with four cores. The penetration testing framework, as outlined in Section 4, was implemented using Python 3.9.7. Our study focused on two specific targets, DVWA SQL-blind and DVWA SQL, along with eight real-world cases, including login pages, blogs, business websites, and eCommerce sites. Our framework was first evaluated on an open-source software project, with the specific test targets listed in Table 2.

5.1. Collecting Time Data from Various Techniques

This flowchart in Figure 7 is designed to collect time data for various SQL injection detection techniques. The process begins by inputting a URL as the test target and testing each technique over multiple rounds ( n rounds). The technique type is recorded at the outset, followed by the start time for each technique test. The exploit is then executed, and upon completion, the end time is recorded regardless of the test’s success. The log file is subsequently deleted to ensure it does not affect subsequent tests. This procedure is repeated for all techniques, ensuring a comprehensive time data collection for each method.
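One possible way to implement this loop, assuming each technique is exercised through sqlmap's command-line interface (its -u, --technique, --batch, and --flush-session options), is sketched below; the target URL and the log-cleanup path are placeholders, and this is not the paper's exact harness.

# Rough sketch of the per-technique timing loop.
import shutil, subprocess, time

TECHNIQUES = {"B": "boolean", "E": "error", "U": "union", "S": "stacked", "T": "time", "Q": "inline"}
TARGET = "http://example.com/item.php?id=1"
ROUNDS = 5

timings = {name: [] for name in TECHNIQUES.values()}
for _ in range(ROUNDS):
    for flag, name in TECHNIQUES.items():
        start = time.perf_counter()
        subprocess.run(
            ["sqlmap", "-u", TARGET, "--technique=" + flag, "--batch", "--flush-session"],
            capture_output=True,
        )
        timings[name].append(time.perf_counter() - start)   # end time recorded whether or not the test succeeds
        shutil.rmtree("sqlmap_output", ignore_errors=True)   # placeholder cleanup so logs do not affect the next run

print({name: sum(t) / len(t) for name, t in timings.items()})   # average time per technique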

5.2. Weights for Test

We evaluated TPSQLi on two open-source projects and eight websites with publicly disclosed SQL injection vulnerabilities submitted to HITCON ZeroDay [32]. The types and themes of these test websites are detailed in Table 2.
We performed five rounds of testing, recording the results and calculating the weights using Equations (1) and (2). The compiled scores are presented in Table 3. These scores informed the development of new testing priorities, transitioning from the existing BEUSTQ to a new order based on the calculated weights. Detailed timing information for each round is available in Appendix A.

5.3. Comparing with ART4SQLi

To evaluate the effectiveness of our proposed test prioritization approach, we conducted a comparative analysis against ART4SQLi, a widely used tool for SQL injection detection.

5.3.1. Coverage-Based Visual Comparison

We plotted the data obtained from both the original test prioritization and our new test prioritization approach on a single chart for visual comparison. In each chart in Figure 8, the horizontal axis represents time, while the vertical axis denotes the number of detected vulnerabilities. The blue area illustrates the results from the original test order, whereas the yellow area represents the outcomes generated by our TPSQLi framework. Our research demonstrates that the test priorities determined by the TPSQLi framework consistently outperform or match the original test order. In scenarios where the test priority differs from the original order, our approach achieves faster coverage growth, leading to earlier attainment of 100% coverage.

5.3.2. Statistical Analysis of Test Execution Time

The maximum time T_max, minimum time T_min, average time T_mean, and standard deviation T_std of the time to expose the last vulnerability, for both ART4SQLi and the proposed TPSQLi across the ten test targets, are presented in Table 4.
The last column in Table 4, labeled Z, displays the values of the statistical Z-test, calculated from the means (T_mean) and standard deviations (T_std) as specified in Equation (4). This column is intended to determine whether there is a statistically significant difference between the results of the two approaches (ART4SQLi and TPSQLi) across the ten test cases, at a confidence level of 95%, i.e., Z_0.95 = 1.645.

Z = |T_mean,TPSQLi − T_mean,ART4SQLi| / sqrt(T_std,TPSQLi² + T_std,ART4SQLi²)    (4)

As shown in Table 4, TPSQLi outperformed ART4SQLi across all metrics, with the exception of T_mean for R4. Notably, the test prioritization did not alter the order for R4 and DVWA (SQL), resulting in the same order as observed in ART4SQLi. The Z values, except for those corresponding to DVWA (SQL) and R4, exceeded the critical value Z_0.95 = 1.645, as indicated in the rightmost column. This indicates that the Z-test confirms a significant reduction in execution time achieved by TPSQLi relative to ART4SQLi across the ten test cases.
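As a concrete check of Equation (4), plugging the R1 row of Table 4 into the formula (values copied from the table) gives a Z value far above the 1.645 threshold; the small difference from the tabulated 17.04 comes from rounding of the reported means and standard deviations.

# Z statistic for test target R1 (Table 4 values).
from math import sqrt

mean_art, std_art = 65.82, 1.69   # ART4SQLi
mean_tp, std_tp = 30.44, 1.21     # TPSQLi
z = abs(mean_tp - mean_art) / sqrt(std_tp**2 + std_art**2)
print(round(z, 2))                # ~17.02 > 1.645, so the reduction is significant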

5.3.3. Discussion of Effectiveness

Following the evaluation methodology of ART4SQLi [9], we discuss the effectiveness of our test prioritization process and compare it with ART4SQLi. In TPSQLi, when a payload successfully exposes a vulnerability in the web application, it is considered valid and treated as a true positive. If the payload detects the final vulnerability, TPSQLi halts, marking the payload as valid. All payloads tested before this last successful one that did not lead to exploiting a vulnerability are considered false positives (they are flagged as suspicious but ultimately invalid).
If TPSQLi stops after the p-th payload (where p ≥ 1 and this payload is valid), and the number of valid payloads executed before the p-th is v, then p − v − 1 payloads are categorized as false positives, while the remaining v + 1 payloads are considered true positives. Since TPSQLi terminates once the last valid payload is detected, neither true nor false negatives are accounted for in this analysis.
We designed a new metric, the False Positive Measure (FPM), to systematically compare the false positives. This metric is defined as:

FPM = Total Executed Payloads / False Positives    (5)

A higher FPM indicates a more effective process, implying fewer false positives than the number of executed payloads. To further quantify the improvement of TPSQLi over ART4SQLi, we also introduce an Improved Rate (IR), calculated as:

IR = (FPM_TPSQLi − FPM_ART4SQLi) / FPM_ART4SQLi × 100%    (6)
The IR expresses the percentage improvement offered by TPSQLi over ART4SQLi. As shown in Table 5, TPSQLi delivers significant enhancements, particularly for test targets R1 and R3, with IR values of 19.95% and 16.62%, respectively. On average, TPSQLi demonstrates a 4.65% overall improvement, indicating consistent efficiency gains across different web applications. Notably, no improvement was observed on test targets DVWA (SQL) and R4, as the original order in both cases was optimal, resulting in identical FPM values for both TPSQLi and ART4SQLi.
This analysis highlights that our test prioritization method effectively reduces false positives and improves the accuracy of SQL injection detection. The increase in FPM values and positive IR percentages confirm that TPSQLi is superior to ART4SQLi, particularly in handling test execution time and precision.
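For completeness, the following snippet applies Equations (5) and (6) to the R1 row of Table 5 (FPM values copied from the table; the underlying payload counts are not reported, so the fpm helper is defined but not evaluated).

# FPM per Equation (5) and IR per Equation (6), reproducing the R1 row of Table 5.
def fpm(total_executed_payloads, false_positives):
    return total_executed_payloads / false_positives

fpm_art4sqli, fpm_tpsqli = 1.0004, 1.2000             # reported FPM values for R1
ir = (fpm_tpsqli - fpm_art4sqli) / fpm_art4sqli * 100
print(f"IR = {ir:.2f}%")                              # IR = 19.95%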

6. Conclusions

In this research, we propose a comprehensive framework for regression testing of SQLi, focused on optimizing test prioritization through the design of weight functions and evaluation methods. The proposed module for calculating test prioritization is flexible, accommodating various technologies and targets, and includes dynamic adjustment capabilities. The weight calculation method accounts for the time differential in the exposure of weaknesses and considers the probability of successful exploitation by different technologies. Our findings demonstrate that the time to expose the first weakness is significantly reduced, and the overall exposure time is faster than with the original test order. TPSQLi effectively accelerates the testing process, as evidenced by statistical Z-test calculations confirming significant differences compared to ART4SQLi. We also analyzed false positives to assess the effectiveness of TPSQLi and compare it with ART4SQLi. Moreover, future work will explore machine learning methods, such as large language models for feature extraction and reinforcement learning for dynamically adjusting test prioritization, to further enhance test prioritization and improve the efficiency of SQL injection testing.

Author Contributions

Conceptualization, F.W., G.-Y.Y. and Y.-Z.G.; data curation, G.-Y.Y.; formal analysis, G.-Y.Y.; investigation, Y.-Z.G., G.-Y.Y. and F.W.; methodology, G.-Y.Y. and Y.-Z.G.; implementation, G.-Y.Y., Y.-Z.G. and P.-H.H.; experiment, G.-Y.Y. and Y.-Z.G.; visualization, G.-Y.Y.; writing—original draft preparation, G.-Y.Y.; writing—review and editing, G.-Y.Y., Y.-W.T., K.-H.Y., F.W. and W.-L.W., supervision, F.W., G.-Y.Y. and K.-H.Y.; project administration, F.W. and K.-H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly supported by the National Science and Technology Council (NSTC), Taiwan, ROC, under the projects MOST 110-2221-E-002-069-MY3, NSTC 111-2221-E-A49-202-MY3, NSTC 112-2634-F-011-002-MBK and NSTC 113-2634-F-011-002-MBK. We also received partial support from the 2024 CITI Visible Project: Questionnaire-based Technology for App Layout Evaluation, Academia Sinica, Taiwan, ROC.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in the study are included in the article; further inquiries can be directed to the corresponding authors.

Acknowledgments

We thank the anonymous reviewers for their valuable comments, which helped improve this manuscript. We also thank Jui-Ning Chen from Academia Sinica, Taiwan, for her invaluable comments. We appreciate the Speech AI Research Center of National Yang Ming Chiao Tung University for providing the necessary computational resources. Additionally, we utilized GPT-4 to assist with wording, formatting, and stylistic improvements throughout this research.

Conflicts of Interest

Author You-Zong Gu is employed by CyberLink Corporation. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A

The unit of time data is seconds.
Table A1. Time data of various techniques (DVWA SQL-blind).
Round | Boolean-Based | Error-Based | Union-Based | Stack-Based | Time-Based | Inline Queries
14.4223.0822.09119.07195.6221.34
24.234.0122.11253.69296.7621.39
34.193.9022.14215.1861.2921.29
44.9423.0722.97100.12118.8022.13
523.3423.003.07195.9299.642.186
Average8.2315.4014.26176.80154.4217.67
Score5.46000.250.290
Table A2. Time data of various techniques (DVWA SQL).
Round | Boolean-Based | Error-Based | Union-Based | Stack-Based | Time-Based | Inline Queries
18.4443.413.34272.56541.8021.60
28.3021.5922.51215.04464.642.36
38.5522.053.46196.03783.5821.57
426.9041.053.47311.27387.1021.54
58.4723.2422.57522.80349.7921.56
Average12.1330.2711.07303.54505.3817.73
Score2.340.942.570.090.060
Table A3. Time data of various techniques (R1).
Round | Boolean-Based | Error-Based | Union-Based | Stack-Based | Time-Based | Inline Queries
13.844.1322.434.8529.782.25
23.884.1821.675.1630.002.14
33.944.3421.145.1529.692.21
44.044.5421.255.9830.632.19
54.074.2821.925.7932.422.15
Average3.954.2921.685.3930.512.19
Score000060
Table A4. Time data of various techniques (R2).
Round | Boolean-Based | Error-Based | Union-Based | Stack-Based | Time-Based | Inline Queries
125.1119.5323.8220.8552.5313.94
225.9720.2222.0421.9153.1113.2
324.420.1322.7621.1952.7414.06
426.2120.8822.4923.8153.0214.35
528.8421.3827.4624.4155.5213.90
Average26.1020.4323.7122.4453.3913.90
Score2.3202.5501.130
Table A5. Time data of various techniques (R3).
Round | Boolean-Based | Error-Based | Union-Based | Stack-Based | Time-Based | Inline Queries
17.718.4827.439.0734.724.05
27.628.8928.929.1134.784.42
37.749.3329.879.935.693.93
47.2310.1131.638.636.544.16
57.468.4530.029.0635.094.04
Average7.559.0529.579.1535.364.12
Score000060
Table A6. Time data of various techniques (R4).
Round | Boolean-Based | Error-Based | Union-Based | Stack-Based | Time-Based | Inline Queries
122.6317.2127.4419.7420.9611.72
220.3217.726.7418.7920.5412.04
321.0917.8828.7819.9121.212.19
420.116.5927.0619.0821.2910.91
520.8916.2627.0819.8120.812.21
Average21.0017.1327.4219.4720.9611.81
Score600000
Table A7. Time data of various techniques (R5).
Round | Boolean-Based | Error-Based | Union-Based | Stack-Based | Time-Based | Inline Queries
165.5917.5978.19202.4942.815.22
261.4320.4481.52201.8744.995.56
363.9117.7287.58199.2935.915.61
458.7217.5477.35197.9438.855.78
562.7827.9592.36212.6846.6612.93
Average62.4920.2583.40202.8541.857.02
Score4.59001.4100
Table A8. Time data of various techniques (R6).
Round | Boolean-Based | Error-Based | Union-Based | Stack-Based | Time-Based | Inline Queries
16.6318.0531.010.6855.293.05
26.4120.0232.0712.2657.863.04
36.3318.031.1510.4655.223.4
46.8619.1630.9611.2357.393.18
56.4720.0931.8311.2255.863.16
Average6.5419.0631.4011.1756.323.17
Score3.601.230.7500.420
Table A9. Time data of various techniques (R7).
Round | Boolean-Based | Error-Based | Union-Based | Stack-Based | Time-Based | Inline Queries
144.5834.0747.8436.67157.6521.53
243.036.1945.2537.28158.2221.55
342.3733.5748.0738.94166.821.48
445.0735.8248.1541.16162.7822.65
545.2635.5545.4938.41159.2115.5
Average44.0535.0446.9638.49160.9320.54
Score2.7102.5400.740
Table A10. Time data of various techniques (R8).
Round | Boolean-Based | Error-Based | Union-Based | Stack-Based | Time-Based | Inline Queries
14.9218.35.9517.5532.112.62
24.6919.095.9817.4931.72.73
34.5418.386.2417.1232.12.32
44.6718.536.0117.1931.782.37
54.7119.165.6617.6532.092.43
Average4.7018.695.9717.4031.962.49
Score2.740.692.1600.400

References

  1. Sharma, P.; Johari, R.; Sarma, S. Integrated approach to prevent SQL injection attack and reflected cross site scripting attack. Int. J. Syst. Assur. Eng. Manag. 2012, 3, 343–351. [Google Scholar] [CrossRef]
  2. Ravindran, R.; Abhishek, S.; Anjali, T.; Shenoi, A. Fortifying Web Applications: Advanced XSS and SQLi Payload Constructor for Enhanced Security. In Proceedings of the International Conference on Information and Communication Technology for Competitive Strategies, Jaipur, India, 8–9 December 2023; pp. 421–429. [Google Scholar]
  3. Odion, T.O.; Ebo, I.O.; Imam, R.M.; Ahmed, A.I.; Musa, U.N. VulScan: A Web-Based Vulnerability Multi-Scanner for Web Application. In Proceedings of the 2023 International Conference on Science, Engineering and Business for Sustainable Development Goals (SEB-SDG), Kwara, Nigeria, 5–7 April 2023; pp. 1–7. [Google Scholar]
  4. OWASP Top Ten. Available online: https://owasp.org/www-project-top-ten/ (accessed on 11 July 2024).
  5. Shehu, B.; Xhuvani, A. A literature review and comparative analyses on sql injection: Vulnerabilities, attacks and their prevention and detection techniques. Int. J. Comput. Sci. Issues (IJCSI) 2014, 11, 28. [Google Scholar]
  6. Wang, F.; Wu, J.-H.; Huang, C.-H.; Chang, K.-H. Evolving a test oracle in black-box testing. In Proceedings of the Fundamental Approaches to Software Engineering: 14th International Conference, FASE 2011, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2011, Saarbrücken, Germany, 26 March–3 April 2011; pp. 310–325. [Google Scholar]
  7. OWASP ZAP. Available online: https://www.zaproxy.org/ (accessed on 15 July 2024).
  8. Sqlmap. Available online: https://sqlmap.org (accessed on 28 June 2024).
  9. Zhang, L.; Zhang, D.; Wang, C.; Zhao, J.; Zhang, Z. ART4SQLi: The ART of SQL injection vulnerability discovery. IEEE Trans. Reliab. 2019, 68, 1470–1489. [Google Scholar] [CrossRef]
  10. Hailpern, B.; Santhanam, P. Software debugging, testing, and verification. IBM Syst. J. 2002, 41, 4–12. [Google Scholar] [CrossRef]
  11. Spillner, A.; Linz, T. Software Testing Foundations: A Study Guide for the Certified Tester Exam-Foundation Level-ISTQB® Compliant; Dpunkt. Verlag: Heidelberg/Wieblingen, Germany, 2021. [Google Scholar]
  12. Yoo, S.; Harman, M. Regression testing minimization, selection and prioritization: A survey. Softw. Test. Verif. Reliab. 2012, 22, 67–120. [Google Scholar] [CrossRef]
  13. Huang, G.-D.; Wang, F. Automatic test case generation with region-related coverage annotations for real-time systems. In Proceedings of the International Symposium on Automated Technology for Verification and Analysis, Taipei, Taiwan, 4–7 October 2005; pp. 144–158. [Google Scholar]
  14. Strandberg, P.E.; Sundmark, D.; Afzal, W.; Ostrand, T.J.; Weyuker, E.J. Experience report: Automated system level regression test prioritization using multiple factors. In Proceedings of the 2016 IEEE 27th International Symposium on Software Reliability Engineering (ISSRE), Ottawa, ON, Canada, 23–27 October 2016; pp. 12–23. [Google Scholar]
  15. Bajaj, A.; Sangwan, O.P. A systematic literature review of test case prioritization using genetic algorithms. IEEE Access 2019, 7, 126355–126375. [Google Scholar] [CrossRef]
  16. Elbaum, S.; Rothermel, G.; Penix, J. Techniques for improving regression testing in continuous integration development environments. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, Hong Kong, China, 16–21 November 2014; pp. 235–245. [Google Scholar]
  17. Strandberg, P.E.; Afzal, W.; Ostrand, T.J.; Weyuker, E.J.; Sundmark, D. Automated system-level regression test prioritization in a nutshell. IEEE Softw. 2017, 34, 30–37. [Google Scholar] [CrossRef]
  18. Wang, F.; Huang, G.-D. Test Plan Generation for Concurrent Real-Time Systems Based on Zone Coverage Analysis. In Proceedings of the International Workshop on Formal Approaches to Software Testing, Tokyo, Japan, 10–13 June 2008; pp. 234–249. [Google Scholar]
  19. Marchand-Melsom, A.; Nguyen Mai, D.B. Automatic repair of OWASP Top 10 security vulnerabilities: A survey. In Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, Seoul, Republic of Korea, 27 June–19 July 2020; pp. 23–30. [Google Scholar]
  20. Bobade, N.D.; Sinha, V.A.; Sherekar, S.S. A diligent survey of SQL injection attacks, detection and evaluation of mitigation techniques. In Proceedings of the 2024 IEEE International Students’ Conference on Electrical, Electronics and Computer Science (SCEECS), Bhopal, India, 24–25 February 2024; pp. 1–5. [Google Scholar]
  21. Marashdeh, Z.; Suwais, K.; Alia, M. A survey on sql injection attack: Detection and challenges. In Proceedings of the 2021 International Conference on Information Technology (ICIT), Amman, Jordan, 14–15 July 2021; pp. 957–962. [Google Scholar]
  22. Al Wahaibi, S.; Foley, M.; Maffeis, S. {SQIRL}:{Grey-Box} Detection of {SQL} Injection Vulnerabilities Using Reinforcement Learning. In Proceedings of the 32nd USENIX Security Symposium (USENIX Security 23), Anaheim, CA, USA, 9–11 August 2023; pp. 6097–6114. [Google Scholar]
  23. Arasteh, B.; Aghaei, B.; Farzad, B.; Arasteh, K.; Kiani, F.; Torkamanian-Afshar, M. Detecting SQL injection attacks by binary gray wolf optimizer and machine learning algorithms. Neural Comput. Appl. 2024, 36, 6771–6792. [Google Scholar] [CrossRef]
  24. Nasereddin, M.; ALKhamaiseh, A.; Qasaimeh, M.; Al-Qassas, R. A systematic review of detection and prevention techniques of SQL injection attacks. Inf. Secur. J. A Glob. Perspect. 2023, 32, 252–265. [Google Scholar] [CrossRef]
  25. Erdődi, L.; Sommervoll, Å.Å.; Zennaro, F.M. Simulating SQL injection vulnerability exploitation using Q-learning reinforcement learning agents. J. Inf. Secur. Appl. 2021, 61, 102903. [Google Scholar] [CrossRef]
  26. Kasim, Ö. An ensemble classification-based approach to detect attack level of SQL injections. J. Inf. Secur. Appl. 2021, 59, 102852. [Google Scholar] [CrossRef]
  27. Chen, J.; Lou, Y.; Zhang, L.; Zhou, J.; Wang, X.; Hao, D.; Zhang, L. Optimizing test prioritization via test distribution analysis. In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Lake Buena Vista, FL, USA, 4–9 November 2018; pp. 656–667. [Google Scholar]
  28. Haghighatkhah, A.; Mäntylä, M.; Oivo, M.; Kuvaja, P. Test prioritization in continuous integration environments. J. Syst. Softw. 2018, 146, 80–98. [Google Scholar] [CrossRef]
  29. Alptekin, H.; Demir, S.; Şimşek, Ş.; Yilmaz, C. Towards prioritizing vulnerability testing. In Proceedings of the 2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C), Macau, China, 11–14 December 2020; pp. 672–673. [Google Scholar]
  30. Medeiros, N.; Ivaki, N.; Costa, P.; Vieira, M. Trustworthiness models to categorize and prioritize code for security improvement. J. Syst. Softw. 2023, 198, 111621. [Google Scholar] [CrossRef]
  31. Khalid, A.; Yousif, M.M. Dynamic analysis tool for detecting SQL injection. Int. J. Comput. Sci. Inf. Secur. (IJCSIS) 2016, 14, 224–232. [Google Scholar]
  32. HITCON ZeroDay. Available online: https://zeroday.hitcon.org/ (accessed on 28 June 2024).
Figure 1. Pre-test: Different test prioritization leads to different time costs.
Figure 2. Evidence in ZAP.
Figure 3. Evidence in SQLMAP.
Figure 4. SQL injection attack detection model.
Figure 5. Our test prioritization algorithm unit.
Figure 6. Flowchart of Exploit panel.
Figure 7. Flowchart of collecting time data using various techniques.
Figure 8. Comparison between the original tool (ART4SQLi) and the optimized tool (TPSQLi) was conducted across various cases: (a) DVWA (SQL blind); (b) DVWA (SQL); (c) R1; (d) R2; (e) R3; (f) R4; (g) R5; (h) R6; (i) R7; (j) R8.
Table 1. Abbreviation and complete name mapping.

Complete Name | Abbreviation
Boolean-based blind SQL injection | B
Union-based SQL injection | U
Stack-based SQL injection | S
Inline Queries SQL injection | Q
Error-based SQL injection | E
Time-based blind SQL injection | T
Table 2. TPSQLi test target.

Type | Test Target | Topic
Open Source | DVWA (SQL-blind) | Test Environment
Open Source | DVWA (SQL) | Test Environment
Real-world Case | R1 | Login page
Real-world Case | R2 | Wiki/database
Real-world Case | R3 | Blogs/news
Real-world Case | R4 | Business website
Real-world Case | R5 | Service provider
Real-world Case | R6 | eCommerce website
Real-world Case | R7 | Portfolio
Real-world Case | R8 | Business website
Table 3. Weights of the test target.

Test Target | Boolean-Based | Error-Based | Union-Based | Stack-Based | Time-Based | Inline Queries
DVWA (SQL-blind) | 5.46 | 0 | 0 | 0.25 | 0.29 | 0
DVWA (SQL) | 2.34 | 0.94 | 2.57 | 0.09 | 0.06 | 0
R1 | 0 | 0 | 0 | 0 | 6 | 0
R2 | 2.32 | 0 | 2.55 | 0 | 1.13 | 0
R3 | 0 | 0 | 0 | 0 | 6 | 0
R4 | 6 | 0 | 0 | 0 | 0 | 0
R5 | 4.59 | 0 | 0 | 0.43 | 0.98 | 0
R6 | 3.60 | 1.23 | 0.75 | 0 | 0.42 | 0
R7 | 2.71 | 0 | 2.54 | 0 | 0.74 | 0
R8 | 2.74 | 0.69 | 2.16 | 0 | 0.40 | 0
Table 4. Comparisons between ART4SQLi and TPSQLi.

Test Target | Method | T_max | T_min | T_mean | T_std | Z
DVWA (SQL-blind) | ART4SQLi [9] | 405.80 | 349.90 | 369.11 | 20.37 | 1.68
DVWA (SQL-blind) | TPSQLi | 359.9 | 307.06 | 331.26 | 14.47 |
DVWA (SQL) | ART4SQLi [9] | 913.55 | 834.79 | 862.39 | 31.95 | 0.059
DVWA (SQL) | TPSQLi | 898.65 | 836.08 | 859.97 | 25.52 |
R1 | ART4SQLi [9] | 68.48 | 64.26 | 65.82 | 1.69 | 17.04
R1 | TPSQLi | 32.42 | 29.28 | 30.44 | 1.21 |
R2 | ART4SQLi [9] | 157.61 | 141.22 | 146.07 | 6.76 | 5.50
R2 | TPSQLi | 109.82 | 99.45 | 102.73 | 4.07 |
R3 | ART4SQLi [9] | 94.11 | 87.41 | 90.68 | 2.65 | 20.66
R3 | TPSQLi | 35.69 | 34.67 | 35.15 | 0.46 |
R4 | ART4SQLi [9] | 22.63 | 20.10 | 21.00 | 0.99 | 0.04
R4 | TPSQLi | 22.47 | 20.32 | 21.06 | 0.85 |
R5 | ART4SQLi [9] | 395.77 | 351.55 | 368.99 | 16.29 | 5.86
R5 | TPSQLi | 275.34 | 259.71 | 267.65 | 5.83 |
R6 | ART4SQLi [9] | 128.62 | 121.16 | 124.49 | 3.10 | 2.67
R6 | TPSQLi | 116.81 | 109.65 | 113.21 | 2.88 |
R7 | ART4SQLi [9] | 332.98 | 319.94 | 325.47 | 5.69 | 9.49
R7 | TPSQLi | 256.94 | 244.99 | 250.87 | 5.43 |
R8 | ART4SQLi [9] | 79.27 | 78.18 | 78.72 | 0.44 | 32.16
R8 | TPSQLi | 61.90 | 61.15 | 61.63 | 0.30 |
Table 5. Comparison of TPSQLi and ART4SQLi based on False Positive Measure (FPM) and Improved Rate (IR) across various test targets.

Test Target | Method | FPM | IR | Average IR
DVWA (SQL-blind) | ART4SQLi [9] | 1.0028 | 3.37% | 4.65%
DVWA (SQL-blind) | TPSQLi | 1.0366 | |
DVWA (SQL) | ART4SQLi [9] | 1.0057 | 0.00% |
DVWA (SQL) | TPSQLi | 1.0057 | |
R1 | ART4SQLi [9] | 1.0004 | 19.95% |
R1 | TPSQLi | 1.2000 | |
R2 | ART4SQLi [9] | 1.0018 | 2.17% |
R2 | TPSQLi | 1.0236 | |
R3 | ART4SQLi [9] | 1.0004 | 16.62% |
R3 | TPSQLi | 1.1667 | |
R4 | ART4SQLi [9] | 1.0004 | 0.00% |
R4 | TPSQLi | 1.0004 | |
R5 | ART4SQLi [9] | 1.0009 | 0.71% |
R5 | TPSQLi | 1.0080 | |
R6 | ART4SQLi [9] | 1.0031 | 1.09% |
R6 | TPSQLi | 1.0140 | |
R7 | ART4SQLi [9] | 1.0018 | 1.70% |
R7 | TPSQLi | 1.0189 | |
R8 | ART4SQLi [9] | 1.0030 | 0.92% |
R8 | TPSQLi | 1.0123 | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
