Enhancing Security in Airline Ticket Transactions: A Comparative Study of SVM and LightGBM
Abstract
1. Introduction
1.1. Research Objectives
1.2. Results Obtained
1.3. State of the Art
1.3.1. Payment Industry Overview
1.3.2. Technological Solutions in Fraud Detection and Prevention
1.3.3. Types of Payment Fraud
1.3.4. Current Challenges and Future Directions
1.3.5. Context and Justification
1.3.6. Problem Statement
2. Methodology
2.1. Research Design
2.2. Data Collection
2.3. Feature Engineering
2.4. Model Selection
2.5. Model Training and Evaluation
2.6. Validation and Testing
2.7. Implementation and Simulation
3. Materials and Methods
3.1. Feature Engineering Strategy
3.1.1. Data Sources and Types of Features
1. Card-related features:
- Card network: the name of the company managing the card scheme (e.g., Visa, MasterCard).
- Card number (primary account number or PAN): a unique identifier for the card, usually hashed for security.
- Card type: whether the card is a credit or debit card.
- Expiration date: the validity period of the card.
- CVV code: the card verification value used for security purposes.
2. User-related features:
- Username: the name of the user making the purchase.
- Age: the age of the user.
- Date of birth: the user’s birthdate.
- Gender: the gender of the user.
- Residential address: the user’s home address.
- City and postal code: the city and postal code of the user’s residence.
- Annual income: the yearly income of the user.
- Credit debt: the amount of outstanding credit debt held by the user.
- Account opening date: the date the user’s bank account was opened.
- First transaction date: the date of the user’s first transaction.
- Total transactions: the total number of transactions made by the user.
- Email address: the user’s email used for communication.
- Travel-related metrics: metrics such as the total duration of all flights, total loyalty points accumulated, number of trips, and membership level with the airline.
3. Transaction-related features:
- Merchant name: the name of the merchant or airline.
- Error code: any error codes generated during the transaction process.
- Fraud label: a label indicating whether the transaction is fraudulent (used as the target variable).
- Transaction date: the date the transaction was made.
- Time on website: the amount of time the user spent on the website during the purchase.
- Contact email: the email provided for contact purposes during the transaction.
- Email seen count: the number of times the contact email has been seen previously.
- Contact phone number: the phone number provided for contact purposes.
- Payment service provider (PSP) issuer and acquirer: the PSPs involved in the transaction, including their countries.
- SMS confirmation use: whether SMS confirmation was used for the transaction.
- SMS resend count: the number of times the SMS confirmation was resent.
- Time to complete SMS confirmation: the time taken to complete SMS confirmation.
- Digital wallet use: whether a digital wallet was used in the transaction.
- Number of items in purchase: the total number of items bought in the transaction.
- Customer purchase history: metrics such as the number of purchases from the same merchant and the average purchase value.
- First purchase date: the date of the user’s first purchase from the merchant.
- IP address and VPN use: the IP address of the user and whether a VPN was used.
- Flight details: including origin and destination cities, number of layovers, total layover time, and total travel duration.
- Loyalty points used: the number of loyalty points used for the purchase.
- Seat quality level: the quality level of the seats in the flight.
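As an informal illustration of how a transaction record combining these three feature groups might be represented in code, the sketch below defines a partial schema; all field names and types are hypothetical stand-ins, and the real feature set described above is far larger.

```python
# A partial, illustrative schema for a transaction record combining card-,
# user-, and transaction-related features. Field names and types are
# hypothetical, not the authors' actual data model.
from dataclasses import dataclass
from datetime import date

@dataclass
class TransactionRecord:
    card_network: str           # e.g., "Visa", "MasterCard"
    card_pan_hash: str          # PAN stored hashed for security
    user_age: int
    annual_income: float
    total_transactions: int
    merchant_name: str
    transaction_date: date
    sms_confirmation_used: bool
    vpn_used: bool
    loyalty_points_used: int
    is_fraud: bool              # target variable (fraud label)
```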
3.1.2. Feature Generation Process
3.1.3. Iterative Feature Engineering
3.1.4. Derived Features
- Retirement age: calculated based on the user’s age and standard retirement age in their country.
- Number of vowels in name: used as a simple textual feature.
- Weekly purchase ratio: the average number of purchases per week.
- Weeks since first purchase: the number of weeks since the user’s first recorded purchase.
- Average purchase price: the average price of purchases made by the user.
- Average number of items per purchase: the average number of items bought in each transaction.
- First week purchase count: the number of purchases made during the user’s first week of transactions.
- Denied transactions in first week: the number of transactions denied during the user’s first week.
- Email vowel count: the total number of vowels in the user’s email address.
- Email number count: the total number of digits in the user’s email address.
- Use of disposable email domain: whether the user’s email domain is known to be disposable.
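As a concrete illustration of how several of these derived features could be computed, here is a minimal pandas sketch. The column names (`email`, `first_purchase_date`, `total_transactions`) and the disposable-domain list are hypothetical stand-ins, not the authors' actual schema or domain blocklist.

```python
# Hypothetical sketch of a few of the derived-feature computations listed
# above, assuming a pandas DataFrame with illustrative column names.
import pandas as pd

def add_derived_features(df: pd.DataFrame, now: pd.Timestamp) -> pd.DataFrame:
    out = df.copy()
    # Weeks since the user's first recorded purchase (floored at 1 to avoid
    # division by zero for brand-new accounts).
    weeks = ((now - out["first_purchase_date"]).dt.days / 7).clip(lower=1)
    out["weeks_since_first_purchase"] = weeks
    # Weekly purchase ratio: average purchases per week.
    out["weekly_purchase_ratio"] = out["total_transactions"] / weeks
    # Simple textual features on the email address.
    out["email_vowel_count"] = out["email"].str.lower().str.count(r"[aeiou]")
    out["email_number_count"] = out["email"].str.count(r"[0-9]")
    # Disposable-domain flag against a known-domain list (illustrative list).
    disposable = {"mailinator.com", "10minutemail.com"}
    out["disposable_email_domain"] = (
        out["email"].str.split("@").str[-1].isin(disposable)
    )
    return out
```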
3.1.5. Final Feature Set
3.2. Model Selection and Implementation
3.2.1. Support Vector Machines (SVMs)
Theoretical Foundation
- Statistical learning theory: SVMs are based on the principles of statistical learning theory, which provides a framework for understanding the problem of acquiring knowledge, making predictions, and making decisions based on data. The core idea is to find a function that minimizes the expected error on new data by selecting a hypothesis space that accurately represents the underlying function in the target space [20] (see Figure 1).
- Margin maximization: The main objective of SVMs is to find the hyperplane that maximizes the margin between different classes. The margin is defined as the distance between the hyperplane and the closest points from each class, known as support vectors [21]. By maximizing the margin, SVMs achieve better generalization on unseen data.
- Mathematical formulation: In its simplest form, a linear SVM seeks the hyperplane defined by the equation $\mathbf{w} \cdot \mathbf{x} + b = 0$, where $\mathbf{w}$ is the weight vector normal to the hyperplane and $b$ is the bias term. For linearly separable data with labels $y_i \in \{-1, +1\}$, the optimal hyperplane solves $\min_{\mathbf{w}, b} \frac{1}{2}\lVert\mathbf{w}\rVert^2$ subject to $y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1$ for all $i$.
- Dual problem and kernel trick: For non-linear problems, SVMs use a technique known as the kernel trick to map the input data into a higher dimensional space where a linear hyperplane can be used to separate the classes [22]. The kernel function implicitly performs this mapping without the need to compute the coordinates in the high-dimensional space explicitly [23].
- Polynomial kernel: $K(\mathbf{x}_i, \mathbf{x}_j) = (\gamma \, \mathbf{x}_i \cdot \mathbf{x}_j + r)^d$
- Radial basis function (RBF) kernel: $K(\mathbf{x}_i, \mathbf{x}_j) = \exp(-\gamma \lVert \mathbf{x}_i - \mathbf{x}_j \rVert^2)$
- Sigmoid kernel: $K(\mathbf{x}_i, \mathbf{x}_j) = \tanh(\gamma \, \mathbf{x}_i \cdot \mathbf{x}_j + r)$
- Soft margin and regularization: In real-world applications, data is often not perfectly separable. SVMs address this issue by introducing slack variables $\xi_i \ge 0$ that allow some points to fall within the margin or even be misclassified. The objective then becomes $\min_{\mathbf{w}, b, \xi} \frac{1}{2}\lVert\mathbf{w}\rVert^2 + C \sum_{i=1}^{n} \xi_i$ subject to $y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1 - \xi_i$, where the parameter $C$ controls the trade-off between margin width and misclassification penalty.
Implementation and Application
- Training the SVM model: The training process involves solving the quadratic optimization problem to find the optimal hyperplane. This is typically done using techniques such as sequential minimal optimization (SMO) or other gradient-based methods. Once trained, the SVM model can classify new samples by determining on which side of the hyperplane they fall [24].
- Application in fraud detection: In the context of fraud detection, SVMs are used to distinguish between legitimate and fraudulent transactions [25]. The high-dimensional feature space and the ability to handle non-linear relationships make SVMs particularly suitable for this task. The model is trained on historical transaction data, where features related to user behavior, transaction details, and card information are used to build the classifier.
- Advantages and challenges: SVMs offer several advantages, including robustness to overfitting, effectiveness in high-dimensional spaces, and flexibility with different kernel functions [26]. However, they also come with challenges, such as high computational cost for large datasets and sensitivity to the choice of hyperparameters and kernel.
- Evaluation metrics: The performance of the SVM model is evaluated using metrics such as accuracy, precision, recall, and the area under the receiver operating characteristic curve (AUC-ROC). These metrics provide insights into the model’s ability to correctly identify fraudulent transactions while minimizing false positives and false negatives [27].
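To ground the training and evaluation steps above, the following is a minimal scikit-learn sketch of an SVM fraud classifier. The synthetic data, RBF kernel choice, and hyperparameter grid are illustrative assumptions, not the configuration used in this study.

```python
# A minimal sketch of training and evaluating an SVM classifier on an
# imbalanced binary problem, standing in for fraud detection.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.metrics import classification_report, roc_auc_score

# Synthetic stand-in for transaction data: ~10% positive (fraud) class.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

# Feature scaling matters for SVMs; class_weight='balanced' penalizes
# errors on the minority (fraud) class more heavily.
pipe = Pipeline([("scale", StandardScaler()),
                 ("svm", SVC(kernel="rbf", class_weight="balanced",
                             probability=True))])
search = GridSearchCV(pipe,
                      {"svm__C": [0.1, 1, 10],
                       "svm__gamma": ["scale", 0.01]},
                      scoring="roc_auc", cv=3)
search.fit(X_train, y_train)

proba = search.predict_proba(X_test)[:, 1]
print(classification_report(y_test, search.predict(X_test)))
print("AUC-ROC:", roc_auc_score(y_test, proba))
```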
3.2.2. Light Gradient Boosting Machine (LightGBM)
Key Features of LightGBM
- Leaf-wise tree growth: LightGBM grows trees leaf-wise rather than level-wise, which is different from traditional gradient boosting methods. In leaf-wise growth, the algorithm chooses the leaf with the maximum delta loss to split, leading to a more complex tree structure that results in less loss compared to level-wise growth [30].
- Histogram-based algorithm: LightGBM uses a histogram-based algorithm to bucket continuous feature values into discrete bins, significantly reducing the computation cost and memory usage [31]. This approach speeds up the training process and makes the algorithm more efficient.
- Support for categorical features: LightGBM natively supports categorical features and can handle them without needing extensive preprocessing [32]. This feature allows the model to leverage categorical data effectively, which is particularly useful in fraud detection where categorical features are common.
- Efficient handling of large datasets: LightGBM is designed to be highly efficient with large datasets. It supports parallel and distributed learning, enabling it to scale and process massive amounts of data quickly [33].
- Regularization techniques: LightGBM includes multiple regularization techniques to prevent overfitting. These techniques help the model generalize better to unseen data, which is crucial in fraud detection where the data is often imbalanced [34].
Mathematical Foundation
- Gradient boosting framework: LightGBM is based on the gradient boosting framework, which builds models in a sequential manner, each new model correcting the errors made by the previous ones. The objective function of LightGBM can be formulated as $\mathcal{L} = \sum_{i=1}^{n} l(y_i, \hat{y}_i) + \sum_{k=1}^{K} \Omega(f_k)$, where $l$ is the loss between the true label $y_i$ and the prediction $\hat{y}_i$, and $\Omega(f_k)$ is a regularization term penalizing the complexity of the $k$-th tree.
- Leaf-wise growth strategy: The leaf-wise growth strategy finds the leaf whose best split gives the maximum loss reduction and splits it, $\text{leaf}^{*} = \arg\max_{j \in \text{leaves}} \Delta\mathcal{L}_j$, where $\Delta\mathcal{L}_j$ is the loss reduction achieved by the best split of leaf $j$. This leads to deeper trees and lower loss than level-wise growth, which splits every leaf at the current depth.
- Histogram-based decision rules: LightGBM builds histograms for each feature and uses these histograms to find the optimal split points. This method reduces the computation cost and speeds up the training process. The histogram-based algorithm can be described as follows:
- Bucket continuous values: Continuous feature values are bucketed into discrete bins.
- Build histograms: Histograms are built for each feature based on the binned values.
- Find optimal split: The optimal split point is determined by evaluating the histograms.
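To make these three steps concrete, below is a simplified, self-contained NumPy sketch of a histogram-based split search for a single feature. The gain expression is the standard GBDT split criterion; the bin count, regularization constant, and data are illustrative assumptions, not LightGBM's actual internals.

```python
# Didactic sketch of histogram-based split finding for one feature.
import numpy as np

def best_histogram_split(x, grad, hess, n_bins=32, lam=1.0):
    # 1. Bucket continuous values into discrete bins (quantile edges).
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, x, side="right") - 1,
                   0, n_bins - 1)
    # 2. Build histograms: gradient and hessian sums per bin.
    G = np.bincount(bins, weights=grad, minlength=n_bins)
    H = np.bincount(bins, weights=hess, minlength=n_bins)
    # 3. Scan bin boundaries and evaluate the split gain at each one.
    GL, HL = np.cumsum(G)[:-1], np.cumsum(H)[:-1]
    GR, HR = G.sum() - GL, H.sum() - HL
    gain = (GL**2 / (HL + lam) + GR**2 / (HR + lam)
            - G.sum()**2 / (H.sum() + lam))
    k = int(np.argmax(gain))
    return edges[k + 1], gain[k]   # split threshold and its gain

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
grad = np.where(x > 0.5, -1.0, 1.0) + rng.normal(scale=0.1, size=1000)
threshold, gain = best_histogram_split(x, grad, np.ones(1000))
print(f"best split at x <= {threshold:.3f}, gain = {gain:.1f}")
```

Because split candidates are bin boundaries rather than every distinct feature value, the scan touches at most `n_bins - 1` candidates per feature, which is where the speed and memory savings come from.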
Implementation and Application
- Training the LightGBM model: The training process of LightGBM involves constructing multiple decision trees sequentially. Each tree is trained to minimize the error of the previous trees using gradient descent. The hyperparameters, such as the number of leaves, learning rate, and regularization parameters, are tuned to optimize the model’s performance.
- Application in fraud detection: In the context of fraud detection, LightGBM is used to classify transactions as either legitimate or fraudulent. The model is trained on historical transaction data, utilizing features related to user behavior, transaction details, and card information. LightGBM’s ability to handle large datasets and complex feature interactions makes it particularly suitable for this task.
- Advantages and challenges: LightGBM offers several advantages, including faster training speed, higher efficiency, and better handling of large datasets compared to other gradient boosting algorithms. However, it also has challenges, such as sensitivity to hyperparameter tuning and potential overfitting if not properly regularized.
- Evaluation metrics: The performance of the LightGBM model is evaluated using metrics such as accuracy, precision, recall, and the area under the receiver operating characteristic curve (AUC-ROC). These metrics provide insights into the model’s ability to correctly identify fraudulent transactions while minimizing false positives and false negatives.
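As a minimal sketch of the training and evaluation process described above, the snippet below fits a LightGBM classifier on synthetic imbalanced data. The hyperparameter values shown (number of leaves, learning rate, regularization strengths) are illustrative assumptions, not the tuned configuration from this study.

```python
# A minimal LightGBM training-and-evaluation sketch for an imbalanced
# binary problem, standing in for fraud detection.
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, classification_report

X, y = make_classification(n_samples=50000, n_features=30,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

model = lgb.LGBMClassifier(
    num_leaves=63,       # leaf-wise growth: caps tree complexity
    learning_rate=0.05,
    n_estimators=500,
    reg_alpha=0.1,       # L1 regularization
    reg_lambda=0.1,      # L2 regularization
    is_unbalance=True,   # reweight classes for the skewed fraud label
)
model.fit(X_train, y_train,
          eval_set=[(X_test, y_test)], eval_metric="auc",
          callbacks=[lgb.early_stopping(50)])

proba = model.predict_proba(X_test)[:, 1]
print("AUC-ROC:", roc_auc_score(y_test, proba))
print(classification_report(y_test, model.predict(X_test)))
```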
4. Discussion
- Comparison of models: This section provides a detailed comparison of the two machine learning models used in the research, support vector machines (SVMs) and the light gradient boosting machine (LightGBM). The comparison focuses on various performance metrics, including accuracy, precision, recall, and the area under the receiver operating characteristic curve (AUC-ROC). Additionally, we will discuss the computational efficiency and scalability of each model.
- Limitations and modifications: In this section, we address the limitations encountered during the research. These limitations may pertain to data quality, model performance, or implementation challenges. We will also discuss any modifications made to the models or methodologies to overcome these limitations and enhance their effectiveness.
- Relevance to real-world applications: Understanding the performance and limitations of the models is crucial for their application in real-world fraud detection systems. By analyzing the results and identifying areas for improvement, we can provide recommendations for future research and practical implementations. This discussion aims to bridge the gap between theoretical research and practical applications, ensuring that the models developed are both robust and effective in detecting fraudulent transactions.
4.1. Comparison of Models
4.1.1. Support Vector Machines
- Training phase: The linear SVM model was trained on the dataset generated from the data template. A hyperparameter search was conducted to find the configuration best suited to the use case.
- Generation of a new feature set: Based on the results of the first training round, a new dataset was generated to improve its quality.
4.1.2. Light Gradient Boosting Machine
4.2. Limitations and Modifications
- Limitations:
- Data quality and availability: The quality and availability of data significantly impact the performance of machine learning models. In this research, obtaining a coherent dataset with sufficient records for training posed a challenge. The use of synthetic data, although necessary, may not fully capture the complexities and variations of real-world transaction data.
  - Impact: The reliance on synthetic data could lead to models that perform well in controlled environments but may struggle with the unpredictability and diversity of real-world data.
- Imbalanced data: Fraud detection datasets are typically highly imbalanced, with a small proportion of fraudulent transactions compared to legitimate ones. This imbalance can affect the model’s ability to learn effectively and may result in a higher rate of false positives or false negatives.
  - Impact: The imbalance in the dataset may lead to biased models that are less sensitive to detecting fraud, potentially missing fraudulent transactions or misclassifying legitimate ones.
- Model complexity and interpretability: While complex models like LightGBM provide high accuracy and performance, they can be challenging to interpret. Understanding how the model makes decisions is crucial for gaining trust and ensuring compliance with regulatory requirements.
  - Impact: The lack of interpretability can hinder the adoption of the model by stakeholders who require clear explanations of the decision-making process.
- Computational resources: Training and deploying machine learning models, especially for large datasets, require significant computational resources. This can be a constraint for small and medium-sized enterprises (SMEs) with limited access to high-performance computing infrastructure.
  - Impact: Limited computational resources may restrict the ability of SMEs to fully leverage the benefits of advanced machine learning models for fraud detection.
- Overfitting: Overfitting occurs when a model learns the noise in the training data rather than the underlying patterns. This can lead to poor generalization to new, unseen data.
  - Impact: Overfitting reduces the model’s effectiveness in real-world applications, where it needs to generalize well across diverse and unseen transaction data.
- Modifications:
- Enhancing data quality: To address the limitations related to data quality and availability, future research could focus on obtaining more diverse and representative datasets. Collaborations with financial institutions and merchants can provide access to real-world transaction data, enhancing the robustness of the model.
  - Modification: Incorporate real-world transaction data and conduct extensive data cleaning and preprocessing to improve data quality.
- Addressing data imbalance: Techniques such as oversampling, undersampling, and synthetic data generation (e.g., SMOTE) can be employed to address data imbalance. Additionally, cost-sensitive learning approaches can be used to assign higher penalties to misclassified fraudulent transactions.
  - Modification: Implement advanced techniques to balance the dataset and improve the model’s sensitivity to fraud detection (a SMOTE example is sketched after this list).
- Improving model interpretability: To enhance interpretability, techniques such as SHAP (SHapley Additive exPlanations) values and LIME (local interpretable model-agnostic explanations) can be used. These methods provide insights into how the model makes decisions, making it easier to understand and trust the model’s predictions.
  - Modification: Integrate interpretability techniques into the model to provide clear explanations for decision-making processes (a SHAP example is sketched after this list).
- Optimizing computational efficiency: Efforts can be made to optimize the model’s computational efficiency by exploring techniques such as model pruning, quantization, and hardware acceleration. This can reduce the computational burden and make the model more accessible to SMEs.
  - Modification: Implement optimization techniques to reduce computational requirements and improve the model’s scalability.
- Regularization and cross-validation: Regularization techniques such as L1 and L2 regularization can help mitigate overfitting. Additionally, cross-validation methods can be employed to ensure the model generalizes well across different subsets of the data.
  - Modification: Apply regularization techniques and robust cross-validation strategies to enhance the model’s generalization capabilities.
5. Conclusions
- Key findings:
- Superiority of LightGBM: Among the models evaluated, LightGBM outperformed SVM in several aspects, including accuracy, computational efficiency, and scalability. LightGBM’s ability to handle large datasets and complex feature interactions makes it a more practical choice for real-time fraud detection systems.
- Importance of feature engineering: The success of the predictive models heavily relied on the comprehensive feature engineering process. By carefully selecting and transforming features, the models were able to capture significant patterns and anomalies related to fraudulent transactions.
- Handling data imbalance: The research highlighted the challenges associated with imbalanced datasets in fraud detection. Techniques such as data balancing and cost-sensitive learning were essential in improving the model’s sensitivity to fraudulent transactions.
- Model interpretability: While complex models like LightGBM offer high performance, ensuring their interpretability remains crucial. Techniques such as SHAP values and LIME can provide transparency into the model’s decision-making process, fostering trust and compliance with regulatory standards.
- Implications:
- Practical application: The findings from this research can be directly applied to develop and implement robust fraud detection systems for the airline industry and other sectors prone to payment fraud. The models and methodologies presented can enhance the security and reliability of online transactions.
- Enhanced fraud prevention: By adopting advanced machine learning techniques, financial institutions and merchants can significantly reduce the incidence of fraudulent transactions. This not only protects revenue but also enhances customer trust and satisfaction.
- Future research directions: The research opens several avenues for future studies, including exploring more sophisticated models, integrating real-world transaction data, and improving the scalability and interpretability of fraud detection systems.
- Recommendations:
- Collaboration with industry: Future research should seek collaboration with financial institutions and merchants to access real-world transaction data. This will enhance the robustness and applicability of the models developed.
- Continuous improvement: Fraud detection systems must continuously evolve to keep pace with the ever-changing tactics of fraudsters. Regular updates and improvements to the models and feature sets are essential for maintaining their effectiveness.
- Focus on interpretability: Ensuring the interpretability of machine learning models is critical for gaining stakeholder trust and meeting regulatory requirements. Future research should prioritize developing models that are both highly accurate and transparent.
- Scalability and efficiency: Efforts should be made to optimize the computational efficiency and scalability of fraud detection systems. This includes exploring techniques like model pruning, hardware acceleration, and efficient data processing methods.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Mastercard. Blog. Ecommerce Fraud Trends and Statistics Merchants Need to Know in 2024. Available online: https://b2b.mastercard.com/news-and-insights/blog/ecommerce-fraud-trends-and-statistics-merchants-need-to-know-in-2024/ (accessed on 15 July 2024).
- Vogels, E.A.; Rainie, L.; Anderson, J. Tech Is (Just) a Tool; Pew Research Center: Washington, DC, USA, 2020. [Google Scholar]
- McKinsey. The Future of the Payments Industry. Available online: https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/the-future-of-the-payments-industry-how-managing-risk-can-drive-growth (accessed on 15 July 2024).
- EY. How the Rise of Paytech is Reshaping the Payments Landscape. Available online: https://www.ey.com/en_gl/insights/payments/how-the-rise-of-paytech-is-reshaping-the-payments-landscape (accessed on 17 July 2024).
- ECB. European Central Bank Payment Statistics. Available online: https://www.ecb.europa.eu/press/stats/paysec/html/ecb.pis2023~b28d791ed8.en.html (accessed on 1 July 2024).
- Swatch. SwatchPay. Available online: https://www.swatch.com/en-en/swatch-pay/how-it-works.html (accessed on 27 June 2024).
- ECB. Digital Euro. Available online: https://www.ecb.europa.eu/euro/digital_euro/progress/html/index.en.html (accessed on 27 June 2024).
- European Parliament. Instant Payments in Europe. Available online: https://www.europarl.europa.eu/news/en/press-room/20231031IPR08706/agreement-reached-on-more-accessible-instant-payments-in-euros (accessed on 17 July 2024).
- European Commission. PSD2. Available online: https://eur-lex.europa.eu/legal-content/ES/TXT/PDF/?uri=CELEX:32020R2011&from=DE (accessed on 5 July 2024).
- European Commission. PSD3. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52023PC0367&qid=1716633078125 (accessed on 27 June 2024).
- European Commission. Second Revision of Payment Services in the EU. Available online: https://www.europarl.europa.eu/RegData/etudes/BRIE/2024/753199/EPRS_BRI(2024)753199_EN.pdf (accessed on 5 July 2024).
- Dionach. Payment Processing Vulnerabilities. Available online: https://www.dionach.com/payment-processing-vulnerabilities/#:~:text=An%20example%20of%20this%20is,and%20tamper%20with%20the%20price (accessed on 15 July 2024).
- Stripe. Card Issuers and Networks (Emisores y Redes de Tarjetas)—Stripe. Available online: https://stripe.com/es/resources/more/issuing-banks#:~:text=Los%20emisores%20son%20entidades%20financieras,necesario%20para%20cubrir%20el%20pago (accessed on 1 July 2024).
- Cybersource. Fraud Report 2023. Available online: https://www.cybersource.com/content/dam/documents/campaign/fraud-report/global-fraud-report-2023-en.pdf (accessed on 8 July 2024).
- Mastercard. Mastercard Targets Friendly Fraud. Available online: https://www.mastercard.com/news/press/2023/october/mastercard-targets-friendly-fraud-to-protect-small-businesses-and-merchants/ (accessed on 5 July 2024).
- Xie, Y.; Liu, G.; Yan, G.; Jiang, C.; Zhou, M.; Li, M. Learning Transactional Behavioral Representations for Credit Card Fraud Detection. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 5735–5748. [Google Scholar] [CrossRef]
- Xie, Y.; Liu, G.; Zhou, M.; Wei, L.; Zhu, H.; Zhou, R. A Spatial–Temporal Gated Network for Credit Card Fraud Detection by Learning Transactional Representations. IEEE Trans. Autom. Sci. Eng. 2024, 21, 6978–6991. [Google Scholar] [CrossRef]
- Vapnik, V. The Nature of Statistical Learning Theory; Springer: New York, NY, USA, 1995; ISBN 0-387-94559-8. [Google Scholar]
- Bousquet, O.; Boucheron, S.; Lugosi, G. Introduction to Statistical Learning Theory. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
- Evgeniou, T.; Pontil, M. Statistical Learning Theory: A Primer. Int. J. Comput. Vis. 2000, 38, 9–13. [Google Scholar] [CrossRef]
- Cristianini, N.; Shawe-Taylor, J. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
- Mitchell, T. Machine Learning. In Computer Science Series; McGraw-Hill: Columbus, OH, USA, 1997. [Google Scholar]
- Zxr.nju. What Is the Kernel Trick? Why Is it Important? Medium. 2023. Available online: https://medium.com/@zxr.nju/what-is-the-kernel-trick-why-is-it-important-98a98db0961d (accessed on 28 June 2024).
- Lewis, J. Tutorial on SVM; CGIT Lab, University of Southern California: Los Angeles, CA, USA, 2004. [Google Scholar]
- Burges, C. A tutorial on support vector machines for pattern recognition. In Data Mining and Knowledge Discovery; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1998. [Google Scholar]
- Osuna, E.; Freund, R.; Girosi, F. Support Vector Machines: Training and Applications; Artificial Intelligence Laboratory MIT: Cambridge, MA, USA, 1997. [Google Scholar]
- Veropoulos, K.; Cristianini, N.; Campbell, C. The Application of Support Vector Machines to Medical Decision Support: A Case Study. Adv. Course Artif. Intell. 1999, 1–6. [Google Scholar]
- Stitson, M.O.; Weston, J.A. Implementational Issues of Support Vector Machines; Technical Report CSD-TR-96-18; Computational Intelligence Group; Royal Holloway; University of London: London, UK, 1996. [Google Scholar]
- Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Liu, T.Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. Adv. Neural Inf. Process. Syst. 2017, 30, 52. [Google Scholar]
- Nakamura, S.; Jiwoong, W.; Huiqi, D.; Iwase, M. Light-GBM based signal correction method for surface myoelectropotential measured by multi-channel band-type EMG sensor. IFAC-PapersOnLine 2023, 56, 3558–3565. [Google Scholar] [CrossRef]
- Barrios Arce, J.I. Light GBM vs XGBoost: Which Is the Better Algorithm? (¿Cuál es Mejor Algoritmo?) Health Big Data. 2022. Available online: https://www.juanbarrios.com/light-gbm-vs-xgboost-cual-es-mejor-algoritmo/ (accessed on 29 June 2024).
- Hanif, M.F.; Naveed, M.S.; Metwaly, M.; Si, J.; Liu, X.; Mi, J. Advancing solar energy forecasting with modified ANN and light GBM learning algorithms. AIMS Energy 2024, 12, 350–386. [Google Scholar] [CrossRef]
- Amin, M.; Salami, B.; Zahid, M.; Iqbal, M.; Khan, K.; Abu-Arab, A.; Alabdullah, A.; Jalal, F. Investigating the Bond Strength of FRP Laminates with Concrete Using LIGHT GBM and SHAPASH Analysis. Polymers 2022, 14, 4717. [Google Scholar] [CrossRef] [PubMed]
- Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T. LightGBM: A Highly Efficient Gradient Boosting. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
| Metric | Iteration X (Class 0) | Iteration X (Class 1) | Iteration Y (Class 0) | Iteration Y (Class 1) |
|---|---|---|---|---|
| Precision | 0.78 | 0.10 | 0.92 | 0.13 |
| Recall | 0.50 | 0.55 | 0.54 | 0.60 |
| F1 Score | 0.68 | 0.22 | 0.68 | 0.22 |
| Sample Size | 536,026 | 63,974 | 536,026 | 63,974 |
| Metric | Iteration X (Class 0) | Iteration X (Class 1) | Iteration Y (Class 0) | Iteration Y (Class 1) |
|---|---|---|---|---|
| Precision | 0.89 | 0.20 | 0.94 | 0.14 |
| Recall | 1.00 | 0.00 | 0.42 | 0.77 |
| F1 Score | 0.94 | 0.00 | 0.58 | 0.23 |
| Sample Size | 536,026 | 63,974 | 536,026 | 63,974 |