CODE: A Moving-Window-Based Framework for Detecting Concept Drift in Software Defect Prediction
Abstract
1. Introduction
- (RQ1) How drift-prone are the chronological-defect datasets?
- (RQ2) Does class rebalancing eliminate CD from the chronological-defect datasets, and does it impact the performance of CVDP?
2. Theoretical Background
2.1. Concept of Dataset Shift
2.2. Data-Drift Detection Methods
3. CODE: Concept-Drift Detection Framework
4. Methodology
4.1. Studied Datasets
4.2. Apply Class-Rebalancing Approaches
4.3. Construct Cross-Version Defect Models
4.4. Calculate Model Performance
4.5. Statistical Analysis of the Experiment
4.5.1. Statistical Comparison
4.5.2. Effect-Size Computation
4.6. Experimental Setup
5. Results
- By balancing the datasets, the ROSE technique eliminates CD by up to 31%. Overall, the added benefit of class-rebalancing techniques is noticeable and reduces CD in the chronological-defect datasets.
- The experimental results allow us to guide practitioners in maintaining the high prediction performance of CVDP models over time by eliminating CD.
- The impact of the SMOTE and undersampling techniques on CVDP models is statistically and practically significant with respect to the considered performance metrics (a minimal rebalancing sketch follows this list).
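To make the rebalancing step concrete, here is a minimal sketch, assuming each release is a CSV file of numeric metric columns plus a BUG fault-count column: the older release is rebalanced with SMOTE or random undersampling (via imbalanced-learn) before a model is fit and evaluated on the next release. The file paths, label handling, and random-forest learner are illustrative assumptions, not the authors' released pipeline.

```python
# Minimal sketch (illustrative, not the authors' code): rebalance the older
# release only, then evaluate a cross-version defect model on the next release.
import pandas as pd
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier

def cvdp_with_rebalancing(train_csv, test_csv, strategy="smote"):
    train, test = pd.read_csv(train_csv), pd.read_csv(test_csv)
    # Assumed layout: numeric metric columns plus a BUG column with fault counts.
    X_tr, y_tr = train.drop(columns=["BUG"]), (train["BUG"] > 0).astype(int)
    X_te, y_te = test.drop(columns=["BUG"]), (test["BUG"] > 0).astype(int)

    # Rebalance only the training release; the newer (test) release is untouched.
    sampler = SMOTE(random_state=0) if strategy == "smote" else RandomUnderSampler(random_state=0)
    X_bal, y_bal = sampler.fit_resample(X_tr, y_tr)

    model = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
    return y_te, model.predict(X_te)

# Example: train on ant-1.6, evaluate on ant-1.7 (hypothetical file names).
# y_true, y_pred = cvdp_with_rebalancing("ant-1.6.csv", "ant-1.7.csv", strategy="undersample")
```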
6. Discussion
7. Threats to Validity
7.1. Construct and Internal Validity
7.2. External Validity
8. Related Work
8.1. Cross-Version Defect Prediction
8.2. CVDP Considering Distribution Differences
9. Conclusions
- When using the most commonly used classifiers from the defect-prediction literature, up to 50% of the chronological-defect datasets exhibit drift-prone behavior.
- The class-rebalancing procedures eliminate the CD from the datasets by up to 31% and also enhance the prediction performance of CVDP models.
- The class-rebalancing techniques yield statistically and practically significant performance improvements in terms of recall and Gmean. Additionally, the models achieve larger performance gains when employing the SMOTE and undersampling techniques.
- The class-rebalancing techniques are beneficial when practitioners wish to improve the ability to correctly classify the cross-version (CV) defective modules.
- We suggest including class-rebalancing techniques in the drift-elimination process if practitioners wish to alleviate CD in chronological-defect datasets.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Gangwar, A.K.; Kumar, S.; Mishra, A. A Paired Learner-Based Approach for Concept Drift Detection and Adaptation in Software Defect Prediction. Appl. Sci. 2021, 11, 6663. [Google Scholar] [CrossRef]
- Malialis, K.; Panayiotou, C.G.; Polycarpou, M.M. Nonstationary data stream classification with online active learning and siamese neural networks. Neurocomputing 2022, 512, 235–252. [Google Scholar] [CrossRef]
- Pandit, M.; Gupta, D.; Anand, D.; Goyal, N.; Aljahdali, H.M.; Mansilla, A.O.; Kadry, S.; Kumar, A. Towards Design and Feasibility Analysis of DePaaS: AI Based Global Unified Software Defect Prediction Framework. Appl. Sci. 2022, 12, 493. [Google Scholar] [CrossRef]
- Pachouly, J.; Ahirrao, S.; Kotecha, K.; Selvachandran, G.; Abraham, A. A systematic literature review on software defect prediction using artificial intelligence: Datasets, Data Validation Methods, Approaches, and Tools. Eng. Appl. Artif. Intell. 2022, 111, 104773. [Google Scholar] [CrossRef]
- Alazba, A.; Aljamaan, H. Software Defect Prediction Using Stacking Generalization of Optimized Tree-Based Ensembles. Appl. Sci. 2022, 12, 4577. [Google Scholar] [CrossRef]
- Zhao, Y.; Zhu, Y.; Yu, Q.; Chen, X. Cross-Project Defect Prediction Considering Multiple Data Distribution Simultaneously. Symmetry 2022, 14, 401. [Google Scholar] [CrossRef]
- Jorayeva, M.; Akbulut, A.; Catal, C.; Mishra, A. Deep Learning-Based Defect Prediction for Mobile Applications. Sensors 2022, 22, 4734. [Google Scholar] [CrossRef]
- Pan, C.; Lu, M.; Xu, B.; Gao, H. An Improved CNN Model for Within-Project Software Defect Prediction. Appl. Sci. 2019, 9, 2138. [Google Scholar] [CrossRef] [Green Version]
- Kabir, M.A.; Keung, J.; Turhan, B.; Bennin, K.E. Inter-release defect prediction with feature selection using temporal chunk-based learning: An empirical study. Appl. Soft Comput. 2021, 113, 107870. [Google Scholar] [CrossRef]
- Luo, H.; Dai, H.; Peng, W.; Hu, W.; Li, F. An Empirical Study of Training Data Selection Methods for Ranking-Oriented Cross-Project Defect Prediction. Sensors 2021, 21, 7535. [Google Scholar] [CrossRef]
- Hosseini, S.; Turhan, B.; Gunarathna, D. A Systematic Literature Review and Meta-Analysis on Cross Project Defect Prediction. IEEE Trans. Softw. Eng. 2019, 45, 111–147. [Google Scholar] [CrossRef] [Green Version]
- Porto, F.; Minku, L.; Mendes, E.; Simao, A. A systematic study of cross-project defect prediction with meta-learning. arXiv 2018, arXiv:1802.06025. [Google Scholar]
- Lokan, C.; Mendes, E. Investigating the use of moving windows to improve software effort prediction: A replicated study. Empir. Softw. Eng. 2017, 22, 716–767. [Google Scholar] [CrossRef]
- Minku, L.; Yao, X. Which models of the past are relevant to the present? A software effort estimation approach to exploiting useful past models. Autom. Softw. Eng. 2017, 24, 499–542. [Google Scholar] [CrossRef] [Green Version]
- Shukla, S.; Radhakrishnan, T.; Muthukumaran, K.; Neti, L.B.M. Multi-objective cross-version defect prediction. Soft Comput. 2018, 22, 1959–1980. [Google Scholar] [CrossRef]
- Bennin, K.E.; Keung, J.W.; Monden, A. On the relative value of data resampling approaches for software defect prediction. Empir. Softw. Eng. 2019, 24, 602–636. [Google Scholar] [CrossRef]
- Tantithamthavorn, C.; Hassan, A.E.; Matsumoto, K. The Impact of Class Rebalancing Techniques on the Performance and Interpretation of Defect Prediction Models. IEEE Trans. Softw. Eng. 2020, 46, 1200–1219. [Google Scholar] [CrossRef] [Green Version]
- Mahdi, O.A.; Pardede, E.; Ali, N.; Cao, J. Fast Reaction to Sudden Concept Drift in the Absence of Class Labels. Appl. Sci. 2020, 10, 606. [Google Scholar] [CrossRef] [Green Version]
- Ditzler, G.; Roveri, M.; Alippi, C.; Polikar, R. Learning in Nonstationary Environments: A Survey. IEEE Comput. Intell. Mag. 2015, 10, 12–25. [Google Scholar] [CrossRef]
- de Lima Cabral, D.R.; de Barros, R.S.M. Concept drift detection based on Fisher’s Exact test. Inf. Sci. 2018, 442–443, 220–234. [Google Scholar] [CrossRef]
- Webb, G.I.; Hyde, R.; Cao, H.; Nguyen, H.L.; Petitjean, F. Characterizing concept drift. Data Min. Knowl. Discov. 2016, 30, 964–994. [Google Scholar] [CrossRef]
- Lu, J.; Liu, A.; Dong, F.; Gu, F.; Gama, J.; Zhang, G. Learning under Concept Drift: A Review. IEEE Trans. Knowl. Data Eng. 2019, 31, 2346–2363. [Google Scholar] [CrossRef] [Green Version]
- Dong, F.; Lu, J.; Li, K.; Zhang, G. Concept drift region identification via competence-based discrepancy distribution estimation. In Proceedings of the 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), Nanjing, China, 24–26 November 2017; pp. 1–7. [Google Scholar] [CrossRef] [Green Version]
- Rehman, A.U.; Belhaouari, S.B.; Ijaz, M.; Bermak, A.; Hamdi, M. Multi-Classifier Tree With Transient Features for Drift Compensation in Electronic Nose. IEEE Sens. J. 2021, 21, 6564–6574. [Google Scholar] [CrossRef]
- Baena-García, M.; del Campo-Ávila, J.; Fidalgo, R.; Bifet, A.; Gavalda, R.; Morales-Bueno, R. Early drift detection method. In Proceedings of the Fourth International Workshop on Knowledge Discovery from Data Streams, Xi’an, China, 14–16 August 2006; Volume 6, pp. 77–86. [Google Scholar]
- Bifet, A.; Gavalda, R. Learning from time-changing data with adaptive windowing. In Proceedings of the 2007 SIAM International Conference on Data Mining, Minneapolis, MN, USA, 26–28 April 2007; pp. 443–448. [Google Scholar]
- Pesaranghader, A.; Viktor, H.L. Fast Hoeffding Drift Detection Method for Evolving Data Streams. In Machine Learning and Knowledge Discovery in Databases; Frasconi, P., Landwehr, N., Manco, G., Vreeken, J., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 96–111. [Google Scholar]
- Gama, J.; Zliobaite, I.; Bifet, A.; Pechenizkiy, M.; Bouchachia, A. A Survey on Concept Drift Adaptation. ACM Comput. Surv. 2014, 46, 1–37. [Google Scholar] [CrossRef]
- Klinkenberg, R.; Joachims, T. Detecting Concept Drift with Support Vector Machines. In Proceedings of the Seventeenth International Conference on Machine Learning, San Francisco, CA, USA, 29 June–2 July 2000; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2000; pp. 487–494. [Google Scholar]
- Lokan, C.; Mendes, E. Investigating the use of duration-based moving windows to improve software effort prediction: A replicated study. Inf. Softw. Technol. 2014, 56, 1063–1075. [Google Scholar] [CrossRef] [Green Version]
- Amasaki, S. On Applicability of Cross-Project Defect Prediction Method for Multi-Versions Projects. In Proceedings of the 13th International Conference on Predictive Models and Data Analytics in Software Engineering, Toronto, ON, Canada, 8 November 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 93–96. [Google Scholar] [CrossRef]
- Amasaki, S. Cross-Version Defect Prediction Using Cross-Project Defect Prediction Approaches: Does It Work? In Proceedings of the 14th International Conference on Predictive Models and Data Analytics in Software Engineering, Oulu, Finland, 10 October 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 32–41. [Google Scholar] [CrossRef]
- Amasaki, S. Cross-version defect prediction: Use historical data, cross-project data, or both? Empir. Softw. Eng. 2020, 25, 1573–1595. [Google Scholar] [CrossRef]
- Lyu, Y.; Li, H.; Sayagh, M.; Jiang, Z.M.J.; Hassan, A.E. An Empirical Study of the Impact of Data Splitting Decisions on the Performance of AIOps Solutions. ACM Trans. Softw. Eng. Methodol. 2021, 30, 1–38. [Google Scholar] [CrossRef]
- Madeyski, L.; Jureczko, M. Which process metrics can significantly improve defect prediction models? An empirical study. Softw. Qual. J. 2015, 23, 393–422. [Google Scholar] [CrossRef]
- Xu, Z.; Li, S.; Luo, X.; Liu, J.; Zhang, T.; Tang, Y.; Xu, J.; Yuan, P.; Keung, J. TSTSS: A two-stage training subset selection framework for cross version defect prediction. J. Syst. Softw. 2019, 154, 59–78. [Google Scholar] [CrossRef]
- Xu, Z.; Liu, J.; Luo, X.; Zhang, T. Cross-version defect prediction via hybrid active learning with kernel principal component analysis. In Proceedings of the 2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER), Campobasso, Italy, 20–23 March 2018; pp. 209–220. [Google Scholar] [CrossRef]
- Xu, Z.; Li, S.; Tang, Y.; Luo, X.; Zhang, T.; Liu, J.; Xu, J. Cross Version Defect Prediction with Representative Data via Sparse Subset Selection. In Proceedings of the 26th Conference on Program Comprehension, Gothenburg, Sweden, 27–28 May 2018; ACM: New York, NY, USA, 2018; pp. 132–143. [Google Scholar] [CrossRef]
- Kabir, M.A.; Keung, J.W.; Bennin, K.E.; Zhang, M. Assessing the Significant Impact of Concept Drift in Software Defect Prediction. In Proceedings of the 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC), Milwaukee, WI, USA, 15–19 July 2019; Volume 1, pp. 53–58. [Google Scholar] [CrossRef]
- The SEACRAFT Repository of Empirical Software Engineering Data. 2017. Available online: https://zenodo.org/communities/seacraft (accessed on 1 January 2022).
- The Promise Repository of Empirical Software Engineering Data. 2005. Available online: http://promise.site.uottawa.ca/SERepository (accessed on 1 January 2022).
- Bangash, A.A.; Sahar, H.; Hindle, A.; Ali, K. On the time-based conclusion stability of cross-project defect prediction models. Empir. Softw. Eng. Int. J. 2020, 25, 5047–5083. [Google Scholar] [CrossRef]
- Feng, S.; Keung, J.; Yu, X.; Xiao, Y.; Bennin, K.E.; Kabir, M.A.; Zhang, M. COSTE: Complexity-based OverSampling TEchnique to alleviate the class imbalance problem in software defect prediction. Inf. Softw. Technol. 2021, 129, 106432. [Google Scholar] [CrossRef]
- Kuhn, M.; Wing, J.; Weston, S.; Williams, A.; Keefer, C.; Engelhardt, A.; Cooper, T.; Mayer, Z.; Kenkel, B.; Team, R.C.; et al. Package ‘caret’. R J. 2020. Available online: http://free-cd.stat.unipd.it/web/packages/caret/caret.pdf (accessed on 1 January 2022).
- Torgo, L.; Torgo, M.L. Package ‘DMwR’; Comprehensive R Archive Network: Vienna, Austria, 2013. [Google Scholar]
- Bennin, K.E.; Keung, J.; Phannachitta, P.; Monden, A.; Mensah, S. MAHAKIL: Diversity Based Oversampling Approach to Alleviate the Class Imbalance Issue in Software Defect Prediction. IEEE Trans. Softw. Eng. 2018, 44, 534–550. [Google Scholar] [CrossRef]
- Menzies, T.; Greenwald, J.; Frank, A. Data Mining Static Code Attributes to Learn Defect Predictors. IEEE Trans. Softw. Eng. 2007, 33, 2–13. [Google Scholar] [CrossRef]
- He, Z.; Shu, F.; Yang, Y.; Li, M.; Wang, Q. An investigation on the feasibility of cross-project defect prediction. Autom. Softw. Eng. 2012, 19, 167–199. [Google Scholar] [CrossRef]
- Menzies, T.; Dekhtyar, A.; Distefano, J.; Greenwald, J. Problems with Precision: A Response to “Comments on ’Data Mining Static Code Attributes to Learn Defect Predictors”. IEEE Trans. Softw. Eng. 2007, 33, 637–640. [Google Scholar] [CrossRef]
- Kocaguneli, E.; Menzies, T.; Keung, J.; Cok, D.; Madachy, R. Active learning and effort estimation: Finding the essential content of software effort estimation data. IEEE Trans. Softw. Eng. 2013, 39, 1040–1053. [Google Scholar] [CrossRef]
- Kitchenham, B.; Madeyski, L.; Budgen, D.; Keung, J.; Brereton, P.; Charters, S.; Gibbs, S.; Pohthong, A. Robust Statistical Methods for Empirical Software Engineering. Empir. Softw. Eng. 2017, 22, 579–630. [Google Scholar] [CrossRef] [Green Version]
- Romano, J.; Kromrey, J.D.; Coraggio, J.; Skowronek, J. Appropriate statistics for ordinal level data: Should we really be using t-test and Cohen’s d for evaluating group differences on the NSSE and other surveys. In Proceedings of the Annual Meeting of the Florida Association of Institutional Research, Cocoa Beach, FL, USA, 1–3 February 2006; pp. 1–33. [Google Scholar]
- Gama, J. Knowledge Discovery from Data Streams; CRC Press: Boca Raton, FL, USA, 2010. [Google Scholar]
- Kabir, M.A.; Keung, J.W.; Bennin, K.E.; Zhang, M. A Drift Propensity Detection Technique to Improve the Performance for Cross-Version Software Defect Prediction. In Proceedings of the 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC), Madrid, Spain, 13–17 July 2020; pp. 882–891. [Google Scholar] [CrossRef]
- Turhan, B. On the dataset shift problem in software engineering prediction models. Empir. Softw. Eng. 2012, 17, 62–74. [Google Scholar] [CrossRef]
- Haug, J.; Kasneci, G. Learning Parameter Distributions to Detect Concept Drift in Data Streams. arXiv 2020, arXiv:2010.09388. [Google Scholar]
- Lin, Q.; Hsieh, K.; Dang, Y.; Zhang, H.; Sui, K.; Xu, Y.; Lou, J.G.; Li, C.; Wu, Y.; Yao, R.; et al. Predicting node failure in cloud service systems. In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Lake Buena Vista, FL, USA, 4–9 November 2018; pp. 480–490. [Google Scholar]
- Bennin, K.E.; Toda, K.; Kamei, Y.; Keung, J.; Monden, A.; Ubayashi, N. Empirical Evaluation of Cross-Release Effort-Aware Defect Prediction Models. In Proceedings of the 2016 IEEE International Conference on Software Quality, Reliability and Security (QRS), Vienna, Austria, 1–3 August 2016; pp. 214–221. [Google Scholar] [CrossRef]
- Yang, X.; Wen, W. Ridge and Lasso Regression Models for Cross-Version Defect Prediction. IEEE Trans. Reliab. 2018, 67, 885–896. [Google Scholar] [CrossRef]
- Fan, Y.; Xia, X.; Alencar da Costa, D.; Lo, D.; Hassan, A.E.; Li, S. The Impact of Changes Mislabeled by SZZ on Just-in-Time Defect Prediction. IEEE Trans. Softw. Eng. 2019, 47, 1559–1586. [Google Scholar] [CrossRef]
- Herbold, S.; Trautsch, A.; Grabowski, J. A Comparative Study to Benchmark Cross-Project Defect Prediction Approaches. IEEE Trans. Softw. Eng. 2018, 44, 811–833. [Google Scholar] [CrossRef]
- Turhan, B.; Tosun Mısırlı, A.; Bener, A. Empirical evaluation of the effects of mixed project data on learning defect predictors. Inf. Softw. Technol. 2013, 55, 1101–1118. [Google Scholar] [CrossRef]
- Ekanayake, J.; Tappolet, J.; Gall, H.C.; Bernstein, A. Tracking concept drift of software projects using defect prediction quality. In Proceedings of the 2009 6th IEEE International Working Conference on Mining Software Repositories, Vancouver, BC, Canada, 16–17 May 2009; pp. 51–60. [Google Scholar] [CrossRef]
- Ekanayake, J.; Tappolet, J.; Gall, H.C.; Bernstein, A. Time variance and defect prediction in software projects. Empir. Softw. Eng. 2012, 17, 348–389. [Google Scholar] [CrossRef]
- Bennin, K.E.; bin Ali, N.; Börstler, J.; Yu, X. Revisiting the Impact of Concept Drift on Just-in-Time Quality Assurance. In Proceedings of the 2020 IEEE 20th International Conference on Software Quality, Reliability and Security (QRS), Macau, China, 11–14 December 2020. [Google Scholar] [CrossRef]
Project | Version | Release Date | #Modules | #Defects | Defects (%)
---|---|---|---|---|---
ant | ant-1.3 | 12 August 2003 | 125 | 20 | 15.90%
ant | ant-1.4 | 12 August 2003 | 178 | 40 | 22.50%
ant | ant-1.5 | 12 August 2003 | 293 | 32 | 10.90%
ant | ant-1.6 | 18 December 2003 | 351 | 92 | 26.10%
ant | ant-1.7 | 13 December 2006 | 745 | 166 | 22.30%
camel | camel-1.0 | 19 January 2009 | 339 | 13 | 3.80%
camel | camel-1.2 | 19 January 2009 | 608 | 216 | 35.50%
camel | camel-1.4 | 19 January 2009 | 872 | 145 | 16.60%
camel | camel-1.6 | 17 February 2009 | 965 | 188 | 19.50%
poi | poi-1.5 | 24 June 2007 | 237 | 141 | 59.50%
poi | poi-2.0 | 24 June 2007 | 314 | 37 | 11.80%
poi | poi-2.5 | 24 June 2007 | 385 | 248 | 64.40%
poi | poi-3.0 | 24 June 2007 | 442 | 281 | 63.60%
log4j | log4j-1.0 | 08 January 2001 | 135 | 34 | 25.20%
log4j | log4j-1.1 | 20 May 2001 | 109 | 37 | 33.90%
log4j | log4j-1.2 | 10 May 2002 | 205 | 189 | 92.20%
xerces | xerces-init | 08 November 1999 | 162 | 77 | 47.50%
xerces | xerces-1.2 | 23 June 2000 | 440 | 71 | 16.10%
xerces | xerces-1.3 | 29 November 2000 | 453 | 69 | 15.20%
xerces | xerces-1.4 | 26 January 2001 | 588 | 437 | 74.30%
velocity | velocity-1.4 | 01 December 2006 | 196 | 147 | 75.00%
velocity | velocity-1.5 | 06 March 2007 | 214 | 142 | 66.40%
velocity | velocity-1.6 | 01 December 2008 | 229 | 78 | 34.10%
ivy | ivy-1.1 | 13 June 2005 | 111 | 63 | 56.80%
ivy | ivy-1.4 | 09 November 2006 | 241 | 16 | 6.60%
ivy | ivy-2.0 | 18 January 2009 | 352 | 40 | 11.40%
lucene | lucene-2.0 | 26 May 2006 | 195 | 91 | 46.70%
lucene | lucene-2.2 | 17 June 2007 | 247 | 144 | 58.30%
lucene | lucene-2.4 | 08 October 2008 | 340 | 203 | 59.70%
synapse | synapse-1.0 | 13 June 2007 | 157 | 16 | 10.20%
synapse | synapse-1.1 | 12 November 2007 | 222 | 60 | 27.00%
synapse | synapse-1.2 | 09 June 2008 | 256 | 86 | 33.60%
xalan | xalan-2.4 | 28 August 2002 | 723 | 110 | 15.20%
xalan | xalan-2.5 | 10 April 2003 | 803 | 387 | 48.20%
xalan | xalan-2.6 | 27 February 2004 | 885 | 411 | 46.40%
xalan | xalan-2.7 | 06 August 2005 | 909 | 898 | 98.80%
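Reading the release dates chronologically, the cross-version setup trains a model on one release and evaluates it on the following release of the same project. The short sketch below illustrates that pairing for two of the projects; the version lists are copied from the table above, and everything else (function names, printing) is illustrative.

```python
# Illustrative sketch of the chronological (train, test) pairing implied by the
# table above; only two projects are shown, and data loading/modelling is omitted.
from itertools import pairwise  # Python 3.10+

releases = {
    "ant": ["ant-1.3", "ant-1.4", "ant-1.5", "ant-1.6", "ant-1.7"],
    "ivy": ["ivy-1.1", "ivy-1.4", "ivy-2.0"],
}

for project, versions in releases.items():
    for train_ver, test_ver in pairwise(versions):
        print(f"{project}: train on {train_ver}, evaluate on {test_ver}")
```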
Instances | Window at Time t | Window at Time t + 1
---|---|---
# of correct | () | ()
# of incorrect | () | ()
Total | () | ()
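One way to compare the two windows summarized above is to test whether the proportions of correctly and incorrectly classified instances differ between them. The sketch below is a minimal example using Fisher's exact test (one of the drift-detection ideas cited among the related methods); the function name, the 0.05 threshold, and the choice of test are assumptions, and CODE's actual detection statistic may differ.

```python
# Hedged sketch: compare correct/incorrect counts of two consecutive windows
# with Fisher's exact test; a small p-value suggests the error behaviour has
# changed between the windows (i.e., drift-prone behaviour is suspected).
from scipy.stats import fisher_exact

def windows_differ(correct_t, incorrect_t, correct_next, incorrect_next, alpha=0.05):
    """Return True if the two windows' correct/incorrect proportions differ significantly."""
    table = [[correct_t, correct_next],
             [incorrect_t, incorrect_next]]
    _, p_value = fisher_exact(table)
    return p_value < alpha

# Example: 90/10 correct-incorrect in the window at time t vs. 70/30 in the next window.
print(windows_differ(90, 10, 70, 30))  # True -> drift-prone behaviour suspected
```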
Metric | Description |
---|---|
WMC | Weighted methods per class |
DIT | Depth of inheritance tree |
NOC | Number of children |
CBO | Coupling between object classes
RFC | Response for a class |
LCOM | Lack of cohesion in methods |
Ca | Afferent coupling |
Ce | Efferent coupling
NPM | Number of public methods |
LCOM3 | Lack of cohesion in methods, different from LCOM |
LOC | Number of lines of code |
DAM | Data access metric |
MOA | Measure of aggregation |
MFA | Measure of functional abstraction |
CAM | Cohesion among methods of a class
IC | Inheritance coupling |
CBM | Coupling between methods |
AMC | Average method complexity |
MaxCC | Maximum cyclomatic complexity (CC) value among the methods of an investigated class
Avg(CC) | Arithmetic mean of the CC values of the methods of an investigated class
BUG | Number of faults in a class
 | Predicted Positive | Predicted Negative
---|---|---
Actual Positive | TP | FN |
Actual Negative | FP | TN |
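From this confusion matrix, the threshold-dependent measures discussed in the paper can be computed; the helper below is a minimal sketch using the standard defect-prediction definitions of recall (probability of detection), pf (probability of false alarm), and Gmean = sqrt(recall × (1 − pf)). The example counts are illustrative.

```python
# Hedged sketch: recall, pf, and Gmean from the TP/FN/FP/TN cells above.
from math import sqrt

def recall_pf_gmean(tp, fn, fp, tn):
    recall = tp / (tp + fn) if (tp + fn) else 0.0   # probability of detection (pd)
    pf = fp / (fp + tn) if (fp + tn) else 0.0       # probability of false alarm
    gmean = sqrt(recall * (1.0 - pf))               # geometric mean of pd and (1 - pf)
    return recall, pf, gmean

print(recall_pf_gmean(tp=40, fn=10, fp=20, tn=130))  # (0.8, 0.133..., ~0.83)
```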
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kabir, M.A.; Begum, S.; Ahmed, M.U.; Rehman, A.U. CODE: A Moving-Window-Based Framework for Detecting Concept Drift in Software Defect Prediction. Symmetry 2022, 14, 2508. https://doi.org/10.3390/sym14122508
Kabir MA, Begum S, Ahmed MU, Rehman AU. CODE: A Moving-Window-Based Framework for Detecting Concept Drift in Software Defect Prediction. Symmetry. 2022; 14(12):2508. https://doi.org/10.3390/sym14122508
Chicago/Turabian Style: Kabir, Md Alamgir, Shahina Begum, Mobyen Uddin Ahmed, and Atiq Ur Rehman. 2022. "CODE: A Moving-Window-Based Framework for Detecting Concept Drift in Software Defect Prediction" Symmetry 14, no. 12: 2508. https://doi.org/10.3390/sym14122508
APA Style: Kabir, M. A., Begum, S., Ahmed, M. U., & Rehman, A. U. (2022). CODE: A Moving-Window-Based Framework for Detecting Concept Drift in Software Defect Prediction. Symmetry, 14(12), 2508. https://doi.org/10.3390/sym14122508