An Improved Confounding Effect Model for Software Defect Prediction
Abstract
1. Introduction
- We identified a new confounder in software defect prediction and proposed an improved confounding effect model.
- We used half-sibling regression to quantify the confounding variable.
- We experimentally analyzed the effect extent of the confounding variable, then verified the effectiveness of the proposed model for prediction.
2. Related Works
3. Confounding Effect Model for Software Defect Prediction
3.1. Traditional Confounding Effect Model
3.2. Improved Confounding Effect Model
3.3. Data Analysis Method
3.4. Predictive Model with Controlling Confounder
Algorithm 1: Predictive model
Input: Training data (X represents software metrics; Y represents defect-proneness; C and C′ represent confounding noise)
Output: Prediction model
1: Use logistic regression to fit Y by X, obtaining f(X)
2: Predict Y by X, obtaining Ŷ
3: Calculate C based on half-sibling regression, obtaining Ĉ
4: Transform C to C′: if Y == 1, C′ = d·C; if Y == 0, C′ = n·C
5: Determine the values of d and n by grid search, so that f(X, C′) fits Y well
6: Establish the prediction model by logistic regression on (X, C′), obtaining f(X, C′)
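The steps above can be sketched in Python. This is a minimal, hypothetical reading of Algorithm 1 using scikit-learn, in which the half-sibling-regression estimate of C is simplified to the residual of Y on its logistic prediction from X (the paper's exact HSR construction may differ), and the scale factors d and n are searched on small user-supplied grids:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_hsr_lr(X, Y, d_grid, n_grid):
    """Sketch of Algorithm 1. The HSR estimate of the confounding noise C
    is simplified here to the residual of Y on its prediction from X."""
    # Steps 1-2: fit Y by X with logistic regression, then predict Y back
    base = LogisticRegression(max_iter=1000).fit(X, Y)
    Y_hat = base.predict_proba(X)[:, 1]
    # Step 3: confounding estimate C (simplified HSR residual)
    C = Y - Y_hat
    # Steps 4-5: grid search over the class-wise scale factors d and n
    best, best_fit = None, -np.inf
    for d in d_grid:
        for n in n_grid:
            C_prime = np.where(Y == 1, d * C, n * C)  # step 4 transform
            XC = np.column_stack([X, C_prime])
            model = LogisticRegression(max_iter=1000).fit(XC, Y)
            fit = model.score(XC, Y)  # how well f(X, C') fits Y
            if fit > best_fit:
                best, best_fit = (d, n, model), fit
    # Step 6: the best-fitting logistic model is the prediction model
    return best
```

At prediction time the confounder of a new instance is unknown, which is exactly why steps 4-5 tie C′ to the label during training; the grids for d and n here are illustrative, not the paper's values.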
4. Experiments and Results
4.1. Datasets
4.2. Presence and Extent of Confounding Effect
4.3. Experiments for the Proposed Prediction Model
4.3.1. Experimental Setup
- LR: A two-step logistic regression widely used in the software defect prediction literature. First, for each code metric, a univariate logistic regression is built against defect-proneness; second, the metrics with significant correlations (p-value < 0.05) are used to build a multivariate logistic regression that predicts defects.
- LCERM+LR: Before applying the two-step LR model above, the LCERM, a method that removes the confounding effect of the size metric, is applied. The LCERM uses linear regression to fit the relationship between the size metric and each other metric, and this linear relationship is taken as the confounding effect; it is removed by subtracting the fitted value from the metric value. See [16] for details. This paper uses LOC (lines of code) as the size metric.
- SVM: To improve the predictive ability of the SVM model, we oversample the defect instances, standardize the original data, and perform principal component analysis transformation. The first five components are applied to the SVM model.
- NN: To improve the predictive ability of the NN model, we oversample the defect instances, standardize the original data, and perform principal component analysis transformation. The first five components are applied to the NN model.
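The LR and LCERM+LR baselines above can be sketched as follows; this is our reading of the two-step procedure and of the LCERM removal in [16], not the authors' code, and the helper names are ours:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression

def two_step_lr(X, y, alpha=0.05):
    """Two-step LR baseline: keep metrics whose univariate logistic
    regression against defect-proneness is significant (p < alpha),
    then fit a multivariate logistic regression on the kept metrics."""
    kept = [j for j in range(X.shape[1])
            if sm.Logit(y, sm.add_constant(X[:, [j]])).fit(disp=0).pvalues[1] < alpha]
    model = sm.Logit(y, sm.add_constant(X[:, kept])).fit(disp=0)
    return kept, model

def lcerm_remove(metric, loc):
    """LCERM-style removal of the size confounder: fit the metric on LOC
    with linear regression and keep the residual, i.e. subtract the
    size-explained part from each metric value."""
    loc = np.asarray(loc, dtype=float).reshape(-1, 1)
    fitted = LinearRegression().fit(loc, metric).predict(loc)
    return metric - fitted
```

For LCERM+LR, `lcerm_remove` would be applied to each non-size metric before `two_step_lr` is run on the residualized metrics.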
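The shared preprocessing for the SVM and NN baselines might look like the sketch below; random duplication stands in for whatever oversampler the paper actually used (an assumption on our part), and the function name is ours:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def balance_and_fit(X, y, seed=0):
    """Oversample defect instances by random duplication, standardize,
    keep the first five principal components, and fit an SVM. The NN
    baseline would swap SVC for an MLP on the same transformed features."""
    rng = np.random.default_rng(seed)
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    # duplicate defect instances until the classes are balanced
    extra = rng.choice(pos, size=max(len(neg) - len(pos), 0), replace=True)
    idx = np.concatenate([neg, pos, extra])
    clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC())
    return clf.fit(X[idx], y[idx])
```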
4.3.2. Parameters Selection
4.3.3. Presentation of Prediction Results
- (1) Compared with all baseline models, our proposed HSR-LR model performs best on F1 score, though it does not achieve the best precision or recall. Compared with the best baseline model, the F1 score of HSR-LR increases by 1% on CM1, 1.1% on JM1, 0.3% on KC1, 3.9% on MC2, 2.9% on PC3 and 3.4% on PC5, an average of 2.2%.
- (2) HSR-LR performs better than LR, which verifies that the confounding effect described in this paper degrades the predictive ability of LR. With the help of HSR, we can quantify the confounding effect, and once it is controlled, LR improves significantly.
- (3) Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 show that HSR-LR and LR have similar precision rates, each winning in some cases with little difference between them. However, HSR-LR has consistently better recall than LR. This is why HSR-LR achieves better F1 values: controlling the confounder effectively increases the recall of LR, thereby raising its F1 score.
- (4) HSR-LR performs better than CM-LR, indicating that our proposed confounding effect model is better suited to software defect prediction than the traditional confounding effect model. CM-LR performs similarly to LR, indicating that existing confounding-removal methods are unsuitable for software defect prediction, which further illustrates the necessity of this work. Built on the proposed model, HSR-LR helps solve this issue.
- (5) Compared to SVM and NN, both commonly used classifiers in software defect prediction, our model also achieves better performance: over the six projects, it outperforms SVM by an average of 3.7% and NN by an average of 2.5% in F1 score.
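The pattern in points (1)-(5) follows directly from how F1 combines precision and recall; recomputing the CM1 row of Table 5 shows a large recall gain outweighing a precision drop:

```python
def f1(precision, recall):
    # F1 is the harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# CM1 (Table 5): LR vs. HSR-LR
f1_lr = f1(0.500, 0.231)   # high precision, low recall  -> 0.316
f1_hsr = f1(0.400, 0.462)  # lower precision, doubled recall -> 0.429
```

Because the harmonic mean is dominated by the smaller of the two rates, raising recall from 23.1% to 46.2% lifts F1 even though precision falls by ten points.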
5. Internal and External Validity
6. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Jiang, Y.; Li, X. Broadband cancellation method in an adaptive co-site interference cancellation system. Int. J. Electron. 2022, 109, 854–874.
- Lei, W.; Hui, Z.; Xiang, L.; Zelin, Z.; Xu-Hui, X.; Evans, S. Optimal remanufacturing service resource allocation for generalized growth of retired mechanical products: Maximizing matching efficiency. IEEE Access 2021, 9, 89655–89674.
- Ban, Y.; Liu, M.; Wu, P.; Yang, B.; Liu, S.; Yin, L.; Zheng, W. Depth estimation method for monocular camera defocus images in microscopic scenes. Electronics 2022, 11, 2012.
- Wahono, R.S. A systematic literature review of software defect prediction. J. Softw. Eng. 2015, 1, 1–16.
- Kitchenham, B.; Pfleeger, S.L. Software quality: The elusive target [special issues section]. IEEE Softw. 1996, 13, 12–21.
- Gruhn, V. Validation and verification of software process models. In European Symposium on Software Development Environments; Springer: Berlin/Heidelberg, Germany, 1991; pp. 271–286.
- Heckman, J.J. Sample selection bias as a specification error. Econom. J. Econom. Soc. 1979, 47, 153–161.
- Huang, J.; Gretton, A.; Borgwardt, K.; Schölkopf, B.; Smola, A. Correcting sample selection bias by unlabeled data. Adv. Neural Inf. Process. Syst. 2006, 19, 601–608.
- Catal, C.; Diri, B. A systematic review of software fault prediction studies. Expert Syst. Appl. 2009, 36, 7346–7354.
- Catal, C. Software fault prediction: A literature review and current trends. Expert Syst. Appl. 2011, 38, 4626–4636.
- Radjenović, D.; Heričko, M.; Torkar, R.; Živković, A. Software fault prediction metrics: A systematic literature review. Inf. Softw. Technol. 2013, 55, 1397–1418.
- Malhotra, R. A systematic review of machine learning techniques for software fault prediction. Appl. Soft Comput. 2015, 27, 504–518.
- Pandey, S.K.; Mishra, R.B.; Tripathi, A.K. Machine learning based methods for software fault prediction: A survey. Expert Syst. Appl. 2021, 172, 114595.
- Emam, K.E.; Benlarbi, S.; Goel, N.; Rai, S.N. The confounding effect of size on the validity of object-oriented metrics. IEEE Trans. Softw. Eng. 2001, 27, 630–650.
- Zhou, Y.; Leung, H.; Xu, B. Examining the potentially confounding effect of size on the associations between object-oriented metrics and change-proneness. IEEE Trans. Softw. Eng. 2009, 35, 607–623.
- Zhou, Y.; Xu, B.; Leung, H.; Chen, L. An in-depth study of the potentially confounding effect of size in fault prediction. ACM Trans. Softw. Eng. Methodol. 2014, 23, 1–51.
- Helmert, M. A planning heuristic based on causal graph analysis. In Proceedings of the Fourteenth International Conference on Automated Planning and Scheduling (ICAPS 2004), Whistler, BC, Canada, 3–7 June 2004.
- Kazman, R.; Stoddard, R.; Danks, D.; Cai, Y. Causal modeling, discovery, & inference for software engineering. In Proceedings of the 2017 IEEE/ACM 39th International Conference on Software Engineering Companion (ICSE-C), Buenos Aires, Argentina, 20–28 May 2017; pp. 172–174.
- Schölkopf, B.; Hogg, D.W.; Wang, D.; Foreman-Mackey, D.; Janzing, D.; Simon-Gabriel, C.J.; Peters, J. Modeling confounding by half-sibling regression. Proc. Natl. Acad. Sci. USA 2016, 113, 7391–7398.
- Pachouly, J.; Ahirrao, S.; Kotecha, K.; Selvachandran, G.; Abraham, A. A systematic literature review on software defect prediction using artificial intelligence: Datasets, data validation methods, approaches, and tools. Eng. Appl. Artif. Intell. 2022, 111, 104773.
- Jorayeva, M.; Akbulut, A.; Catal, C.; Mishra, A. Machine learning-based software defect prediction for mobile applications: A systematic literature review. Sensors 2022, 22, 2551.
- Okutan, A.; Yıldız, O.T. Software defect prediction using Bayesian networks. Empir. Softw. Eng. 2014, 19, 154–181.
- Wang, S.; Yao, X. Using class imbalance learning for software defect prediction. IEEE Trans. Reliab. 2013, 62, 434–443.
- Emam, K.E.; Melo, W.; Machado, J.C. The prediction of faulty classes using object-oriented design metrics. J. Syst. Softw. 2001, 56, 63–75.
- Basili, V.R.; Briand, L.C.; Melo, W.L. A validation of object-oriented design metrics as quality indicators. IEEE Trans. Softw. Eng. 1996, 22, 751–761.
- Olague, H.M.; Etzkorn, L.H.; Gholston, S.; Quattlebaum, S. Empirical validation of three software metrics suites to predict fault-proneness of object-oriented classes developed using highly iterative or agile software development processes. IEEE Trans. Softw. Eng. 2007, 33, 402–419.
- Yu, C.; Ding, Z.; Chen, X. HOPE: Software defect prediction model construction method via homomorphic encryption. IEEE Access 2021, 9, 69405–69417.
- Li, J.; He, P.; Zhu, J.; Lyu, M.R. Software defect prediction via convolutional neural network. In Proceedings of the 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS), Prague, Czech Republic, 25–29 July 2017; pp. 318–328.
- Goyal, S. Effective software defect prediction using support vector machines (SVMs). Int. J. Syst. Assur. Eng. Manag. 2022, 13, 681–696.
- He, C.; Xing, J.; Zhu, R.; Li, J.; Yang, Q.; Xie, L. A new model for software defect prediction using particle swarm optimization and support vector machine. In Proceedings of the 2013 25th Chinese Control and Decision Conference (CCDC), Guiyang, China, 25–27 May 2013; pp. 4106–4110.
- Zhu, K.; Zhang, N.; Ying, S.; Wang, X. Within-project and cross-project software defect prediction based on improved transfer naive Bayes algorithm. Comput. Mater. Contin. 2020, 63, 891–910.
- Goyal, S. Handling class-imbalance with KNN (neighbourhood) under-sampling for software defect prediction. Artif. Intell. Rev. 2022, 55, 2023–2064.
- Goyal, J.; Sinha, R.R. Software defect-based prediction using logistic regression: Review and challenges. In Second International Conference on Sustainable Technologies for Computational Intelligence: Proceedings of ICTSCI 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 233–248.
- Hall, T.; Beecham, S.; Bowes, D.; Gray, D.; Counsell, S. A systematic literature review on fault prediction performance in software engineering. IEEE Trans. Softw. Eng. 2011, 38, 1276–1304.
- Shatnawi, R.; Li, W. The effectiveness of software metrics in identifying error-prone classes in post-release software evolution process. J. Syst. Softw. 2008, 81, 1868–1882.
- Tessema, H.D.; Abebe, S.L. Enhancing just-in-time defect prediction using change request-based metrics. In Proceedings of the 2021 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), Honolulu, HI, USA, 9–12 March 2021; pp. 511–515.
- Eivazpour, Z.; Keyvanpour, M.R. CSSG: A cost-sensitive stacked generalization approach for software defect prediction. Softw. Test. Verif. Reliab. 2021, 31, e1761.
- Bahaweres, R.B.; Suroso, A.I.; Hutomo, A.W.; Solihin, I.P.; Hermadi, I.; Arkeman, Y. Tackling feature selection problems with genetic algorithms in software defect prediction for optimization. In Proceedings of the 2020 International Conference on Informatics, Multimedia, Cyber and Information System (ICIMCIS), Jakarta, Indonesia, 19–20 November 2020; pp. 64–69.
- Cover, T.M. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 1999.
- Quinlan, J.R. C4.5: Programs for Machine Learning; Elsevier: Amsterdam, The Netherlands, 2014.
- Hall, M.A. Correlation-Based Feature Selection of Discrete and Numeric Class Machine Learning; University of Waikato, Department of Computer Science: Hamilton, New Zealand, 2000.
- Dash, M.; Liu, H. Consistency-based search in feature selection. Artif. Intell. 2003, 151, 155–176.
- Bro, R.; Smilde, A.K. Principal component analysis. Anal. Methods 2014, 6, 2812–2831.
- Yang, Z.; Liu, T. Causally denoise word embeddings using half-sibling regression. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 9426–9433.
- Schlesselman, J.J. Case-Control Studies: Design, Conduct, Analysis; Oxford University Press: Oxford, UK, 1982; Volume 2.
- Hosmer, D.W., Jr.; Lemeshow, S.; Sturdivant, R.X. Applied Logistic Regression; John Wiley & Sons: Hoboken, NJ, USA, 2013; Volume 398.
- Sangi-Haghpeykar, H.; Poindexter, A.N., III. Epidemiology of endometriosis among parous women. Obstet. Gynecol. 1995, 85, 983–992.
- Shepperd, M.; Song, Q.; Sun, Z.; Mair, C. Data quality: Some comments on the NASA software defect datasets. IEEE Trans. Softw. Eng. 2013, 39, 1208–1215.
No. | Title | Topic | Study |
---|---|---|---|
1 | The confounding effect of size on the validity of object-oriented metrics | investigate, identify and examine the confounding effect of size in software defect prediction | Emam et al. [14] IEEE Transactions on Software Engineering 2001 |
2 | Examining the Potentially Confounding Effect of size on the Associations between Object-Oriented Metrics and Change-Proneness | examine the potentially confounding effects of three size metrics on the associations between OO metrics and defects. | Zhou et al. [15] IEEE Transactions on Software Engineering 2009 |
3 | An in-depth study of the potentially confounding effect of size in fault prediction | Systematically analyze the extent of confounding effect of seven size metrics; propose a linear regression-based method to remove the confounding effect of size | Zhou et al. [16] ACM Transactions on Software Engineering and Methodology 2014 |
Dataset | Instances | Metrics | Defect | Non-Defect | Defect Rate |
---|---|---|---|---|---|
CM1 | 344 | 37 | 42 | 302 | 12% |
JM1 | 9593 | 21 | 1759 | 7834 | 18% |
KC1 | 2096 | 21 | 325 | 1771 | 16% |
MC2 | 127 | 39 | 44 | 83 | 35% |
PC3 | 1125 | 37 | 140 | 985 | 12% |
PC5 | 17,001 | 36 | 503 | 16,498 | 3% |
No. | MetricName | Description |
---|---|---|
1 | LOC_BLANK | number of blank lines |
2 | BRANCH_COUNT | number of branches |
3 | CALL_PAIRS | number of call pairs |
4 | LOC_CODE_AND_COMMENT | number of lines of code and comments |
5 | LOC_COMMENTS | number of comment lines |
6 | CONDITION_COUNT | number of conditional statements |
7 | CYCLOMATIC_COMPLEXITY | cyclomatic complexity |
8 | CYCLOMATIC_DENSITY | cyclomatic density |
9 | DECISION_COUNT | number of decisions |
10 | DECISION_DENSITY | decision density |
11 | DESIGN_COMPLEXITY | design complexity |
12 | DESIGN_DENSITY | design density |
13 | EDGE_COUNT | number of edges |
14 | ESSENTIAL_COMPLEXITY | essential complexity |
15 | ESSENTIAL_DENSITY | essential density |
16 | LOC_EXECUTABLE | number of executable lines |
17 | PARAMETER_COUNT | number of parameters |
18 | GLOBAL_DATA_COMPLEXITY | global data complexity |
19 | GLOBAL_DATA_DENSITY | global data density |
20 | HALSTEAD_CONTENT | content metric |
21 | HALSTEAD_DIFFICULTY | difficulty |
22 | HALSTEAD_EFFORT | programming effort |
23 | HALSTEAD_ERROR_EST | estimated number of errors |
24 | HALSTEAD_LENGTH | program length |
25 | HALSTEAD_LEVEL | program level |
26 | HALSTEAD_PROG_TIME | estimated programming time |
27 | HALSTEAD_VOLUME | program volume |
28 | MAINTENANCE_SEVERITY | maintenance severity |
29 | MODIFIED_CONDITION_COUNT | number of modified conditions |
30 | MULTIPLE_CONDITION_COUNT | number of multiple conditions |
31 | NODE_COUNT | number of nodes |
32 | NORMALIZED_CYLOMATIC_COMPLEXITY | normalized cyclomatic complexity |
33 | NUM_OPERANDS | number of operands |
34 | NUM_OPERATORS | number of operators |
35 | NUM_UNIQUE_OPERANDS | number of unique operands |
36 | NUM_UNIQUE_OPERATORS | number of unique operators |
37 | NUMBER_OF_LINES | number of lines |
38 | PERCENT_COMMENTS | percent of comments |
39 | LOC_TOTAL | total lines of code |
No. of Metrics | CM1 | JM1 | KC1 | MC2 | PC3 | PC5 |
---|---|---|---|---|---|---|
1 | 33.0% | 100.0% | 88.2% | 100.0% | 211.3% | 48.6% |
2 | 100.0% | 6.3% | 90.3% | 100.0% | 53.7% | 134.1% |
3 | 32.7% | #N/A | #N/A | 96.3% | 3.8% | 89.2% |
4 | 100.0% | 100.0% | 149.4% | 100.0% | 99.7% | 100.0% |
5 | 99.7% | 100.0% | 63.0% | 100.0% | 43.6% | 99.9% |
6 | 99.9% | #N/A | #N/A | 100.0% | 95.2% | 133.1% |
7 | 100.0% | 18.6% | 90.6% | 100.0% | 55.0% | 119.9% |
8 | % | #N/A | #N/A | 73,222.6% | 42.6% | 73.7% |
9 | 99.9% | #N/A | #N/A | 100.0% | 274.5% | 124.7% |
10 | % | #N/A | #N/A | 70.0% | 100.0% | #N/A |
11 | 27.8% | 114.0% | 85.7% | 100.0% | 61.0% | 134.0% |
12 | 73.3% | #N/A | #N/A | 295.7% | 70.4% | 1165.0% |
13 | 100.0% | #N/A | #N/A | 100.0% | 109.3% | 76.5% |
14 | 100.0% | 35.6% | 83.1% | 100.0% | 12.3% | 139.0% |
15 | % | #N/A | #N/A | 93.9% | 63.0% | 37.0% |
16 | 100.0% | 41.7% | 95.9% | 100.0% | 145.0% | 28.2% |
17 | 100.0% | #N/A | #N/A | 99.6% | 50.8% | % |
18 | #N/A | #N/A | #N/A | 100.0% | #N/A | 129.4% |
19 | #N/A | #N/A | #N/A | 94.7% | #N/A | 81,595.0% |
20 | 100.0% | 100.0% | 99.4% | 100.0% | 757.4% | 12.3% |
21 | 23.0% | 15.9% | 99.2% | 100.0% | 47.0% | 99.7% |
22 | 4.7% | 69.4% | 91.6% | 100.0% | % | 100.0% |
23 | 11.1% | 99.9% | 91.7% | 100.0% | 248.7% | 10.5% |
24 | 10.9% | 91.3% | 95.3% | 100.0% | 276.4% | 4.4% |
25 | 4160.7% | 122.7% | % | % | 10.6% | % |
26 | 4.7% | 69.4% | 91.6% | 100.0% | % | 100.0% |
27 | 11.2% | 99.9% | 91.6% | 100.0% | 250.3% | 10.6% |
28 | 11,250.5% | #N/A | #N/A | 99.8% | 66.3% | 96.5% |
29 | 99.9% | #N/A | #N/A | 100.0% | 50.9% | 137.3% |
30 | 99.9% | #N/A | #N/A | 100.0% | 23.2% | 123.6% |
31 | 100.0% | #N/A | #N/A | 100.0% | 155.6% | 67.2% |
32 | 51.5% | #N/A | #N/A | 6086.0% | 54.9% | #N/A |
33 | 11.4% | 99.9% | 94.0% | 100.0% | 184.6% | 8.1% |
34 | 11.0% | 82.0% | 94.3% | 100.0% | 471.5% | 6.8% |
35 | 100.0% | 100.0% | 97.4% | 100.0% | 205.6% | #N/A |
36 | 14.2% | 1.4% | 99.9% | 100.0% | 64.7% | 100.0% |
37 | 4.5% | #N/A | #N/A | 100.0% | 138.0% | 18.3% |
38 | 100.0% | #N/A | #N/A | 88.9% | 100.0% | 100.0% |
39 | 100.0% | 100.0% | 92.7% | 100.0% | 129.8% | 28.8% |
Model | Precision | Recall | F1 |
---|---|---|---|
LR | 50.0% | 23.1% | 0.316 |
CM-LR | 50.0% | 23.1% | 0.316 |
SVM | 31.0% | 62.7% | 0.415 |
NN | 29.6% | 65.8% | 0.408 |
HSR-LR | 40.0% | 46.2% | 0.429 * |
Model | Precision | Recall | F1 |
---|---|---|---|
LR | 55.7% | 8.3% | 0.145 |
CM-LR | 55.7% | 8.3% | 0.145 |
SVM | 29.3% | 61.7% | 0.397 |
NN | 28.6% | 64.2% | 0.395 |
HSR-LR | 28.4% | 72.0% | 0.407 * |
Model | Precision | Recall | F1 |
---|---|---|---|
LR | 58.6% | 22.0% | 0.319 |
CM-LR | 58.6% | 22.0% | 0.319 |
SVM | 29.7% | 76.1% | 0.427 |
NN | 29.0% | 80.0% | 0.424 |
HSR-LR | 30.7% | 72.3% | 0.430 * |
Model | Precision | Recall | F1 |
---|---|---|---|
LR | 48.4% | 52.1% | 0.498 |
CM-LR | 47.5% | 50.0% | 0.484 |
SVM | 52.1% | 50.7% | 0.502 |
NN | 47.7% | 53.6% | 0.493 |
HSR-LR | 51.7% | 58.2% | 0.541 * |
Model | Precision | Recall | F1 |
---|---|---|---|
LR | 52.0% | 22.6% | 0.312 |
CM-LR | 52.0% | 22.6% | 0.312 |
SVM | 27.7% | 77.0% | 0.407 |
NN | 25.1% | 77.4% | 0.375 |
HSR-LR | 34.1% | 61.5% | 0.436 * |
Model | Precision | Recall | F1 |
---|---|---|---|
LR | 56.0% | 29.4% | 0.384 |
CM-LR | 56.1% | 29.4% | 0.384 |
SVM | 20.5% | 91.4% | 0.334 |
NN | 28.1% | 91.7% | 0.429 |
HSR-LR | 32.7% | 80.0% | 0.463 * |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Yuan, Y.; Li, C.; Yang, J. An Improved Confounding Effect Model for Software Defect Prediction. Appl. Sci. 2023, 13, 3459. https://doi.org/10.3390/app13063459